DPDK patches and discussions
* [dpdk-dev] [PATCH 00/50] add features for host-based flow management
@ 2020-06-12 13:28 Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 01/50] net/bnxt: Basic infrastructure support for VF representors Somnath Kotur
                   ` (50 more replies)
  0 siblings, 51 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

This patchset introduces support for VF representors, flow
counters and on-chip exact match flows.
It also implements the driver hook for the rte_flow_query API.
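
For context, a minimal application-side sketch of exercising the
rte_flow_query driver hook on a rule created with a COUNT action; the
port_id and flow handle here are placeholders, not defined by this
series:

#include <stdio.h>
#include <inttypes.h>
#include <rte_flow.h>

/* Query hit/byte counters of an existing rte_flow rule that carries a
 * COUNT action. The driver hook added by this series services the call.
 */
static int
query_flow_count(uint16_t port_id, struct rte_flow *flow)
{
        struct rte_flow_query_count count = { .reset = 0 };
        struct rte_flow_action count_action = {
                .type = RTE_FLOW_ACTION_TYPE_COUNT,
        };
        struct rte_flow_error error;

        if (rte_flow_query(port_id, flow, &count_action, &count, &error) != 0)
                return -1;

        printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n", count.hits, count.bytes);
        return 0;
}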

Jay Ding (5):
  net/bnxt: implement support for TCAM access
  net/bnxt: support two level priority for TCAMs
  net/bnxt: add external action alloc and free
  net/bnxt: implement IF tables set and get
  net/bnxt: add global config set and get APIs

Kishore Padmanabha (8):
  net/bnxt: integrate with the latest tf_core library
  net/bnxt: add support for if table processing
  net/bnxt: disable vector mode in tx direction when truflow is enabled
  net/bnxt: add index opcode and index operand mapper table
  net/bnxt: add support for global resource templates
  net/bnxt: add support for internal exact match entries
  net/bnxt: add support for conditional execution of mapper tables
  net/bnxt: add support for vf rep and stat templates

Lance Richardson (1):
  net/bnxt: initialize parent PF information

Michael Wildt (7):
  net/bnxt: add multi device support
  net/bnxt: update multi device design support
  net/bnxt: multiple device implementation
  net/bnxt: update identifier with remap support
  net/bnxt: update RM with residual checker
  net/bnxt: update table get to use new design
  net/bnxt: add TF register and unregister

Mike Baucom (1):
  net/bnxt: add support for internal encap records

Pete Spreadborough (6):
  net/bnxt: add support for Exact Match
  net/bnxt: modify EM insert and delete to use HWRM direct
  net/bnxt: add HCAPI interface support
  net/bnxt: support EM and TCAM lookup with table scope
  net/bnxt: update RM to support HCAPI only
  net/bnxt: remove table scope from session

Peter Spreadborough (1):
  net/bnxt: add support for EEM System memory

Randy Schacher (2):
  net/bnxt: add core changes for EM and EEM lookups
  net/bnxt: align CFA resources with RM

Shahaji Bhosle (2):
  net/bnxt: support bulk table get and mirror
  net/bnxt: support two-level priority for TCAMs

Somnath Kotur (7):
  net/bnxt: Basic infrastructure support for VF representors
  net/bnxt: Infrastructure support for VF-reps data path
  net/bnxt: add support to get FID, default vnic ID and svif of VF-Rep
    Endpoint
  net/bnxt: fix to parse representor along with other dev-args
  net/bnxt: create default flow rules for the VF-rep conduit
  net/bnxt: support for ULP Flow counter Manager
  net/bnxt: Add support for flow query with action_type COUNT

Venkat Duvvuru (10):
  net/bnxt: modify ulp_port_db_dev_port_intf_update prototype
  net/bnxt: get port & function related information
  net/bnxt: add support for bnxt_hwrm_port_phy_qcaps
  net/bnxt: modify port_db to store & retrieve more info
  net/bnxt: enable HWRM_PORT_MAC_QCFG for trusted vf
  net/bnxt: fixes for port db
  net/bnxt: fix for VF to VFR conduit
  net/bnxt: fill mapper parameters with default rules info
  net/bnxt: add ingress & egress port default rules
  net/bnxt: fill cfa_action in the tx buffer descriptor properly

 config/common_base                              |    5 +-
 drivers/net/bnxt/Makefile                       |    8 +-
 drivers/net/bnxt/bnxt.h                         |  121 +-
 drivers/net/bnxt/bnxt_ethdev.c                  |  519 ++-
 drivers/net/bnxt/bnxt_hwrm.c                    |  122 +-
 drivers/net/bnxt/bnxt_hwrm.h                    |    7 +
 drivers/net/bnxt/bnxt_reps.c                    |  773 ++++
 drivers/net/bnxt/bnxt_reps.h                    |   45 +
 drivers/net/bnxt/bnxt_rxr.c                     |   38 +-
 drivers/net/bnxt/bnxt_rxr.h                     |    1 +
 drivers/net/bnxt/bnxt_txq.h                     |    2 +
 drivers/net/bnxt/bnxt_txr.c                     |   18 +-
 drivers/net/bnxt/hcapi/Makefile                 |   10 +
 drivers/net/bnxt/hcapi/cfa_p40_hw.h             |  781 ++++
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h            |  303 ++
 drivers/net/bnxt/hcapi/hcapi_cfa.h              |  273 ++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h         |  669 +++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c           |  411 ++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h           |  467 ++
 drivers/net/bnxt/hsi_struct_def_dpdk.h          | 3095 ++++++++++++--
 drivers/net/bnxt/meson.build                    |   21 +-
 drivers/net/bnxt/tf_core/Makefile               |   29 +-
 drivers/net/bnxt/tf_core/bitalloc.c             |  107 +
 drivers/net/bnxt/tf_core/bitalloc.h             |    5 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h   |  293 ++
 drivers/net/bnxt/tf_core/hwrm_tf.h              |  995 +----
 drivers/net/bnxt/tf_core/ll.c                   |   52 +
 drivers/net/bnxt/tf_core/ll.h                   |   46 +
 drivers/net/bnxt/tf_core/lookup3.h              |    1 -
 drivers/net/bnxt/tf_core/stack.c                |   10 +-
 drivers/net/bnxt/tf_core/stack.h                |   10 +
 drivers/net/bnxt/tf_core/tf_common.h            |   43 +
 drivers/net/bnxt/tf_core/tf_core.c              | 1495 +++++--
 drivers/net/bnxt/tf_core/tf_core.h              |  874 +++-
 drivers/net/bnxt/tf_core/tf_device.c            |  271 ++
 drivers/net/bnxt/tf_core/tf_device.h            |  650 +++
 drivers/net/bnxt/tf_core/tf_device_p4.c         |  147 +
 drivers/net/bnxt/tf_core/tf_device_p4.h         |  104 +
 drivers/net/bnxt/tf_core/tf_em.c                |  515 ---
 drivers/net/bnxt/tf_core/tf_em.h                |  492 ++-
 drivers/net/bnxt/tf_core/tf_em_common.c         | 1050 +++++
 drivers/net/bnxt/tf_core/tf_em_common.h         |  134 +
 drivers/net/bnxt/tf_core/tf_em_host.c           |  532 +++
 drivers/net/bnxt/tf_core/tf_em_internal.c       |  352 ++
 drivers/net/bnxt/tf_core/tf_em_system.c         |  538 +++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h   |   16 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c        |  199 +
 drivers/net/bnxt/tf_core/tf_global_cfg.h        |  170 +
 drivers/net/bnxt/tf_core/tf_identifier.c        |  186 +
 drivers/net/bnxt/tf_core/tf_identifier.h        |  147 +
 drivers/net/bnxt/tf_core/tf_if_tbl.c            |  178 +
 drivers/net/bnxt/tf_core/tf_if_tbl.h            |  236 +
 drivers/net/bnxt/tf_core/tf_msg.c               | 1681 ++++----
 drivers/net/bnxt/tf_core/tf_msg.h               |  409 +-
 drivers/net/bnxt/tf_core/tf_resources.h         |  531 ---
 drivers/net/bnxt/tf_core/tf_rm.c                | 3840 ++++-------------
 drivers/net/bnxt/tf_core/tf_rm.h                |  554 ++-
 drivers/net/bnxt/tf_core/tf_session.c           |  776 ++++
 drivers/net/bnxt/tf_core/tf_session.h           |  565 ++-
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c        |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h        |  240 ++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c       |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h       |  239 ++
 drivers/net/bnxt/tf_core/tf_tbl.c               | 1935 ++-------
 drivers/net/bnxt/tf_core/tf_tbl.h               |  469 +-
 drivers/net/bnxt/tf_core/tf_tcam.c              |  430 ++
 drivers/net/bnxt/tf_core/tf_tcam.h              |  360 ++
 drivers/net/bnxt/tf_core/tf_util.c              |  176 +
 drivers/net/bnxt/tf_core/tf_util.h              |   98 +
 drivers/net/bnxt/tf_core/tfp.c                  |   33 +-
 drivers/net/bnxt/tf_core/tfp.h                  |  153 +-
 drivers/net/bnxt/tf_ulp/Makefile                |    2 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h        |   16 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c              |  129 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h              |   35 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c         |   85 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c         |  385 ++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c            |  596 +++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h            |  163 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c           |   42 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            |  482 ++-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h            |    6 +-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |   10 +
 drivers/net/bnxt/tf_ulp/ulp_port_db.c           |  234 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.h           |  122 +-
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c        |   30 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c   |  433 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_class.c | 5217 +++++++++++++++++------
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h  |  537 +--
 drivers/net/bnxt/tf_ulp/ulp_template_db_field.h |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c   |   84 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |   23 +-
 drivers/net/bnxt/tf_ulp/ulp_utils.c             |    2 +-
 93 files changed, 28024 insertions(+), 11253 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

-- 
2.7.4



* [dpdk-dev] [PATCH 01/50] net/bnxt: Basic infrastructure support for VF representors
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 02/50] net/bnxt: Infrastructure support for VF-reps data path Somnath Kotur
                   ` (49 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Defines data structures and code to init/uninit
VF representors during pci_probe and pci_remove
respectively.
Most of the dev_ops for the VF representor are just
stubs for now and will be filled out in the next patch.

To create a representor using testpmd:
testpmd -c 0xff -wB:D.F,representor=1 -- -i
testpmd -c 0xff -w05:02.0,representor=[1] -- -i

To create a representor using ovs-dpdk:
1. First add the trusted VF port to a bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0
2. Add the representor port to the bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0,representor=1
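
For reference, a minimal sketch of how an application can locate the
representor port after probe, using the "net_<pci-bdf>_representor_<vf>"
name generated by this patch (the helper and its arguments are
illustrative only):

#include <stdio.h>
#include <rte_ethdev.h>

/* Look up the ethdev port id of a VF representor by its generated name,
 * e.g. find_vf_rep_port("0000:06:02.0", 1).
 */
static int
find_vf_rep_port(const char *pci_bdf, unsigned int vf)
{
        char name[RTE_ETH_NAME_MAX_LEN];
        uint16_t port_id;

        snprintf(name, sizeof(name), "net_%s_representor_%u", pci_bdf, vf);
        if (rte_eth_dev_get_port_by_name(name, &port_id) != 0) {
                printf("representor %s not found\n", name);
                return -1;
        }
        printf("%s is ethdev port %u\n", name, port_id);
        return 0;
}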

Reviewed-by: Kalesh Anakkur Purayil <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/Makefile      |   2 +
 drivers/net/bnxt/bnxt.h        |  64 ++++++++-
 drivers/net/bnxt/bnxt_ethdev.c | 225 +++++++++++++++++++++++++-------
 drivers/net/bnxt/bnxt_reps.c   | 287 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_reps.h   |  35 +++++
 drivers/net/bnxt/meson.build   |   1 +
 6 files changed, 566 insertions(+), 48 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index a375299..3656274 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -14,6 +14,7 @@ LIB = librte_pmd_bnxt.a
 EXPORT_MAP := rte_pmd_bnxt_version.map
 
 CFLAGS += -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += $(WERROR_FLAGS)
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
@@ -38,6 +39,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_reps.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += rte_pmd_bnxt.c
 ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index d455f8d..9b7b87c 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -220,6 +220,7 @@ struct bnxt_child_vf_info {
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
+#define BNXT_MAX_VF_REPS	64
 #define BNXT_TOTAL_VFS(bp)	((bp)->pf->total_vfs)
 #define BNXT_FIRST_VF_FID	128
 #define BNXT_PF_RINGS_USED(bp)	bnxt_get_num_queues(bp)
@@ -492,6 +493,10 @@ struct bnxt_mark_info {
 	bool		valid;
 };
 
+struct bnxt_rep_info {
+	struct rte_eth_dev	*vfr_eth_dev;
+};
+
 /* address space location of register */
 #define BNXT_FW_STATUS_REG_TYPE_MASK	3
 /* register is located in PCIe config space */
@@ -515,6 +520,40 @@ struct bnxt_mark_info {
 #define BNXT_FW_STATUS_HEALTHY		0x8000
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
+#define BNXT_ETH_RSS_SUPPORT (	\
+	ETH_RSS_IPV4 |		\
+	ETH_RSS_NONFRAG_IPV4_TCP |	\
+	ETH_RSS_NONFRAG_IPV4_UDP |	\
+	ETH_RSS_IPV6 |		\
+	ETH_RSS_NONFRAG_IPV6_TCP |	\
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
+				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_CKSUM | \
+				     DEV_TX_OFFLOAD_UDP_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_TSO | \
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_QINQ_INSERT | \
+				     DEV_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
+				     DEV_RX_OFFLOAD_VLAN_STRIP | \
+				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_UDP_CKSUM | \
+				     DEV_RX_OFFLOAD_TCP_CKSUM | \
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
+				     DEV_RX_OFFLOAD_KEEP_CRC | \
+				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
+				     DEV_RX_OFFLOAD_TCP_LRO | \
+				     DEV_RX_OFFLOAD_SCATTER | \
+				     DEV_RX_OFFLOAD_RSS_HASH)
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -682,6 +721,9 @@ struct bnxt {
 #define BNXT_MAX_RINGS(bp) \
 	(RTE_MIN((((bp)->max_cp_rings - BNXT_NUM_ASYNC_CPR(bp)) / 2U), \
 		 BNXT_MAX_TX_RINGS(bp)))
+
+#define BNXT_MAX_VF_REP_RINGS	8
+
 	uint16_t		max_nq_rings;
 	uint16_t		max_l2_ctx;
 	uint16_t		max_rx_em_flows;
@@ -711,7 +753,9 @@ struct bnxt {
 
 	uint16_t		fw_reset_min_msecs;
 	uint16_t		fw_reset_max_msecs;
-
+	uint16_t		switch_domain_id;
+	uint16_t		num_reps;
+	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -732,6 +776,18 @@ struct bnxt {
 
 #define BNXT_FC_TIMER	1 /* Timer freq in Sec Flow Counters */
 
+/**
+ * Structure to store private data for each VF representor instance
+ */
+struct bnxt_vf_representor {
+	uint16_t switch_domain_id;
+	uint16_t vf_id;
+	/* Private data store of associated PF/Trusted VF */
+	struct bnxt	*parent_priv;
+	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
 int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 		     bool exp_link_status);
@@ -744,7 +800,13 @@ void bnxt_schedule_fw_health_check(struct bnxt *bp);
 
 bool is_bnxt_supported(struct rte_eth_dev *dev);
 bool bnxt_stratus_device(struct bnxt *bp);
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete);
+
 extern const struct rte_flow_ops bnxt_flow_ops;
+
 #define bnxt_acquire_flow_lock(bp) \
 	pthread_mutex_lock(&(bp)->flow_lock)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7022f6d..4911745 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -18,6 +18,7 @@
 #include "bnxt_filter.h"
 #include "bnxt_hwrm.h"
 #include "bnxt_irq.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxq.h"
 #include "bnxt_rxr.h"
@@ -93,40 +94,6 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-#define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
-				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_VLAN_STRIP | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
-
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
@@ -163,7 +130,6 @@ static int bnxt_devarg_max_num_kflow_invalid(uint16_t max_num_kflows)
 }
 
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev);
 static int bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev);
@@ -198,7 +164,7 @@ static uint16_t bnxt_rss_ctxts(const struct bnxt *bp)
 				    BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
-static uint16_t  bnxt_rss_hash_tbl_size(const struct bnxt *bp)
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 {
 	if (!BNXT_CHIP_THOR(bp))
 		return HW_HASH_INDEX_SIZE;
@@ -1047,7 +1013,7 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	return -ENOSPC;
 }
 
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 {
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
@@ -1273,6 +1239,12 @@ static int bnxt_dev_set_link_down_op(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static void bnxt_free_switch_domain(struct bnxt *bp)
+{
+	if (bp->switch_domain_id)
+		rte_eth_switch_domain_free(bp->switch_domain_id);
+}
+
 /* Unload the driver, release resources */
 static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
@@ -1341,6 +1313,8 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
+	bnxt_free_switch_domain(bp);
+
 	bnxt_uninit_resources(bp, false);
 
 	bnxt_free_leds_info(bp);
@@ -1522,8 +1496,8 @@ int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 	return rc;
 }
 
-static int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
-			       int wait_to_complete)
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete)
 {
 	return bnxt_link_update(eth_dev, wait_to_complete, ETH_LINK_UP);
 }
@@ -5477,8 +5451,26 @@ bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)
 	rte_kvargs_free(kvlist);
 }
 
+static int bnxt_alloc_switch_domain(struct bnxt *bp)
+{
+	int rc = 0;
+
+	if (BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) {
+		rc = rte_eth_switch_domain_alloc(&bp->switch_domain_id);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "Failed to alloc switch domain: %d\n", rc);
+		else
+			PMD_DRV_LOG(INFO,
+				    "Switch domain allocated %d\n",
+				    bp->switch_domain_id);
+	}
+
+	return rc;
+}
+
 static int
-bnxt_dev_init(struct rte_eth_dev *eth_dev)
+bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	static int version_printed;
@@ -5557,6 +5549,8 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 	if (rc)
 		goto error_free;
 
+	bnxt_alloc_switch_domain(bp);
+
 	/* Pass the information to the rte_eth_dev_close() that it should also
 	 * release the private port resources.
 	 */
@@ -5689,25 +5683,162 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *bp = eth_dev->data->dev_private;
+	struct rte_eth_dev *vf_rep_eth_dev;
+	int ret = 0, i;
+
+	if (!bp)
+		return -EINVAL;
+
+	for (i = 0; i < bp->num_reps; i++) {
+		vf_rep_eth_dev = bp->rep_info[i].vfr_eth_dev;
+		if (!vf_rep_eth_dev)
+			continue;
+		rte_eth_dev_destroy(vf_rep_eth_dev, bnxt_vf_representor_uninit);
+	}
+	ret = rte_eth_dev_destroy(eth_dev, bnxt_dev_uninit);
+
+	return ret;
+}
+
 static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	struct rte_pci_device *pci_dev)
 {
-	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct bnxt),
-		bnxt_dev_init);
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
+	uint16_t num_rep;
+	int i, ret = 0;
+	struct bnxt *backing_bp;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after first level of probe is already invoked
+	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
+	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+	}
+
+	if (num_rep > BNXT_MAX_VF_REPS) {
+		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
+			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
+		ret = -EINVAL;
+		return ret;
+	}
+
+	/* probe representor ports now */
+	if (!backing_eth_dev)
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = -ENODEV;
+		return ret;
+	}
+	backing_bp = backing_eth_dev->data->dev_private;
+
+	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
+		PMD_DRV_LOG(ERR,
+			    "Not a PF or trusted VF. No Representor support\n");
+		/* Returning an error is not an option.
+		 * Applications are not handling this correctly
+		 */
+		return ret;
+	}
+
+	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+		struct bnxt_vf_representor representor = {
+			.vf_id = eth_da.representor_ports[i],
+			.switch_domain_id = backing_bp->switch_domain_id,
+			.parent_priv = backing_bp
+		};
+
+		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
+			PMD_DRV_LOG(ERR, "VF-Rep id %d >= %d MAX VF ID\n",
+				    representor.vf_id, BNXT_MAX_VF_REPS);
+			continue;
+		}
+
+		/* representor port net_bdf_port */
+		snprintf(name, sizeof(name), "net_%s_representor_%d",
+			 pci_dev->device.name, eth_da.representor_ports[i]);
+
+		ret = rte_eth_dev_create(&pci_dev->device, name,
+					 sizeof(struct bnxt_vf_representor),
+					 NULL, NULL,
+					 bnxt_vf_representor_init,
+					 &representor);
+
+		if (!ret) {
+			vf_rep_eth_dev = rte_eth_dev_allocated(name);
+			if (!vf_rep_eth_dev) {
+				PMD_DRV_LOG(ERR, "Failed to find the eth_dev"
+					    " for VF-Rep: %s.", name);
+				bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+				ret = -ENODEV;
+				return ret;
+			}
+			backing_bp->rep_info[representor.vf_id].vfr_eth_dev =
+				vf_rep_eth_dev;
+			backing_bp->num_reps++;
+		} else {
+			PMD_DRV_LOG(ERR, "failed to create bnxt vf "
+				    "representor %s.", name);
+			bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+		}
+	}
+
+	return ret;
 }
 
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
-		return rte_eth_dev_pci_generic_remove(pci_dev,
-				bnxt_dev_uninit);
-	else
+	struct rte_eth_dev *eth_dev;
+
+	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!eth_dev)
+		return ENODEV; /* Invoked typically only by OVS-DPDK, by the
+				* time it comes here the eth_dev is already
+				* deleted by rte_eth_dev_close(), so returning
+				* +ve value will atleast help in proper cleanup
+				*/
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		if (eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_vf_representor_uninit);
+		else
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_dev_uninit);
+	} else {
 		return rte_eth_dev_pci_generic_remove(pci_dev, NULL);
+	}
 }
 
 static struct rte_pci_driver bnxt_rte_pmd = {
 	.id_table = bnxt_pci_id_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+			RTE_PCI_DRV_PROBE_AGAIN, /* Needed in case of VF-REPs
+						  * and OVS-DPDK
+						  */
 	.probe = bnxt_pci_probe,
 	.remove = bnxt_pci_remove,
 };
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
new file mode 100644
index 0000000..7033d62
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -0,0 +1,287 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2018 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "bnxt_ring.h"
+#include "bnxt_reps.h"
+#include "hsi_struct_def_dpdk.h"
+
+static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
+	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
+	.dev_configure = bnxt_vf_rep_dev_configure_op,
+	.dev_start = bnxt_vf_rep_dev_start_op,
+	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.link_update = bnxt_vf_rep_link_update_op,
+	.dev_close = bnxt_vf_rep_dev_close_op,
+	.dev_stop = bnxt_vf_rep_dev_stop_op
+};
+
+static uint16_t
+bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
+		     __rte_unused struct rte_mbuf **rx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
+		     __rte_unused struct rte_mbuf **tx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
+{
+	struct bnxt_vf_representor *vf_rep_bp = eth_dev->data->dev_private;
+	struct bnxt_vf_representor *rep_params =
+				 (struct bnxt_vf_representor *)params;
+	struct rte_eth_link *link;
+	struct bnxt *parent_bp;
+
+	vf_rep_bp->vf_id = rep_params->vf_id;
+	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
+	vf_rep_bp->parent_priv = rep_params->parent_priv;
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+	eth_dev->data->representor_id = rep_params->vf_id;
+
+	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
+	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
+	       sizeof(vf_rep_bp->mac_addr));
+	eth_dev->data->mac_addrs =
+		(struct rte_ether_addr *)&vf_rep_bp->mac_addr;
+	eth_dev->dev_ops = &bnxt_vf_rep_dev_ops;
+
+	/* No data-path, but need stub Rx/Tx functions to avoid crash
+	 * when testing with ovs-dpdk
+	 */
+	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
+	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
+	/* Link state. Inherited from PF or trusted VF */
+	parent_bp = vf_rep_bp->parent_priv;
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+
+	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
+	bnxt_print_link_info(eth_dev);
+
+	/* Pass the information to the rte_eth_dev_close() that it should also
+	 * release the private port resources.
+	 */
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+	PMD_DRV_LOG(INFO,
+		    "Switch domain id %d: Representor Device %d init done\n",
+		    vf_rep_bp->switch_domain_id, vf_rep_bp->vf_id);
+
+	return 0;
+}
+
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+
+	uint16_t vf_id;
+
+	eth_dev->data->mac_addrs = NULL;
+
+	parent_bp = rep->parent_priv;
+	if (parent_bp) {
+		parent_bp->num_reps--;
+		vf_id = rep->vf_id;
+		if (parent_bp->rep_info) {
+			memset(&parent_bp->rep_info[vf_id], 0,
+			       sizeof(parent_bp->rep_info[vf_id]));
+			/* mark that this representor has been freed */
+		}
+	}
+	eth_dev->dev_ops = NULL;
+	return 0;
+}
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+	struct rte_eth_link *link;
+	int rc;
+
+	parent_bp = rep->parent_priv;
+	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
+
+	/* Link state. Inherited from PF or trusted VF */
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+	bnxt_print_link_info(eth_dev);
+
+	return rc;
+}
+
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_rep_link_update_op(eth_dev, 1);
+
+	return 0;
+}
+
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
+{
+	eth_dev = eth_dev;
+}
+
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_representor_uninit(eth_dev);
+}
+
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp;
+	uint16_t max_vnics, i, j, vpool, vrxq;
+	unsigned int max_rx_rings;
+	int rc = 0;
+
+	/* MAC Specifics */
+	parent_bp = rep_bp->parent_priv;
+	if (!parent_bp) {
+		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
+		return rc;
+	}
+	PMD_DRV_LOG(DEBUG, "Representor dev_info_get_op\n");
+	dev_info->max_mac_addrs = parent_bp->max_l2_ctx;
+	dev_info->max_hash_mac_addrs = 0;
+
+	max_rx_rings = BNXT_MAX_VF_REP_RINGS;
+	/* For the sake of symmetry, max_rx_queues = max_tx_queues */
+	dev_info->max_rx_queues = max_rx_rings;
+	dev_info->max_tx_queues = max_rx_rings;
+	dev_info->reta_size = bnxt_rss_hash_tbl_size(parent_bp);
+	dev_info->hash_key_size = 40;
+	max_vnics = parent_bp->max_vnics;
+
+	/* MTU specifics */
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+	dev_info->max_mtu = BNXT_MAX_MTU;
+
+	/* Fast path specifics */
+	dev_info->min_rx_bufsize = 1;
+	dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+
+	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
+	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
+		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
+	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
+
+	/* *INDENT-OFF* */
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 0,
+		},
+		.rx_free_thresh = 32,
+		/* If no descriptors available, pkts are dropped by default */
+		.rx_drop_en = 1,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = 32,
+			.hthresh = 0,
+			.wthresh = 0,
+		},
+		.tx_free_thresh = 32,
+		.tx_rs_thresh = 32,
+	};
+	eth_dev->data->dev_conf.intr_conf.lsc = 1;
+
+	eth_dev->data->dev_conf.intr_conf.rxq = 1;
+	dev_info->rx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->rx_desc_lim.nb_max = BNXT_MAX_RX_RING_DESC;
+	dev_info->tx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->tx_desc_lim.nb_max = BNXT_MAX_TX_RING_DESC;
+
+	/* *INDENT-ON* */
+
+	/*
+	 * TODO: default_rxconf, default_txconf, rx_desc_lim, and tx_desc_lim
+	 *       need further investigation.
+	 */
+
+	/* VMDq resources */
+	vpool = 64; /* ETH_64_POOLS */
+	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	for (i = 0; i < 4; vpool >>= 1, i++) {
+		if (max_vnics > vpool) {
+			for (j = 0; j < 5; vrxq >>= 1, j++) {
+				if (dev_info->max_rx_queues > vrxq) {
+					if (vpool > vrxq)
+						vpool = vrxq;
+					goto found;
+				}
+			}
+			/* Not enough resources to support VMDq */
+			break;
+		}
+	}
+	/* Not enough resources to support VMDq */
+	vpool = 0;
+	vrxq = 0;
+found:
+	dev_info->max_vmdq_pools = vpool;
+	dev_info->vmdq_queue_num = vrxq;
+
+	dev_info->vmdq_pool_base = 0;
+	dev_info->vmdq_queue_base = 0;
+
+	return 0;
+}
+
+int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
+{
+	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	return 0;
+}
+
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
+
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
new file mode 100644
index 0000000..f4c033a
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2018 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_REPS_H_
+#define _BNXT_REPS_H_
+
+#include <rte_malloc.h>
+#include <rte_ethdev.h>
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info);
+int bnxt_vf_rep_dev_configure_op(struct rte_eth_dev *eth_dev);
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl);
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp);
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf);
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+#endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 4306c60..5c7859c 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -21,6 +21,7 @@ sources = files('bnxt_cpr.c',
 	'bnxt_txr.c',
 	'bnxt_util.c',
 	'bnxt_vnic.c',
+	'bnxt_reps.c',
 
 	'tf_core/tf_core.c',
 	'tf_core/bitalloc.c',
-- 
2.7.4



* [dpdk-dev] [PATCH 02/50] net/bnxt: Infrastructure support for VF-reps data path
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 01/50] net/bnxt: Basic infrastructure support for VF representors Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 03/50] net/bnxt: add support to get FID, default vnic ID and svif of VF-Rep Endpoint Somnath Kotur
                   ` (48 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Added code to support Tx/Rx from a VF representor port.
The VF-reps use the RX/TX rings of the Trusted VF/PF.
For each VF-rep, the Trusted VF/PF driver issues a VFR_ALLOC FW cmd that
returns "cfa_code" and "cfa_action" values.
The FW sets up the filter tables in such a way that VF traffic by
default (in the absence of other rules) gets punted to the parent function,
i.e. either the Trusted VF or the PF.
The cfa_code value in the Rx completion informs the driver of the source VF.
For traffic being transmitted from the VF-rep, the TX BD is tagged with
a cfa_action value that informs the HW to punt it to the corresponding
VF.
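
For illustration, a simplified sketch of the Rx-side demultiplexing this
enables; the field and helper names (cfa_code_map, rep_info,
bnxt_vfr_recv) follow the patch below, with queue selection and
statistics handling omitted:

#include "bnxt.h"
#include "bnxt_reps.h"

/* Map the cfa_code from an Rx completion to a VF id; if a representor
 * is bound to that VF, redirect the mbuf to its Rx ring, otherwise keep
 * it on the parent (PF/trusted VF) port. Returns 1 for normal Rx.
 */
static int
rx_demux_by_cfa_code(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
                     struct rte_mbuf *mbuf)
{
        uint16_t vf_id = bp->cfa_code_map[cfa_code];

        if (vf_id == BNXT_VF_IDX_INVALID ||
            bp->rep_info[vf_id].vfr_eth_dev == NULL)
                return 1;       /* not representor traffic */

        /* bnxt_vfr_recv() returns 0 once the representor owns the mbuf */
        return bnxt_vfr_recv(bp, cfa_code, queue_id, mbuf);
}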

Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  30 ++-
 drivers/net/bnxt/bnxt_ethdev.c | 150 +++++++++----
 drivers/net/bnxt/bnxt_reps.c   | 476 ++++++++++++++++++++++++++++++++++++++---
 drivers/net/bnxt/bnxt_reps.h   |  11 +
 drivers/net/bnxt/bnxt_rxr.c    |  22 +-
 drivers/net/bnxt/bnxt_rxr.h    |   1 +
 drivers/net/bnxt/bnxt_txq.h    |   1 +
 drivers/net/bnxt/bnxt_txr.c    |   4 +-
 8 files changed, 616 insertions(+), 79 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 9b7b87c..443d9fe 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -495,6 +495,7 @@ struct bnxt_mark_info {
 
 struct bnxt_rep_info {
 	struct rte_eth_dev	*vfr_eth_dev;
+	pthread_mutex_t		vfr_lock;
 };
 
 /* address space location of register */
@@ -755,7 +756,8 @@ struct bnxt {
 	uint16_t		fw_reset_max_msecs;
 	uint16_t		switch_domain_id;
 	uint16_t		num_reps;
-	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
+	struct bnxt_rep_info	*rep_info;
+	uint16_t                *cfa_code_map;
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -780,12 +782,28 @@ struct bnxt {
  * Structure to store private data for each VF representor instance
  */
 struct bnxt_vf_representor {
-	uint16_t switch_domain_id;
-	uint16_t vf_id;
+	uint16_t		switch_domain_id;
+	uint16_t		vf_id;
+	uint16_t		tx_cfa_action;
+	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
-	struct bnxt	*parent_priv;
-	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
-	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct rte_eth_dev	*parent_dev;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t			dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct bnxt_rx_queue	**rx_queues;
+	unsigned int		rx_nr_rings;
+	unsigned int		tx_nr_rings;
+	uint64_t                tx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                tx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_bytes[BNXT_MAX_VF_REP_RINGS];
+};
+
+struct bnxt_vf_rep_tx_queue {
+	struct bnxt_tx_queue *txq;
+	struct bnxt_vf_representor *bp;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4911745..4202904 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -137,6 +137,7 @@ static void bnxt_cancel_fw_health_check(struct bnxt *bp);
 static int bnxt_restore_vlan_filters(struct bnxt *bp);
 static void bnxt_dev_recover(void *arg);
 static void bnxt_free_error_recovery_info(struct bnxt *bp);
+static void bnxt_free_rep_info(struct bnxt *bp);
 
 int is_bnxt_in_error(struct bnxt *bp)
 {
@@ -5243,7 +5244,7 @@ bnxt_init_locks(struct bnxt *bp)
 
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev)
 {
-	int rc;
+	int rc = 0;
 
 	rc = bnxt_init_fw(bp);
 	if (rc)
@@ -5642,6 +5643,8 @@ bnxt_uninit_locks(struct bnxt *bp)
 {
 	pthread_mutex_destroy(&bp->flow_lock);
 	pthread_mutex_destroy(&bp->def_cp_lock);
+	if (bp->rep_info)
+		pthread_mutex_destroy(&bp->rep_info->vfr_lock);
 }
 
 static int
@@ -5664,6 +5667,7 @@ bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev)
 
 	bnxt_uninit_locks(bp);
 	bnxt_free_flow_stats_info(bp);
+	bnxt_free_rep_info(bp);
 	rte_free(bp->ptp_cfg);
 	bp->ptp_cfg = NULL;
 	return rc;
@@ -5703,56 +5707,73 @@ static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-	struct rte_pci_device *pci_dev)
+static void bnxt_free_rep_info(struct bnxt *bp)
 {
-	char name[RTE_ETH_NAME_MAX_LEN];
-	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
-	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
-	uint16_t num_rep;
-	int i, ret = 0;
-	struct bnxt *backing_bp;
+	rte_free(bp->rep_info);
+	bp->rep_info = NULL;
+	rte_free(bp->cfa_code_map);
+	bp->cfa_code_map = NULL;
+}
 
-	if (pci_dev->device.devargs) {
-		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
-					    &eth_da);
-		if (ret)
-			return ret;
-	}
+static int bnxt_init_rep_info(struct bnxt *bp)
+{
+	int i = 0, rc;
 
-	num_rep = eth_da.nb_representor_ports;
-	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
-		    num_rep);
+	if (bp->rep_info)
+		return 0;
 
-	/* We could come here after first level of probe is already invoked
-	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
-	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
-	 */
-	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
-					 sizeof(struct bnxt),
-					 eth_dev_pci_specific_init, pci_dev,
-					 bnxt_dev_init, NULL);
+	bp->rep_info = rte_zmalloc("bnxt_rep_info",
+				   sizeof(bp->rep_info[0]) * BNXT_MAX_VF_REPS,
+				   0);
+	if (!bp->rep_info) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for rep info\n");
+		return -ENOMEM;
+	}
+	bp->cfa_code_map = rte_zmalloc("bnxt_cfa_code_map",
+				       sizeof(*bp->cfa_code_map) *
+				       BNXT_MAX_CFA_CODE, 0);
+	if (!bp->cfa_code_map) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for cfa_code_map\n");
+		bnxt_free_rep_info(bp);
+		return -ENOMEM;
+	}
 
-		if (ret || !num_rep)
-			return ret;
+	for (i = 0; i < BNXT_MAX_CFA_CODE; i++)
+		bp->cfa_code_map[i] = BNXT_VF_IDX_INVALID;
+
+	rc = pthread_mutex_init(&bp->rep_info->vfr_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Unable to initialize vfr_lock\n");
+		bnxt_free_rep_info(bp);
+		return rc;
 	}
+	return rc;
+}
+
+static int bnxt_rep_port_probe(struct rte_pci_device *pci_dev,
+			       struct rte_eth_devargs eth_da,
+			       struct rte_eth_dev *backing_eth_dev)
+{
+	struct rte_eth_dev *vf_rep_eth_dev;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct bnxt *backing_bp;
+	uint16_t num_rep;
+	int i, ret = 0;
 
+	num_rep = eth_da.nb_representor_ports;
 	if (num_rep > BNXT_MAX_VF_REPS) {
 		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
-			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
-		ret = -EINVAL;
-		return ret;
+			    num_rep, BNXT_MAX_VF_REPS);
+		return -EINVAL;
 	}
 
-	/* probe representor ports now */
-	if (!backing_eth_dev)
-		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = -ENODEV;
-		return ret;
+	if (num_rep > RTE_MAX_ETHPORTS) {
+		PMD_DRV_LOG(ERR,
+			    "nb_representor_ports = %d > %d MAX ETHPORTS\n",
+			    num_rep, RTE_MAX_ETHPORTS);
+		return -EINVAL;
 	}
+
 	backing_bp = backing_eth_dev->data->dev_private;
 
 	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
@@ -5761,14 +5782,17 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		/* Returning an error is not an option.
 		 * Applications are not handling this correctly
 		 */
-		return ret;
+		return 0;
 	}
 
-	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+	if (bnxt_init_rep_info(backing_bp))
+		return 0;
+
+	for (i = 0; i < num_rep; i++) {
 		struct bnxt_vf_representor representor = {
 			.vf_id = eth_da.representor_ports[i],
 			.switch_domain_id = backing_bp->switch_domain_id,
-			.parent_priv = backing_bp
+			.parent_dev = backing_eth_dev
 		};
 
 		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
@@ -5809,6 +5833,48 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return ret;
 }
 
+static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			  struct rte_pci_device *pci_dev)
+{
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev;
+	uint16_t num_rep;
+	int ret = 0;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after first level of probe is already invoked
+	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
+	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	}
+
+	/* probe representor ports now */
+	ret = bnxt_rep_port_probe(pci_dev, eth_da, backing_eth_dev);
+
+	return ret;
+}
+
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct rte_eth_dev *eth_dev;
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 7033d62..39f3d23 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -6,6 +6,11 @@
 #include "bnxt.h"
 #include "bnxt_ring.h"
 #include "bnxt_reps.h"
+#include "bnxt_rxq.h"
+#include "bnxt_rxr.h"
+#include "bnxt_txq.h"
+#include "bnxt_txr.h"
+#include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
@@ -13,25 +18,128 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_configure = bnxt_vf_rep_dev_configure_op,
 	.dev_start = bnxt_vf_rep_dev_start_op,
 	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.rx_queue_release = bnxt_vf_rep_rx_queue_release_op,
 	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.tx_queue_release = bnxt_vf_rep_tx_queue_release_op,
 	.link_update = bnxt_vf_rep_link_update_op,
 	.dev_close = bnxt_vf_rep_dev_close_op,
-	.dev_stop = bnxt_vf_rep_dev_stop_op
+	.dev_stop = bnxt_vf_rep_dev_stop_op,
+	.stats_get = bnxt_vf_rep_stats_get_op,
+	.stats_reset = bnxt_vf_rep_stats_reset_op,
 };
 
-static uint16_t
-bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
-		     __rte_unused struct rte_mbuf **rx_pkts,
-		     __rte_unused uint16_t nb_pkts)
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf)
 {
+	struct bnxt_sw_rx_bd *prod_rx_buf;
+	struct bnxt_rx_ring_info *rep_rxr;
+	struct bnxt_rx_queue *rep_rxq;
+	struct rte_eth_dev *vfr_eth_dev;
+	struct bnxt_vf_representor *vfr_bp;
+	uint16_t vf_id;
+	uint16_t mask;
+	uint8_t que;
+
+	vf_id = bp->cfa_code_map[cfa_code];
+	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
+	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
+		return 1;
+	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	if (!vfr_eth_dev)
+		return 1;
+	vfr_bp = vfr_eth_dev->data->dev_private;
+	if (vfr_bp->rx_cfa_code != cfa_code) {
+		/* cfa_code not meant for this VF rep!!?? */
+		return 1;
+	}
+	/* If rxq_id happens to be > max rep_queue, use rxq0 */
+	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
+	rep_rxq = vfr_bp->rx_queues[que];
+	rep_rxr = rep_rxq->rx_ring;
+	mask = rep_rxr->rx_ring_struct->ring_mask;
+
+	/* Put this mbuf on the RxQ of the Representor */
+	prod_rx_buf =
+		&rep_rxr->rx_buf_ring[rep_rxr->rx_prod++ & mask];
+	if (!prod_rx_buf->mbuf) {
+		prod_rx_buf->mbuf = mbuf;
+		vfr_bp->rx_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_pkts[que]++;
+	} else {
+		vfr_bp->rx_drop_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_drop_pkts[que]++;
+		rte_free(mbuf); /* Representor Rx ring full, drop pkt */
+	}
+
 	return 0;
 }
 
 static uint16_t
-bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
-		     __rte_unused struct rte_mbuf **tx_pkts,
+bnxt_vf_rep_rx_burst(void *rx_queue,
+		     struct rte_mbuf **rx_pkts,
+		     uint16_t nb_pkts)
+{
+	struct bnxt_rx_queue *rxq = rx_queue;
+	struct bnxt_sw_rx_bd *cons_rx_buf;
+	struct bnxt_rx_ring_info *rxr;
+	uint16_t nb_rx_pkts = 0;
+	uint16_t mask, i;
+
+	if (!rxq)
+		return 0;
+
+	rxr = rxq->rx_ring;
+	mask = rxr->rx_ring_struct->ring_mask;
+	for (i = 0; i < nb_pkts; i++) {
+		cons_rx_buf = &rxr->rx_buf_ring[rxr->rx_cons & mask];
+		if (!cons_rx_buf->mbuf)
+			return nb_rx_pkts;
+		rx_pkts[nb_rx_pkts] = cons_rx_buf->mbuf;
+		rx_pkts[nb_rx_pkts]->port = rxq->port_id;
+		cons_rx_buf->mbuf = NULL;
+		nb_rx_pkts++;
+		rxr->rx_cons++;
+	}
+
+	return nb_rx_pkts;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
 		     __rte_unused uint16_t nb_pkts)
 {
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+	struct bnxt_tx_queue *ptxq;
+	struct bnxt *parent;
+	struct  bnxt_vf_representor *vf_rep_bp;
+	int qid;
+	int rc;
+	int i;
+
+	if (!vfr_txq)
+		return 0;
+
+	qid = vfr_txq->txq->queue_id;
+	vf_rep_bp = vfr_txq->bp;
+	parent = vf_rep_bp->parent_dev->data->dev_private;
+	pthread_mutex_lock(&parent->rep_info->vfr_lock);
+	ptxq = parent->tx_queues[qid];
+
+	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+
+	for (i = 0; i < nb_pkts; i++) {
+		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
+		vf_rep_bp->tx_pkts[qid]++;
+	}
+
+	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
+	ptxq->tx_cfa_action = 0;
+	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
+
+	return rc;
+
 	return 0;
 }
 
@@ -45,7 +153,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
-	vf_rep_bp->parent_priv = rep_params->parent_priv;
+	vf_rep_bp->parent_dev = rep_params->parent_dev;
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	eth_dev->data->representor_id = rep_params->vf_id;
@@ -63,7 +171,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
 	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
 	/* Link state. Inherited from PF or trusted VF */
-	parent_bp = vf_rep_bp->parent_priv;
+	parent_bp = vf_rep_bp->parent_dev->data->dev_private;
 	link = &parent_bp->eth_dev->data->dev_link;
 
 	eth_dev->data->dev_link.link_speed = link->link_speed;
@@ -94,18 +202,18 @@ int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
 	uint16_t vf_id;
 
 	eth_dev->data->mac_addrs = NULL;
-
-	parent_bp = rep->parent_priv;
-	if (parent_bp) {
-		parent_bp->num_reps--;
-		vf_id = rep->vf_id;
-		if (parent_bp->rep_info) {
-			memset(&parent_bp->rep_info[vf_id], 0,
-			       sizeof(parent_bp->rep_info[vf_id]));
-			/* mark that this representor has been freed */
-		}
-	}
 	eth_dev->dev_ops = NULL;
+
+	parent_bp = rep->parent_dev->data->dev_private;
+	if (!parent_bp)
+		return 0;
+
+	parent_bp->num_reps--;
+	vf_id = rep->vf_id;
+	if (parent_bp->rep_info)
+		memset(&parent_bp->rep_info[vf_id], 0,
+		       sizeof(parent_bp->rep_info[vf_id]));
+		/* mark that this representor has been freed */
 	return 0;
 }
 
@@ -117,7 +225,7 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	struct rte_eth_link *link;
 	int rc;
 
-	parent_bp = rep->parent_priv;
+	parent_bp = rep->parent_dev->data->dev_private;
 	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
 
 	/* Link state. Inherited from PF or trusted VF */
@@ -132,16 +240,134 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
+static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already allocated in FW */
+	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * Alloc VF rep rules in CFA after default VNIC is created.
+	 * Otherwise the FW will create the VF-rep rules with
+	 * default drop action.
+	 */
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more/less the same job
+	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
+				     vfr->vf_id,
+				     &vfr->tx_cfa_action,
+				     &vfr->rx_cfa_code);
+	 */
+	if (!rc) {
+		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
+			    vfr->vf_id);
+	} else {
+		PMD_DRV_LOG(ERR,
+			    "Failed to alloc representor %d in FW\n",
+			    vfr->vf_id);
+	}
+
+	return rc;
+}
+
+static void bnxt_vf_rep_free_rx_mbufs(struct bnxt_vf_representor *rep_bp)
+{
+	struct bnxt_rx_queue *rxq;
+	unsigned int i;
+
+	for (i = 0; i < rep_bp->rx_nr_rings; i++) {
+		rxq = rep_bp->rx_queues[i];
+		bnxt_rx_queue_release_mbufs(rxq);
+	}
+}
+
 int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 {
-	bnxt_vf_rep_link_update_op(eth_dev, 1);
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int rc;
 
-	return 0;
+	rc = bnxt_vfr_alloc(rep_bp);
+
+	if (!rc) {
+		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
+		eth_dev->tx_pkt_burst = &bnxt_vf_rep_tx_burst;
+
+		bnxt_vf_rep_link_update_op(eth_dev, 1);
+	} else {
+		eth_dev->data->dev_link.link_status = 0;
+		bnxt_vf_rep_free_rx_mbufs(rep_bp);
+	}
+
+	return rc;
+}
+
+static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already freed in FW */
+	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more or less the same job
+	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
+				    vfr->vf_id);
+	 */
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to free representor %d in FW\n",
+			    vfr->vf_id);
+		return rc;
+	}
+
+	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
+	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
+		    vfr->vf_id);
+	vfr->tx_cfa_action = 0;
+	vfr->rx_cfa_code = 0;
+
+	return rc;
 }
 
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *vfr_bp = eth_dev->data->dev_private;
+
+	/* Avoid crashes as we are about to free queues */
+	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
+	eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+
+	bnxt_vfr_free(vfr_bp);
+
+	if (eth_dev->data->dev_started)
+		eth_dev->data->dev_link.link_status = 0;
+
+	bnxt_vf_rep_free_rx_mbufs(vfr_bp);
 }
 
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
@@ -159,7 +385,7 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	int rc = 0;
 
 	/* MAC Specifics */
-	parent_bp = rep_bp->parent_priv;
+	parent_bp = rep_bp->parent_dev->data->dev_private;
 	if (!parent_bp) {
 		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
 		return rc;
@@ -257,7 +483,13 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
 {
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+
 	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	rep_bp->rx_queues = (void *)eth_dev->data->rx_queues;
+	rep_bp->tx_nr_rings = eth_dev->data->nb_tx_queues;
+	rep_bp->rx_nr_rings = eth_dev->data->nb_rx_queues;
+
 	return 0;
 }
 
@@ -269,9 +501,94 @@ int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  rx_conf,
 				  __rte_unused struct rte_mempool *mp)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_rx_queue *parent_rxq;
+	struct bnxt_rx_queue *rxq;
+	struct bnxt_sw_rx_bd *buf_ring;
+	int rc = 0;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Rx ring %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_RX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid\n", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_rxq = parent_bp->rx_queues[queue_idx];
+	if (!parent_rxq) {
+		PMD_DRV_LOG(ERR, "Parent RxQ has not been configured yet\n");
+		return -EINVAL;
+	}
+
+	if (nb_desc != parent_rxq->nb_rx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d do not match parent rxq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->rx_queues) {
+		rxq = eth_dev->data->rx_queues[queue_idx];
+		if (rxq)
+			bnxt_rx_queue_release_op(rxq);
+	}
+
+	rxq = rte_zmalloc_socket("bnxt_vfr_rx_queue",
+				 sizeof(struct bnxt_rx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_rx_queue allocation failed!\n");
+		return -ENOMEM;
+	}
+
+	rxq->nb_rx_desc = nb_desc;
+
+	rc = bnxt_init_rx_ring_struct(rxq, socket_id);
+	if (rc)
+		goto out;
+
+	buf_ring = rte_zmalloc_socket("bnxt_rx_vfr_buf_ring",
+				      sizeof(struct bnxt_sw_rx_bd) *
+				      rxq->rx_ring->rx_ring_struct->ring_size,
+				      RTE_CACHE_LINE_SIZE, socket_id);
+	if (!buf_ring) {
+		PMD_DRV_LOG(ERR, "bnxt_rx_vfr_buf_ring allocation failed!\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	rxq->rx_ring->rx_buf_ring = buf_ring;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = eth_dev->data->port_id;
+	eth_dev->data->rx_queues[queue_idx] = rxq;
 
 	return 0;
+
+out:
+	if (rxq)
+		bnxt_rx_queue_release_op(rxq);
+
+	return rc;
+}
+
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue)
+{
+	struct bnxt_rx_queue *rxq = (struct bnxt_rx_queue *)rx_queue;
+
+	if (!rxq)
+		return;
+
+	bnxt_rx_queue_release_mbufs(rxq);
+
+	bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
+	bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
+	bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
+
+	rte_free(rxq);
 }
 
 int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
@@ -281,7 +598,112 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_tx_queue *parent_txq, *txq;
+	struct bnxt_vf_rep_tx_queue *vfr_txq;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Tx rings %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_TX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_txq = parent_bp->tx_queues[queue_idx];
+	if (!parent_txq) {
+		PMD_DRV_LOG(ERR, "Parent TxQ has not been configured yet\n");
+		return -EINVAL;
+	}
 
+	if (nb_desc != parent_txq->nb_tx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d do not match parent txq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->tx_queues) {
+		vfr_txq = eth_dev->data->tx_queues[queue_idx];
+		bnxt_vf_rep_tx_queue_release_op(vfr_txq);
+		vfr_txq = NULL;
+	}
+
+	vfr_txq = rte_zmalloc_socket("bnxt_vfr_tx_queue",
+				     sizeof(struct bnxt_vf_rep_tx_queue),
+				     RTE_CACHE_LINE_SIZE, socket_id);
+	if (!vfr_txq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_tx_queue allocation failed!");
+		return -ENOMEM;
+	}
+	txq = rte_zmalloc_socket("bnxt_tx_queue",
+				 sizeof(struct bnxt_tx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "bnxt_tx_queue allocation failed!");
+		rte_free(vfr_txq);
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->queue_id = queue_idx;
+	txq->port_id = eth_dev->data->port_id;
+	vfr_txq->txq = txq;
+	vfr_txq->bp = rep_bp;
+	eth_dev->data->tx_queues[queue_idx] = vfr_txq;
+
+	return 0;
+}
+
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue)
+{
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+
+	if (!vfr_txq)
+		return;
+
+	rte_free(vfr_txq->txq);
+	rte_free(vfr_txq);
+}
+
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		stats->obytes += rep_bp->tx_bytes[i];
+		stats->opackets += rep_bp->tx_pkts[i];
+		stats->ibytes += rep_bp->rx_bytes[i];
+		stats->ipackets += rep_bp->rx_pkts[i];
+		stats->imissed += rep_bp->rx_drop_pkts[i];
+
+		stats->q_ipackets[i] = rep_bp->rx_pkts[i];
+		stats->q_ibytes[i] = rep_bp->rx_bytes[i];
+		stats->q_opackets[i] = rep_bp->tx_pkts[i];
+		stats->q_obytes[i] = rep_bp->tx_bytes[i];
+		stats->q_errors[i] = rep_bp->rx_drop_pkts[i];
+	}
+
+	return 0;
+}
+
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		rep_bp->tx_pkts[i] = 0;
+		rep_bp->tx_bytes[i] = 0;
+		rep_bp->rx_pkts[i] = 0;
+		rep_bp->rx_bytes[i] = 0;
+		rep_bp->rx_drop_pkts[i] = 0;
+	}
 	return 0;
 }
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index f4c033a..c8a3c7d 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -9,6 +9,12 @@
 #include <rte_malloc.h>
 #include <rte_ethdev.h>
 
+#define BNXT_MAX_CFA_CODE               65536
+#define BNXT_VF_IDX_INVALID             0xffff
+
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
@@ -30,6 +36,11 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused unsigned int socket_id,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf);
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue);
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue);
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats);
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev);
 #endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 3bcc624..0ecf199 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -12,6 +12,7 @@
 #include <rte_memory.h>
 
 #include "bnxt.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxr.h"
 #include "bnxt_rxq.h"
@@ -538,7 +539,7 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp,
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
-			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
+		       struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
@@ -734,6 +735,20 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
+	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
+	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
+		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
+				   mbuf)) {
+			/* Return an error so that nb_rx_pkts is not
+			 * incremented.
+			 * This packet was meant for the representor, so
+			 * there is no need to account for it here or to
+			 * hand it to the parent's Rx burst function.
+			 */
+			rc = -ENODEV;
+		}
+	}
+
 next_rx:
 
 	*raw_cons = tmp_raw_cons;
@@ -750,6 +765,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint32_t raw_cons = cpr->cp_raw_cons;
 	uint32_t cons;
 	int nb_rx_pkts = 0;
+	int nb_rep_rx_pkts = 0;
 	struct rx_pkt_cmpl *rxcmp;
 	uint16_t prod = rxr->rx_prod;
 	uint16_t ag_prod = rxr->ag_prod;
@@ -783,6 +799,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				nb_rx_pkts++;
 			if (rc == -EBUSY)	/* partial completion */
 				break;
+			if (rc == -ENODEV)	/* completion for representor */
+				nb_rep_rx_pkts++;
 		} else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
 			evt =
 			bnxt_event_hwrm_resp_handler(rxq->bp,
@@ -801,7 +819,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 
 	cpr->cp_raw_cons = raw_cons;
-	if (!nb_rx_pkts && !evt) {
+	if (!nb_rx_pkts && !nb_rep_rx_pkts && !evt) {
 		/*
 		 * For PMD, there is no need to keep on pushing to REARM
 		 * the doorbell if there are no new completions
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 811dcd8..e60c97f 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -188,6 +188,7 @@ struct bnxt_sw_rx_bd {
 struct bnxt_rx_ring_info {
 	uint16_t		rx_prod;
 	uint16_t		ag_prod;
+	uint16_t                rx_cons; /* Needed for representor */
 	struct bnxt_db_info     rx_db;
 	struct bnxt_db_info     ag_db;
 
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 37a3f95..69ff89a 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -29,6 +29,7 @@ struct bnxt_tx_queue {
 	struct bnxt		*bp;
 	int			index;
 	int			tx_wake_thresh;
+	uint32_t                tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 1602140..d7e193d 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT))
+				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +184,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = 0;
+		cfa_action = txq->tx_cfa_action;
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 03/50] net/bnxt: add support to get FID, default vnic ID and svif of VF-Rep Endpoint
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 01/50] net/bnxt: Basic infrastructure support for VF representors Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 02/50] net/bnxt: Infrastructure support for VF-reps data path Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 04/50] net/bnxt: initialize parent PF information Somnath Kotur
                   ` (47 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Use 'first_vf_id' and the 'vf_id' that is input as part of adding
a representor to obtain the PCI function ID (FID) of the VF (the VFR
endpoint).
Use the FID as an input to the FUNC_QCFG HWRM cmd to obtain the
default vNIC ID of the VF.
Along with getting the default vNIC ID by supplying the FW FID of
the VF-rep endpoint to HWRM_FUNC_QCFG, obtain and store its
function svif.
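
A minimal standalone sketch of the derivation above; the helper names,
the main() harness and the two constants are illustrative stand-ins for
the HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_* macros, not driver code:

  #include <stdint.h>
  #include <stdio.h>

  /* Stand-ins for the SVIF_INFO valid/mask macros in the HSI header */
  #define SVIF_INFO_SVIF_VALID 0x8000u
  #define SVIF_INFO_SVIF_MASK  0x7fffu

  /* FW FID of the VF-rep endpoint: parent's first_vf_id plus the rep's vf_id */
  static uint16_t vfr_fw_fid(uint16_t first_vf_id, uint16_t vf_id)
  {
          return (uint16_t)(first_vf_id + vf_id);
  }

  /* Decode the svif_info word of an HWRM_FUNC_QCFG response */
  static int svif_decode(uint16_t svif_info, uint16_t *svif)
  {
          if (!(svif_info & SVIF_INFO_SVIF_VALID))
                  return -1;      /* firmware did not report a valid svif */
          *svif = svif_info & SVIF_INFO_SVIF_MASK;
          return 0;
  }

  int main(void)
  {
          uint16_t svif;
          uint16_t fid = vfr_fw_fid(16, 3);  /* e.g. first_vf_id 16, vf_id 3 */

          if (svif_decode(0x8005, &svif) == 0)
                  printf("fid=%u svif=%u\n", (unsigned)fid, (unsigned)svif);
          return 0;
  }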

Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |  3 +++
 drivers/net/bnxt/bnxt_hwrm.c | 27 +++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h |  4 ++++
 drivers/net/bnxt/bnxt_reps.c | 12 ++++++++++++
 4 files changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 443d9fe..7afbd5c 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -784,6 +784,9 @@ struct bnxt {
 struct bnxt_vf_representor {
 	uint16_t		switch_domain_id;
 	uint16_t		vf_id;
+	uint16_t		fw_fid;
+	uint16_t		dflt_vnic_id;
+	uint16_t		svif;
 	uint16_t		tx_cfa_action;
 	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 945bc90..ed42e58 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,33 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t svif_info;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	req.fid = rte_cpu_to_le_16(fid);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	if (vnic_id)
+		*vnic_id = rte_le_to_cpu_16(resp->dflt_vnic_id);
+
+	svif_info = rte_le_to_cpu_16(resp->svif_info);
+	if (svif && (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID))
+		*svif = svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 {
 	struct hwrm_port_mac_qcfg_input req = {0};
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 58b414d..8d19998 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -270,4 +270,8 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 				 enum bnxt_flow_dir dir,
 				 uint16_t cntr,
 				 uint16_t num_entries);
+int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
+			       uint16_t *vnic_id);
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif);
 #endif
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 39f3d23..b6964ab 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -150,6 +150,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 				 (struct bnxt_vf_representor *)params;
 	struct rte_eth_link *link;
 	struct bnxt *parent_bp;
+	int rc = 0;
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
@@ -179,6 +180,17 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_link.link_status = link->link_status;
 	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
 
+	vf_rep_bp->fw_fid = rep_params->vf_id + parent_bp->first_vf_id;
+	PMD_DRV_LOG(INFO, "vf_rep->fw_fid = %d\n", vf_rep_bp->fw_fid);
+	rc = bnxt_hwrm_get_dflt_vnic_svif(parent_bp, vf_rep_bp->fw_fid,
+					  &vf_rep_bp->dflt_vnic_id,
+					  &vf_rep_bp->svif);
+	if (rc)
+		PMD_DRV_LOG(ERR, "Failed to get default vnic id of VF\n");
+	else
+		PMD_DRV_LOG(INFO, "vf_rep->dflt_vnic_id = %d\n",
+			    vf_rep_bp->dflt_vnic_id);
+
 	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
 	bnxt_print_link_info(eth_dev);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 04/50] net/bnxt: initialize parent PF information
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (2 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 03/50] net/bnxt: add support to get FID, default vnic ID and svif of VF-Rep Endpoint Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 05/50] net/bnxt: modify ulp_port_db_dev_port_intf_update prototype Somnath Kotur
                   ` (46 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Lance Richardson <lance.richardson@broadcom.com>

Add support to query parent PF information (MAC address,
function ID, port ID and default VNIC) from firmware.

Current firmware returns zero for the parent default VNIC, so a
temporary Wh+-specific workaround is included until that can be
fixed.
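
The parent-VNIC choice with that workaround boils down to the
following; the function name and harness are illustrative, while the
constants mirror the hard-coded Wh+ values in the patch below:

  #include <stdint.h>
  #include <stdio.h>

  /*
   * Pick the parent default VNIC reported by HWRM_FUNC_QCFG, falling
   * back to the hard-coded Wh+ defaults when firmware reports zero.
   */
  static uint16_t parent_vnic_workaround(uint16_t fid, uint16_t vnic)
  {
          if (vnic != 0)
                  return vnic;            /* firmware reported a real VNIC */
          return (fid == 2) ? 0x100 : 1;  /* current Wh+ firmware defaults */
  }

  int main(void)
  {
          /* fid 2 with a zero VNIC falls back to 0x100 */
          printf("vnic=0x%x\n", (unsigned)parent_vnic_workaround(2, 0));
          return 0;
  }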

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kalesh Anakkur Purayil <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  9 +++++++++
 drivers/net/bnxt/bnxt_ethdev.c | 23 +++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 75 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 7afbd5c..2b87899 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -217,6 +217,14 @@ struct bnxt_child_vf_info {
 	bool			persist_stats;
 };
 
+struct bnxt_parent_info {
+#define	BNXT_PF_FID_INVALID	0xFFFF
+	uint16_t		fid;
+	uint16_t		vnic;
+	uint16_t		port_id;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
@@ -738,6 +746,7 @@ struct bnxt {
 #define BNXT_OUTER_TPID_BD_SHFT	16
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
+	struct bnxt_parent_info	*parent;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4202904..bf018be 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -97,6 +97,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+
 static const char *const bnxt_dev_args[] = {
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
@@ -173,6 +174,11 @@ uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 	return bnxt_rss_ctxts(bp) * BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
+static void bnxt_free_parent_info(struct bnxt *bp)
+{
+	rte_free(bp->parent);
+}
+
 static void bnxt_free_pf_info(struct bnxt *bp)
 {
 	rte_free(bp->pf);
@@ -223,6 +229,16 @@ static void bnxt_free_mem(struct bnxt *bp, bool reconfig)
 	bp->grp_info = NULL;
 }
 
+static int bnxt_alloc_parent_info(struct bnxt *bp)
+{
+	bp->parent = rte_zmalloc("bnxt_parent_info",
+				 sizeof(struct bnxt_parent_info), 0);
+	if (bp->parent == NULL)
+		return -ENOMEM;
+
+	return 0;
+}
+
 static int bnxt_alloc_pf_info(struct bnxt *bp)
 {
 	bp->pf = rte_zmalloc("bnxt_pf_info", sizeof(struct bnxt_pf_info), 0);
@@ -1322,6 +1338,7 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	bnxt_free_cos_queues(bp);
 	bnxt_free_link_info(bp);
 	bnxt_free_pf_info(bp);
+	bnxt_free_parent_info(bp);
 
 	eth_dev->dev_ops = NULL;
 	eth_dev->rx_pkt_burst = NULL;
@@ -5210,6 +5227,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_port_mac_qcfg(bp);
 
+	bnxt_hwrm_parent_pf_qcfg(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
@@ -5528,6 +5547,10 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 	if (rc)
 		goto error_free;
 
+	rc = bnxt_alloc_parent_info(bp);
+	if (rc)
+		goto error_free;
+
 	rc = bnxt_alloc_hwrm_resources(bp);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index ed42e58..347e1c7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,48 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	int rc;
+
+	if (!BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	if (!bp->parent)
+		return -EINVAL;
+
+	bp->parent->fid = BNXT_PF_FID_INVALID;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+
+	req.fid = rte_cpu_to_le_16(0xfffe); /* Request parent PF information. */
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	memcpy(bp->parent->mac_addr, resp->mac_address, RTE_ETHER_ADDR_LEN);
+	bp->parent->vnic = rte_le_to_cpu_16(resp->dflt_vnic_id);
+	bp->parent->fid = rte_le_to_cpu_16(resp->fid);
+	bp->parent->port_id = rte_le_to_cpu_16(resp->port_id);
+
+	/* FIXME: Temporary workaround - remove when firmware issue is fixed. */
+	if (bp->parent->vnic == 0) {
+		PMD_DRV_LOG(ERR, "Error: parent VNIC unavailable.\n");
+		/* Use hard-coded values appropriate for current Wh+ fw. */
+		if (bp->parent->fid == 2)
+			bp->parent->vnic = 0x100;
+		else
+			bp->parent->vnic = 1;
+	}
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 8d19998..ef89975 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -274,4 +274,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 			       uint16_t *vnic_id);
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 #endif
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 05/50] net/bnxt: modify ulp_port_db_dev_port_intf_update prototype
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (3 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 04/50] net/bnxt: initialize parent PF information Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 06/50] net/bnxt: get port & function related information Somnath Kotur
                   ` (45 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Modify ulp_port_db_dev_port_intf_update prototype to take
"struct rte_eth_dev *" as the second parameter.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    | 4 ++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 5 +++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.h | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 0c3c638..c7281ab 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -548,7 +548,7 @@ bnxt_ulp_init(struct bnxt *bp)
 		}
 
 		/* update the port database */
-		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 		if (rc) {
 			BNXT_TF_DBG(ERR,
 				    "Failed to update port database\n");
@@ -584,7 +584,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	}
 
 	/* update the port database */
-	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to update port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index e3b9242..66b5840 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -104,10 +104,11 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp)
+					 struct rte_eth_dev *eth_dev)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint32_t port_id = bp->eth_dev->data->port_id;
+	struct bnxt *bp = eth_dev->data->dev_private;
+	uint32_t port_id = eth_dev->data->port_id;
 	uint32_t ifindex;
 	struct ulp_interface_info *intf;
 	int32_t rc;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 271c29a..929a5a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -71,7 +71,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp);
+					 struct rte_eth_dev *eth_dev);
 
 /*
  * Api to get the ulp ifindex for a given device port.
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 06/50] net/bnxt: get port & function related information
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (4 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 05/50] net/bnxt: modify ulp_port_db_dev_port_intf_update prototype Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 07/50] net/bnxt: add support for bnxt_hwrm_port_phy_qcaps Somnath Kotur
                   ` (44 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Add helper functions to get port- and function-related information
such as parif, physical port id and vport id.
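
All of these helpers follow the same resolution pattern: if the port is
a VF representor, consult its parent device first, then derive the
value from either the PF or the trusted-VF parent information, with the
vport encoded as a one-hot bitmap of the physical port id. A condensed
standalone sketch of that pattern, using stand-in types rather than the
driver's structures:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Stand-in type: just enough state to show the lookup pattern */
  struct dev {
          bool is_representor;
          struct dev *parent;      /* valid only for representors */
          bool is_pf;
          uint16_t pf_port_id;     /* used when the function is a PF */
          uint16_t parent_port_id; /* used when it is a trusted VF  */
  };

  static uint16_t phy_port_id(const struct dev *d)
  {
          if (d->is_representor)
                  d = d->parent;   /* representors report their parent's port */
          return d->is_pf ? d->pf_port_id : d->parent_port_id;
  }

  /* vport is a one-hot bitmap keyed by the physical port id */
  static uint16_t vport(const struct dev *d)
  {
          return (uint16_t)(1 << phy_port_id(d));
  }

  int main(void)
  {
          struct dev pf = { .is_pf = true, .pf_port_id = 1 };
          struct dev rep = { .is_representor = true, .parent = &pf };

          printf("phy=%u vport=0x%x\n",
                 (unsigned)phy_port_id(&rep), (unsigned)vport(&rep));
          return 0;
  }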

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kalesh Anakkur Purayil <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  8 +++++
 drivers/net/bnxt/bnxt_ethdev.c           | 58 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h | 10 ++++++
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    | 10 ------
 4 files changed, 76 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 2b87899..0bdf8f5 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -855,6 +855,9 @@ extern const struct rte_flow_ops bnxt_flow_ops;
 	} \
 } while (0)
 
+#define	BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)	\
+		((eth_dev)->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+
 extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG_RAW(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
@@ -870,6 +873,11 @@ void bnxt_ulp_deinit(struct bnxt *bp);
 uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 uint16_t bnxt_get_fw_func_id(uint16_t port);
+uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_phy_port_id(uint16_t port);
+uint16_t bnxt_get_vport(uint16_t port);
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port);
 
 void bnxt_cancel_fc_thread(struct bnxt *bp);
 void bnxt_flow_cnt_alarm_cb(void *arg);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index bf018be..af88b36 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -28,6 +28,7 @@
 #include "bnxt_vnic.h"
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
+#include "bnxt_tf_common.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -5101,6 +5102,63 @@ bnxt_get_fw_func_id(uint16_t port)
 	return bp->fw_fid;
 }
 
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev))
+		return BNXT_ULP_INTF_TYPE_VF_REP;
+
+	bp = eth_dev->data->dev_private;
+	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
+			   : BNXT_ULP_INTF_TYPE_VF;
+}
+
+uint16_t
+bnxt_get_phy_port_id(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->pf->port_id : bp->parent->port_id;
+}
+
+uint16_t
+bnxt_get_parif(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->fw_fid - 1 : bp->parent->fid - 1;
+}
+
+uint16_t
+bnxt_get_vport(uint16_t port_id)
+{
+	return (1 << bnxt_get_phy_port_id(port_id));
+}
+
 static void bnxt_alloc_error_recovery_info(struct bnxt *bp)
 {
 	struct bnxt_error_recovery_info *info = bp->recovery_info;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f417579..f772d49 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -44,6 +44,16 @@ enum ulp_direction_type {
 	ULP_DIR_EGRESS,
 };
 
+/* enumeration of the interface types */
+enum bnxt_ulp_intf_type {
+	BNXT_ULP_INTF_TYPE_INVALID = 0,
+	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_VF,
+	BNXT_ULP_INTF_TYPE_PF_REP,
+	BNXT_ULP_INTF_TYPE_VF_REP,
+	BNXT_ULP_INTF_TYPE_LAST
+};
+
 struct bnxt_ulp_mark_tbl *
 bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 929a5a5..604c438 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -10,16 +10,6 @@
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
 
-/* enumeration of the interface types */
-enum bnxt_ulp_intf_type {
-	BNXT_ULP_INTF_TYPE_INVALID = 0,
-	BNXT_ULP_INTF_TYPE_PF = 1,
-	BNXT_ULP_INTF_TYPE_VF,
-	BNXT_ULP_INTF_TYPE_PF_REP,
-	BNXT_ULP_INTF_TYPE_VF_REP,
-	BNXT_ULP_INTF_TYPE_LAST
-};
-
 /* Structure for the Port database resource information. */
 struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 07/50] net/bnxt: add support for bnxt_hwrm_port_phy_qcaps
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (5 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 06/50] net/bnxt: get port & function related information Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 08/50] net/bnxt: modify port_db to store & retrieve more info Somnath Kotur
                   ` (43 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue HWRM_PORT_PHY_QCAPS to the firmware to get the physical
port count of the device.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh Anakkur Purayil <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c |  2 ++
 drivers/net/bnxt/bnxt_hwrm.c   | 22 ++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 26 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0bdf8f5..65862ab 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -747,6 +747,7 @@ struct bnxt {
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
 	struct bnxt_parent_info	*parent;
+	uint8_t			port_cnt;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index af88b36..697cd66 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5287,6 +5287,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_parent_pf_qcfg(bp);
 
+	bnxt_hwrm_port_phy_qcaps(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 347e1c7..e6a28d0 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1330,6 +1330,28 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	return rc;
 }
 
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp)
+{
+	int rc = 0;
+	struct hwrm_port_phy_qcaps_input req = {0};
+	struct hwrm_port_phy_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	HWRM_PREP(&req, HWRM_PORT_PHY_QCAPS, BNXT_USE_CHIMP_MB);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	bp->port_cnt = resp->port_cnt;
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 static bool bnxt_find_lossy_profile(struct bnxt *bp)
 {
 	int i = 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index ef89975..87cd407 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -275,4 +275,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
 #endif
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 08/50] net/bnxt: modify port_db to store & retrieve more info
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (6 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 07/50] net/bnxt: add support for bnxt_hwrm_port_phy_qcaps Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 09/50] net/bnxt: add support for Exact Match Somnath Kotur
                   ` (42 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Apart from func_svif, func_id & vnic, port_db now stores and
retrieves func_spif, func_parif, phy_port_id, port_svif, port_spif,
port_parif, port_vport. New helper functions have been added to
support the same.
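
The resulting lookups are two-level: per-interface data keyed by the
ulp ifindex, plus per-physical-port data keyed by that interface's
phy_port_id and selected by flow direction. A minimal sketch of that
shape (array sizes, the direction enum and the sample values are
placeholders, not the ULP definitions):

  #include <stdint.h>
  #include <stdio.h>

  enum dir { DIR_INGRESS, DIR_EGRESS };

  struct intf_info {                /* keyed by ulp ifindex */
          uint16_t func_svif;
          uint16_t phy_port_id;
  };

  struct phy_port_info {            /* keyed by physical port id */
          uint16_t port_svif;
  };

  #define MAX_INTFS 8
  #define MAX_PORTS 4

  static struct intf_info intf_list[MAX_INTFS] = {
          [1] = { .func_svif = 0x11, .phy_port_id = 0 },
  };
  static struct phy_port_info phy_port_list[MAX_PORTS] = {
          [0] = { .port_svif = 0x100 },
  };

  /* Egress flows use the function svif; ingress flows use the port svif */
  static int svif_get(uint32_t ifindex, enum dir dir, uint16_t *svif)
  {
          if (ifindex == 0 || ifindex >= MAX_INTFS)
                  return -1;

          if (dir == DIR_EGRESS) {
                  *svif = intf_list[ifindex].func_svif;
          } else {
                  uint16_t phy = intf_list[ifindex].phy_port_id;

                  *svif = phy_port_list[phy].port_svif;
          }
          return 0;
  }

  int main(void)
  {
          uint16_t svif;

          if (svif_get(1, DIR_INGRESS, &svif) == 0)
                  printf("ingress svif=0x%x\n", (unsigned)svif);
          return 0;
  }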

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 145 ++++++++++++++++++++++++++++------
 drivers/net/bnxt/tf_ulp/ulp_port_db.h |  72 +++++++++++++----
 2 files changed, 179 insertions(+), 38 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 66b5840..ea27ef4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -106,13 +106,12 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 					 struct rte_eth_dev *eth_dev)
 {
-	struct bnxt_ulp_port_db *port_db;
-	struct bnxt *bp = eth_dev->data->dev_private;
 	uint32_t port_id = eth_dev->data->port_id;
-	uint32_t ifindex;
+	struct ulp_phy_port_info *port_data;
+	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	uint32_t ifindex;
 	int32_t rc;
-	struct bnxt_vnic_info *vnic;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db) {
@@ -133,22 +132,22 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 
 	/* update the interface details */
 	intf = &port_db->ulp_intf_list[ifindex];
-	if (BNXT_PF(bp) || BNXT_VF(bp)) {
-		if (BNXT_PF(bp)) {
-			intf->type = BNXT_ULP_INTF_TYPE_PF;
-			intf->port_svif = bp->port_svif;
-		} else {
-			intf->type = BNXT_ULP_INTF_TYPE_VF;
-		}
-		intf->func_id = bp->fw_fid;
-		intf->func_svif = bp->func_svif;
-		vnic = BNXT_GET_DEFAULT_VNIC(bp);
-		if (vnic)
-			intf->default_vnic = vnic->fw_vnic_id;
-		intf->bp = bp;
-		memcpy(intf->mac_addr, bp->mac_addr, sizeof(intf->mac_addr));
-	} else {
-		BNXT_TF_DBG(ERR, "Invalid interface type\n");
+
+	intf->type = bnxt_get_interface_type(port_id);
+
+	intf->func_id = bnxt_get_fw_func_id(port_id);
+	intf->func_svif = bnxt_get_svif(port_id, 1);
+	intf->func_spif = bnxt_get_phy_port_id(port_id);
+	intf->func_parif = bnxt_get_parif(port_id);
+	intf->default_vnic = bnxt_get_vnic_id(port_id);
+	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+
+	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
+		port_data = &port_db->phy_port_list[intf->phy_port_id];
+		port_data->port_svif = bnxt_get_svif(port_id, 0);
+		port_data->port_spif = bnxt_get_phy_port_id(port_id);
+		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_vport = bnxt_get_vport(port_id);
 	}
 
 	return 0;
@@ -209,7 +208,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 }
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -225,16 +224,88 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS)
+	if (dir == ULP_DIR_EGRESS) {
 		*svif = port_db->ulp_intf_list[ifindex].func_svif;
-	else
-		*svif = port_db->ulp_intf_list[ifindex].port_svif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*svif = port_db->phy_port_list[phy_port_id].port_svif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *spif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*spif = port_db->phy_port_list[phy_port_id].port_spif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *parif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*parif = port_db->phy_port_list[phy_port_id].port_parif;
+	}
+
 	return 0;
 }
 
@@ -262,3 +333,29 @@ ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
 	return 0;
 }
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint16_t *vport)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+	*vport = port_db->phy_port_list[phy_port_id].port_vport;
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 604c438..87de3bc 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -15,11 +15,17 @@ struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
 	uint16_t		func_id;
 	uint16_t		func_svif;
-	uint16_t		port_svif;
+	uint16_t		func_spif;
+	uint16_t		func_parif;
 	uint16_t		default_vnic;
-	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
-	/* back pointer to the bnxt driver, it is null for rep ports */
-	struct bnxt		*bp;
+	uint16_t		phy_port_id;
+};
+
+struct ulp_phy_port_info {
+	uint16_t	port_svif;
+	uint16_t	port_spif;
+	uint16_t	port_parif;
+	uint16_t	port_vport;
 };
 
 /* Structure for the Port database */
@@ -29,6 +35,7 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
 };
 
 /*
@@ -74,8 +81,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
-				  uint32_t port_id,
-				  uint32_t *ifindex);
+				  uint32_t port_id, uint32_t *ifindex);
 
 /*
  * Api to get the function id for a given ulp ifindex.
@@ -88,11 +94,10 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex,
-			    uint16_t *func_id);
+			    uint32_t ifindex, uint16_t *func_id);
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -103,9 +108,36 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
-		     uint32_t ifindex,
-		     uint32_t dir,
-		     uint16_t *svif);
+		     uint32_t ifindex, uint32_t dir, uint16_t *svif);
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex, uint32_t dir, uint16_t *spif);
+
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint32_t dir, uint16_t *parif);
 
 /*
  * Api to get the vnic id for a given ulp ifindex.
@@ -118,7 +150,19 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex,
-			     uint16_t *vnic);
+			     uint32_t ifindex, uint16_t *vnic);
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex,	uint16_t *vport);
 
 #endif /* _ULP_PORT_DB_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 09/50] net/bnxt: add support for Exact Match
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (7 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 08/50] net/bnxt: modify port_db to store & retrieve more info Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 10/50] net/bnxt: modify EM insert and delete to use HWRM direct Somnath Kotur
                   ` (41 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add Exact Match support
- Create EM table pool of memory indices (a minimal sketch of the
  index-pool idea follows below)
- Add an API to insert exact match internal entries
- Send EM internal insert and delete requests to firmware
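
The EM table pool mentioned above is essentially a LIFO free list of
table indices: insert pops a free index, delete pushes it back. A small
self-contained sketch of that idea (this is only the shape of it, not
the tf_core stack implementation):

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  struct index_pool {
          uint32_t *items;  /* free indices, LIFO        */
          int top;          /* -1 when the pool is empty */
          int capacity;
  };

  static int pool_init(struct index_pool *p, int capacity)
  {
          p->items = malloc(sizeof(*p->items) * capacity);
          if (p->items == NULL)
                  return -1;
          p->capacity = capacity;
          p->top = -1;
          /* Seed the pool with every index of the EM table */
          for (int i = 0; i < capacity; i++)
                  p->items[++p->top] = (uint32_t)i;
          return 0;
  }

  static int pool_alloc(struct index_pool *p, uint32_t *index)
  {
          if (p->top < 0)
                  return -1;              /* no free index: table is full */
          *index = p->items[p->top--];
          return 0;
  }

  static int pool_free(struct index_pool *p, uint32_t index)
  {
          if (p->top + 1 >= p->capacity)
                  return -1;              /* more frees than allocs */
          p->items[++p->top] = index;
          return 0;
  }

  int main(void)
  {
          struct index_pool p;
          uint32_t idx;

          if (pool_init(&p, 4) != 0)
                  return 1;
          if (pool_alloc(&p, &idx) == 0) {
                  printf("allocated EM index %u\n", (unsigned)idx);
                  pool_free(&p, idx);
          }
          free(p.items);
          return 0;
  }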

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3095 +++++++++++++++++++++----
 drivers/net/bnxt/tf_core/hwrm_tf.h            |    9 +
 drivers/net/bnxt/tf_core/lookup3.h            |    1 -
 drivers/net/bnxt/tf_core/stack.c              |    8 +
 drivers/net/bnxt/tf_core/stack.h              |   10 +
 drivers/net/bnxt/tf_core/tf_core.c            |  144 +-
 drivers/net/bnxt/tf_core/tf_core.h            |  383 +--
 drivers/net/bnxt/tf_core/tf_em.c              |   98 +-
 drivers/net/bnxt/tf_core/tf_em.h              |   31 +
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   16 +
 drivers/net/bnxt/tf_core/tf_msg.c             |   86 +-
 drivers/net/bnxt/tf_core/tf_msg.h             |   13 +
 drivers/net/bnxt/tf_core/tf_session.h         |   18 +
 drivers/net/bnxt/tf_core/tf_tbl.c             |  123 +-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   57 +-
 drivers/net/bnxt/tf_core/tfp.h                |  123 +-
 16 files changed, 3511 insertions(+), 704 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 7e30c9f..e51f42f 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -611,6 +611,10 @@ struct cmd_nums {
 	#define HWRM_FUNC_VF_BW_QCFG                      UINT32_C(0x196)
 	/* Queries pf ids belong to specified host(s) */
 	#define HWRM_FUNC_HOST_PF_IDS_QUERY               UINT32_C(0x197)
+	/* Queries extended stats per function */
+	#define HWRM_FUNC_QSTATS_EXT                      UINT32_C(0x198)
+	/* Queries extended statistics context */
+	#define HWRM_STAT_EXT_CTX_QUERY                   UINT32_C(0x199)
 	/* Experimental */
 	#define HWRM_SELFTEST_QLIST                       UINT32_C(0x200)
 	/* Experimental */
@@ -647,41 +651,49 @@ struct cmd_nums {
 	/* Experimental */
 	#define HWRM_TF_SESSION_ATTACH                    UINT32_C(0x2c7)
 	/* Experimental */
-	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2c8)
+	#define HWRM_TF_SESSION_REGISTER                  UINT32_C(0x2c8)
 	/* Experimental */
-	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2c9)
+	#define HWRM_TF_SESSION_UNREGISTER                UINT32_C(0x2c9)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2ca)
+	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2ca)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cb)
+	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2cb)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2cc)
+	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2cc)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cd)
+	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cd)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2d0)
+	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2ce)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2d1)
+	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cf)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2da)
+	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2da)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2db)
+	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2db)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2dc)
+	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2e4)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2dd)
+	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2e5)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2de)
+	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2e6)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2df)
+	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2e7)
 	/* Experimental */
-	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2ee)
+	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2e8)
 	/* Experimental */
-	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2ef)
+	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2e9)
 	/* Experimental */
-	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2f0)
+	#define HWRM_TF_EM_INSERT                         UINT32_C(0x2ea)
 	/* Experimental */
-	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2f1)
+	#define HWRM_TF_EM_DELETE                         UINT32_C(0x2eb)
+	/* Experimental */
+	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2f8)
+	/* Experimental */
+	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2f9)
+	/* Experimental */
+	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2fa)
+	/* Experimental */
+	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2fb)
 	/* Experimental */
 	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
@@ -715,6 +727,13 @@ struct cmd_nums {
 	#define HWRM_DBG_CRASHDUMP_ERASE                  UINT32_C(0xff1e)
 	/* Send driver debug information to firmware */
 	#define HWRM_DBG_DRV_TRACE                        UINT32_C(0xff1f)
+	/* Query debug capabilities of firmware */
+	#define HWRM_DBG_QCAPS                            UINT32_C(0xff20)
+	/* Retrieve debug settings of firmware */
+	#define HWRM_DBG_QCFG                             UINT32_C(0xff21)
+	/* Set destination parameters for crashdump medium */
+	#define HWRM_DBG_CRASHDUMP_MEDIUM_CFG             UINT32_C(0xff22)
+	#define HWRM_NVM_REQ_ARBITRATION                  UINT32_C(0xffed)
 	/* Experimental */
 	#define HWRM_NVM_FACTORY_DEFAULTS                 UINT32_C(0xffee)
 	#define HWRM_NVM_VALIDATE_OPTION                  UINT32_C(0xffef)
@@ -914,8 +933,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 30
-#define HWRM_VERSION_STR "1.10.1.30"
+#define HWRM_VERSION_RSVD 45
+#define HWRM_VERSION_STR "1.10.1.45"
 
 /****************
  * hwrm_ver_get *
@@ -2293,6 +2312,35 @@ struct cmpl_base {
 	 */
 	#define CMPL_BASE_TYPE_TX_L2             UINT32_C(0x0)
 	/*
+	 * NO-OP completion:
+	 * Completion of NO-OP. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_NO_OP             UINT32_C(0x1)
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of coalesced TX packet. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_COAL        UINT32_C(0x2)
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of PTP TX packet. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_PTP         UINT32_C(0x3)
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion of and L2 RX packet. Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_TPA_START_V2   UINT32_C(0xd)
+	/*
+	 * RX L2 V2 completion:
+	 * Completion of and L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_L2_V2          UINT32_C(0xf)
+	/*
 	 * RX L2 completion:
 	 * Completion of and L2 RX packet. Length = 32B
 	 */
@@ -2322,6 +2370,24 @@ struct cmpl_base {
 	 */
 	#define CMPL_BASE_TYPE_STAT_EJECT        UINT32_C(0x1a)
 	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by
+	 * the Primate and processed by the VEE hardware to ensure that
+	 * all completions on a VEE function have been processed by the
+	 * VEE hardware before FLR process is completed.
+	 */
+	#define CMPL_BASE_TYPE_VEE_FLUSH         UINT32_C(0x1c)
+	/*
+	 * Mid Path Short Completion :
+	 * Completion of a Mid Path Command. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_SHORT    UINT32_C(0x1e)
+	/*
+	 * Mid Path Long Completion:
+	 * Completion of a Mid Path Command. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_LONG     UINT32_C(0x1f)
+	/*
 	 * HWRM Command Completion:
 	 * Completion of an HWRM command.
 	 */
@@ -2398,7 +2464,9 @@ struct tx_cmpl {
 	uint16_t	unused_0;
 	/*
 	 * This is a copy of the opaque field from the first TX BD of this
-	 * transmitted packet.
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
 	 */
 	uint32_t	opaque;
 	uint16_t	errors_v;
@@ -2407,58 +2475,352 @@ struct tx_cmpl {
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	#define TX_CMPL_V                              UINT32_C(0x1)
-	#define TX_CMPL_ERRORS_MASK                    UINT32_C(0xfffe)
-	#define TX_CMPL_ERRORS_SFT                     1
+	#define TX_CMPL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_ERRORS_SFT                         1
 	/*
 	 * This error indicates that there was some sort of problem
 	 * with the BDs for the packet.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK        UINT32_C(0xe)
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT             1
 	/* No error */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR      (UINT32_C(0x0) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
 	/*
 	 * Bad Format:
 	 * BDs were not formatted correctly.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT       (UINT32_C(0x2) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
 	#define TX_CMPL_ERRORS_BUFFER_ERROR_LAST \
 		TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT
 	/*
 	 * When this bit is '1', it indicates that the length of
 	 * the packet was zero. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT          UINT32_C(0x10)
+	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
 	/*
 	 * When this bit is '1', it indicates that the packet
 	 * was longer than the programmed limit in TDI. No
 	 * packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH      UINT32_C(0x20)
+	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
 	/*
 	 * When this bit is '1', it indicates that one or more of the
 	 * BDs associated with this packet generated a PCI error.
 	 * This probably means the address was not valid.
 	 */
-	#define TX_CMPL_ERRORS_DMA_ERROR                UINT32_C(0x40)
+	#define TX_CMPL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
 	/*
 	 * When this bit is '1', it indicates that the packet was longer
 	 * than indicated by the hint. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_HINT_TOO_SHORT           UINT32_C(0x80)
+	#define TX_CMPL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
 	/*
 	 * When this bit is '1', it indicates that the packet was
 	 * dropped due to Poison TLP error on one or more of the
 	 * TLPs in the PXP completion.
 	 */
-	#define TX_CMPL_ERRORS_POISON_TLP_ERROR         UINT32_C(0x100)
+	#define TX_CMPL_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such completions
+	 * are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
 	/* unused2 is 16 b */
 	uint16_t	unused_1;
 	/* unused3 is 32 b */
 	uint32_t	unused_2;
 } __rte_packed;
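
For context, a minimal sketch of how a consumer might validate a tx_cmpl record and decode its error bits, assuming <rte_byteorder.h> and the definitions above are in scope; the helper name and the phase-tracking convention are illustrative only, not part of the HWRM interface:

static inline int bnxt_sketch_tx_cmpl_status(const struct tx_cmpl *txc,
					     uint16_t expected_valid)
{
	uint16_t errors_v = rte_le_to_cpu_16(txc->errors_v);

	/*
	 * The record belongs to the current pass only if the valid bit
	 * matches the phase the consumer expects for this ring wrap.
	 */
	if ((errors_v & TX_CMPL_V) != expected_valid)
		return -1;	/* not written by the NIC yet */

	/* Any bit set in the errors field means the packet was not sent cleanly. */
	if (errors_v & TX_CMPL_ERRORS_MASK)
		return 1;	/* completed with error */

	return 0;		/* completed successfully */
}
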
 
+/* tx_cmpl_coal (size:128b/16B) */
+struct tx_cmpl_coal {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_COAL_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_COAL_TYPE_SFT        0
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of TX packet. Length = 16B
+	 */
+	#define TX_CMPL_COAL_TYPE_TX_L2_COAL   UINT32_C(0x2)
+	#define TX_CMPL_COAL_TYPE_LAST        TX_CMPL_COAL_TYPE_TX_L2_COAL
+	#define TX_CMPL_COAL_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_COAL_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_COAL_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had no push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_COAL_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of the packet
+	 * which corresponds with the reported sq_cons_idx. Note that, with
+	 * coalesced completions, completions are generated for only some of the
+	 * packets. The driver will see the opaque field for only those packets.
+	 * Note that, if the packet was described by a short CSO or short CSO
+	 * inline BD, then the 16-bit opaque field from the short CSO BD will
+	 * appear in the bottom 16 bits of this field. For TX rings with
+	 * completion coalescing enabled (which would use the coalesced
+	 * completion record), it is suggested that the driver populate the
+	 * opaque field to indicate the specific TX ring with which the
+	 * completion is associated, then utilize the opaque and sq_cons_idx
+	 * fields in the coalesced completion record to determine the specific
+	 * packets that are to be completed on that ring.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_COAL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_COAL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define TX_CMPL_COAL_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_COAL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_COAL_ERRORS_POISON_TLP_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_COAL_ERRORS_INTERNAL_ERROR \
+		UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_COAL_ERRORS_TIMESTAMP_INVALID_ERROR \
+		UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	uint32_t	sq_cons_idx;
+	/*
+	 * This value is the SQ index for the start of the packet following the
+	 * last completed packet.
+	 */
+	#define TX_CMPL_COAL_SQ_CONS_IDX_MASK UINT32_C(0xffffff)
+	#define TX_CMPL_COAL_SQ_CONS_IDX_SFT 0
+} __rte_packed;
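
A hedged sketch of the usage suggested in the opaque description above: complete every packet on the ring identified by opaque whose SQ index precedes the reported sq_cons_idx. The ring-identification scheme and the completion callback are hypothetical:

static inline void
bnxt_sketch_tx_coal_reclaim(const struct tx_cmpl_coal *coal,
			    void (*complete_up_to)(uint32_t ring_id,
						   uint32_t sq_cons))
{
	/* Assumes the driver stored a ring identifier in the opaque field. */
	uint32_t ring_id = rte_le_to_cpu_32(coal->opaque);
	uint32_t sq_cons = rte_le_to_cpu_32(coal->sq_cons_idx) &
			   TX_CMPL_COAL_SQ_CONS_IDX_MASK;

	/* Complete all packets up to, but not including, sq_cons. */
	complete_up_to(ring_id, sq_cons);
}
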
+
+/* tx_cmpl_ptp (size:128b/16B) */
+struct tx_cmpl_ptp {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_PTP_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_PTP_TYPE_SFT        0
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of TX packet. Length = 32B
+	 */
+	#define TX_CMPL_PTP_TYPE_TX_L2_PTP    UINT32_C(0x2)
+	#define TX_CMPL_PTP_TYPE_LAST        TX_CMPL_PTP_TYPE_TX_L2_PTP
+	#define TX_CMPL_PTP_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_PTP_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_PTP_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had no push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_PTP_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of this
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_PTP_V                                  UINT32_C(0x1)
+	#define TX_CMPL_PTP_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_PTP_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_PTP_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_PTP_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped due
+	 * to a transient internal error in TDC. The packet or LSO can be
+	 * retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_PTP_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_PTP_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	/*
+	 * This is the timestamp value (lower 32 bits) read from PM for the PTP
+	 * timestamp enabled packet.
+	 */
+	uint32_t	timestamp_lo;
+} __rte_packed;
+
+/* tx_cmpl_ptp_hi (size:128b/16B) */
+struct tx_cmpl_ptp_hi {
+	/*
+	 * This is the upper portion of the timestamp value read from PM for
+	 * the PTP timestamp enabled packet.
+	 */
+	uint16_t	timestamp_hi[3];
+	uint16_t	reserved16;
+	uint64_t	v2;
+	/*
+	 * This value is written by the NIC such that it will be different for
+	 * each pass through the completion queue.The even passes will write 1.
+	 * The odd passes will write 0
+	 */
+	#define TX_CMPL_PTP_HI_V2     UINT32_C(0x1)
+} __rte_packed;
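
A minimal sketch of guarding a PTP TX timestamp read with the timestamp-invalid bit defined above; how timestamp_lo combines with the timestamp_hi words of the second 16-byte half is hardware specific and not attempted here. The helper name is illustrative:

static inline int bnxt_sketch_ptp_tx_ts_lo(const struct tx_cmpl_ptp *ptp,
					   uint32_t *ts_lo)
{
	uint16_t errors_v = rte_le_to_cpu_16(ptp->errors_v);

	/* timestamp_lo/timestamp_hi are only meaningful when this bit is 0. */
	if (errors_v & TX_CMPL_PTP_ERRORS_TIMESTAMP_INVALID_ERROR)
		return -1;

	*ts_lo = rte_le_to_cpu_32(ptp->timestamp_lo);
	return 0;
}
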
+
 /* rx_pkt_cmpl (size:128b/16B) */
 struct rx_pkt_cmpl {
 	uint16_t	flags_type;
@@ -3003,12 +3365,8 @@ struct rx_pkt_cmpl_hi {
 	#define RX_PKT_CMPL_REORDER_SFT 0
 } __rte_packed;
 
-/*
- * This TPA completion structure is used on devices where the
- * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
- */
-/* rx_tpa_start_cmpl (size:128b/16B) */
-struct rx_tpa_start_cmpl {
+/* rx_pkt_v2_cmpl (size:128b/16B) */
+struct rx_pkt_v2_cmpl {
 	uint16_t	flags_type;
 	/*
 	 * This field indicates the exact type of the completion.
@@ -3017,84 +3375,143 @@ struct rx_tpa_start_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	#define RX_PKT_V2_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_PKT_V2_CMPL_TYPE_SFT                       0
 	/*
-	 * RX L2 TPA Start Completion:
-	 * Completion at the beginning of a TPA operation.
-	 * Length = 32B
+	 * RX L2 V2 completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
-	#define RX_TPA_START_CMPL_TYPE_LAST \
-		RX_TPA_START_CMPL_TYPE_RX_TPA_START
-	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_START_CMPL_FLAGS_SFT                6
-	/* This bit will always be '0' for TPA start completions. */
-	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_PKT_V2_CMPL_TYPE_RX_L2_V2                    UINT32_C(0xf)
+	#define RX_PKT_V2_CMPL_TYPE_LAST \
+		RX_PKT_V2_CMPL_TYPE_RX_L2_V2
+	#define RX_PKT_V2_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_PKT_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Normal:
+	 * Packet was placed using normal algorithm.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_NORMAL \
+		(UINT32_C(0x0) << 7)
 	/*
 	 * Jumbo:
-	 * TPA Packet was placed using jumbo algorithm. This means
-	 * that the first buffer will be filled with data before
-	 * moving to aggregation buffers. Each aggregation buffer
-	 * will be filled before moving to the next aggregation
-	 * buffer.
+	 * Packet was placed using jumbo algorithm.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
 		(UINT32_C(0x1) << 7)
 	/*
 	 * Header/Data Separation:
 	 * Packet was placed using Header/Data separation algorithm.
 	 * The separation location is indicated by the itype field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
 	/*
-	 * GRO/Jumbo:
-	 * Packet will be placed using GRO/Jumbo where the first
-	 * packet is filled with data. Subsequent packets will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * Truncation:
+	 * Packet was placed using truncation algorithm. The
+	 * placed (truncated) length is indicated in the payload_offset
+	 * field. The original length is indicated in the len field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
-		(UINT32_C(0x5) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION \
+		(UINT32_C(0x3) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_PKT_V2_CMPL_FLAGS_RSS_VALID                 UINT32_C(0x400)
 	/*
-	 * GRO/Header-Data Separation:
-	 * Packet will be placed using GRO/HDS where the header
-	 * is in the first packet.
-	 * Payload of each packet will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
-		(UINT32_C(0x6) << 7)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* This bit is '1' if the RSS field in this completion is valid. */
-	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
-	/* unused is 1 b */
-	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	#define RX_PKT_V2_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_MASK                UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * Not Known:
+	 * Indicates that the packet type was not known.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_NOT_KNOWN \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * IP Packet:
+	 * Indicates that the packet was an IP packet, but further
+	 * classification was not possible.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_IP \
+		(UINT32_C(0x1) << 12)
 	/*
 	 * TCP Packet:
 	 * Indicates that the packet was IP and TCP.
+	 * This indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_TCP \
 		(UINT32_C(0x2) << 12)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
-		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
 	/*
-	 * This value indicates the amount of packet data written to the
-	 * buffer the opaque field in this completion corresponds to.
+	 * UDP Packet:
+	 * Indicates that the packet was IP and UDP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_UDP \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * FCoE Packet:
+	 * Indicates that the packet was recognized as a FCoE.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_FCOE \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * RoCE Packet:
+	 * Indicates that the packet was recognized as a RoCE.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ROCE \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * ICMP Packet:
+	 * Indicates that the packet was recognized as ICMP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ICMP \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * PtP packet wo/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_WO_TIMESTAMP \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * PtP packet w/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet and that a timestamp was taken for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP \
+		(UINT32_C(0x9) << 12)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP
+	/*
+	 * This is the length of the data for the packet stored in the
+	 * buffer(s) identified by the opaque value. This includes
+	 * the packet BD and any associated buffer BDs. This does not include
+	 * the length of any data placed in aggregation BDs.
 	 */
 	uint16_t	len;
 	/*
@@ -3102,19 +3519,597 @@ struct rx_tpa_start_cmpl {
 	 * corresponds to.
 	 */
 	uint32_t	opaque;
+	uint8_t	agg_bufs_v1;
 	/*
 	 * This value is written by the NIC such that it will be different
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	uint8_t	v1;
+	#define RX_PKT_V2_CMPL_V1           UINT32_C(0x1)
 	/*
-	 * This value is written by the NIC such that it will be different
-	 * for each pass through the completion queue. The even passes
-	 * will write 1. The odd passes will write 0.
+	 * This value is the number of aggregation buffers that follow this
+	 * entry in the completion ring that are a part of this packet.
+	 * If the value is zero, then the packet is completely contained
+	 * in the buffer space provided for the packet in the RX ring.
 	 */
-	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
-	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
+	#define RX_PKT_V2_CMPL_AGG_BUFS_MASK UINT32_C(0x3e)
+	#define RX_PKT_V2_CMPL_AGG_BUFS_SFT 1
+	/* unused1 is 2 b */
+	#define RX_PKT_V2_CMPL_UNUSED1_MASK UINT32_C(0xc0)
+	#define RX_PKT_V2_CMPL_UNUSED1_SFT  6
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extract_op[1:0],rss_profile_id[4:0],tuple_extract_op[2]}.
+	 *
+	 * The value of tuple_extract_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that 4-tuples values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	uint16_t	metadata1_payload_offset;
+	/*
+	 * This is data from the CFA as indicated by the meta_format field.
+	 * If truncation placement is not used, this value indicates the offset
+	 * in bytes from the beginning of the packet where the inner payload
+	 * starts. This value is valid for TCP, UDP, FCoE, and RoCE packets. If
+	 * truncation placement is used, this value represents the placed
+	 * (truncated) length of the packet.
+	 */
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_MASK    UINT32_C(0x1ff)
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_SFT     0
+	/* This is data from the CFA as indicated by the meta_format field. */
+	#define RX_PKT_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN valid. */
+	#define RX_PKT_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC. When vee_cmpl_mode
+	 * is set in VNIC context, this is the lower 32b of the host address
+	 * from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
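
A hedged sketch of unpacking the rss_hash_type byte described above, assuming the packed layout {tuple_extract_op[1:0], rss_profile_id[4:0], tuple_extract_op[2]} is read MSB first; the helper name is illustrative:

static inline void bnxt_sketch_rss_type_decode(uint8_t rss_hash_type,
					       uint8_t *tuple_extract_op,
					       uint8_t *rss_profile_id)
{
	/*
	 * Assumed bit positions: bits [7:6] carry tuple_extract_op[1:0],
	 * bits [5:1] carry rss_profile_id[4:0], bit [0] carries
	 * tuple_extract_op[2].
	 */
	*tuple_extract_op = ((rss_hash_type & 0x1) << 2) |
			    ((rss_hash_type >> 6) & 0x3);
	*rss_profile_id = (rss_hash_type >> 1) & 0x1f;
}
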
+
+/* Last 16 bytes of RX Packet V2 Completion Record */
+/* rx_pkt_v2_cmpl_hi (size:128b/16B) */
+struct rx_pkt_v2_cmpl_hi {
+	uint32_t	flags2;
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:-
+	 * ip_cs_ok[2:0] = The number of header groups with a valid IP checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. -
+	 * l4_cs_ok[5:3] = The number of header groups with a valid L4 checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. When this
+	 * bit is '1', the cs_ok field has the following definition: -
+	 * hdr_cnt[2:0] = The number of header groups that were parsed by the
+	 * chip and passed in the delivered packet. - ip_cs_all_ok[3] = This bit
+	 * will be '1' if all the parsed header groups with an IP checksum are
+	 * valid. - l4_cs_all_ok[4] = This bit will be '1' if all the parsed
+	 * header groups with an L4 checksum are valid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information: - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],
+	 * de, vid[11:0]} The metadata2 field contains the table scope
+	 * and action record pointer. - metadata2[25:0] contains the
+	 * action record pointer. - metadata2[31:26] contains the table
+	 * scope.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_LAST \
+		RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_PKT_V2_CMPL_HI_V2 \
+		UINT32_C(0x1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_SFT                               1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet that was found after part of the
+	 * packet was already placed. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_SFT                   1
+	/* No buffer error */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit: Packet did not fit into packet buffer provided.
+	 * For regular placement, this means the packet did not fit in
+	 * the buffer provided. For HDS and jumbo placement, this means
+	 * that the packet could not be placed into 8 physical buffers
+	 * (if fixed-size buffers are used), or that the packet could
+	 * not be placed in the number of physical buffers configured
+	 * for the VNIC (if variable-size buffers are used)
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Not On Chip: All BDs needed for the packet were not on-chip
+	 * when the packet arrived. For regular placement, this error is
+	 * not valid. For HDS and jumbo placement, this means that not
+	 * enough agg BDs were posted to place the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NOT_ON_CHIP \
+		(UINT32_C(0x2) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This indicates that there was an error in the outer tunnel
+	 * portion of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_MASK \
+		UINT32_C(0x70)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_SFT                   4
+	/*
+	 * No additional error occurred on the outer tunnel portion
+	 * of the packet or the packet does not have an outer tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the outer tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * Indicates that header length is out of range in the outer
+	 * tunnel header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the outer tunnel L3 header length. Valid for IPv4 or
+	 * IPv6 outer tunnel packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the outer tunnel UDP header length for a outer
+	 * tunnel UDP packet that is not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 4)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has
+	 * failed (e.g. TTL = 0) in the outer tunnel header. Valid for
+	 * IPv4 and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_TTL \
+		(UINT32_C(0x5) << 4)
+	/*
+	 * Indicates that the IP checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_CS_ERROR \
+		(UINT32_C(0x6) << 4)
+	/*
+	 * Indicates that the L4 checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR \
+		(UINT32_C(0x7) << 4)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR
+	/*
+	 * This indicates that there was a CRC error on either an FCoE
+	 * or RoCE packet. The itype indicates the packet type.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_CRC_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that there was an error in the tunnel portion
+	 * of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_MASK \
+		UINT32_C(0xe00)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_SFT                    9
+	/*
+	 * No additional error occurred on the tunnel portion
+	 * of the packet or the packet does not have a tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 9)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 9)
+	/*
+	 * Indicates that header length is out of range in the tunnel
+	 * header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 9)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel L3 header length. Valid for IPv4 or IPv6 tunnel
+	 * packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 9)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel UDP header length for a tunnel UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 9)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has failed
+	 * (e.g. TTL = 0) in the tunnel header. Valid for IPv4 and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL \
+		(UINT32_C(0x5) << 9)
+	/*
+	 * Indicates that the IP checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_CS_ERROR \
+		(UINT32_C(0x6) << 9)
+	/*
+	 * Indicates that the L4 checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR \
+		(UINT32_C(0x7) << 9)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR
+	/*
+	 * This indicates that there was an error in the inner
+	 * portion of the packet when this
+	 * field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_MASK \
+		UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_SFT                      12
+	/*
+	 * No additional error occurred on the inner portion of the
+	 * packet or the packet does not have a tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * Indicates that IP header version does not match
+	 * expectation from L2 Ethertype for IPv4 and IPv6, or that an
+	 * option other than VFT was parsed on an
+	 * FCoE packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 12)
+	/*
+	 * Indicates that the header length is out of range. Valid for
+	 * IPv4 and RoCE.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 12)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check
+	 * has failed (e.g. TTL = 0). Valid for IPv4 and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_TTL \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the L3 header length. Valid for IPv4,
+	 * IPv6, or RoCE packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the UDP header length for a UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_UDP_TOTAL_ERROR \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * Indicates that TCP header length > IP payload. Valid for
+	 * TCP packets only.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN \
+		(UINT32_C(0x6) << 12)
+	/* Indicates that TCP header length < 5. Valid for TCP. */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN_TOO_SMALL \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * Indicates that TCP option headers result in a TCP header
+	 * size that does not match data offset in TCP header. Valid
+	 * for TCP.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * Indicates that the IP checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_CS_ERROR \
+		(UINT32_C(0x9) << 12)
+	/*
+	 * Indicates that the L4 checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR \
+		(UINT32_C(0xa) << 12)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format=1, this value is the VLAN VID. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_SFT 0
+	/* When meta_format=1, this value is the VLAN DE. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format=1, this value is the VLAN PRI. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_SFT 13
+	/*
+	 * The timestamp field contains the 32b timestamp for the packet from
+	 * the MAC.
+	 */
+	uint32_t	timestamp;
+} __rte_packed;
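
A minimal sketch of recovering the stripped VLAN fields from a host-endian metadata0 value when meta_format is 1, using only the masks defined above; the helper name is illustrative:

static inline void bnxt_sketch_vlan_from_metadata0(uint16_t metadata0,
						   uint16_t *vid, uint8_t *pri,
						   uint8_t *de)
{
	/* metadata0 is assumed to already be converted to host byte order. */
	*vid = metadata0 & RX_PKT_V2_CMPL_HI_METADATA0_VID_MASK;
	*de  = !!(metadata0 & RX_PKT_V2_CMPL_HI_METADATA0_DE);
	*pri = (metadata0 & RX_PKT_V2_CMPL_HI_METADATA0_PRI_MASK) >>
	       RX_PKT_V2_CMPL_HI_METADATA0_PRI_SFT;
}
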
+
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_cmpl (size:128b/16B) */
+struct rx_tpa_start_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
+	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	/*
+	 * RX L2 TPA Start Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 */
+	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
+	#define RX_TPA_START_CMPL_TYPE_LAST \
+		RX_TPA_START_CMPL_TYPE_RX_TPA_START
+	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
+	#define RX_TPA_START_CMPL_FLAGS_SFT                6
+	/* This bit will always be '0' for TPA start completions. */
+	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
+	/* unused is 1 b */
+	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
 	/*
 	 * This is the RSS hash type for the packet. The value is packed
 	 * {tuple_extrac_op[1:0],rss_profile_id[4:0],tuple_extrac_op[2]}.
@@ -3288,6 +4283,430 @@ struct rx_tpa_start_cmpl_hi {
 /*
  * This TPA completion structure is used on devices where the
  * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ * RX L2 TPA Start V2 Completion Record (32 bytes split into two 16-byte
+ * structs)
+ */
+/* rx_tpa_start_v2_cmpl (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define RX_TPA_START_V2_CMPL_TYPE_SFT                       0
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2 \
+		UINT32_C(0xd)
+	#define RX_TPA_START_V2_CMPL_TYPE_LAST \
+		RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2
+	#define RX_TPA_START_V2_CMPL_FLAGS_MASK \
+		UINT32_C(0xffc0)
+	#define RX_TPA_START_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an error
+	 * of some type. Type of error is indicated in error_flags.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ERROR \
+		UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_MASK \
+		UINT32_C(0x380)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_RSS_VALID \
+		UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement. It
+	 * starts at the first 32B boundary after the end of the header for
+	 * HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PKT_METADATA_PRESENT \
+		UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to. If the VNIC is configured to not use an Rx BD for
+	 * the TPA Start completion, then this is a copy of the opaque field
+	 * from the first BD used to place the TPA Start packet.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_LAST RX_TPA_START_V2_CMPL_V1
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extract_op[1:0],rss_profile_id[4:0],tuple_extract_op[2]}.
+	 *
+	 * The value of tuple_extract_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that 4-tuples values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	uint16_t	agg_id;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	#define RX_TPA_START_V2_CMPL_AGG_ID_MASK            UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_AGG_ID_SFT             0
+	#define RX_TPA_START_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN valid. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC.
+	 * When vee_cmpl_mode is set in VNIC context, this is the lower
+	 * 32b of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
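
A short sketch of extracting the aggregation ID used to pair this TPA start completion with its matching TPA end completion, using the agg_id mask defined above; the helper name is illustrative:

static inline uint16_t
bnxt_sketch_tpa_start_v2_agg_id(const struct rx_tpa_start_v2_cmpl *tpa)
{
	/* agg_id occupies the low 12 bits of the little-endian agg_id word. */
	return rte_le_to_cpu_16(tpa->agg_id) & RX_TPA_START_V2_CMPL_AGG_ID_MASK;
}
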
+
+/*
+ * Last 16 bytes of RX L2 TPA Start V2 Completion Record
+ *
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_v2_cmpl_hi (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl_hi {
+	uint32_t	flags2;
+	/* This indicates that the aggregation was done using GRO rules. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_AGG_GRO \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:
+	 * - ip_cs_ok[2:0] = The number of header groups with a valid IP
+	 *   checksum in the delivered packet, counted from the outer-most
+	 *   header group to the inner-most header group, stopping at the
+	 *   first error.
+	 * - l4_cs_ok[5:3] = The number of header groups with a valid L4
+	 *   checksum in the delivered packet, counted from the outer-most
+	 *   header group to the inner-most header group, stopping at the
+	 *   first error.
+	 * When this bit is '1', the cs_ok field has the following definition:
+	 * - hdr_cnt[2:0] = The number of header groups that were parsed by
+	 *   the chip and passed in the delivered packet.
+	 * - ip_cs_all_ok[3] = This bit will be '1' if all the parsed header
+	 *   groups with an IP checksum are valid.
+	 * - l4_cs_all_ok[4] = This bit will be '1' if all the parsed header
+	 *   groups with an L4 checksum are valid.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the table scope and action record
+	 * pointer.
+	 * - metadata2[25:0] contains the action record pointer.
+	 * - metadata2[31:26] contains the table scope.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. Zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet in the aggregation.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 * CS status for TPA packets is always valid. This means that "all_ok"
+	 * status will always be set. The ok count status will be set
+	 * appropriately for the packet header, such that all existing CS
+	 * values are ok.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set. For TPA Start completions,
+	 * the complete checksum is calculated for the first packet in the
+	 * aggregation only.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V2 \
+		UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_SFT                     1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	/* No buffer error */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit:
+	 * Packet did not fit into packet buffer provided. This means
+	 * that the TPA Start packet was too big to be placed into the
+	 * per-packet maximum number of physical buffers configured for
+	 * the VNIC, or that it was too big to be placed into the
+	 * per-aggregation maximum number of physical buffers configured
+	 * for the VNIC. This error only occurs when the VNIC is
+	 * configured for variable size receive buffers.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_LAST \
+		RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format != 0, this value is the VLAN VID. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_SFT 0
+	/* When meta_format != 0, this value is the VLAN DE. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format != 0, this value is the VLAN PRI. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_SFT 13
+	/*
+	 * This field contains the outer_l3_offset, inner_l2_offset,
+	 * inner_l3_offset, and inner_l4_size.
+	 *
+	 * hdr_offsets[8:0] contains the outer_l3_offset.
+	 * hdr_offsets[17:9] contains the inner_l2_offset.
+	 * hdr_offsets[26:18] contains the inner_l3_offset.
+	 * hdr_offsets[31:27] contains the inner_l4_size.
+	 */
+	uint32_t	hdr_offsets;
+} __rte_packed;
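
A sketch of consuming the hdr_offsets layout just described (illustrative only; the function name is hypothetical and rte_le_to_cpu_32()/RTE_SET_USED() are assumed from rte_byteorder.h/rte_common.h):

	static inline void
	bnxt_example_decode_hdr_offsets(const struct rx_tpa_start_v2_cmpl_hi *hi)
	{
		uint32_t v = rte_le_to_cpu_32(hi->hdr_offsets);
		uint16_t outer_l3_offset = v & 0x1ff;		/* hdr_offsets[8:0]   */
		uint16_t inner_l2_offset = (v >> 9) & 0x1ff;	/* hdr_offsets[17:9]  */
		uint16_t inner_l3_offset = (v >> 18) & 0x1ff;	/* hdr_offsets[26:18] */
		uint16_t inner_l4_size   = (v >> 27) & 0x1f;	/* hdr_offsets[31:27] */

		RTE_SET_USED(outer_l3_offset);
		RTE_SET_USED(inner_l2_offset);
		RTE_SET_USED(inner_l3_offset);
		RTE_SET_USED(inner_l4_size);
	}
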
+
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
  */
 /* rx_tpa_end_cmpl (size:128b/16B) */
 struct rx_tpa_end_cmpl {
@@ -3299,27 +4718,27 @@ struct rx_tpa_end_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_END_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_END_CMPL_TYPE_SFT                 0
+	#define RX_TPA_END_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_TPA_END_CMPL_TYPE_SFT                       0
 	/*
 	 * RX L2 TPA End Completion:
 	 * Completion at the end of a TPA operation.
 	 * Length = 32B
 	 */
-	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END            UINT32_C(0x15)
+	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END                  UINT32_C(0x15)
 	#define RX_TPA_END_CMPL_TYPE_LAST \
 		RX_TPA_END_CMPL_TYPE_RX_TPA_END
-	#define RX_TPA_END_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_END_CMPL_FLAGS_SFT                6
+	#define RX_TPA_END_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_TPA_END_CMPL_FLAGS_SFT                      6
 	/*
 	 * When this bit is '1', it indicates a packet that has an
 	 * error of some type. Type of error is indicated in
 	 * error_flags.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_TPA_END_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT             7
 	/*
 	 * Jumbo:
 	 * TPA Packet was placed using jumbo algorithm. This means
@@ -3338,6 +4757,15 @@ struct rx_tpa_end_cmpl {
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
 	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
+	/*
 	 * GRO/Jumbo:
 	 * Packet will be placed using GRO/Jumbo where the first
 	 * packet is filled with data. Subsequent packets will be
@@ -3358,11 +4786,28 @@ struct rx_tpa_end_cmpl {
 	 */
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS \
 		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* unused is 2 b */
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_MASK         UINT32_C(0xc00)
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_SFT          10
+		RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* unused is 1 b */
+	#define RX_TPA_END_CMPL_FLAGS_UNUSED                    UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
@@ -3372,8 +4817,9 @@ struct rx_tpa_end_cmpl {
 	 *     field is valid and contains the TCP checksum.
 	 *     This also indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT                 12
 	/*
 	 * This value is zero for TPA End completions.
 	 * There is no data in the buffer that corresponds to the opaque
@@ -4243,6 +5689,52 @@ struct rx_abuf_cmpl {
 	uint32_t	unused_2;
 } __rte_packed;
 
+/* VEE FLUSH Completion Record (16 bytes) */
+/* vee_flush (size:128b/16B) */
+struct vee_flush {
+	uint32_t	downstream_path_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define VEE_FLUSH_TYPE_MASK           UINT32_C(0x3f)
+	#define VEE_FLUSH_TYPE_SFT            0
+	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by the Primate and processed
+	 * by the VEE hardware to ensure that all completions on a VEE
+	 * function have been processed by the VEE hardware before FLR
+	 * process is completed.
+	 */
+	#define VEE_FLUSH_TYPE_VEE_FLUSH        UINT32_C(0x1c)
+	#define VEE_FLUSH_TYPE_LAST            VEE_FLUSH_TYPE_VEE_FLUSH
+	/* downstream_path is 1 b */
+	#define VEE_FLUSH_DOWNSTREAM_PATH     UINT32_C(0x40)
+	/* This completion is associated with VEE Transmit */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_TX    (UINT32_C(0x0) << 6)
+	/* This completion is associated with VEE Receive */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_RX    (UINT32_C(0x1) << 6)
+	#define VEE_FLUSH_DOWNSTREAM_PATH_LAST VEE_FLUSH_DOWNSTREAM_PATH_RX
+	/*
+	 * This is an opaque value that is passed through the completion
+	 * to the VEE handler SW and is used to indicate what VEE VQ or
+	 * function has completed FLR processing.
+	 */
+	uint32_t	opaque;
+	uint32_t	v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes will
+	 * write 1. The odd passes will write 0.
+	 */
+	#define VEE_FLUSH_V     UINT32_C(0x1)
+	/* unused3 is 32 b */
+	uint32_t	unused_3;
+} __rte_packed;
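
The 'v' bit above follows the usual completion-ring phase convention. A minimal check that the record belongs to the current pass over the ring might look like the sketch below (illustrative helper, not part of the patch; rte_le_to_cpu_32() assumed from rte_byteorder.h):

	/*
	 * 'valid_phase' is 1 on even passes over the completion ring and 0 on
	 * odd passes, toggled by the caller whenever the consumer index wraps.
	 */
	static inline int
	bnxt_example_vee_flush_valid(const struct vee_flush *cmpl,
				     uint32_t valid_phase)
	{
		uint32_t v = rte_le_to_cpu_32(cmpl->v);

		return (v & VEE_FLUSH_V) == (valid_phase & VEE_FLUSH_V);
	}
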
+
 /* eject_cmpl (size:128b/16B) */
 struct eject_cmpl {
 	uint16_t	type;
@@ -6562,7 +8054,7 @@ struct hwrm_async_event_cmpl_deferred_response {
 	/*
 	 * The PF's mailbox is clear to issue another command.
 	 * A command with this seq_id is still in progress
-	 * and will return a regular HWRM completion when done.
+	 * and will return a regular HWRM completion when done.
 	 * 'event_data1' field, if non-zero, contains the estimated
 	 * execution time for the command.
 	 */
@@ -7476,6 +8968,8 @@ struct hwrm_func_qcaps_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID) This is a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -7730,6 +9224,12 @@ struct hwrm_func_qcaps_output {
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PFC_WD_STATS_SUPPORTED \
 		UINT32_C(0x40000000)
 	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_QCAPS command
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_DBG_QCAPS_CMD_SUPPORTED \
+		UINT32_C(0x80000000)
+	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
 	 * MAC address is currently configured.
@@ -7854,6 +9354,19 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_STATS_SUPPORTED \
 		UINT32_C(0x2)
+	/*
+	 * If 1, the device can report extended hw statistics (including
+	 * additional tpa statistics).
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_EXT_HW_STATS_SUPPORTED \
+		UINT32_C(0x4)
+	/*
+	 * If set to 1, then the core firmware has support to enable/
+	 * disable hot reset support for interface dynamically through
+	 * HWRM_FUNC_CFG.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x8)
 	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -7904,6 +9417,8 @@ struct hwrm_func_qcfg_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID) This is a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -8014,6 +9529,15 @@ struct hwrm_func_qcfg_output {
 	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x100)
 	/*
+	 * If set to 1, then the firmware and all currently registered driver
+	 * instances support hot reset. The hot reset support will be updated
+	 * dynamically based on the driver interface advertisement.
+	 * If set to 0, then the adapter is not currently able to initiate
+	 * hot reset.
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_HOT_RESET_ALLOWED \
+		UINT32_C(0x200)
+	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
 	 * MAC address is currently configured.
@@ -8565,6 +10089,17 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x2000000)
+	/*
+	 * If this bit is set to 0, then the interface does not support hot
+	 * reset capability which it advertised with the hot_reset_support
+	 * flag in HWRM_FUNC_DRV_RGTR. If any of the functions has set this
+	 * flag to 0, the adapter cannot do the hot reset. In this state, if
+	 * the firmware receives a hot reset request, the firmware must fail
+	 * the request. If this bit is set to 1, then the interface is
+	 * re-enabling the hot reset capability.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_FLAGS_HOT_RESET_IF_EN_DIS \
+		UINT32_C(0x4000000)
 	uint32_t	enables;
 	/*
 	 * This bit must be '1' for the mtu field to be
@@ -8705,6 +10240,12 @@ struct hwrm_func_cfg_input {
 	#define HWRM_FUNC_CFG_INPUT_ENABLES_ADMIN_LINK_STATE \
 		UINT32_C(0x400000)
 	/*
+	 * This bit must be '1' for the hot_reset_if_en_dis field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x800000)
+	/*
 	 * The maximum transmission unit of the function.
 	 * The HWRM should make sure that the mtu of
 	 * the function does not exceed the mtu of the physical
@@ -9036,15 +10577,21 @@ struct hwrm_func_qstats_input {
 	/* This flags indicates the type of statistics request. */
 	uint8_t	flags;
 	/* This value is not used to avoid backward compatibility issues. */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED    UINT32_C(0x0)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
 	/*
 	 * flags should be set to 1 when request is for only RoCE statistics.
 	 * This will be honored only if the caller_fid is a privileged PF.
 	 * In all other cases FID and caller_fid should be the same.
 	 */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY UINT32_C(0x1)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
 	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_LAST \
-		HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY
+		HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK
 	uint8_t	unused_0[5];
 } __rte_packed;
 
@@ -9130,6 +10677,132 @@ struct hwrm_func_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/************************
+ * hwrm_func_qstats_ext *
+ ************************/
+
+
+/* hwrm_func_qstats_ext_input (size:192b/24B) */
+struct hwrm_func_qstats_ext_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Function ID of the function that is being queried.
+	 * 0xFF... (All Fs) if the query is for the requesting
+	 * function.
+	 * A privileged PF can query for other function's statistics.
+	 */
+	uint16_t	fid;
+	/* This flags indicates the type of statistics request. */
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * flags should be set to 1 when request is for only RoCE statistics.
+	 * This will be honored only if the caller_fid is a privileged PF.
+	 * In all other cases FID and caller_fid should be the same.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
+} __rte_packed;
+
+/* hwrm_func_qstats_ext_output (size:1472b/184B) */
+struct hwrm_func_qstats_ext_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on received path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
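
To illustrate the COUNTER_MASK usage described above, a sketch only: the request-type constant HWRM_FUNC_QSTATS_EXT and the byte-order helpers are assumed from elsewhere in this header and rte_byteorder.h, and the HWRM transport/send step is omitted.

	static void
	bnxt_example_fill_qstats_ext_mask(struct hwrm_func_qstats_ext_input *req,
					  uint16_t fid)
	{
		/* Ask for the width mask of each counter, not the counters. */
		req->req_type = rte_cpu_to_le_16(HWRM_FUNC_QSTATS_EXT);
		req->fid = rte_cpu_to_le_16(fid);
		req->flags = HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK;
		/* cmpl_ring, seq_id and resp_addr are filled by the send path. */
	}
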
+
 /***********************
  * hwrm_func_clr_stats *
  ***********************/
@@ -10116,7 +11789,7 @@ struct hwrm_func_backing_store_qcaps_output {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
@@ -11039,7 +12712,7 @@ struct hwrm_func_backing_store_cfg_input {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
@@ -14403,7 +16076,7 @@ struct hwrm_port_phy_qcfg_output {
 	/* Module is not inserted. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_NOTINSERTED \
 		UINT32_C(0x4)
-	/* Module is powered down because of over current fault. */
+	/* Module is powered down because of over current fault. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_CURRENTFAULT \
 		UINT32_C(0x5)
 	/* Module status is not applicable. */
@@ -16149,7 +17822,18 @@ struct hwrm_port_qstats_input {
 	uint64_t	resp_addr;
 	/* Port ID of port that is being queried. */
 	uint16_t	port_id;
-	uint8_t	unused_0[6];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -16382,7 +18066,7 @@ struct rx_port_stats_ext {
  * Port Rx Statistics extended PFC WatchDog Format.
  * StormDetect and StormRevert event determination is based
  * on an integration period and a percentage threshold.
- * StormDetect event - when percentage of XOFF frames received
+ * StormDetect event - when percentage of XOFF frames received
  * within an integration period exceeds the configured threshold.
  * StormRevert event - when percentage of XON frames received
  * within an integration period exceeds the configured threshold.
@@ -16843,7 +18527,18 @@ struct hwrm_port_qstats_ext_input {
 	 * statistics block in bytes
 	 */
 	uint16_t	rx_stat_size;
-	uint8_t	unused_0[2];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for the counter mask,
+	 * representing width of each of the stats counters, rather than
+	 * counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0;
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -19283,7 +20978,7 @@ struct hwrm_port_phy_mdio_bus_acquire_input {
 	 * Timeout in milli seconds, MDIO BUS will be released automatically
 	 * after this time, if another mdio acquire command is not received
 	 * within the timeout window from the same client.
-	 * A 0xFFFF will hold the bus until this bus is released.
+	 * A 0xFFFF will hold the bus until this bus is released.
 	 */
 	uint16_t	mdio_bus_timeout;
 	uint8_t	unused_0[2];
@@ -25312,95 +27007,104 @@ struct hwrm_ring_free_input {
 	/* Ring Type. */
 	uint8_t	ring_type;
 	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	/* TX Ring (TR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	/* RX Ring (RR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	/* RoCE Notification Completion Ring (ROCE_CR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	/* RX Aggregation Ring */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
+	/* Notification Queue */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
+		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
+	uint8_t	unused_0;
+	/* Physical number of ring allocated. */
+	uint16_t	ring_id;
+	uint8_t	unused_1[4];
+} __rte_packed;
+
+/* hwrm_ring_free_output (size:128b/16B) */
+struct hwrm_ring_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/*******************
+ * hwrm_ring_reset *
+ *******************/
+
+
+/* hwrm_ring_reset_input (size:192b/24B) */
+struct hwrm_ring_reset_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Ring Type. */
+	uint8_t	ring_type;
+	/* L2 Completion Ring (CR) */
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL     UINT32_C(0x0)
 	/* TX Ring (TR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX          UINT32_C(0x1)
 	/* RX Ring (RR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX          UINT32_C(0x2)
 	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
-	/* RX Aggregation Ring */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
-	/* Notification Queue */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
-		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
-	uint8_t	unused_0;
-	/* Physical number of ring allocated. */
-	uint16_t	ring_id;
-	uint8_t	unused_1[4];
-} __rte_packed;
-
-/* hwrm_ring_free_output (size:128b/16B) */
-struct hwrm_ring_free_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	uint8_t	unused_0[7];
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL   UINT32_C(0x3)
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM.  This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * Rx Ring Group.  This is to reset the rx and aggregation rings in
+	 * an atomic operation. The completion ring associated with this
+	 * ring group is not reset.
 	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/*******************
- * hwrm_ring_reset *
- *******************/
-
-
-/* hwrm_ring_reset_input (size:192b/24B) */
-struct hwrm_ring_reset_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
-	 */
-	uint64_t	resp_addr;
-	/* Ring Type. */
-	uint8_t	ring_type;
-	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
-	/* TX Ring (TR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX        UINT32_C(0x1)
-	/* RX Ring (RR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX        UINT32_C(0x2)
-	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP UINT32_C(0x6)
 	#define HWRM_RING_RESET_INPUT_RING_TYPE_LAST \
-		HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL
+		HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP
 	uint8_t	unused_0;
-	/* Physical number of the ring. */
+	/*
+	 * Physical number of the ring. When ring type is rx_ring_grp, ring id
+	 * actually refers to ring group id.
+	 */
 	uint16_t	ring_id;
 	uint8_t	unused_1[4];
 } __rte_packed;
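
A short sketch of the new RX_RING_GRP usage (illustrative; the helper name is hypothetical and the send step is omitted): when resetting an Rx ring together with its aggregation ring, the ring group id is passed in ring_id.

	static void
	bnxt_example_fill_ring_reset_rx_grp(struct hwrm_ring_reset_input *req,
					    uint16_t ring_grp_id)
	{
		req->ring_type = HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP;
		/* For this ring type, ring_id carries the ring group id. */
		req->ring_id = rte_cpu_to_le_16(ring_grp_id);
	}
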
@@ -25615,7 +27319,18 @@ struct hwrm_ring_cmpl_ring_qaggint_params_input {
 	uint64_t	resp_addr;
 	/* Physical number of completion ring. */
 	uint16_t	ring_id;
-	uint8_t	unused_0[6];
+	uint16_t	flags;
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_MASK \
+		UINT32_C(0x3)
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_SFT 0
+	/*
+	 * Set this flag to 1 when querying parameters on a notification
+	 * queue. Set this flag to 0 when querying parameters on a
+	 * completion queue or completion ring.
+	 */
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
+		UINT32_C(0x4)
+	uint8_t	unused_0[4];
 } __rte_packed;
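
For completeness, a sketch of the new IS_NQ flag on the query side (illustrative helper; byte-order conversion assumed from rte_byteorder.h):

	static void
	bnxt_example_fill_qaggint_query(
		struct hwrm_ring_cmpl_ring_qaggint_params_input *req,
		uint16_t ring_id, int is_nq)
	{
		req->ring_id = rte_cpu_to_le_16(ring_id);
		/* Set IS_NQ when ring_id refers to a notification queue. */
		req->flags = is_nq ?
			rte_cpu_to_le_16(HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_IS_NQ) :
			0;
	}
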
 
 /* hwrm_ring_cmpl_ring_qaggint_params_output (size:256b/32B) */
@@ -25652,19 +27367,19 @@ struct hwrm_ring_cmpl_ring_qaggint_params_output {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA when in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
+	 * Maximum wait time spent aggregating
 	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
@@ -25738,7 +27453,7 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	/*
 	 * Set this flag to 1 when configuring parameters on a
 	 * notification queue. Set this flag to 0 when configuring
-	 * parameters on a completion queue.
+	 * parameters on a completion queue or completion ring.
 	 */
 	#define HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
 		UINT32_C(0x4)
@@ -25753,20 +27468,20 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA while in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
-	 * cmpls before signaling the interrupt after the
+	 * Maximum wait time spent aggregating
+	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
 	uint16_t	int_lat_tmr_max;
@@ -33339,78 +35054,246 @@ struct hwrm_tf_version_get_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-} __rte_packed;
-
-/* hwrm_tf_version_get_output (size:128b/16B) */
-struct hwrm_tf_version_get_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	/* Version Major number. */
-	uint8_t	major;
-	/* Version Minor number. */
-	uint8_t	minor;
-	/* Version Update number. */
-	uint8_t	update;
-	/* unused. */
-	uint8_t	unused0[4];
+} __rte_packed;
+
+/* hwrm_tf_version_get_output (size:128b/16B) */
+struct hwrm_tf_version_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Version Major number. */
+	uint8_t	major;
+	/* Version Minor number. */
+	uint8_t	minor;
+	/* Version Update number. */
+	uint8_t	update;
+	/* unused. */
+	uint8_t	unused0[4];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/************************
+ * hwrm_tf_session_open *
+ ************************/
+
+
+/* hwrm_tf_session_open_input (size:640b/80B) */
+struct hwrm_tf_session_open_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Name of the session. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_open_output (size:192b/24B) */
+struct hwrm_tf_session_open_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware.
+	 */
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the first client on
+	 * the newly created session.
+	 */
+	uint32_t	fw_session_client_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* unused. */
+	uint8_t	unused1[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
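
As an illustrative open-session sketch (not part of the diff hunks; rte_strlcpy() and the little-endian helpers are assumed, and the HWRM send step is omitted), the driver names the session in the request and records both identifiers from the response:

	static void
	bnxt_example_tf_session_open_req(struct hwrm_tf_session_open_input *req,
					 const char *name)
	{
		rte_strlcpy((char *)req->session_name, name,
			    sizeof(req->session_name));
	}

	static void
	bnxt_example_tf_session_open_resp(
		const struct hwrm_tf_session_open_output *resp,
		uint32_t *fw_session_id, uint32_t *fw_session_client_id)
	{
		/* The first client id is returned together with the session id. */
		*fw_session_id = rte_le_to_cpu_32(resp->fw_session_id);
		*fw_session_client_id = rte_le_to_cpu_32(resp->fw_session_client_id);
	}
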
+
+/**************************
+ * hwrm_tf_session_attach *
+ **************************/
+
+
+/* hwrm_tf_session_attach_input (size:704b/88B) */
+struct hwrm_tf_session_attach_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Unique session identifier for the session that the attach
+	 * request wants to attach to. This value originates from the
+	 * shared session memory that the attach request opened by
+	 * way of the 'attach name' that was passed in to the core
+	 * attach API.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
+	 */
+	uint32_t	attach_fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session itself. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_attach_output (size:128b/16B) */
+struct hwrm_tf_session_attach_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session. This fw_session_id is unique to the attach
+	 * request.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/****************************
+ * hwrm_tf_session_register *
+ ****************************/
+
+
+/* hwrm_tf_session_register_input (size:704b/88B) */
+struct hwrm_tf_session_register_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal
-	 * processor, the order of writes has to be such that this field is
-	 * written last.
-	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/************************
- * hwrm_tf_session_open *
- ************************/
-
-
-/* hwrm_tf_session_open_input (size:640b/80B) */
-struct hwrm_tf_session_open_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
+	 * Unique session identifier for the session that the
+	 * register request wants to create a new client on. This
+	 * value originates from the first open request.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
 	 */
-	uint64_t	resp_addr;
-	/* Name of the session. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session client. */
+	uint8_t	session_client_name[64];
 } __rte_packed;
 
-/* hwrm_tf_session_open_output (size:128b/16B) */
-struct hwrm_tf_session_open_output {
+/* hwrm_tf_session_register_output (size:128b/16B) */
+struct hwrm_tf_session_register_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33420,12 +35303,11 @@ struct hwrm_tf_session_open_output {
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
 	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session.
+	 * Unique session client identifier for the session created
+	 * by the firmware. It includes the session the client is
+	 * attached to and the session client info.
 	 */
-	uint32_t	fw_session_id;
+	uint32_t	fw_session_client_id;
 	/* unused. */
 	uint8_t	unused0[3];
 	/*
@@ -33439,13 +35321,13 @@ struct hwrm_tf_session_open_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/**************************
- * hwrm_tf_session_attach *
- **************************/
+/******************************
+ * hwrm_tf_session_unregister *
+ ******************************/
 
 
-/* hwrm_tf_session_attach_input (size:704b/88B) */
-struct hwrm_tf_session_attach_input {
+/* hwrm_tf_session_unregister_input (size:192b/24B) */
+struct hwrm_tf_session_unregister_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -33475,24 +35357,19 @@ struct hwrm_tf_session_attach_input {
 	 */
 	uint64_t	resp_addr;
 	/*
-	 * Unique session identifier for the session that the attach
-	 * request want to attach to. This value originates from the
-	 * shared session memory that the attach request opened by
-	 * way of the 'attach name' that was passed in to the core
-	 * attach API.
-	 * The fw_session_id of the attach session includes PCIe bus
-	 * info to distinguish the PF and session info to identify
-	 * the associated TruFlow session.
+	 * Unique session identifier for the session that the
+	 * unregister request wants to close a session client on.
 	 */
-	uint32_t	attach_fw_session_id;
-	/* unused. */
-	uint32_t	unused0;
-	/* Name of the session it self. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the session that the
+	 * unregister request wants to close.
+	 */
+	uint32_t	fw_session_client_id;
 } __rte_packed;
 
-/* hwrm_tf_session_attach_output (size:128b/16B) */
-struct hwrm_tf_session_attach_output {
+/* hwrm_tf_session_unregister_output (size:128b/16B) */
+struct hwrm_tf_session_unregister_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33501,16 +35378,8 @@ struct hwrm_tf_session_attach_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session. This fw_session_id is unique to the attach
-	 * request.
-	 */
-	uint32_t	fw_session_id;
 	/* unused. */
-	uint8_t	unused0[3];
+	uint8_t	unused0[7];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33746,15 +35615,17 @@ struct hwrm_tf_session_resc_qcaps_input {
 	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided qcaps_addr
+	 * Defines the size of the provided qcaps_addr array
 	 * buffer. The size should be set to the Resource Manager
-	 * provided max qcaps value that is device specific. This is
-	 * the max size possible.
+	 * provided max number of qcaps entries which is device
+	 * specific. Resource Manager gets the max size from HCAPI
+	 * RM.
 	 */
-	uint16_t	size;
+	uint16_t	qcaps_size;
 	/*
-	 * This is the DMA address for the qcaps output data
-	 * array. Array is of tf_rm_cap type and is device specific.
+	 * This is the DMA address for the qcaps output data array
+	 * buffer. Array is of tf_rm_resc_req_entry type and is
+	 * device specific.
 	 */
 	uint64_t	qcaps_addr;
 } __rte_packed;
@@ -33772,29 +35643,28 @@ struct hwrm_tf_session_resc_qcaps_output {
 	/* Control flags. */
 	uint32_t	flags;
 	/* Session reservation strategy. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_SFT \
 		0
 	/* Static partitioning. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_STATIC \
 		UINT32_C(0x0)
 	/* Strategy 1. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_1 \
 		UINT32_C(0x1)
 	/* Strategy 2. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_2 \
 		UINT32_C(0x2)
 	/* Strategy 3. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3 \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_LAST \
-		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3
 	/*
-	 * Size of the returned tf_rm_cap data array. The value
-	 * cannot exceed the size defined by the input msg. The data
-	 * array is returned using the qcaps_addr specified DMA
-	 * address also provided by the input msg.
+	 * Size of the returned qcaps_addr data array buffer. The
+	 * value cannot exceed the size defined by the input msg,
+	 * qcaps_size.
 	 */
 	uint16_t	size;
 	/* unused. */
@@ -33817,7 +35687,7 @@ struct hwrm_tf_session_resc_qcaps_output {
  ******************************/
 
 
-/* hwrm_tf_session_resc_alloc_input (size:256b/32B) */
+/* hwrm_tf_session_resc_alloc_input (size:320b/40B) */
 struct hwrm_tf_session_resc_alloc_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -33860,16 +35730,25 @@ struct hwrm_tf_session_resc_alloc_input {
 	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided num_addr
-	 * buffer.
+	 * Defines the array size of the provided req_addr and
+	 * resc_addr array buffers. Should be set to the number of
+	 * request entries.
 	 */
-	uint16_t	size;
+	uint16_t	req_size;
+	/*
+	 * This is the DMA address for the request input data array
+	 * buffer. Array is of tf_rm_resc_req_entry type. Size of the
+	 * array buffer is provided by the 'req_size' field in this
+	 * message.
+	 */
+	uint64_t	req_addr;
 	/*
-	 * This is the DMA address for the num input data array
-	 * buffer. Array is of tf_rm_num type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * This is the DMA address for the resc output data array
+	 * buffer. Array is of tf_rm_resc_entry type. Size of the array
+	 * buffer is provided by the 'req_size' field in this
+	 * message.
 	 */
-	uint64_t	num_addr;
+	uint64_t	resc_addr;
 } __rte_packed;
 
 /* hwrm_tf_session_resc_alloc_output (size:128b/16B) */
@@ -33882,8 +35761,15 @@ struct hwrm_tf_session_resc_alloc_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
+	/*
+	 * Size of the returned tf_rm_resc_entry data array. The value
+	 * cannot exceed the req_size defined by the input msg. The data
+	 * array is returned using the resc_addr specified DMA
+	 * address also provided by the input msg.
+	 */
+	uint16_t	size;
 	/* unused. */
-	uint8_t	unused0[7];
+	uint8_t	unused0[5];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33946,11 +35832,12 @@ struct hwrm_tf_session_resc_free_input {
 	 * Defines the size, in bytes, of the provided free_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	free_size;
 	/*
 	 * This is the DMA address for the free input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size field of this message.
+	 * buffer.  Array is of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'free_size' field of this
+	 * message.
 	 */
 	uint64_t	free_addr;
 } __rte_packed;
@@ -34029,11 +35916,12 @@ struct hwrm_tf_session_resc_flush_input {
 	 * Defines the size, in bytes, of the provided flush_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	flush_size;
 	/*
 	 * This is the DMA address for the flush input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * buffer.  Array of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'flush_size' field in this
+	 * message.
 	 */
 	uint64_t	flush_addr;
 } __rte_packed;
@@ -34062,12 +35950,9 @@ struct hwrm_tf_session_resc_flush_output {
 } __rte_packed;
 
 /* TruFlow RM capability of a resource. */
-/* tf_rm_cap (size:64b/8B) */
-struct tf_rm_cap {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_req_entry (size:64b/8B) */
+struct tf_rm_resc_req_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Minimum value. */
 	uint16_t	min;
@@ -34075,25 +35960,10 @@ struct tf_rm_cap {
 	uint16_t	max;
 } __rte_packed;
 
-/* TruFlow RM number of a resource. */
-/* tf_rm_num (size:64b/8B) */
-struct tf_rm_num {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
-	uint32_t	type;
-	/* Number of resources. */
-	uint32_t	num;
-} __rte_packed;
-
 /* TruFlow RM reservation information. */
-/* tf_rm_res (size:64b/8B) */
-struct tf_rm_res {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_entry (size:64b/8B) */
+struct tf_rm_resc_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Start offset. */
 	uint16_t	start;
@@ -34925,6 +36795,162 @@ struct hwrm_tf_ext_em_qcfg_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/*********************
+ * hwrm_tf_em_insert *
+ *********************/
+
+
+/* hwrm_tf_em_insert_input (size:832b/104B) */
+struct hwrm_tf_em_insert_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware Session Id. */
+	uint32_t	fw_session_id;
+	/* Control Flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX
+	/* Reported match strength. */
+	uint16_t	strength;
+	/* Index to action. */
+	uint32_t	action_ptr;
+	/* Index of EM record. */
+	uint32_t	em_record_idx;
+	/* EM Key value. */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
+/* hwrm_tf_em_insert_output (size:128b/16B) */
+struct hwrm_tf_em_insert_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* EM record pointer index. */
+	uint16_t	rptr_index;
+	/* EM record offset 0~3. */
+	uint8_t	rptr_entry;
+	/* Number of word entries consumed by the key. */
+	uint8_t	num_of_entries;
+	/* unused. */
+	uint32_t	unused0;
+} __rte_packed;
+
+/*********************
+ * hwrm_tf_em_delete *
+ *********************/
+
+
+/* hwrm_tf_em_delete_input (size:832b/104B) */
+struct hwrm_tf_em_delete_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Session Id. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX
+	/* Unused0 */
+	uint16_t	unused0;
+	/* EM internal flow handle. */
+	uint64_t	flow_handle;
+	/* EM Key value */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused1[3];
+} __rte_packed;
+
+/* hwrm_tf_em_delete_output (size:128b/16B) */
+struct hwrm_tf_em_delete_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Original stack allocation index. */
+	uint16_t	em_index;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
 /********************
  * hwrm_tf_tcam_set *
  ********************/
@@ -35582,10 +37608,10 @@ struct ctx_hw_stats {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35598,10 +37624,10 @@ struct ctx_hw_stats {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35618,7 +37644,11 @@ struct ctx_hw_stats {
 	uint64_t	tpa_aborts;
 } __rte_packed;
 
-/* Periodic statistics context DMA to host. */
+/*
+ * Extended periodic statistics context DMA to host. On cards that
+ * support TPA v2, additional TPA related stats exist and can be retrieved
+ * by DMA of ctx_hw_stats_ext, rather than legacy ctx_hw_stats structure.
+ */
 /* ctx_hw_stats_ext (size:1344b/168B) */
 struct ctx_hw_stats_ext {
 	/* Number of received unicast packets */
@@ -35627,10 +37657,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35643,10 +37673,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35912,7 +37942,14 @@ struct hwrm_stat_ctx_query_input {
 	uint64_t	resp_addr;
 	/* ID of the statistics context that is being queried. */
 	uint32_t	stat_ctx_id;
-	uint8_t	unused_0[4];
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when the request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than the counters themselves.
+	 */
+	#define HWRM_STAT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK     UINT32_C(0x1)
+	uint8_t	unused_0[3];
 } __rte_packed;
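
A hedged sketch of using the new flag: with it set, the same output layout
carries each counter's width mask instead of the counter values. The
hwrm_send() helper is a placeholder, and endian conversion plus the rest of
the request header setup are omitted; the PMD issues this through its own
HWRM wrappers:

	#include <string.h>
	#include "hsi_struct_def_dpdk.h" /* structures and flags from this patch */

	/* Hypothetical transport helper standing in for the PMD's HWRM path. */
	int hwrm_send(void *req, size_t req_len, void *resp, size_t resp_len);

	static int query_counter_widths(uint32_t stat_ctx_id,
					struct hwrm_stat_ctx_query_output *out)
	{
		struct hwrm_stat_ctx_query_input req;

		memset(&req, 0, sizeof(req));
		/* req_type/cmpl_ring/seq_id/resp_addr setup elided here. */
		req.stat_ctx_id = stat_ctx_id;
		/* Ask for counter-width masks rather than the counters. */
		req.flags = HWRM_STAT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK;
		return hwrm_send(&req, sizeof(req), out, sizeof(*out));
	}
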
 
 /* hwrm_stat_ctx_query_output (size:1408b/176B) */
@@ -35949,7 +37986,7 @@ struct hwrm_stat_ctx_query_output {
 	uint64_t	rx_bcast_pkts;
 	/* Number of received packets with error */
 	uint64_t	rx_err_pkts;
-	/* Number of dropped packets on received path */
+	/* Number of dropped packets on receive path */
 	uint64_t	rx_drop_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
@@ -35977,6 +38014,117 @@ struct hwrm_stat_ctx_query_output {
 } __rte_packed;
 
 /***************************
+ * hwrm_stat_ext_ctx_query *
+ ***************************/
+
+
+/* hwrm_stat_ext_ctx_query_input (size:192b/24B) */
+struct hwrm_stat_ext_ctx_query_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* ID of the extended statistics context that is being queried. */
+	uint32_t	stat_ctx_id;
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when the request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than the counters themselves.
+	 */
+	#define HWRM_STAT_EXT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK \
+		UINT32_C(0x1)
+	uint8_t	unused_0[3];
+} __rte_packed;
+
+/* hwrm_stat_ext_ctx_query_output (size:1472b/184B) */
+struct hwrm_stat_ext_ctx_query_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on receive path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/***************************
  * hwrm_stat_ctx_eng_query *
  ***************************/
 
@@ -37565,6 +39713,13 @@ struct hwrm_nvm_install_update_input {
 	 */
 	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_ALLOWED_TO_DEFRAG \
 		UINT32_C(0x4)
+	/*
+	 * If set to 1, FW will verify the package in the "UPDATE" NVM item
+	 * without installing it. This flag is for FW internal use only.
+	 * Users should not set this flag. The request will otherwise fail.
+	 */
+	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_VERIFY_ONLY \
+		UINT32_C(0x8)
 	uint8_t	unused_0[2];
 } __rte_packed;
 
@@ -38115,6 +40270,72 @@ struct hwrm_nvm_validate_option_cmd_err {
 	uint8_t	unused_0[7];
 } __rte_packed;
 
+/****************
+ * hwrm_oem_cmd *
+ ****************/
+
+
+/* hwrm_oem_cmd_input (size:1024b/128B) */
+struct hwrm_oem_cmd_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific command data. */
+	uint32_t	oem_data[26];
+} __rte_packed;
+
+/* hwrm_oem_cmd_output (size:768b/96B) */
+struct hwrm_oem_cmd_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific response data. */
+	uint32_t	oem_data[18];
+	uint8_t	unused_1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /*****************
  * hwrm_fw_reset *
  ******************/
@@ -38338,6 +40559,55 @@ struct hwrm_port_ts_query_output {
 	uint8_t		valid;
 } __rte_packed;
 
+/*
+ * This structure is fixed at the beginning of the ChiMP SRAM (GRC
+ * offset: 0x31001F0). Host software is expected to read from this
+ * location for a defined signature. If it exists, the software can
+ * assume the presence of this structure and the validity of the
+ * FW_STATUS location in the next field.
+ */
+/* hcomm_status (size:64b/8B) */
+struct hcomm_status {
+	uint32_t	sig_ver;
+	/*
+	 * This field defines the version of the structure. The latest
+	 * version value is 1.
+	 */
+	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
+	#define HCOMM_STATUS_VER_SFT		0
+	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
+	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
+	/*
+	 * This field is to store the signature value to indicate the
+	 * presence of the structure.
+	 */
+	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
+	#define HCOMM_STATUS_SIGNATURE_SFT	8
+	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
+	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
+	uint32_t	fw_status_loc;
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
+	/* PCIE configuration space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
+	/* GRC space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
+	/* BAR0 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
+	/* BAR1 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
+		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
+	/*
+	 * This is the offset where the fw_status register is located. The
+	 * value is generally 4-byte aligned.
+	 */
+	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
+	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
+} __rte_packed;
+/* This is the GRC offset where the hcomm_status struct resides. */
+#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
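
The block above describes a discovery handshake: read the fixed GRC offset,
verify the signature, then decode where the fw_status register lives. A
minimal sketch, assuming a hypothetical bar_read32() GRC accessor (the PMD's
real register-access path is outside this hunk):

	#include <stdint.h>
	#include <stdbool.h>
	#include "hsi_struct_def_dpdk.h" /* HCOMM_STATUS_* defines above */

	/* Hypothetical 32-bit GRC read helper (assumption for illustration). */
	uint32_t bar_read32(uint32_t grc_offset);

	static bool fw_status_locate(uint32_t *addr_space, uint32_t *offset)
	{
		uint32_t sig_ver = bar_read32(HCOMM_STATUS_STRUCT_LOC);
		uint32_t loc;

		if ((sig_ver & HCOMM_STATUS_SIGNATURE_MASK) !=
		    HCOMM_STATUS_SIGNATURE_VAL)
			return false; /* structure absent on this firmware */

		loc = bar_read32(HCOMM_STATUS_STRUCT_LOC + 4); /* fw_status_loc */
		*addr_space = loc & HCOMM_STATUS_TRUE_ADDR_SPACE_MASK;
		*offset = loc & HCOMM_STATUS_TRUE_OFFSET_MASK; /* 4B aligned */
		return true;
	}
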
+
 /**************************
  * hwrm_cfa_counter_qcaps *
  **************************/
@@ -38622,53 +40892,4 @@ struct hwrm_cfa_counter_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/*
- * This structure is fixed at the beginning of the ChiMP SRAM (GRC
- * offset: 0x31001F0). Host software is expected to read from this
- * location for a defined signature. If it exists, the software can
- * assume the presence of this structure and the validity of the
- * FW_STATUS location in the next field.
- */
-/* hcomm_status (size:64b/8B) */
-struct hcomm_status {
-	uint32_t	sig_ver;
-	/*
-	 * This field defines the version of the structure. The latest
-	 * version value is 1.
-	 */
-	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
-	#define HCOMM_STATUS_VER_SFT		0
-	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
-	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
-	/*
-	 * This field is to store the signature value to indicate the
-	 * presence of the structure.
-	 */
-	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
-	#define HCOMM_STATUS_SIGNATURE_SFT	8
-	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
-	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
-	uint32_t	fw_status_loc;
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
-	/* PCIE configuration space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
-	/* GRC space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
-	/* BAR0 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
-	/* BAR1 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
-		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
-	/*
-	 * This offset where the fw_status register is located. The value
-	 * is generally 4-byte aligned.
-	 */
-	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
-	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
-} __rte_packed;
-/* This is the GRC offset where the hcomm_status struct resides. */
-#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
-
 #endif /* _HSI_STRUCT_DEF_DPDK_H_ */
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 3419095..439950e 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -86,6 +86,7 @@ struct tf_tbl_type_get_output;
 struct tf_em_internal_insert_input;
 struct tf_em_internal_insert_output;
 struct tf_em_internal_delete_input;
+struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -949,6 +950,8 @@ typedef struct tf_em_internal_insert_output {
 	uint16_t			 rptr_index;
 	/* EM record offset 0~3 */
 	uint8_t			  rptr_entry;
+	/* Number of word entries consumed by the key */
+	uint8_t			  num_of_entries;
 } tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
 
 /* Input params for EM INTERNAL rule delete */
@@ -969,4 +972,10 @@ typedef struct tf_em_internal_delete_input {
 	uint16_t			 em_key_bitlen;
 } tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
 
+/* Output params for EM INTERNAL rule delete */
+typedef struct tf_em_internal_delete_output {
+	/* Original stack allocation index */
+	uint16_t			 em_index;
+} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
index e5abcc2..b1fd2cd 100644
--- a/drivers/net/bnxt/tf_core/lookup3.h
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -152,7 +152,6 @@ static inline uint32_t hashword(const uint32_t *k,
 		final(a, b, c);
 		/* Falls through. */
 	case 0:	    /* case 0: nothing left to add */
-		/* FALLTHROUGH */
 		break;
 	}
 	/*------------------------------------------------- report the result */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
index 9cfbd24..9548063 100644
--- a/drivers/net/bnxt/tf_core/stack.c
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -27,6 +27,14 @@ stack_init(int num_entries, uint32_t *items, struct stack *st)
 	return 0;
 }
 
+/*
+ * Return the address of the items
+ */
+uint32_t *stack_items(struct stack *st)
+{
+	return st->items;
+}
+
 /* Return the size of the stack
  */
 int32_t
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
index ebd0555..6732e03 100644
--- a/drivers/net/bnxt/tf_core/stack.h
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -36,6 +36,16 @@ int stack_init(int num_entries,
 	       uint32_t *items,
 	       struct stack *st);
 
+/** Return the address of the stack contents
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    pointer to the stack contents
+ */
+uint32_t *stack_items(struct stack *st);
+
 /** Return the size of the stack
  *
  *  [in] st
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index cf9f36a..ba54df6 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -45,6 +45,100 @@ static void tf_seeds_init(struct tf_session *session)
 	}
 }
 
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   number of entries in the pool
+ *
+ * Return:
+ *  0       - Success, pool created
+ *  -ENOMEM, -EINVAL
+ *          - Failure, pool not created (allocation or stack init error)
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(-ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ *
+ * Return:
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	tfp_free(ptr);
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -54,6 +148,7 @@ tf_open_session(struct tf                    *tfp,
 	struct tfp_calloc_parms alloc_parms;
 	unsigned int domain, bus, slot, device;
 	uint8_t fw_session_id;
+	int dir;
 
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
@@ -110,7 +205,7 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup;
 	}
 
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
+	tfp->session = alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -175,6 +270,16 @@ tf_open_session(struct tf                    *tfp,
 	/* Setup hash seeds */
 	tf_seeds_init(session);
 
+	/* Initialize EM pool */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EM Pool initialization failed\n");
+			goto cleanup_close;
+		}
+	}
+
 	session->ref_count++;
 
 	/* Return session ID */
@@ -239,6 +344,7 @@ tf_close_session(struct tf *tfp)
 	int rc_close = 0;
 	struct tf_session *tfs;
 	union tf_session_id session_id;
+	int dir;
 
 	if (tfp == NULL || tfp->session == NULL)
 		return -EINVAL;
@@ -268,6 +374,10 @@ tf_close_session(struct tf *tfp)
 
 	/* Final cleanup as we're last user of the session */
 	if (tfs->ref_count == 0) {
+		/* Free EM pool */
+		for (dir = 0; dir < TF_DIR_MAX; dir++)
+			tf_free_em_pool(tfs, dir);
+
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
 		tfp->session = NULL;
@@ -301,16 +411,25 @@ int tf_insert_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
 	/* Process the EM entry per Table Scope type */
-	return tf_insert_eem_entry((struct tf_session *)tfp->session->core_data,
-				   tbl_scope_cb,
-				   parms);
+	if (parms->mem == TF_MEM_EXTERNAL) {
+		/* External EEM */
+		return tf_insert_eem_entry
+			((struct tf_session *)(tfp->session->core_data),
+			tbl_scope_cb,
+			parms);
+	} else if (parms->mem == TF_MEM_INTERNAL) {
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp,	parms);
+	}
+
+	return -EINVAL;
 }
 
 /** Delete EM hash entry API
@@ -327,13 +446,16 @@ int tf_delete_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
-	return tf_delete_eem_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tfp, parms);
+	else
+		return tf_delete_em_internal_entry(tfp, parms);
 }
 
 /** allocate identifier resource
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 1eedd80..81ff760 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -44,44 +44,7 @@ enum tf_mem {
 };
 
 /**
- * The size of the external action record (Wh+/Brd2)
- *
- * Currently set to 512.
- *
- * AR (16B) + encap (256B) + stats_ptrs (8) + resvd (8)
- * + stats (16) = 304 aligned on a 16B boundary
- *
- * Theoretically, the size should be smaller. ~304B
- */
-#define TF_ACTION_RECORD_SZ 512
-
-/**
- * External pool size
- *
- * Defines a single pool of external action records of
- * fixed size.  Currently, this is an index.
- */
-#define TF_EXT_POOL_ENTRY_SZ_BYTES 1
-
-/**
- *  External pool entry count
- *
- *  Defines the number of entries in the external action pool
- */
-#define TF_EXT_POOL_ENTRY_CNT (1 * 1024)
-
-/**
- * Number of external pools
- */
-#define TF_EXT_POOL_CNT_MAX 1
-
-/**
- * External pool Id
- */
-#define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
-#define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
-
-/** EEM record AR helper
+ * EEM record AR helper
  *
  * Helper to handle the Action Record Pointer in the EEM Record Entry.
  *
@@ -109,7 +72,8 @@ enum tf_mem {
  */
 
 
-/** Session Version defines
+/**
+ * Session Version defines
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -119,7 +83,8 @@ enum tf_mem {
 #define TF_SESSION_VER_MINOR  0   /**< Minor Version */
 #define TF_SESSION_VER_UPDATE 0   /**< Update Version */
 
-/** Session Name
+/**
+ * Session Name
  *
  * Name of the TruFlow control channel interface.  Expects
  * format to be RTE Name specific, i.e. rte_eth_dev_get_name_by_port()
@@ -128,7 +93,8 @@ enum tf_mem {
 
 #define TF_FW_SESSION_ID_INVALID  0xFF  /**< Invalid FW Session ID define */
 
-/** Session Identifier
+/**
+ * Session Identifier
  *
  * Unique session identifier which includes PCIe bus info to
  * distinguish the PF and session info to identify the associated
@@ -146,7 +112,8 @@ union tf_session_id {
 	} internal;
 };
 
-/** Session Version
+/**
+ * Session Version
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -160,8 +127,8 @@ struct tf_session_version {
 	uint8_t update;
 };
 
-/** Session supported device types
- *
+/**
+ * Session supported device types
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
@@ -171,6 +138,147 @@ enum tf_device_type {
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
+/** Identifier resource types
+ */
+enum tf_identifier_type {
+	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	 *  and can be used in WC TCAM or EM keys to virtualize further
+	 *  lookups.
+	 */
+	TF_IDENT_TYPE_L2_CTXT,
+	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	 *  to enable virtualization of the profile TCAM.
+	 */
+	TF_IDENT_TYPE_PROF_FUNC,
+	/** The WC profile ID is included in the WC lookup key
+	 *  to enable virtualization of the WC TCAM hardware.
+	 */
+	TF_IDENT_TYPE_WC_PROF,
+	/** The EM profile ID is included in the EM lookup key
+	 *  to enable virtualization of the EM hardware. (not required for SR2
+	 *  as it has table scope)
+	 */
+	TF_IDENT_TYPE_EM_PROF,
+	/** The L2 func is included in the ILT result and from recycling to
+	 *  enable virtualization of further lookups.
+	 */
+	TF_IDENT_TYPE_L2_FUNC,
+	TF_IDENT_TYPE_MAX
+};
+
+/**
+ * Enumeration of TruFlow table types. A table type is used to identify a
+ * resource object.
+ *
+ * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
+ * the only table type that is connected with a table scope.
+ */
+enum tf_tbl_type {
+	/* Internal */
+
+	/** Wh+/SR Action Record */
+	TF_TBL_TYPE_FULL_ACT_RECORD,
+	/** Wh+/SR/Th Multicast Groups */
+	TF_TBL_TYPE_MCAST_GROUPS,
+	/** Wh+/SR Action Encap 8 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_8B,
+	/** Wh+/SR Action Encap 16 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_16B,
+	/** Action Encap 32 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_32B,
+	/** Wh+/SR Action Encap 64 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_64B,
+	/** Action Source Properties SMAC */
+	TF_TBL_TYPE_ACT_SP_SMAC,
+	/** Wh+/SR Action Source Properties SMAC IPv4 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	/** Action Source Properties SMAC IPv6 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
+	/** Wh+/SR Action Statistics 64 Bits */
+	TF_TBL_TYPE_ACT_STATS_64,
+	/** Wh+/SR Action Modify L4 Src Port */
+	TF_TBL_TYPE_ACT_MODIFY_SPORT,
+	/** Wh+/SR Action Modify L4 Dest Port */
+	TF_TBL_TYPE_ACT_MODIFY_DPORT,
+	/** Wh+/SR Action Modify IPv4 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
+	/** Wh+/SR Action Modify IPv4 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
+	/** Action Modify IPv6 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
+	/** Action Modify IPv6 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
+	/** Meter Profiles */
+	TF_TBL_TYPE_METER_PROF,
+	/** Meter Instance */
+	TF_TBL_TYPE_METER_INST,
+	/** Mirror Config */
+	TF_TBL_TYPE_MIRROR_CONFIG,
+	/** UPAR */
+	TF_TBL_TYPE_UPAR,
+	/** SR2 Epoch 0 table */
+	TF_TBL_TYPE_EPOCH0,
+	/** SR2 Epoch 1 table  */
+	TF_TBL_TYPE_EPOCH1,
+	/** SR2 Metadata  */
+	TF_TBL_TYPE_METADATA,
+	/** SR2 CT State  */
+	TF_TBL_TYPE_CT_STATE,
+	/** SR2 Range Profile  */
+	TF_TBL_TYPE_RANGE_PROF,
+	/** SR2 Range Entry  */
+	TF_TBL_TYPE_RANGE_ENTRY,
+	/** SR2 LAG Entry  */
+	TF_TBL_TYPE_LAG,
+	/** SR2 VNIC/SVIF Table */
+	TF_TBL_TYPE_VNIC_SVIF,
+	/** Th/SR2 EM Flexible Key builder */
+	TF_TBL_TYPE_EM_FKB,
+	/** Th/SR2 WC Flexible Key builder */
+	TF_TBL_TYPE_WC_FKB,
+
+	/* External */
+
+	/** External table type - initially 1 poolsize entries.
+	 * All External table types are associated with a table
+	 * scope. Internal types are not.
+	 */
+	TF_TBL_TYPE_EXT,
+	TF_TBL_TYPE_MAX
+};
+
+/**
+ * TCAM table type
+ */
+enum tf_tcam_tbl_type {
+	/** L2 Context TCAM */
+	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	/** Profile TCAM */
+	TF_TCAM_TBL_TYPE_PROF_TCAM,
+	/** Wildcard TCAM */
+	TF_TCAM_TBL_TYPE_WC_TCAM,
+	/** Source Properties TCAM */
+	TF_TCAM_TBL_TYPE_SP_TCAM,
+	/** Connection Tracking Rule TCAM */
+	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
+	/** Virtual Edge Bridge TCAM */
+	TF_TCAM_TBL_TYPE_VEB_TCAM,
+	TF_TCAM_TBL_TYPE_MAX
+};
+
+/**
+ * EM Resources
+ * These defines are provisioned during
+ * tf_open_session()
+ */
+enum tf_em_tbl_type {
+	/** The number of internal EM records for the session */
+	TF_EM_TBL_TYPE_EM_RECORD,
+	/** The number of table scopes requested */
+	TF_EM_TBL_TYPE_TBL_SCOPE,
+	TF_EM_TBL_TYPE_MAX
+};
+
 /** TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
@@ -309,6 +417,30 @@ struct tf_open_session_parms {
 	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
 	 */
 	enum tf_device_type device_type;
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
 };
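
A hedged sketch of filling the new per-session resource requests above; the
counts are illustrative only (real values come from the ULP templates), the
other mandatory tf_open_session_parms fields are set exactly as before, and
the field spelling identifer_cnt follows the header:

	#include "tf_core.h"

	static void fill_session_resources(struct tf_open_session_parms *parms)
	{
		/* Identifier resources, indexed by enum tf_identifier_type. */
		parms->identifer_cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
		parms->identifer_cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;

		/* Index tables, indexed by enum tf_tbl_type. */
		parms->tbl_cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 1024;
		parms->tbl_cnt[TF_TBL_TYPE_ACT_STATS_64] = 1024;

		/* TCAM tables, indexed by enum tf_tcam_tbl_type. */
		parms->tcam_tbl_cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 16;
		parms->tcam_tbl_cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 64;

		/* Internal EM records and table scopes for this session. */
		parms->em_tbl_cnt[TF_EM_TBL_TYPE_EM_RECORD] = 8 * 1024;
		parms->em_tbl_cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
	}
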
 
 /**
@@ -417,31 +549,6 @@ int tf_close_session(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
-	 *  and can be used in WC TCAM or EM keys to virtualize further
-	 *  lookups.
-	 */
-	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
-	 *  to enable virtualization of the profile TCAM.
-	 */
-	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
-	 *  to enable virtualization of the WC TCAM hardware.
-	 */
-	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
-	 *  to enable virtualization of the EM hardware. (not required for Brd4
-	 *  as it has table scope)
-	 */
-	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
-	 *  enable virtualization of further lookups.
-	 */
-	TF_IDENT_TYPE_L2_FUNC
-};
-
 /** tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
@@ -631,19 +738,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
 
-/**
- * TCAM table type
- */
-enum tf_tcam_tbl_type {
-	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	TF_TCAM_TBL_TYPE_PROF_TCAM,
-	TF_TCAM_TBL_TYPE_WC_TCAM,
-	TF_TCAM_TBL_TYPE_SP_TCAM,
-	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
-	TF_TCAM_TBL_TYPE_VEB_TCAM,
-	TF_TCAM_TBL_TYPE_MAX
-
-};
 
 /**
  * @page tcam TCAM Access
@@ -813,7 +907,8 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** get TCAM entry
+/*
+ * get TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -824,7 +919,8 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/** tf_free_tcam_entry parameter definition
+/*
+ * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
 	/**
@@ -845,8 +941,7 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free TCAM entry
- *
+/*
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -873,84 +968,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  */
 
 /**
- * Enumeration of TruFlow table types. A table type is used to identify a
- * resource object.
- *
- * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
- * the only table type that is connected with a table scope.
- */
-enum tf_tbl_type {
-	/** Wh+/Brd2 Action Record */
-	TF_TBL_TYPE_FULL_ACT_RECORD,
-	/** Multicast Groups */
-	TF_TBL_TYPE_MCAST_GROUPS,
-	/** Action Encap 8 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_8B,
-	/** Action Encap 16 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_16B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_32B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_64B,
-	/** Action Source Properties SMAC */
-	TF_TBL_TYPE_ACT_SP_SMAC,
-	/** Action Source Properties SMAC IPv4 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
-	/** Action Source Properties SMAC IPv6 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
-	/** Action Statistics 64 Bits */
-	TF_TBL_TYPE_ACT_STATS_64,
-	/** Action Modify L4 Src Port */
-	TF_TBL_TYPE_ACT_MODIFY_SPORT,
-	/** Action Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_DPORT,
-	/** Action Modify IPv4 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
-	/** Action _Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
-
-	/* HW */
-
-	/** Meter Profiles */
-	TF_TBL_TYPE_METER_PROF,
-	/** Meter Instance */
-	TF_TBL_TYPE_METER_INST,
-	/** Mirror Config */
-	TF_TBL_TYPE_MIRROR_CONFIG,
-	/** UPAR */
-	TF_TBL_TYPE_UPAR,
-	/** Brd4 Epoch 0 table */
-	TF_TBL_TYPE_EPOCH0,
-	/** Brd4 Epoch 1 table  */
-	TF_TBL_TYPE_EPOCH1,
-	/** Brd4 Metadata  */
-	TF_TBL_TYPE_METADATA,
-	/** Brd4 CT State  */
-	TF_TBL_TYPE_CT_STATE,
-	/** Brd4 Range Profile  */
-	TF_TBL_TYPE_RANGE_PROF,
-	/** Brd4 Range Entry  */
-	TF_TBL_TYPE_RANGE_ENTRY,
-	/** Brd4 LAG Entry  */
-	TF_TBL_TYPE_LAG,
-	/** Brd4 only VNIC/SVIF Table */
-	TF_TBL_TYPE_VNIC_SVIF,
-
-	/* External */
-
-	/** External table type - initially 1 poolsize entries.
-	 * All External table types are associated with a table
-	 * scope. Internal types are not.
-	 */
-	TF_TBL_TYPE_EXT,
-	TF_TBL_TYPE_MAX
-};
-
-/** tf_alloc_tbl_entry parameter definition
+ * tf_alloc_tbl_entry parameter definition
  */
 struct tf_alloc_tbl_entry_parms {
 	/**
@@ -993,7 +1011,8 @@ struct tf_alloc_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** allocate index table entries
+/**
+ * allocate index table entries
  *
  * Internal types:
  *
@@ -1023,7 +1042,8 @@ struct tf_alloc_tbl_entry_parms {
 int tf_alloc_tbl_entry(struct tf *tfp,
 		       struct tf_alloc_tbl_entry_parms *parms);
 
-/** tf_free_tbl_entry parameter definition
+/**
+ * tf_free_tbl_entry parameter definition
  */
 struct tf_free_tbl_entry_parms {
 	/**
@@ -1049,7 +1069,8 @@ struct tf_free_tbl_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free index table entry
+/**
+ * free index table entry
  *
  * Used to free a previously allocated table entry.
  *
@@ -1075,7 +1096,8 @@ struct tf_free_tbl_entry_parms {
 int tf_free_tbl_entry(struct tf *tfp,
 		      struct tf_free_tbl_entry_parms *parms);
 
-/** tf_set_tbl_entry parameter definition
+/**
+ * tf_set_tbl_entry parameter definition
  */
 struct tf_set_tbl_entry_parms {
 	/**
@@ -1104,7 +1126,8 @@ struct tf_set_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** set index table entry
+/**
+ * set index table entry
  *
  * Used to insert an application programmed index table entry into a
  * previous allocated table location.  A shadow copy of the table
@@ -1115,7 +1138,8 @@ struct tf_set_tbl_entry_parms {
 int tf_set_tbl_entry(struct tf *tfp,
 		     struct tf_set_tbl_entry_parms *parms);
 
-/** tf_get_tbl_entry parameter definition
+/**
+ * tf_get_tbl_entry parameter definition
  */
 struct tf_get_tbl_entry_parms {
 	/**
@@ -1140,7 +1164,8 @@ struct tf_get_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** get index table entry
+/**
+ * get index table entry
  *
  * Used to retrieve a previous set index table entry.
  *
@@ -1163,7 +1188,8 @@ int tf_get_tbl_entry(struct tf *tfp,
  * @ref tf_search_em_entry
  *
  */
-/** tf_insert_em_entry parameter definition
+/**
+ * tf_insert_em_entry parameter definition
  */
 struct tf_insert_em_entry_parms {
 	/**
@@ -1240,6 +1266,10 @@ struct tf_delete_em_entry_parms {
 	 */
 	uint16_t *epochs;
 	/**
+	 * [out] The index of the entry
+	 */
+	uint16_t index;
+	/**
 	 * [in] structure containing flow delete handle information
 	 */
 	uint64_t flow_handle;
@@ -1291,7 +1321,8 @@ struct tf_search_em_entry_parms {
 	uint64_t flow_handle;
 };
 
-/** insert em hash entry in internal table memory
+/**
+ * insert em hash entry in internal table memory
  *
  * Internal:
  *
@@ -1328,7 +1359,8 @@ struct tf_search_em_entry_parms {
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms);
 
-/** delete em hash entry table memory
+/**
+ * delete em hash entry table memory
  *
  * Internal:
  *
@@ -1353,7 +1385,8 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms);
 
-/** search em hash entry table memory
+/**
+ * search em hash entry table memory
  *
  * Internal:
 
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index bd8e2ba..fd1797e 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -287,7 +287,7 @@ static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
 }
 
 static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t	       *in_key,
+				    uint8_t *in_key,
 				    struct tf_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
@@ -308,7 +308,7 @@ static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
  * EEXIST  - Key does exist in table at "index" in table "table".
  * TF_ERR     - Something went horribly wrong.
  */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 					  enum tf_dir dir,
 					  struct tf_eem_64b_entry *entry,
 					  uint32_t key0_hash,
@@ -368,8 +368,8 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session	   *session,
-			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
@@ -457,6 +457,96 @@ int tf_insert_eem_entry(struct tf_session	   *session,
 	return -EINVAL;
 }
 
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success
+ */
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms)
+{
+	int       rc;
+	uint32_t  gfid;
+	uint16_t  rptr_index = 0;
+	uint8_t   rptr_entry = 0;
+	uint8_t   num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG
+		   (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc != 0)
+		return -1;
+
+	PMD_DRV_LOG
+		   (ERR,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0
+ * -EINVAL
+ */
+int tf_delete_em_internal_entry(struct tf *tfp,
+				struct tf_delete_em_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
+
+
 /** delete EEM hash entry API
  *
  * returns:
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 8a3584f..c1805df 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -12,6 +12,20 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+/*
+ * Used to build GFID:
+ *
+ *   15           2  0
+ *  +--------------+--+
+ *  |   Index      |E |
+ *  +--------------+--+
+ *
+ * E = Entry (bucket inndex)
+ */
+#define TF_EM_INTERNAL_INDEX_SHIFT 2
+#define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
+#define TF_EM_INTERNAL_ENTRY_MASK  0x3
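
Worked example of the layout above: the record index occupies bits 15:2 and
the bucket entry bits 1:0, which is what tf_insert_em_internal_entry() later
packs before calling TF_SET_GFID(). A small self-contained illustration (the
helper name is ours, not the PMD's):

	#include <stdint.h>
	#include "tf_em.h" /* TF_EM_INTERNAL_* definitions above */

	/* Pack an internal-EM (index, bucket entry) pair per the layout. */
	static inline uint16_t em_internal_pack(uint16_t rptr_index,
						uint8_t rptr_entry)
	{
		return (uint16_t)((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
				  (rptr_entry & TF_EM_INTERNAL_ENTRY_MASK));
	}

	/* Example: index 5, entry 3 -> (5 << 2) | 3 = 0x17. */
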
+
 /** EEM Entry header
  *
  */
@@ -53,6 +67,17 @@ struct tf_eem_64b_entry {
 	struct tf_eem_entry_hdr hdr;
 };
 
+/** EM Entry
+ *  Each EM entry is 512 bits (64 bytes) but ordered differently from
+ *  EEM.
+ */
+struct tf_em_64b_entry {
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+};
+
 /**
  * Allocates EEM Table scope
  *
@@ -106,9 +131,15 @@ int tf_insert_eem_entry(struct tf_session *session,
 			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms);
 
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms);
+
 int tf_delete_eem_entry(struct tf *tfp,
 			struct tf_delete_em_entry_parms *parms);
 
+int tf_delete_em_internal_entry(struct tf                       *tfp,
+				struct tf_delete_em_entry_parms *parms);
+
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
index 417a99c..399f7d1 100644
--- a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -90,6 +90,22 @@ do {									\
 		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
 } while (0)
 
+#define TF_GET_NUM_KEY_ENTRIES_FROM_FLOW_HANDLE(flow_handle,		\
+					  num_key_entries)		\
+do {									\
+	num_key_entries =						\
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		     TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT);		\
+} while (0)
+
+#define TF_GET_ENTRY_NUM_FROM_FLOW_HANDLE(flow_handle,		\
+					  entry_num)		\
+do {									\
+	entry_num =						\
+		(((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT);		\
+} while (0)
+
 /*
  * 32 bit Flow ID handlers
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index beecafd..554a849 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -16,6 +16,7 @@
 #include "tf_msg.h"
 #include "hsi_struct_def_dpdk.h"
 #include "hwrm_tf.h"
+#include "tf_em.h"
 
 /**
  * Endian converts min and max values from the HW response to the query
@@ -1014,14 +1015,93 @@ int tf_msg_em_cfg(struct tf *tfp,
 }
 
 /**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_insert_input req = { 0 };
+	struct tf_em_internal_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
+		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_INSERT,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends EM internal delete request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_delete_input req = { 0 };
+	struct tf_em_internal_delete_output resp = { 0 };
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.tf_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_DELETE,
+		 req,
+		resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
  * Sends EM operation request to Firmware
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op)
+		 int dir,
+		 uint16_t op)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_input req = {0};
 	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
 	struct tfp_send_msg_parms parms = { 0 };
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 030d188..89f7370 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -122,6 +122,19 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   struct tf_rm_entry *sram_entry);
 
 /**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				    struct tf_insert_em_entry_parms *params,
+				    uint16_t *rptr_index,
+				    uint8_t *rptr_entry,
+				    uint8_t *num_of_entries);
+/**
+ * Sends EM internal delete request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms);
+/**
  * Sends EM mem register request to Firmware
  */
 int tf_msg_em_mem_rgtr(struct tf *tfp,
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 50ef2d5..c9f4f8f 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -13,12 +13,25 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "stack.h"
 
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
+/**
+ * Number of EM entries. Static for now; will be removed when a
+ * parameter is added at a later date. At this stage we are using
+ * fixed-size entries, so that each stack entry represents 4 RT
+ * (f/n)blocks. We take the total block allocation for truflow and
+ * divide that by 4.
+ */
+#define TF_SESSION_TOTAL_FN_BLOCKS (1024 * 8) /* 8K blocks */
+#define TF_SESSION_EM_ENTRY_SIZE 4 /* 4 blocks per entry */
+#define TF_SESSION_EM_POOL_SIZE \
+	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
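
With these values, each direction's pool holds (1024 * 8) / 4 = 2048 stack
entries, and every index pushed by tf_create_em_pool() stands for one
4-block internal EM record slot.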
+
 /** Session
  *
  * Shared memory containing private TruFlow session information.
@@ -289,6 +302,11 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+	/**
+	 * EM Pools
+	 */
+	struct stack em_pool[TF_DIR_MAX];
 };
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d900c9c..b9c71d4 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -156,7 +156,7 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
 		if (tfp_calloc(&parms) != 0)
 			goto cleanup;
 
-		tp->pg_pa_tbl[i] = (uint64_t)(uintptr_t)parms.mem_pa;
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
 		tp->pg_va_tbl[i] = parms.mem_va;
 
 		memset(tp->pg_va_tbl[i], 0, pg_size);
@@ -727,13 +727,13 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size =
-		parms->rx_max_action_entry_sz_in_bits / 8;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries
+		= parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size
+		= parms->rx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries =
-		0;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries
+		= 0;
 
 	/* Tx */
 	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
@@ -746,13 +746,13 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size =
-		parms->tx_max_action_entry_sz_in_bits / 8;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries
+		= parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size
+		= parms->tx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries =
-		0;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries
+		= 0;
 
 	return 0;
 }
@@ -792,7 +792,8 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	index = parms->idx;
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1179,7 +1180,8 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1330,7 +1332,8 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1801,3 +1804,91 @@ tf_free_tbl_entry(struct tf *tfp,
 			    rc);
 	return rc;
 }
+
+
+static void
+tf_dump_link_page_table(struct tf_em_page_tbl *tp,
+			struct tf_em_page_tbl *tp_next)
+{
+	uint64_t *pg_va;
+	uint32_t i;
+	uint32_t j;
+	uint32_t k = 0;
+
+	printf("pg_count:%d pg_size:0x%x\n",
+	       tp->pg_count,
+	       tp->pg_size);
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+		printf("\t%p\n", (void *)pg_va);
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
+			if (((pg_va[j] & 0x7) ==
+			     tfp_cpu_to_le_64(PTU_PTE_LAST |
+					      PTU_PTE_VALID)))
+				return;
+
+			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
+				printf("** Invalid entry **\n");
+				return;
+			}
+
+			if (++k >= tp_next->pg_count) {
+				printf("** Shouldn't get here **\n");
+				return;
+			}
+		}
+	}
+}
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
+{
+	struct tf_session      *session;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_page_tbl *tp;
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_table *tbl;
+	int i;
+	int j;
+	int dir;
+
+	printf("called %s\n", __func__);
+
+	/* find session struct */
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* find control block for table scope */
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		PMD_DRV_LOG(ERR, "No table scope\n");
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
+
+		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
+			printf
+	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
+			       j,
+			       tbl->type,
+			       tbl->num_entries,
+			       tbl->entry_size,
+			       tbl->num_lvl);
+			if (tbl->pg_tbl[0].pg_va_tbl &&
+			    tbl->pg_tbl[0].pg_pa_tbl)
+				printf("%p %p\n",
+			       tbl->pg_tbl[0].pg_va_tbl[0],
+			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
+			for (i = 0; i < tbl->num_lvl - 1; i++) {
+				printf("Level:%d\n", i);
+				tp = &tbl->pg_tbl[i];
+				tp_next = &tbl->pg_tbl[i + 1];
+				tf_dump_link_page_table(tp, tp_next);
+			}
+			printf("\n");
+		}
+	}
+}
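
tf_dump_dma() above is a debug-only helper and is deliberately not exported
through a header, so a caller has to repeat the prototype. A possible
invocation is sketched below; the table scope id would come from an earlier
table scope allocation (tf_alloc_tbl_scope()), and the wrapper name is
illustrative only.

/* Debug sketch: print the per-direction EEM page table chains. */
#include "tf_core.h"

void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);

static void debug_dump_scope(struct tf *tfp, uint32_t tbl_scope_id)
{
	tf_dump_dma(tfp, tbl_scope_id);
}
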
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index bdc6288..6cda487 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -76,38 +76,51 @@ struct tf_tbl_scope_cb {
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Round-down other page sizes to the lower hardware page
+ * size supported.
  */
-#define BNXT_PAGE_SHIFT 22 /** 2M */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define PAGE_SIZE TF_EM_PAGE_SIZE_2M
 
-#if (BNXT_PAGE_SHIFT < 12)				/** < 4K >> 4K */
-#define TF_EM_PAGE_SHIFT 12
+#if (PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (BNXT_PAGE_SHIFT <= 13)			/** 4K, 8K */
-#define TF_EM_PAGE_SHIFT 13
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (BNXT_PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
-#define TF_EM_PAGE_SHIFT 15
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
-#elif (BNXT_PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
-#define TF_EM_PAGE_SHIFT 16
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (BNXT_PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
-#define TF_EM_PAGE_SHIFT 18
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (BNXT_PAGE_SHIFT <= 21)			/** 1M */
-#define TF_EM_PAGE_SHIFT 20
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (BNXT_PAGE_SHIFT <= 22)			/** 2M, 4M */
-#define TF_EM_PAGE_SHIFT 21
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (BNXT_PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
-#define TF_EM_PAGE_SHIFT 22
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#else						/** >= 1G >> 1G */
-#define TF_EM_PAGE_SHIFT	30
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
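
With this scheme the build-time selection reduces to a single shift. The
standalone sketch below only mirrors the macro arithmetic above (values copied
from tf_tbl.h, nothing driver-specific); note the generic PAGE_SIZE name is
reused here exactly as in the patch.

/* Sketch: PAGE_SIZE picks a shift, TF_EM_PAGE_SIZE is 1 << that shift. */
#include <stdio.h>

#define TF_EM_PAGE_SIZE_2M 21
#define PAGE_SIZE TF_EM_PAGE_SIZE_2M            /* default in tf_tbl.h */

#if (PAGE_SIZE == TF_EM_PAGE_SIZE_2M)           /* 2M */
#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
#endif

#define TF_EM_PAGE_SIZE (1 << TF_EM_PAGE_SHIFT)

int main(void)
{
	printf("EM page size: %d bytes\n", TF_EM_PAGE_SIZE); /* 2097152 */
	return 0;
}
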
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8d5e94e..fe49b63 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -3,14 +3,23 @@
  * All rights reserved.
  */
 
-/* This header file defines the Portability structures and APIs for
+/*
+ * This header file defines the Portability structures and APIs for
  * TruFlow.
  */
 
 #ifndef _TFP_H_
 #define _TFP_H_
 
+#include <rte_config.h>
 #include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_byteorder.h>
+
+/**
+ * DPDK/Driver specific log level for the BNXT Eth driver.
+ */
+extern int bnxt_logtype_driver;
 
 /** Spinlock
  */
@@ -18,13 +27,21 @@ struct tfp_spinlock_parms {
 	rte_spinlock_t slock;
 };
 
+#define TFP_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define TFP_DRV_LOG(level, fmt, args...) \
+	TFP_DRV_LOG_RAW(level, fmt, ## args)
+
 /**
  * @file
  *
  * TrueFlow Portability API Header File
  */
 
-/** send message parameter definition
+/**
+ * send message parameter definition
  */
 struct tfp_send_msg_parms {
 	/**
@@ -62,7 +79,8 @@ struct tfp_send_msg_parms {
 	uint32_t *resp_data;
 };
 
-/** calloc parameter definition
+/**
+ * calloc parameter definition
  */
 struct tfp_calloc_parms {
 	/**
@@ -96,43 +114,15 @@ struct tfp_calloc_parms {
  * @ref tfp_send_msg_tunneled
  *
  * @ref tfp_calloc
- * @ref tfp_free
  * @ref tfp_memcpy
+ * @ref tfp_free
  *
  * @ref tfp_spinlock_init
  * @ref tfp_spinlock_lock
  * @ref tfp_spinlock_unlock
  *
- * @ref tfp_cpu_to_le_16
- * @ref tfp_le_to_cpu_16
- * @ref tfp_cpu_to_le_32
- * @ref tfp_le_to_cpu_32
- * @ref tfp_cpu_to_le_64
- * @ref tfp_le_to_cpu_64
- * @ref tfp_cpu_to_be_16
- * @ref tfp_be_to_cpu_16
- * @ref tfp_cpu_to_be_32
- * @ref tfp_be_to_cpu_32
- * @ref tfp_cpu_to_be_64
- * @ref tfp_be_to_cpu_64
  */
 
-#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
-#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
-#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
-#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
-#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
-#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
-#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
-#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
-#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
-#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
-#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
-#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
-#define tfp_bswap_16(val) rte_bswap16(val)
-#define tfp_bswap_32(val) rte_bswap32(val)
-#define tfp_bswap_64(val) rte_bswap64(val)
-
 /**
  * Provides communication capability from the TrueFlow API layer to
  * the TrueFlow firmware. The portability layer internally provides
@@ -162,10 +152,25 @@ int tfp_send_msg_direct(struct tf *tfp,
  *   -1             - Global error like not supported
  *   -EINVAL        - Parameter Error
  */
-int tfp_send_msg_tunneled(struct tf                 *tfp,
+int tfp_send_msg_tunneled(struct tf *tfp,
 			  struct tfp_send_msg_parms *parms);
 
 /**
+ * Sends OEM command message to Chimp
+ *
+ * [in] session, pointer to session handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
+/**
  * Allocates zero'ed memory from the heap.
  *
  * NOTE: Also performs virt2phy address conversion by default thus is
@@ -179,10 +184,58 @@ int tfp_send_msg_tunneled(struct tf                 *tfp,
  *   -EINVAL        - Parameter error
  */
 int tfp_calloc(struct tfp_calloc_parms *parms);
-
-void tfp_free(void *addr);
 void tfp_memcpy(void *dest, void *src, size_t n);
+void tfp_free(void *addr);
+
 void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
+
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
+
+/*
+ * @ref tfp_cpu_to_le_16
+ * @ref tfp_le_to_cpu_16
+ * @ref tfp_cpu_to_le_32
+ * @ref tfp_le_to_cpu_32
+ * @ref tfp_cpu_to_le_64
+ * @ref tfp_le_to_cpu_64
+ * @ref tfp_cpu_to_be_16
+ * @ref tfp_be_to_cpu_16
+ * @ref tfp_cpu_to_be_32
+ * @ref tfp_be_to_cpu_32
+ * @ref tfp_cpu_to_be_64
+ * @ref tfp_be_to_cpu_64
+ */
+
+#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
+#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
+#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
+#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
+#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
+#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
+#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
+#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
+#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
+#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
+#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
+#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
+#define tfp_bswap_16(val) rte_bswap16(val)
+#define tfp_bswap_32(val) rte_bswap32(val)
+#define tfp_bswap_64(val) rte_bswap64(val)
+
 #endif /* _TFP_H_ */
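
A brief usage sketch for the portability additions above; everything referenced
here (TFP_DRV_LOG(), tfp_get_fid(), the endian wrappers) is declared in tfp.h
as changed by this patch, and the function name is illustrative only.

/* Sketch: the new tfp.h logging and endian wrappers in use. */
#include "tf_core.h"
#include "tfp.h"

static void log_session_fid(struct tf *tfp, uint32_t fw_session_id)
{
	uint32_t le_id = tfp_cpu_to_le_32(fw_session_id);
	uint16_t fid;

	if (tfp_get_fid(tfp, &fid) != 0) {
		TFP_DRV_LOG(ERR, "FID lookup failed\n");
		return;
	}

	TFP_DRV_LOG(INFO, "fid %u, le fw session id 0x%x\n",
		    fid, tfp_le_to_cpu_32(le_id));
}
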
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 10/50] net/bnxt: modify EM insert and delete to use HWRM direct
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (8 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 09/50] net/bnxt: add support for Exact Match Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 11/50] net/bnxt: add multi device support Somnath Kotur
                   ` (40 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

Modify Exact Match insert and delete to use the HWRM messages directly.
Remove tunneled EM insert and delete message types.

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
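
For readers comparing the two paths: the tunneled variant wrapped the request
in an HWRM_TF envelope via MSG_PREP() and returned parms.tf_resp_code, while
the direct variant fills tfp_send_msg_parms with the real HWRM message type and
relies on the HWRM completion code alone. A condensed sketch of the direct
pattern follows, using only names that appear in the diff below; it assumes the
same includes as tf_msg.c (tfp.h and the generated HWRM headers).

/* Sketch of the direct-message pattern adopted for EM insert/delete. */
static int em_insert_direct(struct tf *tfp,
			    struct hwrm_tf_em_insert_input *req,
			    struct hwrm_tf_em_insert_output *resp)
{
	struct tfp_send_msg_parms parms = { 0 };

	parms.tf_type = HWRM_TF_EM_INSERT;
	parms.req_data = (uint32_t *)req;
	parms.req_size = sizeof(*req);
	parms.resp_data = (uint32_t *)resp;
	parms.resp_size = sizeof(*resp);
	parms.mailbox = TF_KONG_MB;

	return tfp_send_msg_direct(tfp, &parms);
}
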
---
 drivers/net/bnxt/tf_core/hwrm_tf.h | 70 +++-----------------------------------
 drivers/net/bnxt/tf_core/tf_msg.c  | 66 ++++++++++++++++++++---------------
 2 files changed, 43 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 439950e..d342c69 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -23,8 +23,6 @@ typedef enum tf_subtype {
 	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
 	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
 	HWRM_TFT_TBL_SCOPE_CFG = 731,
-	HWRM_TFT_EM_RULE_INSERT = 739,
-	HWRM_TFT_EM_RULE_DELETE = 740,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
@@ -83,10 +81,6 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_em_internal_insert_input;
-struct tf_em_internal_insert_output;
-struct tf_em_internal_delete_input;
-struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -351,7 +345,7 @@ typedef struct tf_session_hw_resc_alloc_output {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -453,7 +447,7 @@ typedef struct tf_session_hw_resc_free_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -555,7 +549,7 @@ typedef struct tf_session_hw_resc_flush_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -922,60 +916,4 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
-/* Input params for EM internal rule insert */
-typedef struct tf_em_internal_insert_input {
-	/* Firmware Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* strength */
-	uint16_t			 strength;
-	/* index to action */
-	uint32_t			 action_ptr;
-	/* index of em record */
-	uint32_t			 em_record_idx;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_insert_input_t, *ptf_em_internal_insert_input_t;
-
-/* Output params for EM internal rule insert */
-typedef struct tf_em_internal_insert_output {
-	/* EM record pointer index */
-	uint16_t			 rptr_index;
-	/* EM record offset 0~3 */
-	uint8_t			  rptr_entry;
-	/* Number of word entries consumed by the key */
-	uint8_t			  num_of_entries;
-} tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_input {
-	/* Session Id */
-	uint32_t			 tf_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* EM internal flow hanndle */
-	uint64_t			 flow_handle;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_output {
-	/* Original stack allocation index */
-	uint16_t			 em_index;
-} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
-
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 554a849..c8f6b88 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1023,32 +1023,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				uint8_t *rptr_entry,
 				uint8_t *num_of_entries)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_insert_input req = { 0 };
-	struct tf_em_internal_insert_output resp = { 0 };
+	int                         rc;
+	struct tfp_send_msg_parms        parms = { 0 };
+	struct hwrm_tf_em_insert_input   req = { 0 };
+	struct hwrm_tf_em_insert_output  resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
 		TF_LKUP_RECORD_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_INSERT,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1056,7 +1062,7 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 	*rptr_index = resp.rptr_index;
 	*num_of_entries = resp.num_of_entries;
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
@@ -1065,32 +1071,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_delete_input req = { 0 };
-	struct tf_em_internal_delete_output resp = { 0 };
+	int                             rc;
+	struct tfp_send_msg_parms       parms = { 0 };
+	struct hwrm_tf_em_delete_input  req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
-	req.tf_session_id =
+	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_DELETE,
-		 req,
-		resp);
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
 	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 11/50] net/bnxt: add multi device support
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (9 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 10/50] net/bnxt: modify EM insert and delete to use HWRM direct Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 12/50] net/bnxt: support bulk table get and mirror Somnath Kotur
                   ` (39 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Michael Wildt <michael.wildt@broadcom.com>

Introduce new modules for Device, Resource Manager, Identifier,
Table Types, and TCAM.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   8 +
 drivers/net/bnxt/tf_core/Makefile             |   9 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 266 +++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c            |   2 +
 drivers/net/bnxt/tf_core/tf_core.h            |  56 ++--
 drivers/net/bnxt/tf_core/tf_device.c          |  50 ++++
 drivers/net/bnxt/tf_core/tf_device.h          | 331 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  24 ++
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  64 +++++
 drivers/net/bnxt/tf_core/tf_identifier.c      |  47 ++++
 drivers/net/bnxt/tf_core/tf_identifier.h      | 140 ++++++++++
 drivers/net/bnxt/tf_core/tf_rm.c              |  54 +---
 drivers/net/bnxt/tf_core/tf_rm.h              |  18 --
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 102 +++++++
 drivers/net/bnxt/tf_core/tf_rm_new.h          | 368 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_session.c         |  31 +++
 drivers/net/bnxt/tf_core/tf_session.h         |  54 ++++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |  63 +++++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      | 240 +++++++++++++++++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |  63 +++++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h     | 239 +++++++++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.c             |   1 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  78 ++++++
 drivers/net/bnxt/tf_core/tf_tbl_type.h        | 309 +++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_tcam.c            |  78 ++++++
 drivers/net/bnxt/tf_core/tf_tcam.h            | 314 ++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_util.c            | 145 ++++++++++
 drivers/net/bnxt/tf_core/tf_util.h            |  41 +++
 28 files changed, 3101 insertions(+), 94 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 5c7859c..a50cb26 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,14 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_device_p4.c',
+	'tf_core/tf_identifier.c',
+	'tf_core/tf_shadow_tbl.c',
+	'tf_core/tf_shadow_tcam.c',
+	'tf_core/tf_tbl_type.c',
+	'tf_core/tf_tcam.c',
+	'tf_core/tf_util.c',
+	'tf_core/tf_rm_new.c',
 
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 379da30..71df75b 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,4 +14,13 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
 
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
new file mode 100644
index 0000000..c0c1e75
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -0,0 +1,266 @@
+/*
+ * Copyright(c) 2001-2020, Broadcom. All rights reserved. The
+ * term Broadcom refers to Broadcom Inc. and/or its subsidiaries.
+ * Proprietary and Confidential Information.
+ *
+ * This source file is the property of Broadcom Corporation, and
+ * may not be copied or distributed in any isomorphic form without
+ * the prior written consent of Broadcom Corporation.
+ *
+ * DO NOT MODIFY!!! This file is automatically generated.
+ */
+
+#ifndef _CFA_RESOURCE_TYPES_H_
+#define _CFA_RESOURCE_TYPES_H_
+
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+/* Wildcard TCAM Profile Id */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+/* Meter Profile */
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+/* Source Properties TCAM */
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+/* L2 Func */
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+/* EPOCH */
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+/* Metadata */
+#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+/* Connection Tracking Rule TCAM */
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+/* Range Profile */
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+/* Range */
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+/* Link Aggregation */
+#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+
+
+#endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index ba54df6..58924b1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -6,6 +6,7 @@
 #include <stdio.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
 #include "tf_em.h"
@@ -229,6 +230,7 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize Session */
 	session->device_type = parms->device_type;
+	session->dev = NULL;
 	tf_rm_init(tfp);
 
 	/* Construct the Session ID */
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 81ff760..becc50c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -371,6 +371,35 @@ struct tf {
 	struct tf_session_info *session;
 };
 
+/**
+ * tf_session_resources parameter definition.
+ */
+struct tf_session_resources {
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+};
 
 /**
  * tf_open_session parameters definition.
@@ -414,33 +443,14 @@ struct tf_open_session_parms {
 	union tf_session_id session_id;
 	/** [in] device type
 	 *
-	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
+	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
 	enum tf_device_type device_type;
-	/** [in] Requested Identifier Resources
-	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
-	 */
-	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
-	/** [in] Requested Index Table resource counts
-	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
-	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
-	/** [in] Requested TCAM Table resource counts
-	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
-	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
-	/** [in] Requested EM resource counts
+	/** [in] resources
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * Resource allocation
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
+	struct tf_session_resources resources;
 };
 
 /**
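
The per-direction resource request arrays replace the old flat counts in
tf_open_session_parms. A sketch of how a caller might populate them is shown
below; the enum members and counts are illustrative only, and note that the
index order differs between the identifier array and the table/TCAM/EM arrays.

/* Sketch: request a few RX resources before opening a session. */
#include "tf_core.h"

static int open_with_resources(struct tf *tfp)
{
	struct tf_open_session_parms oparms = { 0 };

	oparms.device_type = TF_DEVICE_TYPE_WH;
	/* identifer_cnt is indexed [dir][type]... */
	oparms.resources.identifer_cnt[TF_DIR_RX][TF_IDENT_TYPE_PROF_FUNC] = 8;
	/* ...while tbl_cnt/tcam_tbl_cnt/em_tbl_cnt are [type][dir]. */
	oparms.resources.tcam_tbl_cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM][TF_DIR_RX] = 16;

	/* ctrl_chan_name and the other mandatory fields are omitted here. */
	return tf_open_session(tfp, &oparms);
}
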
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
new file mode 100644
index 0000000..3b36831
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_device_p4.h"
+#include "tfp.h"
+#include "bnxt.h"
+
+struct tf;
+
+/**
+ * Device specific bind function
+ */
+static int
+dev_bind_p4(struct tf *tfp __rte_unused,
+	    struct tf_session_resources *resources __rte_unused,
+	    struct tf_dev_info *dev_info)
+{
+	/* Initialize the modules */
+
+	dev_info->ops = &tf_dev_ops_p4;
+	return 0;
+}
+
+int
+dev_bind(struct tf *tfp __rte_unused,
+	 enum tf_device_type type,
+	 struct tf_session_resources *resources,
+	 struct tf_dev_info *dev_info)
+{
+	switch (type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_bind_p4(tfp,
+				   resources,
+				   dev_info);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "Device type not supported\n");
+		return -ENOTSUP;
+	}
+}
+
+int
+dev_unbind(struct tf *tfp __rte_unused,
+	   struct tf_dev_info *dev_handle __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
new file mode 100644
index 0000000..8b63ff1
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -0,0 +1,331 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_H_
+#define _TF_DEVICE_H_
+
+#include "tf_core.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+struct tf;
+struct tf_session;
+
+/**
+ * The Device module provides a general device template. A supported
+ * device type should implement one or more of the listed function
+ * pointers according to its capabilities.
+ *
+ * If a device function pointer is NULL the device capability is not
+ * supported.
+ */
+
+/**
+ * TF device information
+ */
+struct tf_dev_info {
+	const struct tf_dev_ops *ops;
+};
+
+/**
+ * @page device Device
+ *
+ * @ref tf_dev_bind
+ *
+ * @ref tf_dev_unbind
+ */
+
+/**
+ * Device bind handles the initialization of the specified device
+ * type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] type
+ *   Device type
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int dev_bind(struct tf *tfp,
+	     enum tf_device_type type,
+	     struct tf_session_resources *resources,
+	     struct tf_dev_info *dev_handle);
+
+/**
+ * Device release handles cleanup of the device specific information.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dev_handle
+ *   Device handle
+ */
+int dev_unbind(struct tf *tfp,
+	       struct tf_dev_info *dev_handle);
+
+/**
+ * Truflow device specific function hooks structure
+ *
+ * The following device hooks can be defined; unless noted otherwise,
+ * they are optional and can be filled with a null pointer. The
+ * purpose of these hooks is to support Truflow device operations for
+ * different device variants.
+ */
+struct tf_dev_ops {
+	/**
+	 * Allocation of an identifier element.
+	 *
+	 * This API allocates the specified identifier element from a
+	 * device specific identifier DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ident)(struct tf *tfp,
+				  struct tf_ident_alloc_parms *parms);
+
+	/**
+	 * Free of an identifier element.
+	 *
+	 * This API frees a previously allocated identifier element from a
+	 * device specific identifier DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ident)(struct tf *tfp,
+				 struct tf_ident_free_parms *parms);
+
+	/**
+	 * Allocation of a table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
+				     struct tf_tbl_type_alloc_parms *parms);
+
+	/**
+	 * Free of a table type element.
+	 *
+	 * This API frees a previously allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tbl_type)(struct tf *tfp,
+				    struct tf_tbl_type_free_parms *parms);
+
+	/**
+	 * Searches for the specified table type element in a shadow DB.
+	 *
+	 * This API searches for the specified table type element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the table type
+	 * DB and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tbl_type)
+			(struct tf *tfp,
+			struct tf_tbl_type_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_set_parms *parms);
+
+	/**
+	 * Retrieves the specified table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_get_parms *parms);
+
+	/**
+	 * Allocation of a tcam element.
+	 *
+	 * This API allocates the specified tcam element from a device
+	 * specific tcam DB. The allocated element is returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tcam)(struct tf *tfp,
+				 struct tf_tcam_alloc_parms *parms);
+
+	/**
+	 * Free of a tcam element.
+	 *
+	 * This API frees a previously allocated tcam element from a
+	 * device specific tcam DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tcam)(struct tf *tfp,
+				struct tf_tcam_free_parms *parms);
+
+	/**
+	 * Searches for the specified tcam element in a shadow DB.
+	 *
+	 * This API searches for the specified tcam element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the tcam DB
+	 * and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tcam)
+			(struct tf *tfp,
+			struct tf_tcam_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified tcam element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tcam)(struct tf *tfp,
+			       struct tf_tcam_set_parms *parms);
+
+	/**
+	 * Retrieves the specified tcam element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tcam)(struct tf *tfp,
+			       struct tf_tcam_get_parms *parms);
+};
+
+/**
+ * Supported device operation structures
+ */
+extern const struct tf_dev_ops tf_dev_ops_p4;
+
+#endif /* _TF_DEVICE_H_ */
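
The ops table is what the upper layers are expected to dispatch through once
dev_bind() has populated tf_dev_info; per the comment above, a NULL pointer
means the capability is absent on that device. A sketch follows (the wrapper
name is illustrative only).

/* Sketch: dispatch an identifier allocation through the bound device. */
#include <errno.h>

#include "tf_device.h"

static int alloc_ident_via_dev(struct tf *tfp,
			       struct tf_dev_info *dev,
			       struct tf_ident_alloc_parms *aparms)
{
	/* NULL hook means the device does not support the capability */
	if (dev->ops->tf_dev_alloc_ident == NULL)
		return -EOPNOTSUPP;

	return dev->ops->tf_dev_alloc_ident(tfp, aparms);
}
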
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
new file mode 100644
index 0000000..c3c4d1e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_alloc_ident = tf_ident_alloc,
+	.tf_dev_free_ident = tf_ident_free,
+	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
+	.tf_dev_free_tbl_type = tf_tbl_type_free,
+	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
+	.tf_dev_set_tbl_type = tf_tbl_type_set,
+	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tcam = tf_tcam_alloc,
+	.tf_dev_free_tcam = tf_tcam_free,
+	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_set_tcam = tf_tcam_set,
+	.tf_dev_get_tcam = tf_tcam_get,
+};
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
new file mode 100644
index 0000000..84d90e3
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_P4_H_
+#define _TF_DEVICE_P4_H_
+
+#include <cfa_resource_types.h>
+
+#include "tf_core.h"
+#include "tf_rm_new.h"
+
+struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+};
+
+struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
+	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+};
+
+struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+};
+
+#endif /* _TF_DEVICE_P4_H_ */
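
These arrays give the new RM a per-device translation from TF element types to
HCAPI/CFA resource types. The sketch below shows the kind of lookup the RM is
expected to perform; the tf_rm_element_cfg field names (cfg_type, hcapi_type)
follow tf_rm_new.h in this patch and are assumptions here.

/* Sketch: map a TF table type to its P4 HCAPI resource type. */
#include <errno.h>

#include "tf_device_p4.h"

static int tbl_type_to_hcapi_p4(enum tf_tbl_type type, uint16_t *hcapi_type)
{
	if (type >= TF_TBL_TYPE_MAX)
		return -EINVAL;

	/* Entries marked TF_RM_ELEM_CFG_NULL are not HCAPI managed */
	if (tf_tbl_p4[type].cfg_type != TF_RM_ELEM_CFG_HCAPI)
		return -ENOTSUP;

	*hcapi_type = tf_tbl_p4[type].hcapi_type;
	return 0;
}
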
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
new file mode 100644
index 0000000..726d0b4
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_identifier.h"
+
+struct tf;
+
+/**
+ * Identifier DBs.
+ */
+/* static void *ident_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+int
+tf_ident_bind(struct tf *tfp __rte_unused,
+	      struct tf_ident_cfg *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_alloc(struct tf *tfp __rte_unused,
+	       struct tf_ident_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_free(struct tf *tfp __rte_unused,
+	      struct tf_ident_free_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
new file mode 100644
index 0000000..b77c91b
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_IDENTIFIER_H_
+#define _TF_IDENTIFIER_H_
+
+#include "tf_core.h"
+
+/**
+ * The Identifier module provides processing of Identifiers.
+ */
+
+struct tf_ident_cfg {
+	/**
+	 * Number of identifier types in each of the configuration
+	 * arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Identifier allocation parameter definition
+ */
+struct tf_ident_alloc_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [out] Identifier allocated
+	 */
+	uint16_t id;
+};
+
+/**
+ * Identifier free parameter definition
+ */
+struct tf_ident_free_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [in] ID to free
+	 */
+	uint16_t id;
+};
+
+/**
+ * @page ident Identity Management
+ *
+ * @ref tf_ident_bind
+ *
+ * @ref tf_ident_unbind
+ *
+ * @ref tf_ident_alloc
+ *
+ * @ref tf_ident_free
+ */
+
+/**
+ * Initializes the Identifier module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_bind(struct tf *tfp,
+		  struct tf_ident_cfg *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_unbind(struct tf *tfp);
+
+/**
+ * Allocates a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_alloc(struct tf *tfp,
+		   struct tf_ident_alloc_parms *parms);
+
+/**
+ * Frees a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_free(struct tf *tfp,
+		  struct tf_ident_free_parms *parms);
+
+#endif /* _TF_IDENTIFIER_H_ */
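
For context, a minimal caller-side sketch of the identifier API introduced
above (illustrative only, not part of the patch; it assumes the module has
been bound and a session opened elsewhere):

    #include "tf_identifier.h"

    static int example_ident_use(struct tf *tfp)
    {
            struct tf_ident_alloc_parms aparms = { 0 };
            struct tf_ident_free_parms fparms = { 0 };
            int rc;

            /* Allocate an RX profile function identifier */
            aparms.dir = TF_DIR_RX;
            aparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
            rc = tf_ident_alloc(tfp, &aparms);
            if (rc)
                    return rc;

            /* ... program tables using aparms.id ... */

            /* Return the identifier to the pool */
            fparms.dir = TF_DIR_RX;
            fparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
            fparms.id = aparms.id;
            return tf_ident_free(tfp, &fparms);
    }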
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 38b1e71..2264704 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -9,6 +9,7 @@
 
 #include "tf_rm.h"
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_resources.h"
 #include "tf_msg.h"
@@ -77,59 +78,6 @@
 	} while (0)
 
 const char
-*tf_dir_2_str(enum tf_dir dir)
-{
-	switch (dir) {
-	case TF_DIR_RX:
-		return "RX";
-	case TF_DIR_TX:
-		return "TX";
-	default:
-		return "Invalid direction";
-	}
-}
-
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
-{
-	switch (id_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		return "l2_ctxt_remap";
-	case TF_IDENT_TYPE_PROF_FUNC:
-		return "prof_func";
-	case TF_IDENT_TYPE_WC_PROF:
-		return "wc_prof";
-	case TF_IDENT_TYPE_EM_PROF:
-		return "em_prof";
-	case TF_IDENT_TYPE_L2_FUNC:
-		return "l2_func";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
-{
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		return "l2_ctxt_tcam";
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		return "prof_tcam";
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		return "wc_tcam";
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		return "veb_tcam";
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		return "sp_tcam";
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		return "ct_rule_tcam";
-	default:
-		return "Invalid tcam table type";
-	}
-}
-
-const char
 *tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
 {
 	switch (hw_type) {
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index e69d443..1a09f13 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -125,24 +125,6 @@ struct tf_rm_db {
 };
 
 /**
- * Helper function converting direction to text string
- */
-const char
-*tf_dir_2_str(enum tf_dir dir);
-
-/**
- * Helper function converting identifier to text string
- */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
-
-/**
- * Helper function converting tcam type to text string
- */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
-
-/**
  * Helper function used to convert HW HCAPI resource type to a string.
  */
 const char
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
new file mode 100644
index 0000000..51bb9ba
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_rm_new.h"
+
+/**
+ * Resource query single entry. Used when accessing HCAPI RM on the
+ * firmware.
+ */
+struct tf_rm_query_entry {
+	/** Minimum guaranteed number of elements */
+	uint16_t min;
+	/** Maximum non-guaranteed number of elements */
+	uint16_t max;
+};
+
+/**
+ * Generic RM Element data type that an RM DB is built upon.
+ */
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type type;
+
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
+
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
+
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
+
+/**
+ * TF RM DB definition
+ */
+struct tf_rm_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
+
+int
+tf_rm_create_db(struct tf *tfp __rte_unused,
+		struct tf_rm_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free_db(struct tf *tfp __rte_unused,
+	      struct tf_rm_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
new file mode 100644
index 0000000..72dba09
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -0,0 +1,368 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
+
+#include "tf_core.h"
+#include "bitalloc.h"
+
+struct tf;
+
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exists within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
+ *
+ * The RM DB works on its initially allocated sizes, so the
+ * capability of dynamically growing a particular resource is not
+ * supported. If this capability later becomes a requirement, the
+ * MAX pool size of the chip needs to be added to the tf_rm_elem_info
+ * structure and several new APIs would need to be added to allow for
+ * growth of a single TF resource type.
+ */
+
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
+
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements that the DB consists of are to be
+ * configured at time of DB creation. The TF may present types to the
+ * ULP layer that are not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
+	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	TF_RM_TYPE_MAX
+};
+
+/**
+ * RM Element configuration structure, used by the Device to specify
+ * how an individual TF type is handled with regard to the HCAPI RM
+ * of the same type.
+ */
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Allocation information for a single element.
+ */
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_entry entry;
+};
+
+/**
+ * Create RM DB parameters
+ */
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements in the parameter structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure
+	 */
+	struct tf_rm_element_cfg *parms;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Free RM DB parameters
+ */
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Allocate RM parameters for a single element
+ */
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the allocated index in normalized
+	 * form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+};
+
+/**
+ * Free RM parameters for a single element
+ */
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t index;
+};
+
+/**
+ * Is Allocated parameters for a single element
+ */
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [out] Pointer to flag that returns the state of the query
+	 */
+	uint8_t *allocated;
+};
+
+/**
+ * Get Allocation information for a single element
+ */
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * @page rm Resource Manager
+ *
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ */
+
+/**
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if DB already exist
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
+
+/**
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
+
+/**
+ * Allocates a single element for the type specified, within the DB.
+ *
+ * [in] parms
+ *   Pointer to allocate parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
+
+/**
+ * Frees a single element of the type specified within the DB.
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free(struct tf_rm_free_parms *parms);
+
+/**
+ * Performs an allocation verification check on a specified element.
+ *
+ * [in] parms
+ *   Pointer to is allocated parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
+
+/**
+ * Retrieves an elements allocation information from the Resource
+ * Manager (RM) DB.
+ *
+ * [in] parms
+ *   Pointer to get info parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
+
+/**
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
+
+#endif /* TF_RM_NEW_H_ */
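
To make the intended call flow concrete, a minimal sketch of how a device
layer might drive the new RM API (illustrative only, not part of the patch;
the element configuration array is assumed to come from a device table such
as the ones in tf_device_p4.h):

    #include "tf_rm_new.h"

    static int example_rm_use(struct tf *tfp,
                              struct tf_rm_element_cfg *cfg,
                              uint16_t num_elements)
    {
            struct tf_rm_create_db_parms cparms = { 0 };
            struct tf_rm_allocate_parms aparms = { 0 };
            struct tf_rm_free_db_parms fparms = { 0 };
            uint32_t index;
            int rc;

            /* Build an RX DB from the device element configuration */
            cparms.dir = TF_DIR_RX;
            cparms.num_elements = num_elements;
            cparms.parms = cfg;
            rc = tf_rm_create_db(tfp, &cparms);
            if (rc)
                    return rc;

            /* Allocate one element from DB entry 0 */
            aparms.tf_rm_db = cparms.tf_rm_db;
            aparms.db_index = 0;
            aparms.index = &index;
            rc = tf_rm_allocate(&aparms);

            /* Tear the DB down again */
            fparms.dir = TF_DIR_RX;
            fparms.tf_rm_db = cparms.tf_rm_db;
            tf_rm_free_db(tfp, &fparms);
            return rc;
    }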
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
new file mode 100644
index 0000000..c749945
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_session.h"
+#include "tfp.h"
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session **tfs)
+{
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		TFP_DRV_LOG(ERR, "Session not created\n");
+		return -EINVAL;
+	}
+
+	*tfs = (struct tf_session *)(tfp->session->core_data);
+
+	return 0;
+}
+
+int
+tf_session_get_device(struct tf_session *tfs,
+		      struct tf_dev_info **tfd)
+{
+	if (tfs->dev == NULL) {
+		TFP_DRV_LOG(ERR, "Device not created\n");
+		return -EINVAL;
+	}
+	*tfd = tfs->dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index c9f4f8f..b1cc7a4 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -11,10 +11,21 @@
 
 #include "bitalloc.h"
 #include "tf_core.h"
+#include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
 #include "stack.h"
 
+/**
+ * The Session module provides session control support. A session is
+ * known to the ULP layer as a session_info instance. The session
+ * private data is the actual session.
+ *
+ * Session manages:
+ *   - The device and all the resources related to the device.
+ *   - Any session sharing between ULP applications
+ */
+
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
@@ -90,6 +101,9 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
+	/** Device */
+	struct tf_dev_info *dev;
+
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
 
@@ -309,4 +323,44 @@ struct tf_session {
 	struct stack em_pool[TF_DIR_MAX];
 };
 
+/**
+ * @page session Session Management
+ *
+ * @ref tf_session_get_session
+ *
+ * @ref tf_session_get_device
+ */
+
+/**
+ * Looks up the private session information from the TF session info.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer to where the session pointer is returned
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session(struct tf *tfp,
+			   struct tf_session **tfs);
+
+/**
+ * Looks up the device information from the TF Session.
+ *
+ * [in] tfs
+ *   Pointer to the TF session
+ *
+ * [out] tfd
+ *   Pointer to where the device pointer is returned
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_device(struct tf_session *tfs,
+			  struct tf_dev_info **tfd);
+
 #endif /* _TF_SESSION_H_ */
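
A short sketch of how the two lookup helpers above chain together
(illustrative only, not part of the patch; it assumes a session has already
been opened on the tfp handle):

    #include "tf_session.h"

    static int example_session_use(struct tf *tfp)
    {
            struct tf_session *tfs;
            struct tf_dev_info *tfd;
            int rc;

            /* Resolve the session private data from the TF handle */
            rc = tf_session_get_session(tfp, &tfs);
            if (rc)
                    return rc;

            /* Resolve the device bound to that session */
            return tf_session_get_device(tfs, &tfd);
    }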
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
new file mode 100644
index 0000000..8f2b6de
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tbl.h"
+
+/**
+ * Shadow table DB element
+ */
+struct tf_shadow_tbl_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count, array of number of table type entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow table DB definition
+ */
+struct tf_shadow_tbl_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tbl_element *db;
+};
+
+int
+tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
new file mode 100644
index 0000000..dfd336e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TBL_H_
+#define _TF_SHADOW_TBL_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow Table module provides shadow DB handling for table based
+ * TF types. A shadow DB provides the capability that allows for reuse
+ * of TF resources.
+ *
+ * A Shadow table DB is intended to be used by the Table Type module
+ * only.
+ */
+
+/**
+ * Shadow DB configuration information for a single table type.
+ *
+ * During Device initialization the HCAPI device specifics are learned
+ * and as well as the RM DB creation. From that those initial steps
+ * this structure can be populated.
+ *
+ * NOTE:
+ * If used in an array of table types then such array must be ordered
+ * by the TF type is represents.
+ */
+struct tf_shadow_tbl_cfg_parms {
+	/**
+	 * TF Table type
+	 */
+	enum tf_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow table DB creation parameters
+ */
+struct tf_shadow_tbl_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tbl_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table DB free parameters
+ */
+struct tf_shadow_tbl_free_db_parms {
+	/**
+	 * Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table search parameters
+ */
+struct tf_shadow_tbl_search_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Table type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table insert parameters
+ */
+struct tf_shadow_tbl_insert_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table remove parameters
+ */
+struct tf_shadow_tbl_remove_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tbl Shadow table DB
+ *
+ * @ref tf_shadow_tbl_create_db
+ *
+ * @ref tf_shadow_tbl_free_db
+ *
+ * @ref tf_shadow_tbl_search
+ *
+ * @ref tf_shadow_tbl_insert
+ *
+ * @ref tf_shadow_tbl_remove
+ */
+
+/**
+ * Creates and fills a Shadow table DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms);
+
+/**
+ * Closes the Shadow table DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms);
+
+/**
+ * Search Shadow table db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow table DB. Will fail if the
+ * element's ref_count is different from 0. Ref_count after insert will
+ * be incremented.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow table DB. Will fail if the
+ * element's ref_count is 0. Ref_count after removal will be
+ * decremented.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TBL_H_ */
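
The insert side of the search/insert pairing looks roughly as follows
(illustrative only, not part of the patch; the shadow DB handle, the result
blob and the index freshly allocated by the Table Type module are assumed to
be supplied by the caller):

    #include "tf_shadow_tbl.h"

    static int example_shadow_record(void *shadow_db, uint8_t *blob,
                                     uint16_t blob_sz, uint16_t new_index)
    {
            struct tf_shadow_tbl_insert_parms iparms = { 0 };
            uint16_t ref_cnt;

            /* Record a freshly allocated table index against its result
             * blob so later tf_shadow_tbl_search() calls with the same
             * blob can reuse the entry instead of allocating a new one.
             */
            iparms.tf_shadow_tbl_db = shadow_db;
            iparms.type = TF_TBL_TYPE_FULL_ACT_RECORD;
            iparms.entry = blob;
            iparms.entry_sz = blob_sz;
            iparms.index = new_index;
            iparms.ref_cnt = &ref_cnt;
            return tf_shadow_tbl_insert(&iparms);
    }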
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.c b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
new file mode 100644
index 0000000..c61b833
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tcam.h"
+
+/**
+ * Shadow tcam DB element
+ */
+struct tf_shadow_tcam_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count, array of number of tcam entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow tcam DB definition
+ */
+struct tf_shadow_tcam_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tcam_element *db;
+};
+
+int
+tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.h b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
new file mode 100644
index 0000000..e2c4e06
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TCAM_H_
+#define _TF_SHADOW_TCAM_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow tcam module provides shadow DB handling for tcam based
+ * TF types. A shadow DB provides the capability that allows for reuse
+ * of TF resources.
+ *
+ * A Shadow tcam DB is intended to be used by the Tcam module only.
+ */
+
+/**
+ * Shadow DB configuration information for a single tcam type.
+ *
+ * During Device initialization the HCAPI device specifics are learned
+ * and the RM DB is created. From those initial steps this
+ * structure can be populated.
+ *
+ * NOTE:
+ * If used in an array of tcam types then such an array must be ordered
+ * by the TF type it represents.
+ */
+struct tf_shadow_tcam_cfg_parms {
+	/**
+	 * TF tcam type
+	 */
+	enum tf_tcam_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow tcam DB creation parameters
+ */
+struct tf_shadow_tcam_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tcam_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam DB free parameters
+ */
+struct tf_shadow_tcam_free_db_parms {
+	/**
+	 * Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam search parameters
+ */
+struct tf_shadow_tcam_search_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam insert parameters
+ */
+struct tf_shadow_tcam_insert_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam remove parameters
+ */
+struct tf_shadow_tcam_remove_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tcam Shadow tcam DB
+ *
+ * @ref tf_shadow_tcam_create_db
+ *
+ * @ref tf_shadow_tcam_free_db
+ *
+ * @ref tf_shadow_tcam_search
+ *
+ * @ref tf_shadow_tcam_insert
+ *
+ * @ref tf_shadow_tcam_remove
+ */
+
+/**
+ * Creates and fills a Shadow tcam DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms);
+
+/**
+ * Closes the Shadow tcam DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms);
+
+/**
+ * Search Shadow tcam db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow tcam DB. Will fail if the
+ * element's ref_count is different from 0. Ref_count after insert will
+ * be incremented.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow tcam DB. Will fail if the
+ * element's ref_count is 0. Ref_count after removal will be
+ * decremented.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TCAM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index b9c71d4..c403d81 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,6 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
new file mode 100644
index 0000000..a57a5dd
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tbl_type.h"
+
+struct tf;
+
+/**
+ * Table Type DBs.
+ */
+/* static void *tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Table Type Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_type_bind(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc(struct tf *tfp __rte_unused,
+		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_free(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
+			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_set(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_get(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
new file mode 100644
index 0000000..c880b36
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -0,0 +1,309 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Table Type module provides processing of Internal TF table types.
+ */
+
+/**
+ * Table Type configuration parameters
+ */
+struct tf_tbl_type_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Table Type allocation parameters
+ */
+struct tf_tbl_type_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of the allocated entry
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type free parameters
+ */
+struct tf_tbl_type_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+struct tf_tbl_type_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type set parameters
+ */
+struct tf_tbl_type_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type get parameters
+ */
+struct tf_tbl_type_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tbl_type Table Type
+ *
+ * @ref tf_tbl_type_bind
+ *
+ * @ref tf_tbl_type_unbind
+ *
+ * @ref tf_tbl_type_alloc
+ *
+ * @ref tf_tbl_type_free
+ *
+ * @ref tf_tbl_type_alloc_search
+ *
+ * @ref tf_tbl_type_set
+ *
+ * @ref tf_tbl_type_get
+ */
+
+/**
+ * Initializes the Table Type module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_bind(struct tf *tfp,
+		     struct tf_tbl_type_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc(struct tf *tfp,
+		      struct tf_tbl_type_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If shadow
+ * DB is enabled it is searched first and if found the element refcount
+ * is decremented. If the refcount goes to 0 then it is returned to the
+ * table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_free(struct tf *tfp,
+		     struct tf_tbl_type_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc_search(struct tf *tfp,
+			     struct tf_tbl_type_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_set(struct tf *tfp,
+		    struct tf_tbl_type_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_get(struct tf *tfp,
+		    struct tf_tbl_type_get_parms *parms);
+
+#endif /* TF_TBL_TYPE_H_ */
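
As an aside, the expected alloc-then-set flow for an internal table entry
would look roughly like this (illustrative only, not part of the patch; the
record contents and size are assumed to be built by the caller):

    #include "tf_tbl_type.h"

    static int example_tbl_type_use(struct tf *tfp, uint8_t *rec,
                                    uint16_t rec_sz)
    {
            struct tf_tbl_type_alloc_parms aparms = { 0 };
            struct tf_tbl_type_set_parms sparms = { 0 };
            int rc;

            /* Allocate a stats entry index on TX */
            aparms.dir = TF_DIR_TX;
            aparms.type = TF_TBL_TYPE_ACT_STATS_64;
            rc = tf_tbl_type_alloc(tfp, &aparms);
            if (rc)
                    return rc;

            /* Program the entry contents at the allocated index */
            sparms.dir = TF_DIR_TX;
            sparms.type = TF_TBL_TYPE_ACT_STATS_64;
            sparms.data = rec;
            sparms.data_sz_in_bytes = rec_sz;
            sparms.idx = aparms.idx;
            return tf_tbl_type_set(tfp, &sparms);
    }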
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
new file mode 100644
index 0000000..3ad99dd
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tcam.h"
+
+struct tf;
+
+/**
+ * TCAM DBs.
+ */
+/* static void *tcam_db[TF_DIR_MAX]; */
+
+/**
+ * TCAM Shadow DBs
+ */
+/* static void *shadow_tcam_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tcam_bind(struct tf *tfp __rte_unused,
+	     struct tf_tcam_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc(struct tf *tfp __rte_unused,
+	      struct tf_tcam_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_free(struct tf *tfp __rte_unused,
+	     struct tf_tcam_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc_search(struct tf *tfp __rte_unused,
+		     struct tf_tcam_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_set(struct tf *tfp __rte_unused,
+	    struct tf_tcam_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_get(struct tf *tfp __rte_unused,
+	    struct tf_tcam_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
new file mode 100644
index 0000000..1420c9e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -0,0 +1,314 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_TCAM_H_
+#define _TF_TCAM_H_
+
+#include "tf_core.h"
+
+/**
+ * The TCAM module provides processing of Internal TCAM types.
+ */
+
+/**
+ * TCAM configuration parameters
+ */
+struct tf_tcam_cfg_parms {
+	/**
+	 * Number of tcam types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * TCAM configuration array
+	 */
+	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * TCAM allocation parameters
+ */
+struct tf_tcam_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Idx of the allocated entry
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM free parameters
+ */
+struct tf_tcam_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/**
+ * TCAM allocate search parameters
+ */
+struct tf_tcam_alloc_search_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Enable search for matching entry
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Key data to match on (if search)
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] Mask data to match on (if search)
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
+	 * [out] If search, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current refcnt after allocation
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx allocated
+	 *
+	 */
+	uint16_t idx;
+};
+
+/**
+ * TCAM set parameters
+ */
+struct tf_tcam_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM get parameters
+ */
+struct tf_tcam_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tcam TCAM
+ *
+ * @ref tf_tcam_bind
+ *
+ * @ref tf_tcam_unbind
+ *
+ * @ref tf_tcam_alloc
+ *
+ * @ref tf_tcam_free
+ *
+ * @ref tf_tcam_alloc_search
+ *
+ * @ref tf_tcam_set
+ *
+ * @ref tf_tcam_get
+ *
+ */
+
+/**
+ * Initializes the TCAM module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_bind(struct tf *tfp,
+		 struct tf_tcam_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested tcam type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc(struct tf *tfp,
+		  struct tf_tcam_alloc_parms *parms);
+
+/**
+ * Frees the requested TCAM entry and returns it to the DB. If shadow
+ * DB is enabled it is searched first and if found the element refcount
+ * is decremented. If the refcount goes to 0 then it is returned to the
+ * TCAM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_free(struct tf *tfp,
+		 struct tf_tcam_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc_search(struct tf *tfp,
+			 struct tf_tcam_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_set(struct tf *tfp,
+		struct tf_tcam_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_get(struct tf *tfp,
+		struct tf_tcam_get_parms *parms);
+
+#endif /* _TF_TCAM_H_ */
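
A rough sketch of the combined allocate-and-search entry point (illustrative
only, not part of the patch; the key and mask layout are assumed to be built
by the key builder elsewhere):

    #include "tf_tcam.h"

    static int example_tcam_use(struct tf *tfp, uint8_t *key, uint8_t *mask,
                                uint16_t key_sz_in_bits)
    {
            struct tf_tcam_alloc_search_parms parms = { 0 };

            /* Search first; allocate a new RX L2 context entry on a miss */
            parms.dir = TF_DIR_RX;
            parms.tcam_tbl_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
            parms.search_enable = 1;
            parms.key = key;
            parms.key_sz_in_bits = key_sz_in_bits;
            parms.mask = mask;
            parms.priority = 0;

            /* On return parms.hit, parms.ref_cnt and parms.idx describe
             * the result.
             */
            return tf_tcam_alloc_search(tfp, &parms);
    }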
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
new file mode 100644
index 0000000..a901054
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+
+#include "tf_util.h"
+
+const char
+*tf_dir_2_str(enum tf_dir dir)
+{
+	switch (dir) {
+	case TF_DIR_RX:
+		return "RX";
+	case TF_DIR_TX:
+		return "TX";
+	default:
+		return "Invalid direction";
+	}
+}
+
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type)
+{
+	switch (id_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		return "l2_ctxt_remap";
+	case TF_IDENT_TYPE_PROF_FUNC:
+		return "prof_func";
+	case TF_IDENT_TYPE_WC_PROF:
+		return "wc_prof";
+	case TF_IDENT_TYPE_EM_PROF:
+		return "em_prof";
+	case TF_IDENT_TYPE_L2_FUNC:
+		return "l2_func";
+	default:
+		return "Invalid identifier";
+	}
+}
+
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+{
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		return "l2_ctxt_tcam";
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		return "prof_tcam";
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		return "wc_tcam";
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		return "veb_tcam";
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		return "sp_tcam";
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		return "ct_rule_tcam";
+	default:
+		return "Invalid tcam table type";
+	}
+}
+
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+{
+	switch (tbl_type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		return "Full Action record";
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		return "Multicast Groups";
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		return "Encap 8B";
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		return "Encap 16B";
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+		return "Encap 32B";
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		return "Encap 64B";
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		return "Source Properties SMAC";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		return "Source Properties SMAC IPv4";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		return "Source Properties SMAC IPv6";
+	case TF_TBL_TYPE_ACT_STATS_64:
+		return "Stats 64B";
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		return "NAT Source Port";
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+		return "NAT Destination Port";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		return "NAT IPv4 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		return "NAT IPv4 Destination";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+		return "NAT IPv6 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+		return "NAT IPv6 Destination";
+	case TF_TBL_TYPE_METER_PROF:
+		return "Meter Profile";
+	case TF_TBL_TYPE_METER_INST:
+		return "Meter";
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		return "Mirror";
+	case TF_TBL_TYPE_UPAR:
+		return "UPAR";
+	case TF_TBL_TYPE_EPOCH0:
+		return "EPOCH0";
+	case TF_TBL_TYPE_EPOCH1:
+		return "EPOCH1";
+	case TF_TBL_TYPE_METADATA:
+		return "Metadata";
+	case TF_TBL_TYPE_CT_STATE:
+		return "Connection State";
+	case TF_TBL_TYPE_RANGE_PROF:
+		return "Range Profile";
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		return "Range";
+	case TF_TBL_TYPE_LAG:
+		return "Link Aggregation";
+	case TF_TBL_TYPE_VNIC_SVIF:
+		return "VNIC SVIF";
+	case TF_TBL_TYPE_EM_FKB:
+		return "EM Flexible Key Builder";
+	case TF_TBL_TYPE_WC_FKB:
+		return "WC Flexible Key Builder";
+	case TF_TBL_TYPE_EXT:
+		return "External";
+	default:
+		return "Invalid tbl type";
+	}
+}
+
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+{
+	switch (em_type) {
+	case TF_EM_TBL_TYPE_EM_RECORD:
+		return "EM Record";
+	case TF_EM_TBL_TYPE_TBL_SCOPE:
+		return "Table Scope";
+	default:
+		return "Invalid EM type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
new file mode 100644
index 0000000..4099629
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_UTIL_H_
+#define _TF_UTIL_H_
+
+#include "tf_core.h"
+
+/**
+ * Helper function converting direction to text string
+ */
+const char
+*tf_dir_2_str(enum tf_dir dir);
+
+/**
+ * Helper function converting identifier to text string
+ */
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type);
+
+/**
+ * Helper function converting tcam type to text string
+ */
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+
+/**
+ * Helper function converting tbl type to text string
+ */
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+
+/**
+ * Helper function converting em tbl type to text string
+ */
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+
+#endif /* _TF_UTIL_H_ */
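
Typical use of the relocated helpers is in log messages, e.g. (illustrative
only; TFP_DRV_LOG is the driver logging macro from tfp.h):

    #include "tf_util.h"
    #include "tfp.h"

    static void example_log_failure(enum tf_dir dir, enum tf_tbl_type type)
    {
            /* Human readable direction and table type in the error log */
            TFP_DRV_LOG(ERR, "%s: %s allocation failed\n",
                        tf_dir_2_str(dir),
                        tf_tbl_type_2_str(type));
    }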
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 12/50] net/bnxt: support bulk table get and mirror
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (10 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 11/50] net/bnxt: add multi device support Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 13/50] net/bnxt: update multi device design support Somnath Kotur
                   ` (38 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add new bulk table type get using FW
  to DMA the data back to host.
- Add flag to allow records to be cleared if possible
- Set mirror using tf_alloc_tbl_entry

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/hwrm_tf.h      |  37 +++++++++++-
 drivers/net/bnxt/tf_core/tf_common.h    |  54 +++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c      |   2 +
 drivers/net/bnxt/tf_core/tf_core.h      |  55 ++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.c       |  70 +++++++++++++++++-----
 drivers/net/bnxt/tf_core/tf_msg.h       |  15 +++++
 drivers/net/bnxt/tf_core/tf_resources.h |   5 +-
 drivers/net/bnxt/tf_core/tf_tbl.c       | 103 ++++++++++++++++++++++++++++++++
 8 files changed, 319 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index d342c69..c04d103 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Broadcom
+ * Copyright(c) 2019-2020 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -27,7 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET,
+	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -81,6 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
+struct tf_tbl_type_get_bulk_input;
+struct tf_tbl_type_get_bulk_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -902,6 +905,8 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* When set to 1, indicates the entry is cleared on read */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -916,4 +921,32 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
+/* Input params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get apply to RX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+	/* When set to 1, indicates the get apply to TX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+	/* When set to 1, indicates the entry is cleared on read */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+	/* Type of the object to set */
+	uint32_t			 type;
+	/* Starting index to get from */
+	uint32_t			 start_index;
+	/* Number of entries to get */
+	uint32_t			 num_entries;
+	/* Host memory where data will be stored */
+	uint64_t			 host_addr;
+} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+
+/* Output params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
new file mode 100644
index 0000000..2aa4b86
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_COMMON_H_
+#define _TF_COMMON_H_
+
+/* Helper to check the parms */
+#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "%s: session error\n", \
+				    tf_dir_2_str((parms)->dir)); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_TFP_SESSION(tfp) do { \
+		if ((tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#endif /* _TF_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 58924b1..0098690 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -16,6 +16,8 @@
 #include "bitalloc.h"
 #include "bnxt.h"
 #include "rand.h"
+#include "tf_common.h"
+#include "hwrm_tf.h"
 
 static inline uint32_t SWAP_WORDS32(uint32_t val32)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index becc50c..96a1a79 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1165,7 +1165,7 @@ struct tf_get_tbl_entry_parms {
 	 */
 	uint8_t *data;
 	/**
-	 * [out] Entry size
+	 * [in] Entry size
 	 */
 	uint16_t data_sz_in_bytes;
 	/**
@@ -1189,6 +1189,59 @@ int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
 /**
+ * tf_get_bulk_tbl_entry parameter definition
+ */
+struct tf_get_bulk_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Clear hardware entries on reads; only
+	 * supported for TF_TBL_TYPE_ACT_STATS_64
+	 */
+	bool clear_on_read;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [in] Host physical address, where the data
+	 * will be copied to by the firmware.
+	 * Use tfp_calloc() API and mem_pa
+	 * variable of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * Bulk get index table entry
+ *
+ * Used to retrieve previously set index table entries.
+ *
+ * Reads and compares with the shadow table copy (if enabled) (only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_bulk_tbl_entry(struct tf *tfp,
+		     struct tf_get_bulk_tbl_entry_parms *parms);
+
+/**
  * @page exact_match Exact Match Table
  *
  * @ref tf_insert_em_entry
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c8f6b88..c755c85 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1216,12 +1216,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 static int
-tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
 {
 	struct tfp_calloc_parms alloc_parms;
 	int rc;
@@ -1229,15 +1225,10 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 	/* Allocate session */
 	alloc_parms.nitems = 1;
 	alloc_parms.size = size;
-	alloc_parms.alignment = 0;
+	alloc_parms.alignment = 4096;
 	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate tcam dma entry, rc:%d\n",
-			    rc);
+	if (rc)
 		return -ENOMEM;
-	}
 
 	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
 	buf->va_addr = alloc_parms.mem_va;
@@ -1246,6 +1237,52 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 }
 
 int
+tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *params)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_get_bulk_input req = { 0 };
+	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	int data_size = 0;
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16((params->dir) |
+		((params->clear_on_read) ?
+		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+
+	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	if (resp.size < data_size)
+		return -EINVAL;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+#define TF_BYTES_PER_SLICE(tfp) 12
+#define NUM_SLICES(tfp, bytes) \
+	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
+
+int
 tf_msg_tcam_entry_set(struct tf *tfp,
 		      struct tf_set_tcam_entry_parms *parms)
 {
@@ -1282,9 +1319,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	} else {
 		/* use dma buffer */
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_get_dma_buf(&buf, data_size);
-		if (rc != 0)
-			return rc;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
 		data = buf.va_addr;
 		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
 	}
@@ -1303,8 +1340,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
+cleanup:
 	if (buf.va_addr != NULL)
 		tfp_free(buf.va_addr);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 89f7370..8d050c4 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -267,4 +267,19 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 			 uint8_t *data,
 			 uint32_t index);
 
+/**
+ * Sends bulk get message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to table get bulk parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 05e131f..9b7f5a0 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -149,11 +149,10 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
+#define TF_RSVD_MIRROR_RX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
+#define TF_RSVD_MIRROR_TX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index c403d81..0f2979e 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
@@ -794,6 +795,7 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -915,6 +917,76 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Internal function to bulk get Table Entries. Supports all Table Types
+ * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_bulk_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->starting_idx;
+
+	/*
+	 * Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->starting_idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
 #if (TF_SHADOW == 1)
 /**
  * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
@@ -1182,6 +1254,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -1665,6 +1738,36 @@ tf_get_tbl_entry(struct tf *tfp,
 
 /* API defined in tf_core.h */
 int
+tf_get_bulk_tbl_entry(struct tf *tfp,
+		 struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type,
+				    strerror(-rc));
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
 tf_alloc_tbl_scope(struct tf *tfp,
 		   struct tf_alloc_tbl_scope_parms *parms)
 {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 13/50] net/bnxt: update multi device design support
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (11 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 12/50] net/bnxt: support bulk table get and mirror Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 14/50] net/bnxt: support two-level priority for TCAMs Somnath Kotur
                   ` (37 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Michael Wildt <michael.wildt@broadcom.com>

- Implement the RM, Device (WH+) and Identifier modules.
- Update the Session module.
- Implement new HWRMs for RM direct messaging.
- Add new parameter check macros and clean up the header includes, e.g. for
  tfp, so that bnxt.h is not directly included in the new modules (see the
  sketch after this list).
- Add cfa_resource_types, required for the RM design.
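
A short sketch of how a core API is expected to reach a device-specific op
once the session has bound the WH+ device (illustrative only; the helper
name and the assumption that "tfp" holds an open session are not part of
this patch):

static int
example_query_max_types(struct tf *tfp, uint16_t *max_types)
{
	int rc;
	struct tf_session *tfs;
	struct tf_dev_info *dev;

	TF_CHECK_PARMS2(tfp, max_types);	/* new parameter check macro */

	/* Session module lookup */
	rc = tf_session_get_session(tfp, &tfs);
	if (rc)
		return rc;

	/* Device handle; dev->ops was set by dev_bind()/dev_bind_p4() */
	rc = tf_session_get_device(tfs, &dev);
	if (rc)
		return rc;

	if (dev->ops->tf_dev_get_max_types == NULL)
		return -EOPNOTSUPP;	/* op not provided by this device */

	/* For the WH+ (P4) device this returns CFA_RESOURCE_TYPE_P4_LAST + 1 */
	return dev->ops->tf_dev_get_max_types(tfp, max_types);
}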

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   2 +
 drivers/net/bnxt/tf_core/Makefile             |   1 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 291 ++++++++---------
 drivers/net/bnxt/tf_core/tf_common.h          |  24 ++
 drivers/net/bnxt/tf_core/tf_core.c            | 286 ++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_core.h            |  12 +-
 drivers/net/bnxt/tf_core/tf_device.c          | 150 ++++++++-
 drivers/net/bnxt/tf_core/tf_device.h          |  79 ++++-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  78 ++++-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  79 +++--
 drivers/net/bnxt/tf_core/tf_identifier.c      | 142 ++++++++-
 drivers/net/bnxt/tf_core/tf_identifier.h      |  25 +-
 drivers/net/bnxt/tf_core/tf_msg.c             | 268 ++++++++++++++--
 drivers/net/bnxt/tf_core/tf_msg.h             |  59 ++++
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 434 ++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h          |  72 +++--
 drivers/net/bnxt/tf_core/tf_session.c         | 280 ++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_session.h         | 118 ++++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   4 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  30 +-
 drivers/net/bnxt/tf_core/tf_tbl_type.h        |  95 +++---
 drivers/net/bnxt/tf_core/tf_tcam.h            |  14 +-
 22 files changed, 2144 insertions(+), 399 deletions(-)

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index a50cb26..1f7df9d 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,8 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_session.c',
+	'tf_core/tf_device.c',
 	'tf_core/tf_device_p4.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 71df75b..7191c7f 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,6 +14,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index c0c1e75..11e8892 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -12,6 +12,11 @@
 
 #ifndef _CFA_RESOURCE_TYPES_H_
 #define _CFA_RESOURCE_TYPES_H_
+/*
+ * This is the constant used to define invalid CFA
+ * resource types across all devices.
+ */
+#define CFA_RESOURCE_TYPE_INVALID 65535
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
@@ -58,209 +63,205 @@
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index 2aa4b86..ec3bca8 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -51,4 +51,28 @@
 		} \
 	} while (0)
 
+
+#define TF_CHECK_PARMS1(parms) do {					\
+		if ((parms) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS2(parms1, parms2) do {				\
+		if ((parms1) == NULL || (parms2) == NULL) {		\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
+		if ((parms1) == NULL ||					\
+		    (parms2) == NULL ||					\
+		    (parms3) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
 #endif /* _TF_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 0098690..28a6bbd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -85,7 +85,7 @@ tf_create_em_pool(struct tf_session *session,
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
 
 	if (rc != 0) {
 		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
@@ -231,7 +231,6 @@ tf_open_session(struct tf                    *tfp,
 		   TF_SESSION_NAME_MAX);
 
 	/* Initialize Session */
-	session->device_type = parms->device_type;
 	session->dev = NULL;
 	tf_rm_init(tfp);
 
@@ -276,7 +275,9 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		rc = tf_create_em_pool(session,
+				       (enum tf_dir)dir,
+				       TF_SESSION_EM_POOL_SIZE);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "EM Pool initialization failed\n");
@@ -314,6 +315,64 @@ tf_open_session(struct tf                    *tfp,
 }
 
 int
+tf_open_session_new(struct tf *tfp,
+		    struct tf_open_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_open_session_parms oparms;
+
+	TF_CHECK_PARMS(tfp, parms);
+
+	/* Filter out any non-supported device types on the Core
+	 * side. It is assumed that the Firmware will be supported if
+	 * firmware open session succeeds.
+	 */
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
+		return -ENOTSUP;
+	}
+
+	/* Verify control channel and build the beginning of session_id */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	oparms.open_cfg = parms;
+
+	rc = tf_session_open_session(tfp, &oparms);
+	/* Logging handled by tf_session_open_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Session created, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return 0;
+}
+
+int
 tf_attach_session(struct tf *tfp __rte_unused,
 		  struct tf_attach_session_parms *parms __rte_unused)
 {
@@ -342,6 +401,69 @@ tf_attach_session(struct tf *tfp __rte_unused,
 }
 
 int
+tf_attach_session_new(struct tf *tfp,
+		      struct tf_attach_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_attach_session_parms aparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Verify control channel */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Verify 'attach' channel */
+	rc = sscanf(parms->attach_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device attach_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Prepare return value of session_id, using ctrl_chan_name
+	 * device values as it becomes the session id.
+	 */
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	aparms.attach_cfg = parms;
+	rc = tf_session_attach_session(tfp,
+				       &aparms);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Attached to session, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return rc;
+}
+
+int
 tf_close_session(struct tf *tfp)
 {
 	int rc;
@@ -380,7 +502,7 @@ tf_close_session(struct tf *tfp)
 	if (tfs->ref_count == 0) {
 		/* Free EM pool */
 		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, dir);
+			tf_free_em_pool(tfs, (enum tf_dir)dir);
 
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
@@ -401,6 +523,39 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+int
+tf_close_session_new(struct tf *tfp)
+{
+	int rc;
+	struct tf_session_close_session_parms cparms = { 0 };
+	union tf_session_id session_id = { 0 };
+	uint8_t ref_count;
+
+	TF_CHECK_PARMS1(tfp);
+
+	cparms.ref_count = &ref_count;
+	cparms.session_id = &session_id;
+	rc = tf_session_close_session(tfp,
+				      &cparms);
+	/* Logging handled by tf_session_close_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    cparms.session_id->id,
+		    *cparms.ref_count);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    cparms.session_id->internal.domain,
+		    cparms.session_id->internal.bus,
+		    cparms.session_id->internal.device,
+		    cparms.session_id->internal.fw_session_id);
+
+	return rc;
+}
+
 /** insert EM hash entry API
  *
  *    returns:
@@ -539,10 +694,67 @@ int tf_alloc_identifier(struct tf *tfp,
 	return 0;
 }
 
-/** free identifier resource
- *
- * Returns success or failure code.
- */
+int
+tf_alloc_identifier_new(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_alloc_parms aparms;
+	uint16_t id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_ident_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.ident_type = parms->ident_type;
+	aparms.id = &id;
+	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->id = id;
+
+	return 0;
+}
+
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms)
 {
@@ -619,6 +831,64 @@ int tf_free_identifier(struct tf *tfp,
 }
 
 int
+tf_free_identifier_new(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_ident_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.ident_type = parms->ident_type;
+	fparms.id = parms->id;
+	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
 tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 96a1a79..74ed24e 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -380,7 +380,7 @@ struct tf_session_resources {
 	 * The number of identifier resources requested for the session.
 	 * The index used is tf_identifier_type.
 	 */
-	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
 	/** [in] Requested Index Table resource counts
 	 *
 	 * The number of index table resources requested for the session.
@@ -480,6 +480,9 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+int tf_open_session_new(struct tf *tfp,
+			struct tf_open_session_parms *parms);
+
 struct tf_attach_session_parms {
 	/** [in] ctrl_chan_name
 	 *
@@ -542,6 +545,8 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
+int tf_attach_session_new(struct tf *tfp,
+			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -551,6 +556,7 @@ int tf_attach_session(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
+int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -602,6 +608,8 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
+int tf_alloc_identifier_new(struct tf *tfp,
+			    struct tf_alloc_identifier_parms *parms);
 
 /** free identifier resource
  *
@@ -613,6 +621,8 @@ int tf_alloc_identifier(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
+int tf_free_identifier_new(struct tf *tfp,
+			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 3b36831..4c46cad 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,45 +6,169 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
-#include "bnxt.h"
 
 struct tf;
 
+/* Forward declarations */
+static int dev_unbind_p4(struct tf *tfp);
+
 /**
- * Device specific bind function
+ * Device specific bind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] shadow_copy
+ *   Flag controlling shadow copy DB creation
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp __rte_unused,
-	    struct tf_session_resources *resources __rte_unused,
-	    struct tf_dev_info *dev_info)
+dev_bind_p4(struct tf *tfp,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
+	int rc;
+	int frc;
+	struct tf_ident_cfg_parms ident_cfg;
+	struct tf_tbl_cfg_parms tbl_cfg;
+	struct tf_tcam_cfg_parms tcam_cfg;
+
 	/* Initialize the modules */
 
-	dev_info->ops = &tf_dev_ops_p4;
+	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
+	ident_cfg.cfg = tf_ident_p4;
+	ident_cfg.shadow_copy = shadow_copy;
+	ident_cfg.resources = resources;
+	rc = tf_ident_bind(tfp, &ident_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier initialization failure\n");
+		goto fail;
+	}
+
+	tbl_cfg.num_elements = TF_TBL_TYPE_MAX;
+	tbl_cfg.cfg = tf_tbl_p4;
+	tbl_cfg.shadow_copy = shadow_copy;
+	tbl_cfg.resources = resources;
+	rc = tf_tbl_bind(tfp, &tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Table initialization failure\n");
+		goto fail;
+	}
+
+	tcam_cfg.num_elements = TF_TCAM_TBL_TYPE_MAX;
+	tcam_cfg.cfg = tf_tcam_p4;
+	tcam_cfg.shadow_copy = shadow_copy;
+	tcam_cfg.resources = resources;
+	rc = tf_tcam_bind(tfp, &tcam_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM initialization failure\n");
+		goto fail;
+	}
+
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	dev_handle->ops = &tf_dev_ops_p4;
+
 	return 0;
+
+ fail:
+	/* Cleanup of already created modules */
+	frc = dev_unbind_p4(tfp);
+	if (frc)
+		return frc;
+
+	return rc;
+}
+
+/**
+ * Device specific unbind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+dev_unbind_p4(struct tf *tfp)
+{
+	int rc = 0;
+	bool fail = false;
+
+	/* Unbind all the support modules. As this is only done on
+	 * close we only report errors as everything has to be cleaned
+	 * up regardless.
+	 */
+	rc = tf_ident_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Identifier\n");
+		fail = true;
+	}
+
+	rc = tf_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Table Type\n");
+		fail = true;
+	}
+
+	rc = tf_tcam_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, TCAM\n");
+		fail = true;
+	}
+
+	if (fail)
+		return -1;
+
+	return rc;
 }
 
 int
 dev_bind(struct tf *tfp __rte_unused,
 	 enum tf_device_type type,
+	 bool shadow_copy,
 	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_info)
+	 struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
 		return dev_bind_p4(tfp,
+				   shadow_copy,
 				   resources,
-				   dev_info);
+				   dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
-			    "Device type not supported\n");
-		return -ENOTSUP;
+			    "No such device\n");
+		return -ENODEV;
 	}
 }
 
 int
-dev_unbind(struct tf *tfp __rte_unused,
-	   struct tf_dev_info *dev_handle __rte_unused)
+dev_unbind(struct tf *tfp,
+	   struct tf_dev_info *dev_handle)
 {
-	return 0;
+	switch (dev_handle->type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_unbind_p4(tfp);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "No such device\n");
+		return -ENODEV;
+	}
 }
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 8b63ff1..6aeb6fe 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -27,6 +27,7 @@ struct tf_session;
  * TF device information
  */
 struct tf_dev_info {
+	enum tf_device_type type;
 	const struct tf_dev_ops *ops;
 };
 
@@ -56,10 +57,12 @@ struct tf_dev_info {
  *
  * Returns
  *   - (0) if successful.
- *   - (-EINVAL) on failure.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_bind(struct tf *tfp,
 	     enum tf_device_type type,
+	     bool shadow_copy,
 	     struct tf_session_resources *resources,
 	     struct tf_dev_info *dev_handle);
 
@@ -71,6 +74,11 @@ int dev_bind(struct tf *tfp,
  *
  * [in] dev_handle
  *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_unbind(struct tf *tfp,
 	       struct tf_dev_info *dev_handle);
@@ -85,6 +93,44 @@ int dev_unbind(struct tf *tfp,
  */
 struct tf_dev_ops {
 	/**
+	 * Retrieves the MAX number of resource types that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] max_types
+	 *   Pointer to MAX number of types the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_max_types)(struct tf *tfp,
+				    uint16_t *max_types);
+
+	/**
+	 * Retrieves the WC TCAM slice information that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] slice_size
+	 *   Pointer to slice size the device supports
+	 *
+	 * [out] num_slices_per_row
+	 *   Pointer to number of slices per row the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
+					 uint16_t *slice_size,
+					 uint16_t *num_slices_per_row);
+
+	/**
 	 * Allocation of an identifier element.
 	 *
 	 * This API allocates the specified identifier element from a
@@ -134,14 +180,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation parameters
+	 *   Pointer to table allocation parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
-				     struct tf_tbl_type_alloc_parms *parms);
+	int (*tf_dev_alloc_tbl)(struct tf *tfp,
+				struct tf_tbl_alloc_parms *parms);
 
 	/**
 	 * Free of a table type element.
@@ -153,14 +199,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type free parameters
+	 *   Pointer to table free parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_free_tbl_type)(struct tf *tfp,
-				    struct tf_tbl_type_free_parms *parms);
+	int (*tf_dev_free_tbl)(struct tf *tfp,
+			       struct tf_tbl_free_parms *parms);
 
 	/**
 	 * Searches for the specified table type element in a shadow DB.
@@ -175,15 +221,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation and search parameters
+	 *   Pointer to table allocation and search parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_search_tbl_type)
-			(struct tf *tfp,
-			struct tf_tbl_type_alloc_search_parms *parms);
+	int (*tf_dev_alloc_search_tbl)(struct tf *tfp,
+				       struct tf_tbl_alloc_search_parms *parms);
 
 	/**
 	 * Sets the specified table type element.
@@ -195,14 +240,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type set parameters
+	 *   Pointer to table set parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_set_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_set_parms *parms);
+	int (*tf_dev_set_tbl)(struct tf *tfp,
+			      struct tf_tbl_set_parms *parms);
 
 	/**
 	 * Retrieves the specified table type element.
@@ -214,14 +259,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type get parameters
+	 *   Pointer to table get parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_get_parms *parms);
+	int (*tf_dev_get_tbl)(struct tf *tfp,
+			       struct tf_tbl_get_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c3c4d1e..c235976 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -3,19 +3,87 @@
  * All rights reserved.
  */
 
+#include <rte_common.h>
+#include <cfa_resource_types.h>
+
 #include "tf_device.h"
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
 
+/**
+ * Device specific function that retrieves the MAX number of HCAPI
+ * types the device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] max_types
+ *   Pointer to the MAX number of HCAPI types supported
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
+			uint16_t *max_types)
+{
+	if (max_types == NULL)
+		return -EINVAL;
+
+	*max_types = CFA_RESOURCE_TYPE_P4_LAST + 1;
+
+	return 0;
+}
+
+/**
+ * Device specific function that retrieves the WC TCAM slices the
+ * device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] slice_size
+ *   Pointer to the WC TCAM slice size
+ *
+ * [out] num_slices_per_row
+ *   Pointer to the WC TCAM row slice configuration
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
+			     uint16_t *slice_size,
+			     uint16_t *num_slices_per_row)
+{
+#define CFA_P4_WC_TCAM_SLICE_SIZE       12
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+
+	if (slice_size == NULL || num_slices_per_row == NULL)
+		return -EINVAL;
+
+	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
+	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+
+	return 0;
+}
+
+/**
+ * Truflow P4 device specific functions
+ */
 const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
-	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
-	.tf_dev_free_tbl_type = tf_tbl_type_free,
-	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
-	.tf_dev_set_tbl_type = tf_tbl_type_set,
-	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 84d90e3..5cd02b2 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,11 +12,12 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
@@ -24,41 +25,57 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
-	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
-	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+	/* CFA_RESOURCE_TYPE_P4_UPAR */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EPOC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_METADATA */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_CT_STATE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_PROF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_LAG */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EM_FBK */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_WC_FKB */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EXT */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 #endif /* _TF_DEVICE_P4_H_ */
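
As a rough illustration of how the new ops table is meant to be consumed
(a sketch only, not part of this patch; it assumes a struct tf_dev_info
whose ops member has already been bound by the session code later in this
series, and the helper name is made up):

#include "tf_device.h"

/* Hypothetical helper: query P4 device limits through the ops table. */
static int example_query_dev_limits(struct tf *tfp, struct tf_dev_info *dev)
{
	uint16_t max_types = 0;
	uint16_t slice_size = 0;
	uint16_t slices_per_row = 0;
	int rc;

	/* Number of HCAPI resource types; used to size the RM QCAPS array */
	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
	if (rc)
		return rc;

	/* WC TCAM geometry; the P4 implementation reports 12 byte slices,
	 * 2 slices per row.
	 */
	return dev->ops->tf_dev_get_wc_tcam_slices(tfp,
						   &slice_size,
						   &slices_per_row);
}
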
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 726d0b4..e89f976 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -6,42 +6,172 @@
 #include <rte_common.h>
 
 #include "tf_identifier.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Identifier DBs.
  */
-/* static void *ident_db[TF_DIR_MAX]; */
+static void *ident_db[TF_DIR_MAX];
 
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 int
-tf_ident_bind(struct tf *tfp __rte_unused,
-	      struct tf_ident_cfg *parms __rte_unused)
+tf_ident_bind(struct tf *tfp,
+	      struct tf_ident_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Identifier DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+
+		/* Save the DB handle returned by the RM */
+		ident_db[i] = db_cfg.rm_db;
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_ident_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail silently if nothing has been initialized, to allow
+	 * cleanup after a failed creation.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = ident_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		ident_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_ident_alloc(struct tf *tfp __rte_unused,
-	       struct tf_ident_alloc_parms *parms __rte_unused)
+	       struct tf_ident_alloc_parms *parms)
 {
+	int rc;
+	uint32_t id;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = &id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type);
+		return rc;
+	}
+
+	/* Return the allocated identifier to the caller */
+	*parms->id = id;
+
 	return 0;
 }
 
 int
 tf_ident_free(struct tf *tfp __rte_unused,
-	      struct tf_ident_free_parms *parms __rte_unused)
+	      struct tf_ident_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = parms->id;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = ident_db[parms->dir];
+	fparms.db_index = parms->ident_type;
+	fparms.index = parms->id;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
 	return 0;
 }
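
For context, a minimal sketch (not part of this patch) of how a caller is
expected to use the reworked alloc/free pair once tf_ident_bind() has
populated the per-direction DBs; TF_IDENT_TYPE_PROF_FUNC is just an example
identifier type and the helper name is made up:

#include "tf_identifier.h"

/* Hypothetical round trip: allocate one identifier, then free it. */
static int example_ident_roundtrip(struct tf *tfp)
{
	struct tf_ident_alloc_parms aparms = { 0 };
	struct tf_ident_free_parms fparms = { 0 };
	uint16_t id = 0;
	int rc;

	aparms.dir = TF_DIR_RX;
	aparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	aparms.id = &id;	/* the id is now returned through a pointer */
	rc = tf_ident_alloc(tfp, &aparms);
	if (rc)
		return rc;

	fparms.dir = TF_DIR_RX;
	fparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	fparms.id = id;
	return tf_ident_free(tfp, &fparms);
}
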
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index b77c91b..1c5319b 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -12,21 +12,28 @@
  * The Identifier module provides processing of Identifiers.
  */
 
-struct tf_ident_cfg {
+struct tf_ident_cfg_parms {
 	/**
-	 * Number of identifier types in each of the configuration
-	 * arrays
+	 * [in] Number of identifier types in each of the
+	 * configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
-	 * TCAM configuration array
+	 * [in] Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Boolean controlling the request shadow copy.
 	 */
-	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+	bool shadow_copy;
+	/**
+	 * [in] Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Identifier allcoation parameter definition
+ * Identifier allocation parameter definition
  */
 struct tf_ident_alloc_parms {
 	/**
@@ -40,7 +47,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [out] Identifier allocated
 	 */
-	uint16_t id;
+	uint16_t *id;
 };
 
 /**
@@ -88,7 +95,7 @@ struct tf_ident_free_parms {
  *   - (-EINVAL) on failure.
  */
 int tf_ident_bind(struct tf *tfp,
-		  struct tf_ident_cfg *parms);
+		  struct tf_ident_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c755c85..e08a96f 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -6,15 +6,13 @@
 #include <inttypes.h>
 #include <stdbool.h>
 #include <stdlib.h>
-
-#include "bnxt.h"
-#include "tf_core.h"
-#include "tf_session.h"
-#include "tfp.h"
+#include <string.h>
 
 #include "tf_msg_common.h"
 #include "tf_msg.h"
-#include "hsi_struct_def_dpdk.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tfp.h"
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
@@ -141,6 +139,51 @@ tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
 }
 
 /**
+ * Allocates a DMA buffer that can be used for message transfer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ *
+ * [in] size
+ *   Requested size of the buffer in bytes
+ *
+ * Returns:
+ *    0      - Success
+ *   -ENOMEM - Unable to allocate buffer, no memory
+ */
+static int
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
+{
+	struct tfp_calloc_parms alloc_parms;
+	int rc;
+
+	/* Allocate session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = size;
+	alloc_parms.alignment = 4096;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc)
+		return -ENOMEM;
+
+	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
+	buf->va_addr = alloc_parms.mem_va;
+
+	return 0;
+}
+
+/**
+ * Frees a previously allocated DMA buffer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ */
+static void
+tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
+{
+	tfp_free(buf->va_addr);
+}
+
+/**
  * Sends session open request to TF Firmware
  */
 int
@@ -154,7 +197,7 @@ tf_msg_session_open(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
+	tfp_memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
 
 	parms.tf_type = HWRM_TF_SESSION_OPEN;
 	parms.req_data = (uint32_t *)&req;
@@ -870,6 +913,180 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+int
+tf_msg_session_resc_qcaps(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *query,
+			  enum tf_rm_resc_resv_strategy *resv_strategy)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_qcaps_input req = { 0 };
+	struct hwrm_tf_session_resc_qcaps_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf qcaps_buf = { 0 };
+	struct tf_rm_resc_req_entry *data;
+	int dma_size;
+
+	if (size == 0 || query == NULL || resv_strategy == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Resource QCAPS parameter error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffer */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&qcaps_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.qcaps_size = size;
+	req.qcaps_addr = qcaps_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: QCAPS message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].min = tfp_le_to_cpu_16(data[i].min);
+		query[i].max = tfp_le_to_cpu_16(data[i].max);
+	}
+
+	*resv_strategy = resp.flags &
+	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
+
+	tf_msg_free_dma_buf(&qcaps_buf);
+
+	return rc;
+}
+
+int
+tf_msg_session_resc_alloc(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *request,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_alloc_input req = { 0 };
+	struct hwrm_tf_session_resc_alloc_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf req_buf = { 0 };
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_req_entry *req_data;
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&req_buf, dma_size);
+	if (rc)
+		return rc;
+
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.req_size = size;
+
+	req_data = (struct tf_rm_resc_req_entry *)req_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		req_data[i].type = tfp_cpu_to_le_32(request[i].type);
+		req_data[i].min = tfp_cpu_to_le_16(request[i].min);
+		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
+	}
+
+	req.req_addr = req_buf.pa_addr;
+	req.resp_addr = resv_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
+		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
+		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+	}
+
+	tf_msg_free_dma_buf(&req_buf);
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1034,7 +1251,9 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
@@ -1216,26 +1435,6 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-static int
-tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
-{
-	struct tfp_calloc_parms alloc_parms;
-	int rc;
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = size;
-	alloc_parms.alignment = 4096;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc)
-		return -ENOMEM;
-
-	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
-	buf->va_addr = alloc_parms.mem_va;
-
-	return 0;
-}
-
 int
 tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 			  struct tf_get_bulk_tbl_entry_parms *params)
@@ -1323,12 +1522,14 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		if (rc)
 			goto cleanup;
 		data = buf.va_addr;
-		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
 	}
 
-	memcpy(&data[0], parms->key, key_bytes);
-	memcpy(&data[key_bytes], parms->mask, key_bytes);
-	memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, key_bytes);
+	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
+	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1343,8 +1544,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		goto cleanup;
 
 cleanup:
-	if (buf.va_addr != NULL)
-		tfp_free(buf.va_addr);
+	tf_msg_free_dma_buf(&buf);
 
 	return rc;
 }
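
To show how the two new messages fit together, here is a simplified sketch
of the flow that tf_rm_new.c drives later in this patch (illustration only;
the helper name is made up and error handling is reduced):

#include "tf_msg.h"
#include "tf_rm_new.h"
#include "tfp.h"

/* Hypothetical flow: query caps, then reserve 'want' entries of a single
 * HCAPI type in the RX direction.
 */
static int example_reserve_one_type(struct tf *tfp, uint16_t max_types,
				    uint32_t hcapi_type, uint16_t want)
{
	struct tf_rm_resc_req_entry *query;
	struct tf_rm_resc_req_entry req = { 0 };
	struct tf_rm_resc_entry resv = { 0 };
	enum tf_rm_resc_resv_strategy strategy;
	struct tfp_calloc_parms cparms = { 0 };
	int rc;

	/* Query array must be sized to the device max types */
	cparms.nitems = max_types;
	cparms.size = sizeof(*query);
	rc = tfp_calloc(&cparms);
	if (rc)
		return rc;
	query = cparms.mem_va;

	rc = tf_msg_session_resc_qcaps(tfp, TF_DIR_RX, max_types,
				       query, &strategy);
	if (rc)
		goto done;

	/* Refuse to ask for more than the firmware advertised */
	if (want > query[hcapi_type].max) {
		rc = -EINVAL;
		goto done;
	}

	req.type = hcapi_type;
	req.min = want;
	req.max = want;
	rc = tf_msg_session_resc_alloc(tfp, TF_DIR_RX, 1, &req, &resv);
	/* On success resv.start/resv.stride describe the reserved range */
done:
	tfp_free((void *)query);
	return rc;
}
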
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8d050c4..06f52ef 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -6,8 +6,12 @@
 #ifndef _TF_MSG_H_
 #define _TF_MSG_H_
 
+#include <rte_common.h>
+#include <hsi_struct_def_dpdk.h>
+
 #include "tf_tbl.h"
 #include "tf_rm.h"
+#include "tf_rm_new.h"
 
 struct tf;
 
@@ -122,6 +126,61 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   struct tf_rm_entry *sram_entry);
 
 /**
+ * Sends session HW resource query capability request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the query. Should be set to the max
+ *   elements for the device type
+ *
+ * [out] query
+ *   Pointer to an array of query elements
+ *
+ * [out] resv_strategy
+ *   Pointer to the reservation strategy
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_qcaps(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *query,
+			      enum tf_rm_resc_resv_strategy *resv_strategy);
+
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the req and resv arrays
+ *
+ * [in] request
+ *   Pointer to an array of request elements
+ *
+ * [out] resv
+ *   Pointer to an array of reservation entries, filled in from the
+ *   firmware response
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_alloc(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *request,
+			      struct tf_rm_resc_entry *resv);
+
+/**
  * Sends EM internal insert request to Firmware
  */
 int tf_msg_insert_em_internal_entry(struct tf *tfp,
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 51bb9ba..7cadb23 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -3,20 +3,18 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
 #include <rte_common.h>
 
-#include "tf_rm_new.h"
+#include <cfa_resource_types.h>
 
-/**
- * Resource query single entry. Used when accessing HCAPI RM on the
- * firmware.
- */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
-};
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_msg.h"
 
 /**
  * Generic RM Element data type that an RM DB is build upon.
@@ -27,7 +25,7 @@ struct tf_rm_element {
 	 * hcapi_type can be ignored. If Null then the element is not
 	 * valid for the device.
 	 */
-	enum tf_rm_elem_cfg_type type;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/**
 	 * HCAPI RM Type for the element.
@@ -50,53 +48,435 @@ struct tf_rm_element {
 /**
  * TF RM DB definition
  */
-struct tf_rm_db {
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
+
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
+
 	/**
 	 * The DB consists of an array of elements
 	 */
 	struct tf_rm_element *db;
 };
 
+
+/**
+ * Resource Manager Adjust of base index definitions.
+ */
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
+
+/**
+ * Adjust an index according to the allocation information.
+ *
+ * All resources are controlled in a 0 based pool. Some resources, by
+ * design, are not 0 based, e.g. Full Action Records (SRAM), thus they
+ * need to be adjusted before they are handed out.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
+{
+	int rc = 0;
+	uint32_t base_index;
+
+	base_index = db[db_index].alloc.entry.start;
+
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return rc;
+}
+
 int
-tf_rm_create_db(struct tf *tfp __rte_unused,
-		struct tf_rm_create_db_parms *parms __rte_unused)
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
+
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against db requirements */
+
+	/* Alloc request, alignment already set */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0; i < parms->num_elements; i++) {
+		/* Skip any non HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			req[i].type = parms->cfg[i].hcapi_type;
+			/* Check that we can get the full amount allocated */
+			if (parms->alloc_num[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[i].min = parms->alloc_num[i];
+				req[i].max = parms->alloc_num[i];
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_num[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		} else {
+			/* Skip the element */
+			req[i].type = CFA_RESOURCE_TYPE_INVALID;
+		}
+	}
+
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       parms->num_elements,
+				       req,
+				       resv);
+	if (rc)
+		return rc;
+
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db = (void *)cparms.mem_va;
+
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
+
+	db = rm_db->db;
+	for (i = 0; i < parms->num_elements; i++) {
+		/* If allocation failed for a single entry the DB
+		 * creation is considered a failure.
+		 */
+		if (parms->alloc_num[i] != resv[i].stride) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    i);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_num[i],
+				    resv[i].stride);
+			goto fail;
+		}
+
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+		db[i].alloc.entry.start = resv[i].start;
+		db[i].alloc.entry.stride = resv[i].stride;
+
+		/* Create pool */
+		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
+			     sizeof(struct bitalloc));
+		/* Alloc request, alignment already set */
+		cparms.nitems = pool_size;
+		cparms.size = sizeof(struct bitalloc);
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			return rc;
+		db[i].pool = (struct bitalloc *)cparms.mem_va;
+	}
+
+	rm_db->num_entries = i;
+	rm_db->dir = parms->dir;
+	parms->rm_db = (void *)rm_db;
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
 	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
 
 int
 tf_rm_free_db(struct tf *tfp __rte_unused,
-	      struct tf_rm_free_db_parms *parms __rte_unused)
+	      struct tf_rm_free_db_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int i;
+	struct tf_rm_new_db *rm_db;
+
+	/* Traverse the DB and clear each pool.
+	 * NOTE:
+	 *   Firmware is not cleared. It will be cleared on close only.
+	 */
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db[i].pool);
+
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
 
 int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int id;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -ENOMEM;
+	}
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				parms->index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -1;
+	}
+
+	return rc;
 }
 
 int
-tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; the direction needed for a log message is not available here */
+	if (rc)
+		return rc;
+
+	return rc;
 }
 
 int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	parms->info = &rm_db->db[parms->db_index].alloc;
+
+	return rc;
 }
 
 int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 72dba09..6d8234d 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
 #include "tf_core.h"
 #include "bitalloc.h"
@@ -32,13 +32,16 @@ struct tf;
 * MAX pool size of the Chip needs to be added to the tf_rm_elem_info
  * structure and several new APIs would need to be added to allow for
  * growth of a single TF resource type.
+ *
+ * The access functions do not check for NULL pointers as this is a
+ * support module, not called directly.
  */
 
 /**
  * Resource reservation single entry result. Used when accessing HCAPI
  * RM on the firmware.
  */
-struct tf_rm_entry {
+struct tf_rm_new_entry {
 	/** Starting index of the allocated resource */
 	uint16_t start;
 	/** Number of allocated elements */
@@ -52,13 +55,33 @@ struct tf_rm_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
-	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
-	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
 	TF_RM_TYPE_MAX
 };
 
 /**
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
+ */
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
+};
+
+/**
  * RM Element configuration structure, used by the Device to configure
  * how an individual TF type is configured in regard to the HCAPI RM
  * of same type.
@@ -68,7 +91,7 @@ struct tf_rm_element_cfg {
 	 * RM Element config controls how the DB for that element is
 	 * processed.
 	 */
-	enum tf_rm_elem_cfg_type cfg;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/* If a HCAPI to TF type conversion is required then TF type
 	 * can be added here.
@@ -92,7 +115,7 @@ struct tf_rm_alloc_info {
 	 * In case of dynamic allocation support this would have
 	 * to be changed to linked list of tf_rm_entry instead.
 	 */
-	struct tf_rm_entry entry;
+	struct tf_rm_new_entry entry;
 };
 
 /**
@@ -104,17 +127,21 @@ struct tf_rm_create_db_parms {
 	 */
 	enum tf_dir dir;
 	/**
-	 * [in] Number of elements in the parameter structure
+	 * [in] Number of elements.
 	 */
 	uint16_t num_elements;
 	/**
-	 * [in] Parameter structure
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Allocation number array. Array size is num_elements.
 	 */
-	struct tf_rm_element_cfg *parms;
+	uint16_t *alloc_num;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -128,7 +155,7 @@ struct tf_rm_free_db_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -138,7 +165,7 @@ struct tf_rm_allocate_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -159,7 +186,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -168,7 +195,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t index;
+	uint16_t index;
 };
 
 /**
@@ -178,7 +205,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -191,7 +218,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] Pointer to flag that indicates the state of the query
 	 */
-	uint8_t *allocated;
+	int *allocated;
 };
 
 /**
@@ -201,7 +228,7 @@ struct tf_rm_get_alloc_info_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -221,7 +248,7 @@ struct tf_rm_get_hcapi_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -306,6 +333,7 @@ int tf_rm_free_db(struct tf *tfp,
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
 int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
@@ -317,7 +345,7 @@ int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
  *
  * Returns
  *   - (0) if successful.
- *   - (-EpINVAL) on failure.
+ *   - (-EINVAL) on failure.
  */
 int tf_rm_free(struct tf_rm_free_parms *parms);
 
@@ -365,4 +393,4 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
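
For orientation, a sketch of the DB life cycle a sub module goes through
with this interface (hypothetical helper, not part of this patch; cfg and
alloc_cnt stand in for the device template and the session resource
request):

#include "tf_rm_new.h"

/* Hypothetical bind: build an RX RM DB, allocate one index, tear down. */
static int example_rm_usage(struct tf *tfp,
			    struct tf_rm_element_cfg *cfg,
			    uint16_t num_elements,
			    uint16_t *alloc_cnt)
{
	struct tf_rm_create_db_parms db_parms = { 0 };
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_free_db_parms free_parms = { 0 };
	uint32_t index;
	int rc;

	db_parms.dir = TF_DIR_RX;
	db_parms.num_elements = num_elements;
	db_parms.cfg = cfg;
	db_parms.alloc_num = alloc_cnt;
	rc = tf_rm_create_db(tfp, &db_parms);
	if (rc)
		return rc;

	aparms.rm_db = db_parms.rm_db;	/* handle returned by create */
	aparms.db_index = 0;		/* first element of the cfg array */
	aparms.index = &index;		/* base-adjusted index on success */
	rc = tf_rm_allocate(&aparms);

	free_parms.dir = TF_DIR_RX;
	free_parms.rm_db = db_parms.rm_db;
	(void)tf_rm_free_db(tfp, &free_parms);

	return rc;
}
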
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index c749945..2f769d8 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -3,29 +3,293 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
+#include <rte_common.h>
+
+#include "tf_session.h"
+#include "tf_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tfp_calloc_parms cparms;
+	uint8_t fw_session_id;
+	union tf_session_id *session_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Open FW session and get a new session_id */
+	rc = tf_msg_session_open(tfp,
+				 parms->open_cfg->ctrl_chan_name,
+				 &fw_session_id);
+	if (rc) {
+		/* Log error */
+		if (rc == -EEXIST)
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
+		else
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
+
+		parms->open_cfg->session_id.id = TF_FW_SESSION_ID_INVALID;
+		return rc;
+	}
+
+	/* Allocate session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_info);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session = (struct tf_session_info *)cparms.mem_va;
+
+	/* Allocate core data for the session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session->core_data = cparms.mem_va;
+
+	/* Initialize Session and Device */
+	session = (struct tf_session *)tfp->session->core_data;
+	session->ver.major = 0;
+	session->ver.minor = 0;
+	session->ver.update = 0;
+
+	session_id = &parms->open_cfg->session_id;
+	session->session_id.internal.domain = session_id->internal.domain;
+	session->session_id.internal.bus = session_id->internal.bus;
+	session->session_id.internal.device = session_id->internal.device;
+	session->session_id.internal.fw_session_id = fw_session_id;
+	/* Return the allocated fw session id */
+	session_id->internal.fw_session_id = fw_session_id;
+
+	session->shadow_copy = parms->open_cfg->shadow_copy;
+
+	tfp_memcpy(session->ctrl_chan_name,
+		   parms->open_cfg->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	rc = dev_bind(tfp,
+		      parms->open_cfg->device_type,
+		      session->shadow_copy,
+		      &parms->open_cfg->resources,
+		      session->dev);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	/* Query for Session Config
+	 */
+	rc = tf_msg_session_qcfg(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup_close;
+	}
+
+	session->ref_count++;
+
+	return 0;
+
+ cleanup:
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
+	return rc;
+
+ cleanup_close:
+	tf_close_session(tfp);
+	return -EINVAL;
+}
+
+int
+tf_session_attach_session(struct tf *tfp __rte_unused,
+			  struct tf_session_attach_session_parms *parms __rte_unused)
+{
+#if 0
+
+	/* A shared session is similar to a single session. It consists
+	 * of two parts: the tf_session_info element, which remains
+	 * private to the caller, and the session within this element,
+	 * which is shared. The session itself holds the dynamic
+	 * data, i.e. the device and its sub modules.
+	 *
+	 * Firmware side is updated about any sharing as well.
+	 */
+
+	/* - Open the shared memory for the attach_chan_name
+	 * - Point to the shared session for this Device instance
+	 * - Check that session is valid
+	 * - Attach to the firmware so it can record there is more
+	 *   than one client of the session.
+	 */
+
+	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_attach(tfp,
+					   parms->ctrl_chan_name,
+					   parms->session_id);
+	}
+#endif /* 0 */
+	int rc = -EOPNOTSUPP;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	TFP_DRV_LOG(ERR,
+		    "Attach not yet supported, rc:%s\n",
+		    strerror(-rc));
+	return rc;
+}
+
+int
+tf_session_close_session(struct tf *tfp,
+			 struct tf_session_close_session_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *tfd;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (tfs->session_id.id == TF_SESSION_ID_INVALID) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid session id, unable to close, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Record the session we're closing so the caller knows the
+	 * details.
+	 */
+	*parms->session_id = tfs->session_id;
+
+	rc = tf_session_get_device(tfs, &tfd);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* If we are attached, only the session client gets closed */
+	rc = tf_msg_session_close(tfp);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "FW Session close failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	tfs->ref_count--;
+
+	/* Final cleanup as we're last user of the session */
+	if (tfs->ref_count == 0) {
+		/* Unbind the device */
+		rc = dev_unbind(tfp, tfd);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "Device unbind failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		tfp_free(tfp->session->core_data);
+		tfp_free(tfp->session);
+		tfp->session = NULL;
+	}
+
+	return 0;
+}
+
 int
 tf_session_get_session(struct tf *tfp,
-		       struct tf_session *tfs)
+		       struct tf_session **tfs)
 {
+	int rc;
+
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		TFP_DRV_LOG(ERR, "Session not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	*tfs = (struct tf_session *)(tfp->session->core_data);
 
 	return 0;
 }
 
 int
 tf_session_get_device(struct tf_session *tfs,
-		      struct tf_device *tfd)
+		      struct tf_dev_info **tfd)
 {
+	int rc;
+
 	if (tfs->dev == NULL) {
-		TFP_DRV_LOG(ERR, "Device not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Device not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
-	tfd = tfs->dev;
+
+	*tfd = tfs->dev;
+
+	return 0;
+}
+
+int
+tf_session_get_fw_session_id(struct tf *tfp,
+			     uint8_t *fw_session_id)
+{
+	int rc;
+	struct tf_session *tfs;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*fw_session_id = tfs->session_id.internal.fw_session_id;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index b1cc7a4..9279251 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -63,12 +63,7 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Device type, provided by tf_open_session().
-	 */
-	enum tf_device_type device_type;
-
-	/** Session ID, allocated by FW on tf_open_session().
-	 */
+	/** Session ID, allocated by FW on tf_open_session() */
 	union tf_session_id session_id;
 
 	/**
@@ -101,7 +96,7 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
-	/** Device */
+	/** Device handle */
 	struct tf_dev_info *dev;
 
 	/** Session HW and SRAM resources */
@@ -324,12 +319,96 @@ struct tf_session {
 };
 
 /**
+ * Session open parameter definition
+ */
+struct tf_session_open_session_parms {
+	/**
+	 * [in] Pointer to the TF open session configuration
+	 */
+	struct tf_open_session_parms *open_cfg;
+};
+
+/**
+ * Session attach parameter definition
+ */
+struct tf_session_attach_session_parms {
+	/**
+	 * [in] Pointer to the TF attach session configuration
+	 */
+	struct tf_attach_session_parms *attach_cfg;
+};
+
+/**
+ * Session close parameter definition
+ */
+struct tf_session_close_session_parms {
+	uint8_t *ref_count;
+	union tf_session_id *session_id;
+};
+
+/**
  * @page session Session Management
  *
+ * @ref tf_session_open_session
+ *
+ * @ref tf_session_attach_session
+ *
+ * @ref tf_session_close_session
+ *
  * @ref tf_session_get_session
  *
  * @ref tf_session_get_device
+ *
+ * @ref tf_session_get_fw_session_id
+ */
+
+/**
+ * Creates a host session with a corresponding firmware session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
+int tf_session_open_session(struct tf *tfp,
+			    struct tf_session_open_session_parms *parms);
+
+/**
+ * Attaches a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session attach parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_attach_session(struct tf *tfp,
+			      struct tf_session_attach_session_parms *parms);
+
+/**
+ * Closes a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in/out] parms
+ *   Pointer to the session close parameters.
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_close_session(struct tf *tfp,
+			     struct tf_session_close_session_parms *parms);
 
 /**
  * Looks up the private session information from the TF session info.
@@ -338,14 +417,14 @@ struct tf_session {
  *   Pointer to TF handle
  *
  * [out] tfs
- *   Pointer to the session
+ *   Pointer to a pointer to the session
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_session(struct tf *tfp,
-			   struct tf_session *tfs);
+			   struct tf_session **tfs);
 
 /**
  * Looks up the device information from the TF Session.
@@ -354,13 +433,30 @@ int tf_session_get_session(struct tf *tfp,
  *   Pointer to TF handle
  *
  * [out] tfd
- *   Pointer to the device
+ *   Pointer to a pointer to the device
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_device(struct tf_session *tfs,
-			  struct tf_dev_info *tfd);
+			  struct tf_dev_info **tfd);
+
+/**
+ * Looks up the FW session id of the firmware connection for the
+ * requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] session_id
+ *   Pointer to the session_id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_fw_session_id(struct tf *tfp,
+				 uint8_t *fw_session_id);
 
 #endif /* _TF_SESSION_H_ */
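
A small usage sketch of the new session entry points (hypothetical wrapper,
not part of this patch):

#include "tf_session.h"

/* Hypothetical wrapper: open a session, then look up the private session
 * data, the device handle and the firmware session id.
 */
static int example_open_and_lookup(struct tf *tfp,
				   struct tf_open_session_parms *open_cfg)
{
	struct tf_session_open_session_parms oparms = { 0 };
	struct tf_session *tfs;
	struct tf_dev_info *dev;
	uint8_t fw_session_id;
	int rc;

	oparms.open_cfg = open_cfg;
	rc = tf_session_open_session(tfp, &oparms);
	if (rc)
		return rc;

	rc = tf_session_get_session(tfp, &tfs);
	if (rc)
		return rc;

	rc = tf_session_get_device(tfs, &dev);
	if (rc)
		return rc;

	/* fw_session_id is what every HWRM message carries to firmware */
	return tf_session_get_fw_session_id(tfp, &fw_session_id);
}
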
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 6cda487..b335a9c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,8 +7,12 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+
+#include "tf_core.h"
 #include "stack.h"
 
+struct tf_session;
+
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
 	PT_LVL_1,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index a57a5dd..b79706f 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -10,12 +10,12 @@
 struct tf;
 
 /**
- * Table Type DBs.
+ * Table DBs.
  */
 /* static void *tbl_db[TF_DIR_MAX]; */
 
 /**
- * Table Type Shadow DBs
+ * Table Shadow DBs
  */
 /* static void *shadow_tbl_db[TF_DIR_MAX]; */
 
@@ -30,49 +30,49 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_type_bind(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp __rte_unused,
+	    struct tf_tbl_cfg_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc(struct tf *tfp __rte_unused,
-		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_free(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_free_parms *parms __rte_unused)
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
-			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_set(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp __rte_unused,
+	   struct tf_tbl_set_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_get(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp __rte_unused,
+	   struct tf_tbl_get_parms *parms __rte_unused)
 {
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index c880b36..11f2aa3 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -11,33 +11,39 @@
 struct tf;
 
 /**
- * The Table Type module provides processing of Internal TF table types.
+ * The Table module provides processing of Internal TF table types.
  */
 
 /**
- * Table Type configuration parameters
+ * Table configuration parameters
  */
-struct tf_tbl_type_cfg_parms {
+struct tf_tbl_cfg_parms {
 	/**
 	 * Number of table types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * Table Type element configuration array
 	 */
-	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Table Type allocation parameters
+ * Table allocation parameters
  */
-struct tf_tbl_type_alloc_parms {
+struct tf_tbl_alloc_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -53,9 +59,9 @@ struct tf_tbl_type_alloc_parms {
 };
 
 /**
- * Table Type free parameters
+ * Table free parameters
  */
-struct tf_tbl_type_free_parms {
+struct tf_tbl_free_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -75,7 +81,10 @@ struct tf_tbl_type_free_parms {
 	uint16_t ref_cnt;
 };
 
-struct tf_tbl_type_alloc_search_parms {
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -117,9 +126,9 @@ struct tf_tbl_type_alloc_search_parms {
 };
 
 /**
- * Table Type set parameters
+ * Table set parameters
  */
-struct tf_tbl_type_set_parms {
+struct tf_tbl_set_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -143,9 +152,9 @@ struct tf_tbl_type_set_parms {
 };
 
 /**
- * Table Type get parameters
+ * Table get parameters
  */
-struct tf_tbl_type_get_parms {
+struct tf_tbl_get_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -169,39 +178,39 @@ struct tf_tbl_type_get_parms {
 };
 
 /**
- * @page tbl_type Table Type
+ * @page tbl Table
  *
- * @ref tf_tbl_type_bind
+ * @ref tf_tbl_bind
  *
- * @ref tf_tbl_type_unbind
+ * @ref tf_tbl_unbind
  *
- * @ref tf_tbl_type_alloc
+ * @ref tf_tbl_alloc
  *
- * @ref tf_tbl_type_free
+ * @ref tf_tbl_free
  *
- * @ref tf_tbl_type_alloc_search
+ * @ref tf_tbl_alloc_search
  *
- * @ref tf_tbl_type_set
+ * @ref tf_tbl_set
  *
- * @ref tf_tbl_type_get
+ * @ref tf_tbl_get
  */
 
 /**
- * Initializes the Table Type module with the requested DBs. Must be
+ * Initializes the Table module with the requested DBs. Must be
  * invoked as the first thing before any of the access functions.
  *
  * [in] tfp
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table configuration parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_bind(struct tf *tfp,
-		     struct tf_tbl_type_cfg_parms *parms);
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
@@ -216,7 +225,7 @@ int tf_tbl_type_bind(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_unbind(struct tf *tfp);
+int tf_tbl_unbind(struct tf *tfp);
 
 /**
  * Allocates the requested table type from the internal RM DB.
@@ -225,14 +234,14 @@ int tf_tbl_type_unbind(struct tf *tfp);
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table allocation parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc(struct tf *tfp,
-		      struct tf_tbl_type_alloc_parms *parms);
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
 
 /**
 * Frees the requested table type and returns it to the DB. If shadow
@@ -244,14 +253,14 @@ int tf_tbl_type_alloc(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_free(struct tf *tfp,
-		     struct tf_tbl_type_free_parms *parms);
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
 
 /**
  * Supported if Shadow DB is configured. Searches the Shadow DB for
@@ -269,8 +278,8 @@ int tf_tbl_type_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc_search(struct tf *tfp,
-			     struct tf_tbl_type_alloc_search_parms *parms);
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
 
 /**
  * Configures the requested element by sending a firmware request which
@@ -280,14 +289,14 @@ int tf_tbl_type_alloc_search(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table set parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_set(struct tf *tfp,
-		    struct tf_tbl_type_set_parms *parms);
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
 
 /**
  * Retrieves the requested element by sending a firmware request to get
@@ -297,13 +306,13 @@ int tf_tbl_type_set(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table get parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_get(struct tf *tfp,
-		    struct tf_tbl_type_get_parms *parms);
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
 
 #endif /* TF_TBL_TYPE_H */
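
For reference, a minimal sketch of the intended call ordering for the
renamed Table module API. The include name is assumed from the header
guard above, and parameter struct setup is omitted because most fields
are elided in the hunks; this is only an illustration of the lifecycle
(bind first, then alloc/set, free, and finally unbind):

#include "tf_tbl_type.h"

/* Hedged sketch of the expected Table module lifecycle. */
static int example_tbl_lifecycle(struct tf *tfp,
				 struct tf_tbl_cfg_parms *cfg,
				 struct tf_tbl_alloc_parms *aparms,
				 struct tf_tbl_set_parms *sparms,
				 struct tf_tbl_free_parms *fparms)
{
	int rc;

	rc = tf_tbl_bind(tfp, cfg);	/* must be invoked first */
	if (rc)
		return rc;

	rc = tf_tbl_alloc(tfp, aparms);	/* reserve an index from the RM DB */
	if (rc)
		goto unbind;

	rc = tf_tbl_set(tfp, sparms);	/* program the allocated element */
	if (rc)
		goto unbind;

	rc = tf_tbl_free(tfp, fparms);	/* return the index to the DB */

unbind:
	tf_tbl_unbind(tfp);		/* release the private DBs */
	return rc;
}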
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 1420c9e..68c25eb 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -20,16 +20,22 @@ struct tf_tcam_cfg_parms {
 	 * Number of tcam types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * TCAM configuration array
 	 */
-	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tcam_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling whether a shadow copy is requested.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 14/50] net/bnxt: support two-level priority for TCAMs
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (12 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 13/50] net/bnxt: update multi device design support Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 15/50] net/bnxt: add HCAPI interface support Somnath Kotur
                   ` (36 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Shahaji Bhosle <sbhosle@broadcom.com>

Allow TCAM indexes to be allocated from the top or the bottom of the
table. If the priority is 0, allocate from the lowest TCAM indexes,
i.e. from the top. For any other value, allocate from the highest
TCAM indexes, i.e. from the bottom.
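
For illustration (not driver code), a self-contained sketch of the
allocation policy described above; the pool array and tcam_alloc()
helper are stand-ins for the driver's bitalloc session pool:

#include <stdio.h>

#define POOL_SIZE 16

/* 0 = free, 1 = in use; stand-in for the bitalloc session pool */
static int pool[POOL_SIZE];

/* priority 0 takes the lowest free index (top of the TCAM),
 * any non-zero priority takes the highest free index (bottom).
 */
static int tcam_alloc(unsigned int priority)
{
	int i;

	if (priority) {
		for (i = POOL_SIZE - 1; i >= 0; i--) {
			if (!pool[i]) {
				pool[i] = 1;
				return i;
			}
		}
	} else {
		for (i = 0; i < POOL_SIZE; i++) {
			if (!pool[i]) {
				pool[i] = 1;
				return i;
			}
		}
	}
	return -1;	/* no resource available */
}

int main(void)
{
	printf("prio 0  -> index %d\n", tcam_alloc(0));	/* prints 0  */
	printf("prio !0 -> index %d\n", tcam_alloc(7));	/* prints 15 */
	return 0;
}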

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 36 +++++++++++++++++++++++++++++-------
 drivers/net/bnxt/tf_core/tf_core.h |  4 +++-
 drivers/net/bnxt/tf_core/tf_em.c   |  6 ++----
 drivers/net/bnxt/tf_core/tf_tbl.c  |  2 +-
 4 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 28a6bbd..7d9bca8 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -893,7 +893,7 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
+	int index = 0;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
 
@@ -916,12 +916,34 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+	/*
+	 * priority 0: allocate from the top of the TCAM (lowest indexes)
+	 * priority !0: allocate from the bottom of the TCAM (highest indexes)
+	 */
+	if (parms->priority) {
+		for (index = session_pool->size - 1; index >= 0; index--) {
+			if (ba_inuse(session_pool,
+					  index) == BA_ENTRY_FREE) {
+				break;
+			}
+		}
+		if (ba_alloc_index(session_pool,
+				   index) == BA_FAIL) {
+			TFP_DRV_LOG(ERR,
+				    "%s: %s: ba_alloc index %d failed\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+				    index);
+			return -ENOMEM;
+		}
+	} else {
+		index = ba_alloc(session_pool);
+		if (index == BA_FAIL) {
+			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+			return -ENOMEM;
+		}
 	}
 
 	parms->idx = index;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 74ed24e..f1ef00b 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -799,7 +799,9 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested (definition TBD)
+	 * [in] Priority of entry requested
+	 * 0: index from top, i.e. highest priority first
+	 * !0: index from bottom, i.e. lowest priority first
 	 */
 	uint32_t priority;
 	/**
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index fd1797e..91cbc62 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -479,8 +479,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG
-		   (ERR,
+		TFP_DRV_LOG(ERR,
 		   "dir:%d, EM entry index allocation failed\n",
 		   parms->dir);
 		return rc;
@@ -495,8 +494,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	PMD_DRV_LOG
-		   (ERR,
+	TFP_DRV_LOG(INFO,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 0f2979e..f9bfae7 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -1967,7 +1967,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
+		TFP_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 15/50] net/bnxt: add HCAPI interface support
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (13 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 14/50] net/bnxt: support two-level priority for TCAMs Somnath Kotur
@ 2020-06-12 13:28 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 16/50] net/bnxt: add core changes for EM and EEM lookups Somnath Kotur
                   ` (35 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

Add new hardware shim APIs to support multiple
device generations.
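
For illustration, a hedged sketch of how a caller is expected to bind
the shim to a given device generation. hcapi_cfa_dev_bind() is only
declared in this patch, so the sketch assumes a later change provides
its implementation and that it fills in the devinfo for the selected
generation:

#include <string.h>
#include "hcapi_cfa.h"

static int example_bind_p4_shim(void)
{
	struct hcapi_cfa_devinfo info;

	memset(&info, 0, sizeof(info));

	/* Bind the shim for the CFA phase 4.0 (Wh+) generation */
	if (hcapi_cfa_dev_bind(HCAPI_CFA_P40, &info) != 0)
		return -1;

	/* info.layouts and info.devops now describe the bound generation */
	return info.layouts.num_layouts != 0 ? 0 : -1;
}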

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/hcapi/Makefile           |   7 +
 drivers/net/bnxt/hcapi/hcapi_cfa.h        | 272 ++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 ++++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h   | 669 ++++++++++++++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     | 411 ++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h     | 451 ++++++++++++++++++++
 drivers/net/bnxt/meson.build              |   3 +
 drivers/net/bnxt/tf_core/tf_em.c          |  22 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  92 ++--
 drivers/net/bnxt/tf_core/tf_tbl.h         |  24 +-
 10 files changed, 1974 insertions(+), 69 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h

diff --git a/drivers/net/bnxt/hcapi/Makefile b/drivers/net/bnxt/hcapi/Makefile
new file mode 100644
index 0000000..c4c91b6
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/Makefile
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019-2020 Broadcom Limited.
+# All rights reserved.
+
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += hcapi/hcapi_cfa_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += hcapi/hcapi_cfa_p4.c
+
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
new file mode 100644
index 0000000..a27c749
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -0,0 +1,272 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _HCAPI_CFA_H_
+#define _HCAPI_CFA_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#include "hcapi_cfa_defs.h"
+
+#define SUPPORT_CFA_HW_P4  1
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL  1
+#endif
+
+/**
+ * Index used for the sram_entries field
+ */
+enum hcapi_cfa_resc_type_sram {
+	HCAPI_CFA_RESC_TYPE_SRAM_FULL_ACTION,
+	HCAPI_CFA_RESC_TYPE_SRAM_MCG,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_8B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_16B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	HCAPI_CFA_RESC_TYPE_SRAM_COUNTER_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_SPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_DPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_S_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_D_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_MAX
+};
+
+/**
+ * Index used for the hw_entries field in struct cfa_rm_db
+ */
+enum hcapi_cfa_resc_type_hw {
+	/* common HW resources for all chip variants */
+	HCAPI_CFA_RESC_TYPE_HW_L2_CTXT_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_EM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_EM_REC,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_METER_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_METER_INST,
+	HCAPI_CFA_RESC_TYPE_HW_MIRROR,
+	HCAPI_CFA_RESC_TYPE_HW_UPAR,
+	/* Wh+/SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_SP_TCAM,
+	/* Thor, SR2 common HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_FKB,
+	/* SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_TBL_SCOPE,
+	HCAPI_CFA_RESC_TYPE_HW_L2_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH0,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH1,
+	HCAPI_CFA_RESC_TYPE_HW_METADATA,
+	HCAPI_CFA_RESC_TYPE_HW_CT_STATE,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_LAG_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_MAX
+};
+
+struct hcapi_cfa_key_result {
+	uint64_t bucket_mem_ptr;
+	uint8_t bucket_idx;
+};
+
+/* common CFA register access macros */
+#define CFA_REG(x)		OFFSETOF(cfa_reg_t, cfa_##x)
+
+#ifndef REG_WR
+#define REG_WR(_p, x, y)  (*((uint32_t volatile *)(x)) = (y))
+#endif
+#ifndef REG_RD
+#define REG_RD(_p, x)  (*((uint32_t volatile *)(x)))
+#endif
+#define CFA_REG_RD(_p, x)	REG_RD(0, (uint32_t)(_p)->base_addr + CFA_REG(x))
+#define CFA_REG_WR(_p, x, y)	REG_WR(0, (uint32_t)(_p)->base_addr + CFA_REG(x), y)
+
+
+/* Constants used by Resource Manager Registration*/
+#define RM_CLIENT_NAME_MAX_LEN          32
+
+/**
+ *  Resource Manager Data Structures used for resource requests
+ */
+struct hcapi_cfa_resc_req_entry {
+	uint16_t min;
+	uint16_t max;
+};
+
+struct hcapi_cfa_resc_req {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_req_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of
+	 * resources in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_req_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_req_db {
+	struct hcapi_cfa_resc_req rx;
+	struct hcapi_cfa_resc_req tx;
+};
+
+struct hcapi_cfa_resc_entry {
+	uint16_t start;
+	uint16_t stride;
+	uint16_t tag;
+};
+
+struct hcapi_cfa_resc {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of resources
+	 * in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_db {
+	struct hcapi_cfa_resc rx;
+	struct hcapi_cfa_resc tx;
+};
+
+/**
+ * This is the main data structure used by the CFA Resource
+ * Manager.  This data structure holds all the state and table
+ * management information.
+ */
+typedef struct hcapi_cfa_rm_data {
+    uint32_t dummy_data;
+} hcapi_cfa_rm_data_t;
+
+/* End RM support */
+
+struct hcapi_cfa_devops;
+
+struct hcapi_cfa_devinfo {
+	uint8_t global_cfg_data[CFA_GLOBAL_CFG_DATA_SZ];
+	struct hcapi_cfa_layout_tbl layouts;
+	struct hcapi_cfa_devops *devops;
+};
+
+int hcapi_cfa_dev_bind(enum hcapi_cfa_ver hw_ver,
+			struct hcapi_cfa_devinfo *dev_info);
+
+int hcapi_cfa_key_compile_layout(
+				 struct hcapi_cfa_key_template *key_template,
+				 struct hcapi_cfa_key_layout *key_layout);
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data, uint16_t bitlen);
+int hcapi_cfa_action_compile_layout(
+				    struct hcapi_cfa_action_template *act_template,
+				    struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_init_obj(
+			      uint64_t *act_obj,
+			      struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_compute_ptr(
+				 uint64_t *act_obj,
+				 struct hcapi_cfa_action_layout *act_layout,
+				 uint32_t base_ptr);
+
+int hcapi_cfa_action_hw_op(struct hcapi_cfa_hwop *op,
+			   uint8_t *act_tbl,
+			   struct hcapi_cfa_data *act_obj);
+int hcapi_cfa_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_rm_register_client(hcapi_cfa_rm_data_t *data,
+				 const char *client_name,
+				 int *client_id);
+int hcapi_cfa_rm_unregister_client(hcapi_cfa_rm_data_t *data,
+				   int client_id);
+int hcapi_cfa_rm_query_resources(hcapi_cfa_rm_data_t *data,
+				 int client_id,
+				 uint16_t chnl_id,
+				 struct hcapi_cfa_resc_req_db *req_db);
+int hcapi_cfa_rm_query_resources_one(hcapi_cfa_rm_data_t *data,
+				     int client_id,
+				     struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_reserve_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_release_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_initialize(hcapi_cfa_rm_data_t *data);
+
+#if SUPPORT_CFA_HW_P4
+
+int hcapi_cfa_p4_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxt_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxtrmp_hwop(struct hcapi_cfa_hwop *op,
+				      struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcam_hwop(struct hcapi_cfa_hwop *op,
+				 struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcamrmp_hwop(struct hcapi_cfa_hwop *op,
+				    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
+			       struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+#endif /* SUPPORT_CFA_HW_P4 */
+/**
+ *  HCAPI CFA device HW operation function callback definition
+ *  This is a standardized function callback hook used to install
+ *  different CFA HW table programming function callbacks.
+ */
+
+struct hcapi_cfa_tbl_cb {
+	/**
+	 * This function callback provides the functionality to read/write
+	 * HW table entry from a HW table.
+	 *
+	 * @param[in] op
+	 *   A pointer to the Hardware operation parameter
+	 *
+	 * @param[in] obj_data
+	 *   A pointer to the HW data object for the hardware operation
+	 *
+	 * @return
+	 *   0 for SUCCESS, negative value for FAILURE
+	 */
+	int (*hwop_cb)(struct hcapi_cfa_hwop *op,
+		       struct hcapi_cfa_data *obj_data);
+};
+
+#endif  /* _HCAPI_CFA_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
new file mode 100644
index 0000000..39afd4d
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
@@ -0,0 +1,92 @@
+/*
+ *   Copyright(c) 2019-2020 Broadcom Limited.
+ *   All rights reserved.
+ */
+
+#include "bitstring.h"
+#include "hcapi_cfa_defs.h"
+#include <errno.h>
+#include "assert.h"
+
+/* HCAPI CFA common PUT APIs */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val)
+{
+	assert(layout);
+
+	if (field_id > layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		bs_put_msb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	else
+		bs_put_lsb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	return 0;
+}
+
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz)
+{
+	int i;
+	uint16_t bitpos;
+	uint8_t bitlen;
+	uint16_t field_id;
+
+	assert(layout);
+	assert(field_tbl);
+
+	if (layout->is_msb_order) {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id > layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_msb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	} else {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id > layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_lsb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	}
+	return 0;
+}
+
+/* HCAPI CFA common GET APIs */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id,
+			uint64_t *val)
+{
+	assert(layout);
+	assert(val);
+
+	if (field_id > layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		*val = bs_get_msb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	else
+		*val = bs_get_lsb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	return 0;
+}
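
For illustration (not part of the patch), a minimal usage sketch of the
put/get helpers above. The two-field layout, field IDs and widths are
invented for the example, and the bitstring helpers are assumed to
behave as their names suggest:

#include <stdio.h>
#include "hcapi_cfa_defs.h"

enum { EX_FIELD_A = 0, EX_FIELD_B = 1, EX_FIELD_MAX };

static const struct hcapi_cfa_field ex_fields[EX_FIELD_MAX] = {
	[EX_FIELD_A] = { .bitpos = 0,  .bitlen = 12 },
	[EX_FIELD_B] = { .bitpos = 12, .bitlen = 20 },
};

static const struct hcapi_cfa_layout ex_layout = {
	.is_msb_order = false,
	.total_sz_in_bits = 32,
	.field_array = ex_fields,
	.array_sz = EX_FIELD_MAX,
};

int main(void)
{
	uint64_t obj[1] = { 0 };
	uint64_t val = 0;

	/* write a 20-bit value into EX_FIELD_B, then read it back */
	hcapi_cfa_put_field(obj, &ex_layout, EX_FIELD_B, 0xABCDE);
	hcapi_cfa_get_field(obj, &ex_layout, EX_FIELD_B, &val);

	printf("read back 0x%llx\n", (unsigned long long)val);
	return 0;
}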
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
new file mode 100644
index 0000000..ca562d2
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -0,0 +1,669 @@
+/*
+  *   Copyright(c) Broadcom Limited.
+  *   All rights reserved.
+  */
+
+/*!
+ *   \file
+ *   \brief Exported functions for CFA HW programming
+ */
+#ifndef _HCAPI_CFA_DEFS_H_
+#define _HCAPI_CFA_DEFS_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#define SUPPORT_CFA_HW_ALL 0
+#define SUPPORT_CFA_HW_P4  1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+
+#define CFA_BITS_PER_BYTE (8)
+#define __CFA_ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
+#define CFA_ALIGN(x, a) __CFA_ALIGN_MASK(x, (a)-1)
+#define CFA_ALIGN_128(x) CFA_ALIGN(x, 128)
+#define CFA_ALIGN_32(x) CFA_ALIGN(x, 32)
+
+#define NUM_WORDS_ALIGN_32BIT(x)                                               \
+	(CFA_ALIGN_32(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+#define NUM_WORDS_ALIGN_128BIT(x)                                              \
+	(CFA_ALIGN_128(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+
+#define CFA_GLOBAL_CFG_DATA_SZ (100)
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL (1)
+#endif
+
+#include "hcapi_cfa_p4.h"
+#define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+#define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
+#define CFA_PROF_MAX_KEY_CFG_SZ sizeof(struct cfa_p4_prof_key_cfg)
+#define CFA_KEY_MAX_FIELD_CNT 41
+#define CFA_ACT_MAX_TEMPLATE_SZ sizeof(struct cfa_p4_action_template)
+
+/**
+ * CFA HW version definition
+ */
+enum hcapi_cfa_ver {
+	HCAPI_CFA_P40 = 0, /**< CFA phase 4.0 */
+	HCAPI_CFA_P45 = 1, /**< CFA phase 4.5 */
+	HCAPI_CFA_P58 = 2, /**< CFA phase 5.8 */
+	HCAPI_CFA_P59 = 3, /**< CFA phase 5.9 */
+	HCAPI_CFA_PMAX = 4
+};
+
+/**
+ * CFA direction definition
+ */
+enum hcapi_cfa_dir {
+	HCAPI_CFA_DIR_RX = 0, /**< Receive */
+	HCAPI_CFA_DIR_TX = 1, /**< Transmit */
+	HCAPI_CFA_DIR_MAX = 2
+};
+
+/**
+ * CFA HW OPCODE definition
+ */
+enum hcapi_cfa_hwops {
+	HCAPI_CFA_HWOPS_PUT, /**< Write to HW operation */
+	HCAPI_CFA_HWOPS_GET, /**< Read from HW operation */
+	HCAPI_CFA_HWOPS_ADD, /**< For operations which require more than simple
+			      *   writes to HW, this operation is used.  The
+			      *   distinction with this operation when compared
+			      *   to the PUT ops is that this operation is used
+			      *   in conjunction with the HCAPI_CFA_HWOPS_DEL
+			      *   op to remove the operations issued by the
+			      *   ADD OP.
+			      */
+	HCAPI_CFA_HWOPS_DEL, /**< This issues operations to clear the hardware.
+			      *   This operation is used in conjunction
+			      *   with the HCAPI_CFA_HWOPS_ADD op and is the
+			      *   way to undo/clear the ADD op.
+			      */
+	HCAPI_CFA_HWOPS_MAX
+};
+
+/**
+ * CFA HW KEY CONTROL OPCODE definition
+ */
+enum hcapi_cfa_key_ctrlops {
+	HCAPI_CFA_KEY_CTRLOPS_INSERT, /**< insert control bits */
+	HCAPI_CFA_KEY_CTRLOPS_STRIP, /**< strip control bits */
+	HCAPI_CFA_KEY_CTRLOPS_MAX
+};
+
+/**
+ * CFA HW field structure definition
+ */
+struct hcapi_cfa_field {
+	/** [in] Starting bit position of the HW field within a HW table
+	 *  entry.
+	 */
+	uint16_t bitpos;
+	/** [in] Number of bits for the HW field. */
+	uint8_t bitlen;
+};
+
+/**
+ * CFA HW table entry layout structure definition
+ */
+struct hcapi_cfa_layout {
+	/** [out] Bit order of layout */
+	bool is_msb_order;
+	/** [out] Size in bits of entry */
+	uint32_t total_sz_in_bits;
+	/** [out] data pointer of the HW layout fields array */
+	const struct hcapi_cfa_field *field_array;
+	/** [out] number of HW field entries in the HW layout field array */
+	uint32_t array_sz;
+};
+
+/**
+ * CFA HW data object definition
+ */
+struct hcapi_cfa_data_obj {
+	/** [in] HW field identifier. Used as an index to a HW table layout */
+	uint16_t field_id;
+	/** [in] Value of the HW field */
+	uint64_t val;
+};
+
+/**
+ * CFA HW definition
+ */
+struct hcapi_cfa_hw {
+	/** [in] HW table base address for the operation with optional device
+	 *  handle. For on-chip HW table operation, this is either the TX
+	 *  or RX CFA HW base address. For off-chip table, this field is the
+	 *  base memory address of the off-chip table.
+	 */
+	uint64_t base_addr;
+	/** [in] Optional opaque device handle. It is generally used to access
+	 *  a GRC register space through a PCIe BAR and passed to the BAR memory
+	 *  accessor routine.
+	 */
+	void *handle;
+};
+
+/**
+ * CFA HW operation definition
+ *
+ */
+struct hcapi_cfa_hwop {
+	/** [in] HW opcode */
+	enum hcapi_cfa_hwops opcode;
+	/** [in] CFA HW information used by accessor routines.
+	 */
+	struct hcapi_cfa_hw hw;
+};
+
+/**
+ * CFA HW data structure definition
+ */
+struct hcapi_cfa_data {
+	/** [in] physical offset to the HW table for the data to be
+	 *  written to.  If this is an array of registers, this is the
+	 *  index into the array of registers.  For writing keys, this
+	 *  is the byte offset into the memory where the key should be
+	 *  written.
+	 */
+	union {
+		uint32_t index;
+		uint32_t byte_offset;
+	};
+	/** [in] HW data buffer pointer */
+	uint8_t *data;
+	/** [in] HW data mask buffer pointer */
+	uint8_t *data_mask;
+	/** [in] size of the HW data buffer in bytes */
+	uint16_t data_sz;
+};
+
+/*********************** Truflow start ***************************/
+enum hcapi_cfa_pg_tbl_lvl {
+	PT_LVL_0,
+	PT_LVL_1,
+	PT_LVL_2,
+	PT_LVL_MAX
+};
+
+enum hcapi_cfa_em_table_type {
+	KEY0_TABLE,
+	KEY1_TABLE,
+	RECORD_TABLE,
+	EFC_TABLE,
+	MAX_TABLE
+};
+
+struct hcapi_cfa_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct hcapi_cfa_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct hcapi_cfa_em_page_tbl    pg_tbl[PT_LVL_MAX];
+};
+
+struct hcapi_cfa_em_ctx_mem_info {
+	struct hcapi_cfa_em_table		em_tables[MAX_TABLE];
+};
+
+/*********************** Truflow end ****************************/
+
+/**
+ * CFA HW key table definition
+ *
+ * Applicable to EEM and off-chip EM table only.
+ */
+struct hcapi_cfa_key_tbl {
+	/** [in] For EEM, this is the KEY0 base mem pointer. For off-chip EM,
+	 *  this is the base mem pointer of the key table.
+	 */
+	uint8_t *base0;
+	/** [in] total size of the key table in bytes. For EEM, this size is
+	 *  same for both KEY0 and KEY1 table.
+	 */
+	uint32_t size;
+	/** [in] number of key buckets, applicable for newer chips */
+	uint32_t num_buckets;
+	/** [in] For EEM, this is the KEY1 base mem pointer. For off-chip EM,
+	 *  this is the key record memory base pointer within the key table,
+	 *  applicable for newer chips.
+	 */
+	uint8_t *base1;
+};
+
+/**
+ * CFA HW key buffer definition
+ */
+struct hcapi_cfa_key_obj {
+	/** [in] pointer to the key data buffer */
+	uint32_t *data;
+	/** [in] buffer len in bits */
+	uint32_t len;
+	/** [in] Pointer to the key layout */
+	struct hcapi_cfa_key_layout *layout;
+};
+
+/**
+ * CFA HW key data definition
+ */
+struct hcapi_cfa_key_data {
+	/** [in] For on-chip key table, it is the offset in unit of smallest
+	 *  key. For off-chip key table, it is the byte offset relative
+	 *  to the key record memory base.
+	 */
+	uint32_t offset;
+	/** [in] HW key data buffer pointer */
+	uint8_t *data;
+	/** [in] size of the key in bytes */
+	uint16_t size;
+};
+
+/**
+ * CFA HW key location definition
+ */
+struct hcapi_cfa_key_loc {
+	/** [out] on-chip EM bucket offset or off-chip EM bucket mem pointer */
+	uint64_t bucket_mem_ptr;
+	/** [out] index within the EM bucket */
+	uint8_t bucket_idx;
+};
+
+/**
+ * CFA HW layout table definition
+ */
+struct hcapi_cfa_layout_tbl {
+	/** [out] data pointer to an array of fixed-format layouts supported.
+	 *  The index to the array is the CFA HW table ID
+	 */
+	const struct hcapi_cfa_layout *tbl;
+	/** [out] number of fixed-format layouts in the layout array */
+	uint16_t num_layouts;
+};
+
+/**
+ * Key template consists of key fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_key_template {
+	/** [in] key field enable array, set the corresponding field
+	 *  enable to 1 to make a field valid
+	 */
+	uint8_t field_en[CFA_KEY_MAX_FIELD_CNT];
+	/** [in] Identifies if the key template is for TCAM. If false,
+	 *  the key template is for EM. This field is mandatory for devices
+	 *  that only support fixed key formats.
+	 */
+	bool is_wc_tcam_key;
+};
+
+/**
+ * key layout consists of a field array, key bitlen, key ID, and other
+ * metadata pertaining to a key
+ */
+struct hcapi_cfa_key_layout {
+	/** [out] key layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual key size in number of bits */
+	uint16_t bitlen;
+	/** [out] key identifier; this field is only valid for devices
+	 *  that support fixed key formats
+	 */
+	uint16_t id;
+	/** [out] Identifies whether the key layout is a WC TCAM key */
+	bool is_wc_tcam_key;
+	/** [out] total slices size, valid for WC TCAM key only. It can be
+	 *  used by the user to determine the total size of WC TCAM key slices
+	 *  in bytes. */
+	uint16_t slices_size;
+};
+
+/**
+ * key layout memory contents
+ */
+struct hcapi_cfa_key_layout_contents {
+	/** key layouts */
+	struct hcapi_cfa_key_layout key_layout;
+
+	/** layout */
+	struct hcapi_cfa_layout layout;
+
+	/** fields */
+	struct hcapi_cfa_field field_array[CFA_KEY_MAX_FIELD_CNT];
+};
+
+/**
+ * Action template consists of action fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_action_template {
+	/** [in] CFA version for the action template */
+	enum hcapi_cfa_ver hw_ver;
+	/** [in] action field enable array, set the corresponding field
+	 *  enable to 1 to make a field valid
+	 */
+	uint8_t data[CFA_ACT_MAX_TEMPLATE_SZ];
+};
+
+/**
+ * action layout consists of a field array, action wordlen and action format ID
+ */
+struct hcapi_cfa_action_layout {
+	/** [in] action identifier */
+	uint16_t id;
+	/** [out] action layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual action record size in number of bits */
+	uint16_t wordlen;
+};
+
+/**
+ *  \defgroup CFA_HCAPI_PUT_API
+ *  HCAPI used for writing to the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to program a specified value to a
+ * HW field based on the provided programming layout.
+ *
+ * @param[in,out] obj_data
+ *   A data pointer to a CFA HW key/mask data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be programmed
+ *
+ * @param[in] val
+ *   Value of the HW field to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val);
+
+/**
+ * This API provides the functionality to program an array of field values
+ * with corresponding field IDs to a number of profiler sub-block fields
+ * based on the fixed profiler sub-block hardware programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * This API provides the functionality to write a value to a
+ * field within the bit position and bit length of a HW data
+ * object based on a provided programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to the HW data object to be updated
+ *
+ * @param[in] layout
+ *   A pointer of the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[in] val
+ *   HW field value to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t val);
+
+/*@}*/
+
+/**
+ *  \defgroup CFA_HCAPI_GET_API
+ *  HCAPI used for writing to the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to get the word length of
+ * a layout object.
+ *
+ * @param[in] layout
+ *   A pointer of the HW layout
+ *
+ * @return
+ *   Word length of the layout object
+ */
+uint16_t hcapi_cfa_get_wordlen(const struct hcapi_cfa_layout *layout);
+
+/**
+ * The API provides the functionality to get bit offset and bit
+ * length information of a field from a programming layout.
+ *
+ * @param[in] layout
+ *   A pointer of the action layout
+ *
+ * @param[out] slice
+ *   A pointer to the action offset info data structure
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_slice(const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, struct hcapi_cfa_field *slice);
+
+/**
+ * This API provides the functionality to read the value of a
+ * CFA HW field from CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA HW key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be programmed
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t *val);
+
+/**
+ * This API provides the functionality to read a number of
+ * HW fields from a CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in, out] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * Get a value to a specific location relative to a HW field
+ *
+ * This API provides the functionality to read HW field from
+ * a section of a HW data object identified by the bit position
+ * and bit length from a given programming layout in order to avoid
+ * reading the entire HW data object.
+ *
+ * @param[in] obj_data
+ *   A pointer of the data object to read from
+ *
+ * @param[in] layout
+ *   A pointer of the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t *val);
+
+/**
+ * This function is used to initialize a layout_contents structure
+ *
+ * The struct hcapi_cfa_key_layout is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * initialized.
+ *
+ * @param[in] layout_contents
+ *  A pointer of the layout contents to initialize
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_init_key_layout_contents(
+	struct hcapi_cfa_key_layout_contents *layout_contents);
+
+/**
+ * This function is used to validate a key template
+ *
+ * The struct hcapi_cfa_key_template is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_template
+ *  A pointer of the key template contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_is_valid_key_template(struct hcapi_cfa_key_template *key_template);
+
+/**
+ * This function is used to validate a key layout
+ *
+ * The struct hcapi_cfa_key_layout is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_layout
+ *  A pointer of the key layout contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_is_valid_key_layout(struct hcapi_cfa_key_layout *key_layout);
+
+/**
+ * This function is used to hash E/EM keys
+ *
+ *
+ * @param[in] key_data
+ *  A pointer of the key
+ *
+ * @param[in] bitlen
+ *  Number of bits in the key
+ *
+ * @return
+ *   CRC32 and Lookup3 hashes of the input key
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen);
+
+/**
+ * This function is used to execute an operation
+ *
+ *
+ * @param[in] op
+ *  Operation
+ *
+ * @param[in] key_tbl
+ *  Table
+ *
+ * @param[in] key_obj
+ *  Key data
+ *
+ * @param[in] key_loc
+ *  Key location
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc);
+
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset);
+#endif /* HCAPI_CFA_DEFS_H_ */
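
As a usage note for hcapi_cfa_key_hash() (declared above, implemented
for P4 in the next file): the P4 implementation packs the CRC32 (KEY0)
hash into the upper 32 bits and the Lookup3 (KEY1) hash into the lower
32 bits, so a caller can split the result as in this hedged sketch;
the key buffer contents and bit length are illustrative only:

#include <stdint.h>
#include "hcapi_cfa_defs.h"

static void example_split_key_hash(void)
{
	uint64_t key[8] = { 0x0123456789abcdefULL };	/* example key data */
	uint64_t hash = hcapi_cfa_key_hash(key, 448);	/* illustrative bitlen */
	uint32_t key0_hash = (uint32_t)(hash >> 32);	/* CRC32 hash   */
	uint32_t key1_hash = (uint32_t)hash;		/* Lookup3 hash */

	(void)key0_hash;
	(void)key1_hash;
}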
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
new file mode 100644
index 0000000..5b5cac8
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -0,0 +1,411 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <string.h>
+#include "lookup3.h"
+#include "rand.h"
+
+#include "hcapi_cfa_defs.h"
+
+#if 0
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen);
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc);
+#endif
+
+#define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
+#define TF_EM_PAGE_SIZE (1 << 21)
+uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
+uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
+bool hcapi_cfa_lkup_init;
+
+static __inline uint32_t SWAP_WORDS32(uint32_t val32)
+{
+	return (((val32 & 0x0000ffff) << 16) |
+		((val32 & 0xffff0000) >> 16));
+}
+
+static void hcapi_cfa_seeds_init(void)
+{
+	int i;
+	uint32_t r;
+
+	if (hcapi_cfa_lkup_init == true)
+		return;
+
+	hcapi_cfa_lkup_init = true;
+
+	/* Initialize the lfsr */
+	rand_init();
+
+	/* RX and TX use the same seed values */
+	hcapi_cfa_lkup_lkup3_init_cfg = SWAP_WORDS32(rand32());
+
+	for (i = 0; i < HCAPI_CFA_LKUP_SEED_MEM_SIZE / 2; i++) {
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2] = r;
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2 + 1] = (r & 0x1);
+	}
+}
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t hcapi_cfa_crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "CRC2:");
+#endif
+	for (l = (len - 1); l >= 0; l--) {
+		crc = ucrc32(buf[l], crc);
+#ifdef TF_EEM_DEBUG
+		TFP_DRV_LOG(DEBUG,
+			    "%02X %08X %08X\n",
+			    (buf[l] & 0xff),
+			    crc,
+			    ~crc);
+#endif
+	}
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "\n");
+#endif
+
+	return ~crc;
+}
+
+static uint32_t hcapi_cfa_crc32_hash(uint8_t *key)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* Do byte-wise XOR of the 52-byte HASH key first. */
+	index = *key;
+	kptr--;
+
+	for (i = CFA_P4_EEM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = hcapi_cfa_lkup_em_seed_mem[index * 2];
+	val2 = hcapi_cfa_lkup_em_seed_mem[index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	val1 = hcapi_cfa_crc32i(~val1,
+		      (key - (CFA_P4_EEM_KEY_MAX_SIZE - 1)),
+		      CFA_P4_EEM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((const uint32_t *)(uintptr_t *)in_key) + 1,
+			 CFA_P4_EEM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 hcapi_cfa_lkup_lkup3_init_cfg);
+
+	return val1;
+}
+
+
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	uint64_t addr;
+
+	if (mem == NULL)
+		return 0;
+
+	/*
+	 * Use the last level (num_lvl - 1) of the page table
+	 */
+	level = mem->num_lvl - 1;
+
+	addr = (uint64_t)mem->pg_tbl[level].pg_va_tbl[page];
+
+#if 0
+	TFP_DRV_LOG(DEBUG, "dir:%d offset:0x%x level:%d page:%d addr:%p\n",
+		    dir, offset, level, page, addr);
+#endif
+
+	return addr;
+}
+
+/** Approximation of HCAPI hcapi_cfa_key_hash()
+ *
+ * Return: 64-bit hash with the CRC32 (KEY0) hash in the upper 32 bits
+ * and the Lookup3 (KEY1) hash in the lower 32 bits.
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen)
+{
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+
+	/*
+	 * Init the seeds if needed
+	 */
+	if (hcapi_cfa_lkup_init == false)
+		hcapi_cfa_seeds_init();
+
+	key0_hash = hcapi_cfa_crc32_hash(((uint8_t *)key_data) +
+					      (bitlen / 8) - 1);
+
+	key1_hash = hcapi_cfa_lookup3_hash((uint8_t *)key_data);
+
+	return ((uint64_t)key0_hash) << 32 | (uint64_t)key1_hash;
+}
+
+static int hcapi_cfa_key_hw_op_put(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy((uint8_t *)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_get(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy(key_obj->data,
+	       (uint8_t *)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_add(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Is entry free?
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If this entry is already valid then report failure
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT))
+		return -1;
+
+	memcpy((uint8_t *)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_del(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Read entry
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If this is not a valid entry then report failure.
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT)) {
+		/*
+		 * If a key has been provided then verify the key matches
+		 * before deleting the entry.
+		 */
+		if (key_obj->data != NULL) {
+			if (memcmp(&table_entry,
+				   key_obj->data,
+				   key_obj->size) != 0)
+				return -1;
+		}
+	} else {
+		return -1;
+	}
+
+
+	/*
+	 * Delete entry
+	 */
+	memset((uint8_t *)op->hw.base_addr +
+	       key_obj->offset,
+	       0,
+	       key_obj->size);
+
+	return rc;
+}
+
+
+/** Approximation of hcapi_cfa_key_hw_op()
+ *
+ *
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc __attribute__((unused)))
+{
+	int rc = 0;
+
+	if (op == NULL ||
+	    key_tbl == NULL ||
+	    key_obj == NULL ||
+	    key_loc == NULL)
+		return -1;
+
+	op->hw.base_addr =
+		hcapi_get_table_page((struct hcapi_cfa_em_table *)key_tbl->base0,
+				     key_obj->offset);
+
+	if (op->hw.base_addr == 0)
+		return -1;
+
+	switch (op->opcode) {
+	case HCAPI_CFA_HWOPS_PUT: /**< Write to HW operation */
+		rc = hcapi_cfa_key_hw_op_put(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_GET: /**< Read from HW operation */
+		rc = hcapi_cfa_key_hw_op_get(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_ADD:
+		/* Add an entry; undone with a matching HCAPI_CFA_HWOPS_DEL */
+		rc = hcapi_cfa_key_hw_op_add(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_DEL:
+		rc = hcapi_cfa_key_hw_op_del(op, key_obj);
+		break;
+	default:
+		rc = -1;
+		break;
+	}
+
+	return rc;
+}
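
For illustration (not part of the patch), a hedged sketch of pairing
HCAPI_CFA_HWOPS_ADD with HCAPI_CFA_HWOPS_DEL through the shim above.
The table, offset and entry buffer are invented, and key_tbl.base0 is
assumed to point at a populated struct hcapi_cfa_em_table so
hcapi_get_table_page() can resolve the backing page:

#include <string.h>
#include "hcapi_cfa_defs.h"

static int example_add_then_del(struct hcapi_cfa_em_table *key0_table,
				uint32_t offset,
				uint8_t *entry, uint16_t entry_sz)
{
	struct hcapi_cfa_hwop op;
	struct hcapi_cfa_key_tbl key_tbl;
	struct hcapi_cfa_key_data key_obj;
	struct hcapi_cfa_key_loc key_loc;
	int rc;

	memset(&op, 0, sizeof(op));
	memset(&key_tbl, 0, sizeof(key_tbl));
	memset(&key_obj, 0, sizeof(key_obj));
	memset(&key_loc, 0, sizeof(key_loc));

	key_tbl.base0 = (uint8_t *)key0_table;
	key_obj.offset = offset;
	key_obj.data = entry;
	key_obj.size = entry_sz;

	op.opcode = HCAPI_CFA_HWOPS_ADD;	/* fails if the slot is already valid */
	rc = hcapi_cfa_key_hw_op(&op, &key_tbl, &key_obj, &key_loc);
	if (rc)
		return rc;

	op.opcode = HCAPI_CFA_HWOPS_DEL;	/* verifies the key before clearing */
	return hcapi_cfa_key_hw_op(&op, &key_tbl, &key_obj, &key_loc);
}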
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
new file mode 100644
index 0000000..0c11876
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -0,0 +1,451 @@
+/*
+  *   Copyright(c) Broadcom Limited.
+  *   All rights reserved.
+  */
+
+#ifndef _HCAPI_CFA_P4_H_
+#define _HCAPI_CFA_P4_H_
+
+#include "cfa_p40_hw.h"
+
+/** CFA phase 4 fixed-format table (layout) ID definition
+ *
+ */
+enum cfa_p4_tbl_id {
+	CFA_P4_TBL_L2CTXT_TCAM = 0,
+	CFA_P4_TBL_L2CTXT_REMAP,
+	CFA_P4_TBL_PROF_TCAM,
+	CFA_P4_TBL_PROF_TCAM_REMAP,
+	CFA_P4_TBL_WC_TCAM,
+	CFA_P4_TBL_WC_TCAM_REC,
+	CFA_P4_TBL_WC_TCAM_REMAP,
+	CFA_P4_TBL_VEB_TCAM,
+	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_MAX
+};
+
+#define CFA_P4_PROF_MAX_KEYS 4
+enum cfa_p4_mac_sel_mode {
+	CFA_P4_MAC_SEL_MODE_FIRST = 0,
+	CFA_P4_MAC_SEL_MODE_LOWEST = 1,
+};
+
+struct cfa_p4_prof_key_cfg {
+	uint8_t mac_sel[CFA_P4_PROF_MAX_KEYS];
+#define CFA_P4_PROF_MAC_SEL_DMAC0 (1 << 0)
+#define CFA_P4_PROF_MAC_SEL_T_MAC0 (1 << 1)
+#define CFA_P4_PROF_MAC_SEL_OUTERMOST_MAC0 (1 << 2)
+#define CFA_P4_PROF_MAC_SEL_DMAC1 (1 << 3)
+#define CFA_P4_PROF_MAC_SEL_T_MAC1 (1 << 4)
+#define CFA_P4_PROF_MAC_OUTERMOST_MAC1 (1 << 5)
+	uint8_t pass_cnt;
+	enum cfa_p4_mac_sel_mode mode;
+};
+
+/**
+ * CFA action layout definition
+ */
+
+#define CFA_P4_ACTION_MAX_LAYOUT_SIZE 184
+
+/**
+ * Action object template structure
+ *
+ * Template structure presents data fields that are necessary to know
+ * at the beginning of Action Builder (AB) processing, i.e. before the
+ * AB compilation. One such example could be a template that is
+ * flexible in size (Encap Record) and the presence of these fields
+ * allows for determining the template size as well as where the
+ * fields are located in the record.
+ *
+ * The template may also present fields that are not made visible to
+ * the caller by way of the action fields.
+ *
+ * Template fields also allow for additional checking on user visible
+ * fields. One such example could be the encap pointer behavior on a
+ * CFA_P4_ACT_OBJ_TYPE_ACT or CFA_P4_ACT_OBJ_TYPE_ACT_SRAM.
+ */
+struct cfa_p4_action_template {
+	/** Action Object type
+	 *
+	 * Controls the type of the Action Template
+	 */
+	enum {
+		/** Select this type to build an Action Record Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT,
+		/** Select this type to build an Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT,
+		/** Select this type to build a SRAM Action Record
+		 * Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT_SRAM,
+		/** Select this type to build a SRAM Action
+		 * Encapsulation Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ENCAP_SRAM,
+		/** Select this type to build a SRAM Action Modify
+		 * Object, with IPv4 capability.
+		 */
+		/* In case of Stingray the term Modify is used for the 'NAT
+		 * action'. Action builder is leveraged to fill in the NAT
+		 * object which then can be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_MODIFY_IPV4_SRAM,
+		/** Select this type to build a SRAM Action Source
+		 * Property Object.
+		 */
+		/* In case of Stingray this is not a 'pure' action record.
+		 * Action builder is leveraged to fill in the Source Property
+		 * object which can then be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM,
+		/** Select this type to build a SRAM Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT_SRAM,
+	} obj_type;
+
+	/** Action Control
+	 *
+	 * Controls the internals of the Action Template
+	 *
+	 * act is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_ACT)
+	 */
+	/*
+	 * Stat and encap are always inline for EEM as table scope
+	 * allocation does not allow for separate Stats allocation,
+	 * but the xx_inline flags are kept to be forward compatible
+	 * with Stingray 2 and are always treated as TRUE.
+	 */
+	struct {
+		/** Set to CFA_HCAPI_TRUE to enable statistics
+		 */
+		uint8_t stat_enable;
+		/** Set to CFA_HCAPI_TRUE to enable statistics to be inlined
+		 */
+		uint8_t stat_inline;
+
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation
+		 */
+		uint8_t encap_enable;
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation to be inlined
+		 */
+		uint8_t encap_inline;
+	} act;
+
+	/** Modify Setting
+	 *
+	 * Controls the type of the Modify Action the template is
+	 * describing
+	 *
+	 * modify is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_MODIFY_SRAM)
+	 */
+	enum {
+		/** Set to enable Modify of Source IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_SOURCE_IPV4 = 0,
+		/** Set to enable Modify of Destination IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_DEST_IPV4
+	} modify;
+
+	/** Encap Control
+	 * Controls the type of encapsulation the template is
+	 * describing
+	 *
+	 * encap is valid when:
+	 * ((obj_type == CFA_P4_ACT_OBJ_TYPE_ACT) &&
+	 *   act.encap_enable) ||
+	 * ((obj_type == CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM)
+	 */
+	struct {
+		/* Direction is required as Stingray Encap on RX is
+		 * limited to l2 and VTAG only.
+		 */
+		/** Receive or Transmit direction
+		 */
+		uint8_t direction;
+		/** Set to CFA_HCAPI_TRUE to enable L2 capability in the
+		 *  template
+		 */
+		uint8_t l2_enable;
+		/** vtag controls the Encap Vector - VTAG Encoding, 4 bits
+		 *
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_0, default, no VLAN
+		 *      Tags applied
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_1, adds capability to
+		 *      set 1 VLAN Tag. Action Template compile adds
+		 *      the following field to the action object
+		 *      ::TF_ER_VLAN1
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_2, adds capability to
+		 *      set 2 VLAN Tags. Action Template compile adds
+		 *      the following fields to the action object
+		 *      ::TF_ER_VLAN1 and ::TF_ER_VLAN2
+		 * </ul>
+		 */
+		enum { CFA_P4_ACT_ENCAP_VTAGS_PUSH_0 = 0,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_1,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_2 } vtag;
+
+		/*
+		 * The remaining fields are NOT supported when
+		 * direction is RX and ((obj_type ==
+		 * CFA_P4_ACT_OBJ_TYPE_ACT) && act.encap_enable).
+		 * ab_compile_layout will perform the checking and
+		 * skip remaining fields.
+		 */
+		/** L3 Encap controls the Encap Vector - L3 Encoding,
+		 *  3 bits. Defines the type of L3 Encapsulation the
+		 *  template is describing.
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_L3_NONE, default, no L3
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV4, enables L3 IPv4
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV6, enables L3 IPv6
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8847, enables L3 MPLS
+		 *      8847 Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8848, enables L3 MPLS
+		 *      8848 Encapsulation.
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable any L3 encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_L3_NONE = 0,
+			/** Set to enable L3 IPv4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV4 = 4,
+			/** Set to enable L3 IPv6 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV6 = 5,
+			/** Set to enable L3 MPLS 8847 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8847 = 6,
+			/** Set to enable L3 MPLS 8848 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8848 = 7
+		} l3;
+
+#define CFA_P4_ACT_ENCAP_MAX_MPLS_LABELS 8
+		/** 1-8 labels, valid when
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8847) ||
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8848)
+		 *
+		 * MAX number of MPLS Labels 8.
+		 */
+		uint8_t l3_num_mpls_labels;
+
+		/** Set to CFA_HCAPI_TRUE to enable L4 capability in the
+		 * template.
+		 *
+		 * CFA_HCAPI_TRUE adds ::TF_EN_UDP_SRC_PORT and
+		 * ::TF_EN_UDP_DST_PORT to the template.
+		 */
+		uint8_t l4_enable;
+
+		/** Tunnel Encap controls the Encap Vector - Tunnel
+		 *  Encap, 3 bits. Defines the type of Tunnel
+		 *  encapsulation the template is describing
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NONE, default, no Tunnel
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL
+		 * <li> CFA_P4_ACT_ENCAP_TNL_VXLAN. NOTE: Expects
+		 *      l4_enable set to CFA_HCAPI_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NGE. NOTE: Expects l4_enable
+		 *      set to CFA_HCAPI_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NVGRE. NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GRE. NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable Tunnel header encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NONE = 0,
+			/** Set to enable Tunnel Generic Full header
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL,
+			/** Set to enable VXLAN header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_VXLAN,
+			/** Set to enable NGE (VXLAN2) header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NGE,
+			/** Set to enable NVGRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NVGRE,
+			/** Set to enable GRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GRE,
+			/** Set to enable Generic header after Tunnel
+			 * L4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4,
+			/** Set to enable Generic header after Tunnel
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		} tnl;
+
+		/** Number of bytes of generic tunnel header,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL)
+		 */
+		uint8_t tnl_generic_size;
+		/** Number of 32b words of nge options,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_NGE)
+		 */
+		uint8_t tnl_nge_op_len;
+		/* Currently not planned */
+		/* Custom Header */
+		/*	uint8_t custom_enable; */
+	} encap;
+};
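+
+/*
+ * Illustrative sketch only (not part of the API): assuming the enclosing
+ * template structure above is named struct cfa_p4_action_template (the
+ * name is hypothetical here), a VXLAN encap action template could be
+ * described roughly as follows:
+ *
+ *   struct cfa_p4_action_template tmpl = {
+ *       .obj_type = CFA_P4_ACT_OBJ_TYPE_ACT,
+ *       .act = {
+ *           .stat_enable = CFA_HCAPI_TRUE,
+ *           .encap_enable = CFA_HCAPI_TRUE,
+ *       },
+ *       .encap = {
+ *           .l2_enable = CFA_HCAPI_TRUE,
+ *           .vtag = CFA_P4_ACT_ENCAP_VTAGS_PUSH_0,
+ *           .l3 = CFA_P4_ACT_ENCAP_L3_NONE,
+ *           .l4_enable = CFA_HCAPI_TRUE,
+ *           .tnl = CFA_P4_ACT_ENCAP_TNL_VXLAN,
+ *       },
+ *   };
+ */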
+
+/**
+ * Enumeration of SRAM entry types, used for allocation of
+ * fixed SRAM entities. The memory model for CFA HCAPI
+ * determines if an SRAM entry type is supported.
+ */
+enum cfa_p4_action_sram_entry_type {
+	/* NOTE: Any additions to this enum must be reflected on FW
+	 * side as well.
+	 */
+
+	/** SRAM Action Record */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	/** SRAM Action Encap 8 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
+	/** SRAM Action Encap 16 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
+	/** SRAM Action Encap 64 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+	/** SRAM Action Modify IPv4 Source */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
+	/** SRAM Action Modify IPv4 Destination */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+	/** SRAM Action Source Properties SMAC */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
+	/** SRAM Action Source Properties SMAC IPv4 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV4,
+	/** SRAM Action Source Properties SMAC IPv6 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV6,
+	/** SRAM Action Statistics 64 Bits */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_STATS_64,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MAX
+};
+
+/**
+ * SRAM Action Record structure holding either an action index or an
+ * action ptr.
+ */
+union cfa_p4_action_sram_act_record {
+	/** SRAM Action idx specifies the offset of the SRAM
+	 * element within its SRAM Entry Type block. This
+	 * index can be written into e.g. an L2 Context. Use
+	 * this type for all SRAM Action Record types except
+	 * SRAM Full Action records, which use act_ptr instead.
+	 */
+	uint16_t act_idx;
+	/** SRAM Full Action is special in that it needs an
+	 * action record pointer. This pointer can be written
+	 * into e.g. a Wildcard TCAM entry.
+	 */
+	uint32_t act_ptr;
+};
+
+/**
+ * cfa_p4_action_param parameter definition
+ */
+struct cfa_p4_action_param {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	uint8_t dir;
+	/**
+	 * [in] SRAM allocation type
+	 */
+	enum cfa_p4_action_sram_entry_type type;
+	/**
+	 * [in] action record to set. The 'type' specified selects the
+	 *	record definition to use for the passed-in record.
+	 */
+	union cfa_p4_action_sram_act_record record;
+	/**
+	 * [in] number of elements in act_data
+	 */
+	uint32_t act_size;
+	/**
+	 * [in] ptr to array of action data
+	 */
+	uint64_t *act_data;
+};
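+
+/*
+ * Illustrative sketch only: a caller might fill cfa_p4_action_param for
+ * a 64b SRAM statistics entry roughly as follows (the direction value,
+ * index and data buffer are hypothetical):
+ *
+ *   uint64_t stats_data[1] = { 0 };
+ *   struct cfa_p4_action_param parms = {
+ *       .dir = 0,
+ *       .type = CFA_P4_ACTION_SRAM_ENTRY_TYPE_STATS_64,
+ *       .record = { .act_idx = 0 },
+ *       .act_size = 1,
+ *       .act_data = stats_data,
+ *   };
+ */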
+
+/**
+ * EEM Key entry sizes
+ */
+#define CFA_P4_EEM_KEY_MAX_SIZE 52
+#define CFA_P4_EEM_KEY_RECORD_SIZE 64
+
+/**
+ * cfa_eem_entry_hdr
+ */
+struct cfa_p4_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is made up of two words;
+			  * this is the first. It has multiple
+			  * subfields with no suitable single name,
+			  * so it is simply called word1.
+			  */
+#define CFA_P4_EEM_ENTRY_VALID_SHIFT 31
+#define CFA_P4_EEM_ENTRY_VALID_MASK 0x80000000
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_SHIFT 30
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_MASK 0x40000000
+#define CFA_P4_EEM_ENTRY_STRENGTH_SHIFT 28
+#define CFA_P4_EEM_ENTRY_STRENGTH_MASK 0x30000000
+#define CFA_P4_EEM_ENTRY_RESERVED_SHIFT 17
+#define CFA_P4_EEM_ENTRY_RESERVED_MASK 0x0FFE0000
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT 8
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_MASK 0x0001FF00
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_SHIFT 3
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_MASK 0x000000F8
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_SHIFT 2
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK 0x00000004
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_SHIFT 1
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_MASK 0x00000002
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_SHIFT 0
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_MASK 0x00000001
+};
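+
+/*
+ * Illustrative sketch only: word1 is composed from the shift/mask pairs
+ * above, e.g. (key_size and act_rec_size are hypothetical inputs):
+ *
+ *   struct cfa_p4_eem_entry_hdr hdr = { 0 };
+ *   hdr.word1 |= CFA_P4_EEM_ENTRY_VALID_MASK;
+ *   hdr.word1 |= (key_size << CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT) &
+ *		   CFA_P4_EEM_ENTRY_KEY_SIZE_MASK;
+ *   hdr.word1 |= (act_rec_size << CFA_P4_EEM_ENTRY_ACT_REC_SIZE_SHIFT) &
+ *		   CFA_P4_EEM_ENTRY_ACT_REC_SIZE_MASK;
+ */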
+
+/**
+ *  cfa_p4_eem_key_entry
+ */
+struct cfa_p4_eem_64b_entry {
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[CFA_P4_EEM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct cfa_p4_eem_entry_hdr hdr;
+};
+
+#endif /* _CFA_HW_P4_H_ */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 1f7df9d..4994d4c 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,6 +43,9 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_rm_new.c',
 
+	'hcapi/hcapi_cfa_common.c',
+	'hcapi/hcapi_cfa_p4.c',
+
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
 	'tf_ulp/ulp_flow_db.c',
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index 91cbc62..da1f4d4 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -189,7 +189,7 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
 		return NULL;
 
-	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
 		return NULL;
 
 	/*
@@ -325,7 +325,7 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key0_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key0_hash,
-					 KEY0_TABLE,
+					 TF_KEY0_TABLE,
 					 dir);
 
 	/*
@@ -334,23 +334,23 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key1_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key1_hash,
-					 KEY1_TABLE,
+					 TF_KEY1_TABLE,
 					 dir);
 
 	if (key0_entry == -EEXIST) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return -EEXIST;
 	} else if (key1_entry == -EEXIST) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return -EEXIST;
 	} else if (key0_entry == 0) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return 0;
 	} else if (key1_entry == 0) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return 0;
 	}
@@ -384,7 +384,7 @@ int tf_insert_eem_entry(struct tf_session *session,
 	int		   num_of_entry;
 
 	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[KEY0_TABLE].num_entries);
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
 
 	if (!mask)
 		return -EINVAL;
@@ -420,14 +420,14 @@ int tf_insert_eem_entry(struct tf_session *session,
 				      key1_index,
 				      &index,
 				      &table_type) == 0) {
-		if (table_type == KEY0_TABLE) {
+		if (table_type == TF_KEY0_TABLE) {
 			TF_SET_GFID(gfid,
 				    key0_index,
-				    KEY0_TABLE);
+				    TF_KEY0_TABLE);
 		} else {
 			TF_SET_GFID(gfid,
 				    key1_index,
-				    KEY1_TABLE);
+				    TF_KEY1_TABLE);
 		}
 
 		/*
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index f9bfae7..07c3469 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -285,8 +285,8 @@ tf_em_setup_page_table(struct tf_em_table *tbl)
 		tf_em_link_page_table(tp, tp_next, set_pte_last);
 	}
 
-	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
 }
 
 /**
@@ -317,7 +317,7 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 			uint64_t *num_data_pages)
 {
 	uint64_t lvl_data_size = page_size;
-	int lvl = PT_LVL_0;
+	int lvl = TF_PT_LVL_0;
 	uint64_t data_size;
 
 	*num_data_pages = 0;
@@ -326,10 +326,10 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 	while (lvl_data_size < data_size) {
 		lvl++;
 
-		if (lvl == PT_LVL_1)
+		if (lvl == TF_PT_LVL_1)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				page_size;
-		else if (lvl == PT_LVL_2)
+		else if (lvl == TF_PT_LVL_2)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				MAX_PAGE_PTRS(page_size) * page_size;
 		else
@@ -386,18 +386,18 @@ tf_em_size_page_tbls(int max_lvl,
 		     uint32_t page_size,
 		     uint32_t *page_cnt)
 {
-	if (max_lvl == PT_LVL_0) {
-		page_cnt[PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == PT_LVL_1) {
-		page_cnt[PT_LVL_1] = num_data_pages;
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
-	} else if (max_lvl == PT_LVL_2) {
-		page_cnt[PT_LVL_2] = num_data_pages;
-		page_cnt[PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
 	} else {
 		return;
 	}
@@ -434,7 +434,7 @@ tf_em_size_table(struct tf_em_table *tbl)
 	/* Determine number of page table levels and the number
 	 * of data pages needed to process the given eem table.
 	 */
-	if (tbl->type == RECORD_TABLE) {
+	if (tbl->type == TF_RECORD_TABLE) {
 		/*
 		 * For action records just a memory size is provided. Work
 		 * backwards to resolve to number of entries
@@ -480,9 +480,9 @@ tf_em_size_table(struct tf_em_table *tbl)
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
 		    num_data_pages,
-		    page_cnt[PT_LVL_0],
-		    page_cnt[PT_LVL_1],
-		    page_cnt[PT_LVL_2]);
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
 
 	return 0;
 }
@@ -508,7 +508,7 @@ tf_em_ctx_unreg(struct tf *tfp,
 	struct tf_em_table *tbl;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
@@ -544,7 +544,7 @@ tf_em_ctx_reg(struct tf *tfp,
 	int rc = 0;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
@@ -719,41 +719,41 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		return -EINVAL;
 	}
 	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries
 		= parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size
 		= parms->rx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries
 		= 0;
 
 	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries
 		= parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size
 		= parms->tx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries
 		= 0;
 
 	return 0;
@@ -1572,11 +1572,11 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 
 		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
 		rc = tf_msg_em_cfg(tfp,
-				   em_tables[KEY0_TABLE].num_entries,
-				   em_tables[KEY0_TABLE].ctx_id,
-				   em_tables[KEY1_TABLE].ctx_id,
-				   em_tables[RECORD_TABLE].ctx_id,
-				   em_tables[EFC_TABLE].ctx_id,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
@@ -1601,8 +1601,8 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 */
 		rc = tf_create_tbl_pool_external(dir,
 					    tbl_scope_cb,
-					    em_tables[RECORD_TABLE].num_entries,
-					    em_tables[RECORD_TABLE].entry_size);
+					    em_tables[TF_RECORD_TABLE].num_entries,
+					    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
 			PMD_DRV_LOG(ERR,
 				    "%d TBL: Unable to allocate idx pools %s\n",
@@ -1672,7 +1672,7 @@ tf_set_tbl_entry(struct tf *tfp,
 		base_addr = tf_em_get_table_page(tbl_scope_cb,
 						 parms->dir,
 						 offset,
-						 RECORD_TABLE);
+						 TF_RECORD_TABLE);
 		if (base_addr == NULL) {
 			PMD_DRV_LOG(ERR,
 				    "dir:%d, Base address lookup failed\n",
@@ -1972,7 +1972,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
 
-		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
 			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
 			printf
 	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index b335a9c..d78e4fe 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -14,18 +14,18 @@
 struct tf_session;
 
 enum tf_pg_tbl_lvl {
-	PT_LVL_0,
-	PT_LVL_1,
-	PT_LVL_2,
-	PT_LVL_MAX
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
 };
 
 enum tf_em_table_type {
-	KEY0_TABLE,
-	KEY1_TABLE,
-	RECORD_TABLE,
-	EFC_TABLE,
-	MAX_TABLE
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
 };
 
 struct tf_em_page_tbl {
@@ -41,15 +41,15 @@ struct tf_em_table {
 	uint16_t			ctx_id;
 	uint32_t			entry_size;
 	int				num_lvl;
-	uint32_t			page_cnt[PT_LVL_MAX];
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
 	uint64_t			num_data_pages;
 	void				*l0_addr;
 	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
 };
 
 struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[MAX_TABLE];
+	struct tf_em_table		em_tables[TF_MAX_TABLE];
 };
 
 /** table scope control block content */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 16/50] net/bnxt: add core changes for EM and EEM lookups
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (14 preceding siblings ...)
  2020-06-12 13:28 ` [dpdk-dev] [PATCH 15/50] net/bnxt: add HCAPI interface support Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 17/50] net/bnxt: implement support for TCAM access Somnath Kotur
                   ` (34 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Randy Schacher <stuart.schacher@broadcom.com>

- Move External Exact Match (EEM) and Exact Match (EM) handling to the
  device module, using HCAPI to add and delete entries.
- Make EM active through the device interface.

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Shahaji Bhosle <shahaji.bhosle@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/Makefile                 |   3 +-
 drivers/net/bnxt/hcapi/Makefile           |   5 +-
 drivers/net/bnxt/hcapi/cfa_p40_hw.h       | 688 ++++++++++++++++++++++++++++++
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h      | 250 +++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 ----
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h   |  26 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     |  14 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h     |   2 +-
 drivers/net/bnxt/meson.build              |   1 -
 drivers/net/bnxt/tf_core/Makefile         |   8 +
 drivers/net/bnxt/tf_core/hwrm_tf.h        |  24 +-
 drivers/net/bnxt/tf_core/stack.c          |   2 +-
 drivers/net/bnxt/tf_core/tf_core.c        | 441 ++++++++++---------
 drivers/net/bnxt/tf_core/tf_core.h        | 141 +++---
 drivers/net/bnxt/tf_core/tf_device.h      |  32 ++
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   3 +
 drivers/net/bnxt/tf_core/tf_em.c          | 567 +++++++-----------------
 drivers/net/bnxt/tf_core/tf_em.h          |  72 +---
 drivers/net/bnxt/tf_core/tf_msg.c         |  22 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |   4 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  25 +-
 drivers/net/bnxt/tf_core/tf_rm.c          | 163 ++++---
 drivers/net/bnxt/tf_core/tf_tbl.c         | 440 ++++++++-----------
 drivers/net/bnxt/tf_core/tf_tbl.h         |  49 +--
 24 files changed, 1816 insertions(+), 1258 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 delete mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 3656274..349b09c 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -46,9 +46,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
-CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
+include $(SRCDIR)/hcapi/Makefile
 endif
 
 #
diff --git a/drivers/net/bnxt/hcapi/Makefile b/drivers/net/bnxt/hcapi/Makefile
index c4c91b6..65cddd7 100644
--- a/drivers/net/bnxt/hcapi/Makefile
+++ b/drivers/net/bnxt/hcapi/Makefile
@@ -2,6 +2,9 @@
 # Copyright(c) 2019-2020 Broadcom Limited.
 # All rights reserved.
 
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += hcapi/hcapi_cfa_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += hcapi/hcapi_cfa_p4.c
 
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_defs.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_p4.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/cfa_p40_hw.h
diff --git a/drivers/net/bnxt/hcapi/cfa_p40_hw.h b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
new file mode 100644
index 0000000..1c51da8
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
@@ -0,0 +1,688 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_hw.h
+ *
+ * Description: header for SWE based on TFLIB2.0
+ *
+ * Date:  taken from 12/16/19 17:18:12
+ *
+ * Note:  This file was first generated using tflib_decode.py.
+ *
+ *        Changes have been made due to lack of availability of XML for
+ *        additional tables at this time (EEM Record and union table fields).
+ *        Changes not autogenerated are noted in comments.
+ */
+
+#ifndef _CFA_P40_HW_H_
+#define _CFA_P40_HW_H_
+
+/**
+ * Valid TCAM entry. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Key type (pass). (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS 164
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS 2
+/**
+ * Tunnel HDR type. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS 160
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Number of VLAN tags in tunnel l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS 158
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Number of VLAN tags in l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS 156
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS    108
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS  48
+/**
+ * Tunnel Outer VLAN Tag ID. (for idx 3 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS  96
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS 12
+/**
+ * Tunnel Inner VLAN Tag ID. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS  84
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS 12
+/**
+ * Source Partition. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  80
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+/**
+ * Source Virtual I/F. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  8
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS    24
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS  48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS    12
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS  12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS    0
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS  12
+
+enum cfa_p40_prof_l2_ctxt_tcam_flds {
+	CFA_P40_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 1,
+	CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 2,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 3,
+	CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 4,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC1_FLD = 5,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_FLD = 6,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_FLD = 7,
+	CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_FLD = 8,
+	CFA_P40_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P40_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P40_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 167
+
+/**
+ * Valid entry. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_VALID_BITPOS        79
+#define CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS      1
+/**
+ * Reserved. Program to 0. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS     78
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS   1
+/**
+ * PF Parif Number. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS     74
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS   4
+/**
+ * Number of VLAN Tags. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS    72
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS  2
+/**
+ * Dest. MAC Address.
+ */
+#define CFA_P40_ACT_VEB_TCAM_MAC_BITPOS          24
+#define CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS        48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_OVID_BITPOS         12
+#define CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS       12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_IVID_BITPOS         0
+#define CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS       12
+
+enum cfa_p40_act_veb_tcam_flds {
+	CFA_P40_ACT_VEB_TCAM_VALID_FLD = 0,
+	CFA_P40_ACT_VEB_TCAM_RESERVED_FLD = 1,
+	CFA_P40_ACT_VEB_TCAM_PARIF_IN_FLD = 2,
+	CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_FLD = 3,
+	CFA_P40_ACT_VEB_TCAM_MAC_FLD = 4,
+	CFA_P40_ACT_VEB_TCAM_OVID_FLD = 5,
+	CFA_P40_ACT_VEB_TCAM_IVID_FLD = 6,
+	CFA_P40_ACT_VEB_TCAM_MAX_FLD
+};
+
+#define CFA_P40_ACT_VEB_TCAM_TOTAL_NUM_BITS 80
+
+/**
+ * Entry is valid.
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS 18
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS 1
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS 2
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS 16
+/**
+ * for resolving TCAM/EM conflicts
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS 0
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS 2
+
+enum cfa_p40_lkup_tcam_record_mem_flds {
+	CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_FLD = 0,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_FLD = 1,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_FLD = 2,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_MAX_FLD
+};
+
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_TOTAL_NUM_BITS 19
+
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS 62
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_tpid_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_MAX = 0x3UL
+};
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS 60
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_pri_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_MAX = 0x3UL
+};
+/**
+ * Bypass Source Properties Lookup. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS 59
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS 1
+/**
+ * SP Record Pointer. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS 43
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS 16
+/**
+ * BD Action pointer passing enable. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS 42
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS 1
+/**
+ * Default VLAN TPID. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS 39
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS 3
+/**
+ * Allowed VLAN TPIDs. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS 33
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS 6
+/**
+ * Default VLAN PRI.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS 30
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS 3
+/**
+ * Allowed VLAN PRIs.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS 22
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS 8
+/**
+ * Partition.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS 18
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS 4
+/**
+ * Bypass Lookup.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS 17
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS 1
+
+/**
+ * L2 Context Remap Data. The layout depends on the mode:
+ *   Action bypass mode (1): {7'd0, prof_vnic[9:0]}
+ *     (byp_lkup_en should also be set)
+ *   Action bypass mode (0):
+ *     byp_lkup_en(0): {prof_func[6:0], l2_context[9:0]}
+ *     byp_lkup_en(1): {1'b0, act_rec_ptr[15:0]}
+ */
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS 12
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS 10
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS 7
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS 10
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS 16
+
+enum cfa_p40_prof_ctxt_remap_mem_flds {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_FLD = 0,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_FLD = 1,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_FLD = 2,
+	CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_FLD = 3,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_FLD = 4,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_FLD = 5,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_FLD = 6,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_FLD = 7,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_FLD = 8,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_FLD = 9,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_FLD = 10,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_FLD = 11,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_FLD = 12,
+	CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_FLD = 13,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ARP_FLD = 14,
+	CFA_P40_PROF_CTXT_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TOTAL_NUM_BITS 64
+
+/**
+ * Bypass action pointer look up (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS 37
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS 1
+/**
+ * Exact match search enable (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS 36
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS 1
+/**
+ * Exact match profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS 28
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS 8
+/**
+ * Exact match key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS 5
+/**
+ * Exact match key mask
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS 13
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS 10
+/**
+ * TCAM search enable
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS 12
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS 1
+/**
+ * TCAM profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS 4
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS 8
+/**
+ * TCAM key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS 4
+
+enum cfa_p40_prof_profile_tcam_remap_mem_flds {
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TOTAL_NUM_BITS 38
+
+/**
+ * Valid TCAM entry (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS   80
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS 1
+/**
+ * Packet type (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS 76
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS 4
+/**
+ * Pass through CFA (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS 74
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS 2
+/**
+ * Aggregate error (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS 73
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS 1
+/**
+ * Profile function (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS 66
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS 7
+/**
+ * Reserved for future use. Set to 0.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS 57
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS 9
+/**
+ * non-tunnel(0)/tunneled(1) packet (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS 56
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS 1
+/**
+ * Tunnel L2 tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS 55
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L2 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS 53
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped tunnel L2 dest_type UC(0)/MC(2)/BC(3) (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS 51
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS 2
+/**
+ * Tunnel L2 1+ VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS 50
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * Tunnel L2 2 VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS 49
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS 1
+/**
+ * Tunnel L3 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS 48
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS 1
+/**
+ * Tunnel L3 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS 47
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS 1
+/**
+ * Tunnel L3 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS 43
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L3 header is IPV4 or IPV6. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS 42
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 src address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS 41
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 dest address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS 40
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * Tunnel L4 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS 39
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L4 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS 38
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS 1
+/**
+ * Tunnel L4 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS 34
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L4 header is UDP or TCP (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS 33
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS 1
+/**
+ * Tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS 32
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS 31
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS 1
+/**
+ * Tunnel header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS 27
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel header flags
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS 24
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS 3
+/**
+ * L2 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS 1
+/**
+ * L2 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS 22
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS 1
+/**
+ * L2 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS 20
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped L2 dest_type UC(0)/MC(2)/BC(3)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS 18
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS 2
+/**
+ * L2 header 1+ VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS 17
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * L2 header 2 VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS 1
+/**
+ * L3 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS 15
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS 1
+/**
+ * L3 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS 14
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS 1
+/**
+ * L3 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS 10
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS 4
+/**
+ * L3 header is IPV4 or IPV6.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS 9
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS 1
+/**
+ * L3 header IPV6 src address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS 8
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * L3 header IPV6 dest address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS 7
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * L4 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS 6
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS 1
+/**
+ * L4 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS 5
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS 1
+/**
+ * L4 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS 1
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS 4
+/**
+ * L4 header is UDP or TCP
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS 1
+
+enum cfa_p40_prof_profile_tcam_flds {
+	CFA_P40_PROF_PROFILE_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_RESERVED_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_FLD = 9,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_FLD = 10,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_FLD = 11,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_FLD = 12,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_FLD = 13,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_FLD = 14,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_FLD = 15,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_FLD = 16,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_FLD = 17,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_FLD = 18,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_FLD = 19,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_FLD = 20,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_FLD = 21,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_FLD = 22,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_FLD = 23,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_FLD = 24,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_FLD = 25,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_FLD = 26,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_FLD = 27,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_FLD = 28,
+	CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_FLD = 29,
+	CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_FLD = 30,
+	CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_FLD = 31,
+	CFA_P40_PROF_PROFILE_TCAM_L3_VALID_FLD = 32,
+	CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_FLD = 33,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_FLD = 34,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_FLD = 35,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_FLD = 36,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_FLD = 37,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_FLD = 38,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_FLD = 39,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_FLD = 40,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_FLD = 41,
+	CFA_P40_PROF_PROFILE_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_TOTAL_NUM_BITS 81
+
+/**
+ * CFA flexible key layout definition
+ */
+enum cfa_p40_key_fld_id {
+	CFA_P40_KEY_FLD_ID_MAX
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+/**
+ * Valid
+ */
+#define CFA_P40_EEM_KEY_TBL_VALID_BITPOS 0
+#define CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS 1
+
+/**
+ * L1 Cacheable
+ */
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS 1
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS 1
+
+/**
+ * Strength
+ */
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS 2
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS 2
+
+/**
+ * Key Size
+ */
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS 15
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS 9
+
+/**
+ * Record Size
+ */
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS 24
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS 5
+
+/**
+ * Action Record Internal
+ */
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS 29
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS 1
+
+/**
+ * External Flow Counter
+ */
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS 30
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS 31
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS 33
+
+/**
+ * EEM Key omitted - create using keybuilder
+ * Fields here cannot be larger than a uint64_t
+ */
+
+#define CFA_P40_EEM_KEY_TBL_TOTAL_NUM_BITS 64
+
+enum cfa_p40_eem_key_tbl_flds {
+	CFA_P40_EEM_KEY_TBL_VALID_FLD = 0,
+	CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_FLD = 1,
+	CFA_P40_EEM_KEY_TBL_STRENGTH_FLD = 2,
+	CFA_P40_EEM_KEY_TBL_KEY_SZ_FLD = 3,
+	CFA_P40_EEM_KEY_TBL_REC_SZ_FLD = 4,
+	CFA_P40_EEM_KEY_TBL_ACT_REC_INT_FLD = 5,
+	CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_FLD = 6,
+	CFA_P40_EEM_KEY_TBL_AR_PTR_FLD = 7,
+	CFA_P40_EEM_KEY_TBL_MAX_FLD
+};
+#endif /* _CFA_P40_HW_H_ */
diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
new file mode 100644
index 0000000..4238561
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_tbl.h
+ *
+ * Description: header for SWE based on TFLIB2.0
+ *
+ * Date:  12/16/19 17:18:12
+ *
+ * Note:  This file was originally generated by tflib_decode.py.
+ *        The remainder is hand coded due to lack of availability of XML for
+ *        additional tables at this time (EEM Record and union fields).
+ *
+ **/
+#ifndef _CFA_P40_TBL_H_
+#define _CFA_P40_TBL_H_
+
+#include "cfa_p40_hw.h"
+
+#include "hcapi_cfa_defs.h"
+
+const struct hcapi_cfa_field cfa_p40_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_act_veb_tcam_layout[] = {
+	{CFA_P40_ACT_VEB_TCAM_VALID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_MAC_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_OVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_IVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_lkup_tcam_record_mem_layout[] = {
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_ctxt_remap_mem_layout[] = {
+	{CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS},
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
+	{CFA_P40_EEM_KEY_TBL_VALID_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
+
+};
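+
+/*
+ * Illustrative sketch only (assumes the hcapi_cfa_put_field() helper and
+ * struct hcapi_cfa_layout remain available after this rework): the layout
+ * arrays above are intended to be wrapped in a layout descriptor and
+ * indexed by the corresponding *_FLD enums, e.g. (key_buf, svif and the
+ * MSB-ordering value are hypothetical):
+ *
+ *   struct hcapi_cfa_layout l2_ctxt_layout = {
+ *       .is_msb_order = 1,
+ *       .array_sz = CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD,
+ *       .field_array = cfa_p40_prof_l2_ctxt_tcam_layout,
+ *   };
+ *   hcapi_cfa_put_field(key_buf, &l2_ctxt_layout,
+ *                       CFA_P40_PROF_L2_CTXT_TCAM_SVIF_FLD, svif);
+ */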
+#endif /* _CFA_P40_TBL_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
deleted file mode 100644
index 39afd4d..0000000
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
+++ /dev/null
@@ -1,92 +0,0 @@
-/*
- *   Copyright(c) 2019-2020 Broadcom Limited.
- *   All rights reserved.
- */
-
-#include "bitstring.h"
-#include "hcapi_cfa_defs.h"
-#include <errno.h>
-#include "assert.h"
-
-/* HCAPI CFA common PUT APIs */
-int hcapi_cfa_put_field(uint64_t *data_buf,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id, uint64_t val)
-{
-	assert(layout);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		bs_put_msb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	else
-		bs_put_lsb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	return 0;
-}
-
-int hcapi_cfa_put_fields(uint64_t *obj_data,
-			 const struct hcapi_cfa_layout *layout,
-			 struct hcapi_cfa_data_obj *field_tbl,
-			 uint16_t field_tbl_sz)
-{
-	int i;
-	uint16_t bitpos;
-	uint8_t bitlen;
-	uint16_t field_id;
-
-	assert(layout);
-	assert(field_tbl);
-
-	if (layout->is_msb_order) {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_msb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	} else {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_lsb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	}
-	return 0;
-}
-
-/* HCAPI CFA common GET APIs */
-int hcapi_cfa_get_field(uint64_t *obj_data,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id,
-			uint64_t *val)
-{
-	assert(layout);
-	assert(val);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		*val = bs_get_msb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	else
-		*val = bs_get_lsb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	return 0;
-}
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
index ca562d2..0b7f98f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -174,7 +174,7 @@ struct hcapi_cfa_data {
 	union {
 		uint32_t index;
 		uint32_t byte_offset;
-	};
+	} u;
 	/** [in] HW data buffer pointer */
 	uint8_t *data;
 	/** [in] HW data mask buffer pointer */
@@ -185,18 +185,18 @@ struct hcapi_cfa_data {
 
 /*********************** Truflow start ***************************/
 enum hcapi_cfa_pg_tbl_lvl {
-	PT_LVL_0,
-	PT_LVL_1,
-	PT_LVL_2,
-	PT_LVL_MAX
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
 };
 
 enum hcapi_cfa_em_table_type {
-	KEY0_TABLE,
-	KEY1_TABLE,
-	RECORD_TABLE,
-	EFC_TABLE,
-	MAX_TABLE
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
 };
 
 struct hcapi_cfa_em_page_tbl {
@@ -212,15 +212,15 @@ struct hcapi_cfa_em_table {
 	uint16_t			ctx_id;
 	uint32_t			entry_size;
 	int				num_lvl;
-	uint32_t			page_cnt[PT_LVL_MAX];
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
 	uint64_t			num_data_pages;
 	void				*l0_addr;
 	uint64_t			l0_dma_addr;
-	struct hcapi_cfa_em_page_tbl    pg_tbl[PT_LVL_MAX];
+	struct hcapi_cfa_em_page_tbl    pg_tbl[TF_PT_LVL_MAX];
 };
 
 struct hcapi_cfa_em_ctx_mem_info {
-	struct hcapi_cfa_em_table		em_tables[MAX_TABLE];
+	struct hcapi_cfa_em_table		em_tables[TF_MAX_TABLE];
 };
 
 /*********************** Truflow end ****************************/
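Note (illustrative, not part of the patch): with the PT_LVL_* values renamed to TF_PT_LVL_*, a consumer of hcapi_cfa_em_table can walk page_cnt[] up to num_lvl to size the backing store. A minimal sketch using only the fields shown above:

/* Sum the page counts across the populated page-table levels. */
static uint32_t example_em_total_pages(const struct hcapi_cfa_em_table *tbl)
{
	uint32_t total = 0;
	int lvl;

	for (lvl = 0; lvl < tbl->num_lvl && lvl < TF_PT_LVL_MAX; lvl++)
		total += tbl->page_cnt[lvl];

	return total;
}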
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index 5b5cac8..89c91ea 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -222,7 +222,7 @@ uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
 	 */
 	level = mem->num_lvl - 1;
 
-	addr = (uint64_t)mem->pg_tbl[level].pg_va_tbl[page];
+	addr = (uintptr_t)mem->pg_tbl[level].pg_va_tbl[page];
 
 #if 0
 	TFP_DRV_LOG(DEBUG, "dir:%d offset:0x%x level:%d page:%d addr:%p\n",
@@ -262,7 +262,7 @@ static int hcapi_cfa_key_hw_op_put(struct hcapi_cfa_hwop *op,
 {
 	int rc = 0;
 
-	memcpy((uint8_t *)op->hw.base_addr +
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
 	       key_obj->offset,
 	       key_obj->data,
 	       key_obj->size);
@@ -276,7 +276,7 @@ static int hcapi_cfa_key_hw_op_get(struct hcapi_cfa_hwop *op,
 	int rc = 0;
 
 	memcpy(key_obj->data,
-	       (uint8_t *)op->hw.base_addr +
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
 	       key_obj->offset,
 	       key_obj->size);
 
@@ -293,7 +293,7 @@ static int hcapi_cfa_key_hw_op_add(struct hcapi_cfa_hwop *op,
 	 * Is entry free?
 	 */
 	memcpy(&table_entry,
-	       (uint8_t *)op->hw.base_addr +
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
 	       key_obj->offset,
 	       key_obj->size);
 
@@ -303,7 +303,7 @@ static int hcapi_cfa_key_hw_op_add(struct hcapi_cfa_hwop *op,
 	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT))
 		return -1;
 
-	memcpy((uint8_t *)op->hw.base_addr +
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
 	       key_obj->offset,
 	       key_obj->data,
 	       key_obj->size);
@@ -321,7 +321,7 @@ static int hcapi_cfa_key_hw_op_del(struct hcapi_cfa_hwop *op,
 	 * Read entry
 	 */
 	memcpy(&table_entry,
-	       (uint8_t *)op->hw.base_addr +
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
 	       key_obj->offset,
 	       key_obj->size);
 
@@ -347,7 +347,7 @@ static int hcapi_cfa_key_hw_op_del(struct hcapi_cfa_hwop *op,
 	/*
 	 * Delete entry
 	 */
-	memset((uint8_t *)op->hw.base_addr +
+	memset((uint8_t *)(uintptr_t)op->hw.base_addr +
 	       key_obj->offset,
 	       0,
 	       key_obj->size);
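Note (illustrative, not part of the patch): the repeated (uintptr_t) insertions above follow one pattern: base_addr is carried as a 64-bit integer, so it is narrowed through uintptr_t before byte arithmetic to avoid int-to-pointer conversion warnings on ILP32 builds. A small sketch of the pattern with placeholder names:

#include <stdint.h>
#include <string.h>

/* Copy 'len' bytes into a table whose base is carried as a uint64_t. */
static void example_tbl_write(uint64_t base_addr, uint32_t offset,
			      const void *src, size_t len)
{
	uint8_t *dst = (uint8_t *)(uintptr_t)base_addr + offset;

	memcpy(dst, src, len);
}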
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
index 0c11876..2c1bcad 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -4,7 +4,7 @@
   */
 
 #ifndef _HCAPI_CFA_P4_H_
-#define _HCPAI_CFA_P4_H_
+#define _HCAPI_CFA_P4_H_
 
 #include "cfa_p40_hw.h"
 
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 4994d4c..33e6ebd 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,7 +43,6 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_rm_new.c',
 
-	'hcapi/hcapi_cfa_common.c',
 	'hcapi/hcapi_cfa_p4.c',
 
 	'tf_ulp/bnxt_ulp.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 7191c7f..ecd5aac 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -25,3 +25,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
 
+
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_device.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index c04d103..1e78296 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -27,8 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
+	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -82,8 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_tbl_type_get_bulk_input;
-struct tf_tbl_type_get_bulk_output;
+struct tf_tbl_type_bulk_get_input;
+struct tf_tbl_type_bulk_get_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -905,8 +905,6 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -922,17 +920,17 @@ typedef struct tf_tbl_type_get_output {
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
-typedef struct tf_tbl_type_get_bulk_input {
+typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
 	uint16_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_TX	   (0x1)
 	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Starting index to get from */
@@ -941,12 +939,12 @@ typedef struct tf_tbl_type_get_bulk_input {
 	uint32_t			 num_entries;
 	/* Host memory where data will be stored */
 	uint64_t			 host_addr;
-} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+} tf_tbl_type_bulk_get_input_t, *ptf_tbl_type_bulk_get_input_t;
 
 /* Output params for table type get */
-typedef struct tf_tbl_type_get_bulk_output {
+typedef struct tf_tbl_type_bulk_get_output {
 	/* Size of the total data read in bytes */
 	uint16_t			 size;
-} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+} tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
 #endif /* _HWRM_TF_H_ */
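Note (illustrative, not part of the patch): after the GET_BULK -> BULK_GET rename, a request is filled as sketched below. Only members and flag defines visible in this header are used; the starting-index member is omitted because its name is not shown in this hunk, and the argument values are placeholders supplied by the caller.

#include <string.h>

static void example_fill_bulk_get(tf_tbl_type_bulk_get_input_t *req,
				  uint32_t fw_session_id, uint32_t type,
				  uint32_t num_entries, uint64_t dma_addr)
{
	memset(req, 0, sizeof(*req));
	req->fw_session_id = fw_session_id;
	req->flags = TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_TX |
		     TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_CLEAR_ON_READ;
	req->type = type;		/* tf_tbl_type value */
	req->num_entries = num_entries;
	req->host_addr = dma_addr;	/* DMA-able destination buffer */
}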
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
index 9548063..240fbe2 100644
--- a/drivers/net/bnxt/tf_core/stack.c
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -81,7 +81,7 @@ int
 stack_pop(struct stack *st, uint32_t *x)
 {
 	if (stack_is_empty(st))
-		return -ENOENT;
+		return -ENODATA;
 
 	*x = st->items[st->top];
 	st->top--;
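Note (illustrative, not part of the patch): stack_pop() now reports an empty stack as -ENODATA instead of -ENOENT, so callers that special-cased the old value need updating. A minimal sketch of the new check:

#include <errno.h>

static int example_pop(struct stack *st, uint32_t *idx)
{
	int rc = stack_pop(st, idx);

	if (rc == -ENODATA)
		return rc;	/* stack is empty, nothing to hand out */

	/* rc == 0: *idx holds the popped value */
	return rc;
}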
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 7d9bca8..648d0d1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,33 +19,41 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static inline uint32_t SWAP_WORDS32(uint32_t val32)
+static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
+			       enum tf_device_type device,
+			       uint16_t key_sz_in_bits,
+			       uint16_t *num_slice_per_row)
 {
-	return (((val32 & 0x0000ffff) << 16) |
-		((val32 & 0xffff0000) >> 16));
-}
+	uint16_t key_bytes;
+	uint16_t slice_sz = 0;
+
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
+
+	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
+		if (device == TF_DEVICE_TYPE_WH) {
+			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
+			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		} else {
+			TFP_DRV_LOG(ERR,
+				    "Unsupported device type %d\n",
+				    device);
+			return -ENOTSUP;
+		}
 
-static void tf_seeds_init(struct tf_session *session)
-{
-	int i;
-	uint32_t r;
-
-	/* Initialize the lfsr */
-	rand_init();
-
-	/* RX and TX use the same seed values */
-	session->lkup_lkup3_init_cfg[TF_DIR_RX] =
-		session->lkup_lkup3_init_cfg[TF_DIR_TX] =
-						SWAP_WORDS32(rand32());
-
-	for (i = 0; i < TF_LKUP_SEED_MEM_SIZE / 2; i++) {
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2] = r;
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2] = r;
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2 + 1] = (r & 0x1);
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2 + 1] = (r & 0x1);
+		if (key_bytes > *num_slice_per_row * slice_sz) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Key size %d is not supported\n",
+				    tf_tcam_tbl_2_str(tcam_tbl_type),
+				    key_bytes);
+			return -ENOTSUP;
+		}
+	} else { /* for other TCAM types */
+		*num_slice_per_row = 1;
 	}
+
+	return 0;
 }
 
 /**
@@ -153,15 +161,18 @@ tf_open_session(struct tf                    *tfp,
 	uint8_t fw_session_id;
 	int dir;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
 	 * firmware open session succeeds.
 	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH)
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
 		return -ENOTSUP;
+	}
 
 	/* Build the beginning of session_id */
 	rc = sscanf(parms->ctrl_chan_name,
@@ -171,7 +182,7 @@ tf_open_session(struct tf                    *tfp,
 		    &slot,
 		    &device);
 	if (rc != 4) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "Failed to scan device ctrl_chan_name\n");
 		return -EINVAL;
 	}
@@ -183,13 +194,13 @@ tf_open_session(struct tf                    *tfp,
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
-			PMD_DRV_LOG(ERR,
-				    "Session is already open, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
 		else
-			PMD_DRV_LOG(ERR,
-				    "Open message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
 
 		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
 		return rc;
@@ -202,13 +213,13 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
-	tfp->session = alloc_parms.mem_va;
+	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -217,9 +228,9 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -240,12 +251,13 @@ tf_open_session(struct tf                    *tfp,
 	session->session_id.internal.device = device;
 	session->session_id.internal.fw_session_id = fw_session_id;
 
+	/* Query for Session Config */
 	rc = tf_msg_session_qcfg(tfp);
 	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
@@ -256,9 +268,9 @@ tf_open_session(struct tf                    *tfp,
 #if (TF_SHADOW == 1)
 		rc = tf_rm_shadow_db_init(tfs);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%d",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Shadow DB Initialization failed, rc:%s\n",
+				    strerror(-rc));
 		/* Add additional processing */
 #endif /* TF_SHADOW */
 	}
@@ -266,13 +278,12 @@ tf_open_session(struct tf                    *tfp,
 	/* Adjust the Session with what firmware allowed us to get */
 	rc = tf_rm_allocate_validate(tfp);
 	if (rc) {
-		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Rm allocate validate failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
-	/* Setup hash seeds */
-	tf_seeds_init(session);
-
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		rc = tf_create_em_pool(session,
@@ -290,11 +301,11 @@ tf_open_session(struct tf                    *tfp,
 	/* Return session ID */
 	parms->session_id = session->session_id;
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session created, session_id:%d\n",
 		    parms->session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
@@ -379,8 +390,7 @@ tf_attach_session(struct tf *tfp __rte_unused,
 #if (TF_SHARED == 1)
 	int rc;
 
-	if (tfp == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	/* - Open the shared memory for the attach_chan_name
 	 * - Point to the shared session for this Device instance
@@ -389,12 +399,10 @@ tf_attach_session(struct tf *tfp __rte_unused,
 	 *   than one client of the session.
 	 */
 
-	if (tfp->session) {
-		if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-			rc = tf_msg_session_attach(tfp,
-						   parms->ctrl_chan_name,
-						   parms->session_id);
-		}
+	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_attach(tfp,
+					   parms->ctrl_chan_name,
+					   parms->session_id);
 	}
 #endif /* TF_SHARED */
 	return -1;
@@ -472,8 +480,7 @@ tf_close_session(struct tf *tfp)
 	union tf_session_id session_id;
 	int dir;
 
-	if (tfp == NULL || tfp->session == NULL)
-		return -EINVAL;
+	TF_CHECK_TFP_SESSION(tfp);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -487,9 +494,9 @@ tf_close_session(struct tf *tfp)
 		rc = tf_msg_session_close(tfp);
 		if (rc) {
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "Message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Message send failed, rc:%s\n",
+				    strerror(-rc));
 		}
 
 		/* Update the ref_count */
@@ -509,11 +516,11 @@ tf_close_session(struct tf *tfp)
 		tfp->session = NULL;
 	}
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session closed, session_id:%d\n",
 		    session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    session_id.internal.domain,
 		    session_id.internal.bus,
@@ -565,27 +572,39 @@ tf_close_session_new(struct tf *tfp)
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find(
-		(struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL) {
-		/* External EEM */
-		return tf_insert_eem_entry
-			((struct tf_session *)(tfp->session->core_data),
-			tbl_scope_cb,
-			parms);
-	} else if (parms->mem == TF_MEM_INTERNAL) {
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM insert failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
 	return -EINVAL;
@@ -600,27 +619,44 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find(
-		(struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tfp, parms);
-	else
-		return tf_delete_em_internal_entry(tfp, parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM delete failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
 }
 
-/** allocate identifier resource
- *
- * Returns success or failure code.
- */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms)
 {
@@ -629,14 +665,7 @@ int tf_alloc_identifier(struct tf *tfp,
 	int id;
 	int rc;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -662,30 +691,31 @@ int tf_alloc_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: %s\n",
+		TFP_DRV_LOG(ERR, "%s: %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: identifier pool %s failure\n",
+		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	id = ba_alloc(session_pool);
 
 	if (id == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		return -ENOMEM;
@@ -763,14 +793,7 @@ int tf_free_identifier(struct tf *tfp,
 	int ba_rc;
 	struct tf_session *tfs;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -796,29 +819,31 @@ int tf_free_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: %s Identifier pool access failed\n",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s Identifier pool access failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	ba_rc = ba_inuse(session_pool, (int)parms->id);
 
 	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type),
 			    parms->id);
@@ -893,21 +918,30 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index = 0;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
+	/* TEMP, due to device design. Once TCAM is modularized, the
+	 * device type should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -916,36 +950,16 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority) {
-		for (index = session_pool->size - 1; index >= 0; index--) {
-			if (ba_inuse(session_pool,
-					  index) == BA_ENTRY_FREE) {
-				break;
-			}
-		}
-		if (ba_alloc_index(session_pool,
-				   index) == BA_FAIL) {
-			TFP_DRV_LOG(ERR,
-				    "%s: %s: ba_alloc index %d failed\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-				    index);
-			return -ENOMEM;
-		}
-	} else {
-		index = ba_alloc(session_pool);
-		if (index == BA_FAIL) {
-			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-			return -ENOMEM;
-		}
+	index = ba_alloc(session_pool);
+	if (index == BA_FAIL) {
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+		return -ENOMEM;
 	}
 
+	index *= num_slice_per_row;
+
 	parms->idx = index;
 	return 0;
 }
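Note (illustrative, not part of the patch): the returned index is now scaled by the number of slices per row, so the bitalloc pool tracks whole rows while callers continue to see slice-granular indices; tf_set_tcam_entry() and tf_free_tcam_entry() divide by the same factor to recover the row. A small sketch of the mapping (names are placeholders):

/* Row handed out by ba_alloc() -> index reported to the caller. */
static uint16_t example_row_to_idx(uint16_t row, uint16_t slices_per_row)
{
	return row * slices_per_row;	/* e.g. row 5, 2 slices -> idx 10 */
}

/* Caller index -> row used for ba_inuse()/ba_free(). */
static uint16_t example_idx_to_row(uint16_t idx, uint16_t slices_per_row)
{
	return idx / slices_per_row;	/* e.g. idx 10, 2 slices -> row 5 */
}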
@@ -956,26 +970,29 @@ tf_set_tcam_entry(struct tf *tfp,
 {
 	int rc;
 	int id;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. Once TCAM is modularized, the
+	 * device type should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
-	/*
-	 * Each tcam send msg function should check for key sizes range
-	 */
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
 
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
@@ -985,11 +1002,12 @@ tf_set_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 		   "%s: %s: Invalid or not allocated index, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
@@ -1006,21 +1024,8 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	int rc = -EOPNOTSUPP;
-
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	return rc;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+	return -EOPNOTSUPP;
 }
 
 int
@@ -1028,20 +1033,29 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row = 1;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. Once TCAM is modularized, the
+	 * device type should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 0,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -1050,24 +1064,27 @@ tf_free_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	rc = ba_inuse(session_pool, (int)parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	rc = ba_inuse(session_pool, index);
 	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    index);
 		return -EINVAL;
 	}
 
-	ba_free(session_pool, (int)parms->idx);
+	ba_free(session_pool, index);
 
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d free failed",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    parms->idx,
+			    strerror(-rc));
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index f1ef00b..bb456bb 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -10,7 +10,7 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <stdio.h>
-
+#include "hcapi/hcapi_cfa.h"
 #include "tf_project.h"
 
 /**
@@ -54,6 +54,7 @@ enum tf_mem {
 #define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
 #define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
 
+
 /*
  * Helper Macros
  */
@@ -132,34 +133,40 @@ struct tf_session_version {
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
-	TF_DEVICE_TYPE_BRD2,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD3,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD4,   /**< TBD       */
+	TF_DEVICE_TYPE_SR,     /**< Stingray  */
+	TF_DEVICE_TYPE_THOR,   /**< Thor      */
+	TF_DEVICE_TYPE_SR2,    /**< Stingray2 */
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
-/** Identifier resource types
+/**
+ * Identifier resource types
  */
 enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The L2 Context is returned from the L2 Ctxt TCAM lookup
 	 *  and can be used in WC TCAM or EM keys to virtualize further
 	 *  lookups.
 	 */
 	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The WC profile func is returned from the L2 Ctxt TCAM lookup
 	 *  to enable virtualization of the profile TCAM.
 	 */
 	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
+	/**
+	 *  The WC profile ID is included in the WC lookup key
 	 *  to enable virtualization of the WC TCAM hardware.
 	 */
 	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
+	/**
+	 *  The EM profile ID is included in the EM lookup key
 	 *  to enable virtualization of the EM hardware. (not required for SR2
 	 *  as it has table scope)
 	 */
 	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
+	/**
+	 *  The L2 func is included in the ILT result and from recycling to
 	 *  enable virtualization of further lookups.
 	 */
 	TF_IDENT_TYPE_L2_FUNC,
@@ -239,7 +246,8 @@ enum tf_tbl_type {
 
 	/* External */
 
-	/** External table type - initially 1 poolsize entries.
+	/**
+	 * External table type - initially 1 poolsize entries.
 	 * All External table types are associated with a table
 	 * scope. Internal types are not.
 	 */
@@ -279,13 +287,17 @@ enum tf_em_tbl_type {
 	TF_EM_TBL_TYPE_MAX
 };
 
-/** TruFlow Session Information
+/**
+ * TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
  * session. This structure is initialized at time of
  * tf_open_session(). It is passed to all of the TruFlow APIs as way
  * to prescribe and isolate resources between different TruFlow ULP
  * Applications.
+ *
+ * Ownership of the elements is split between ULP and TruFlow. Please
+ * see the individual elements.
  */
 struct tf_session_info {
 	/**
@@ -355,7 +367,8 @@ struct tf_session_info {
 	uint32_t              core_data_sz_bytes;
 };
 
-/** TruFlow handle
+/**
+ * TruFlow handle
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
@@ -405,7 +418,8 @@ struct tf_session_resources {
  * tf_open_session parameters definition.
  */
 struct tf_open_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -417,7 +431,8 @@ struct tf_open_session_parms {
 	 * shared memory allocation.
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
-	/** [in] shadow_copy
+	/**
+	 * [in] shadow_copy
 	 *
 	 * Boolean controlling the use and availability of shadow
 	 * copy. Shadow copy will allow the TruFlow to keep track of
@@ -430,7 +445,8 @@ struct tf_open_session_parms {
 	 * control channel.
 	 */
 	bool shadow_copy;
-	/** [in/out] session_id
+	/**
+	 * [in/out] session_id
 	 *
 	 * Session_id is unique per session.
 	 *
@@ -441,7 +457,8 @@ struct tf_open_session_parms {
 	 * The session_id allows a session to be shared between devices.
 	 */
 	union tf_session_id session_id;
-	/** [in] device type
+	/**
+	 * [in] device type
 	 *
 	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
@@ -484,7 +501,8 @@ int tf_open_session_new(struct tf *tfp,
 			struct tf_open_session_parms *parms);
 
 struct tf_attach_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -497,7 +515,8 @@ struct tf_attach_session_parms {
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] attach_chan_name
+	/**
+	 * [in] attach_chan_name
 	 *
 	 * String containing name of attach channel interface to be
 	 * used for this session.
@@ -510,7 +529,8 @@ struct tf_attach_session_parms {
 	 */
 	char attach_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] session_id
+	/**
+	 * [in] session_id
 	 *
 	 * Session_id is unique per session. For Attach the session_id
 	 * should be the session_id that was returned on the first
@@ -565,7 +585,8 @@ int tf_close_session_new(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-/** tf_alloc_identifier parameter definition
+/**
+ * tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
 	/**
@@ -582,7 +603,8 @@ struct tf_alloc_identifier_parms {
 	uint16_t id;
 };
 
-/** tf_free_identifier parameter definition
+/**
+ * tf_free_identifier parameter definition
  */
 struct tf_free_identifier_parms {
 	/**
@@ -599,7 +621,8 @@ struct tf_free_identifier_parms {
 	uint16_t id;
 };
 
-/** allocate identifier resource
+/**
+ * allocate identifier resource
  *
  * TruFlow core will allocate a free id from the per identifier resource type
  * pool reserved for the session during tf_open().  No firmware is involved.
@@ -611,7 +634,8 @@ int tf_alloc_identifier(struct tf *tfp,
 int tf_alloc_identifier_new(struct tf *tfp,
 			    struct tf_alloc_identifier_parms *parms);
 
-/** free identifier resource
+/**
+ * free identifier resource
  *
  * TruFlow core will return an id back to the per identifier resource type pool
  * reserved for the session.  No firmware is involved.  During tf_close, the
@@ -639,7 +663,8 @@ int tf_free_identifier_new(struct tf *tfp,
  */
 
 
-/** tf_alloc_tbl_scope_parms definition
+/**
+ * tf_alloc_tbl_scope_parms definition
  */
 struct tf_alloc_tbl_scope_parms {
 	/**
@@ -662,7 +687,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -684,7 +709,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -709,7 +734,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On Brd4 Firmware will allocate a scope ID.  On other devices, the scope
+ * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
  * divide the hash memory/buckets and records according to the device
  * device constraints based upon calculations using either the number of flows
@@ -719,7 +744,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -750,7 +775,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remains
- * (Brd4 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -758,7 +783,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
 
-
 /**
  * @page tcam TCAM Access
  *
@@ -771,7 +795,9 @@ int tf_free_tbl_scope(struct tf *tfp,
  * @ref tf_free_tcam_entry
  */
 
-/** tf_alloc_tcam_entry parameter definition
+
+/**
+ * tf_alloc_tcam_entry parameter definition
  */
 struct tf_alloc_tcam_entry_parms {
 	/**
@@ -799,9 +825,7 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested
-	 * 0: index from top i.e. highest priority first
-	 * !0: index from bottom i.e lowest priority first
+	 * [in] Priority of entry requested (definition TBD)
 	 */
 	uint32_t priority;
 	/**
@@ -819,7 +843,8 @@ struct tf_alloc_tcam_entry_parms {
 	uint16_t idx;
 };
 
-/** allocate TCAM entry
+/**
+ * allocate TCAM entry
  *
  * Allocate a TCAM entry - one of these types:
  *
@@ -844,7 +869,8 @@ struct tf_alloc_tcam_entry_parms {
 int tf_alloc_tcam_entry(struct tf *tfp,
 			struct tf_alloc_tcam_entry_parms *parms);
 
-/** tf_set_tcam_entry parameter definition
+/**
+ * tf_set_tcam_entry parameter definition
  */
 struct	tf_set_tcam_entry_parms {
 	/**
@@ -881,7 +907,8 @@ struct	tf_set_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** set TCAM entry
+/**
+ * set TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -892,7 +919,8 @@ struct	tf_set_tcam_entry_parms {
 int tf_set_tcam_entry(struct tf	*tfp,
 		      struct tf_set_tcam_entry_parms *parms);
 
-/** tf_get_tcam_entry parameter definition
+/**
+ * tf_get_tcam_entry parameter definition
  */
 struct tf_get_tcam_entry_parms {
 	/**
@@ -929,7 +957,7 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/*
+/**
  * get TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
@@ -941,7 +969,7 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/*
+/**
  * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
@@ -963,7 +991,9 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/*
+/**
+ * free TCAM entry
+ *
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -989,6 +1019,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_get_tbl_entry
  */
 
+
 /**
  * tf_alloc_tbl_entry parameter definition
  */
@@ -1201,9 +1232,9 @@ int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
 /**
- * tf_get_bulk_tbl_entry parameter definition
+ * tf_bulk_get_tbl_entry parameter definition
  */
-struct tf_get_bulk_tbl_entry_parms {
+struct tf_bulk_get_tbl_entry_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -1213,11 +1244,6 @@ struct tf_get_bulk_tbl_entry_parms {
 	 */
 	enum tf_tbl_type type;
 	/**
-	 * [in] Clear hardware entries on reads only
-	 * supported for TF_TBL_TYPE_ACT_STATS_64
-	 */
-	bool clear_on_read;
-	/**
 	 * [in] Starting index to read from
 	 */
 	uint32_t starting_idx;
@@ -1250,8 +1276,8 @@ struct tf_get_bulk_tbl_entry_parms {
  * Returns success or failure code. Failure will be returned if the
  * provided data buffer is too small for the data type requested.
  */
-int tf_get_bulk_tbl_entry(struct tf *tfp,
-		     struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_bulk_get_tbl_entry(struct tf *tfp,
+		     struct tf_bulk_get_tbl_entry_parms *parms);
 
 /**
  * @page exact_match Exact Match Table
@@ -1280,7 +1306,7 @@ struct tf_insert_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1332,12 +1358,12 @@ struct tf_delete_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
 	 * [in] epoch group IDs of entry to delete
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * as a two-element array (SR2 only).
 	 */
 	uint16_t *epochs;
 	/**
@@ -1366,7 +1392,7 @@ struct tf_search_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1387,7 +1413,7 @@ struct tf_search_em_entry_parms {
 	uint16_t em_record_sz_in_bits;
 	/**
 	 * [in] epoch group IDs of entry to lookup
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * as a two-element array (SR2 only).
 	 */
 	uint16_t *epochs;
 	/**
@@ -1415,7 +1441,7 @@ struct tf_search_em_entry_parms {
  * specified direction and table scope.
  *
  * When inserting an entry into an exact match table, the TruFlow library may
- * need to allocate a dynamic bucket for the entry (Brd4 only).
+ * need to allocate a dynamic bucket for the entry (SR2 only).
  *
  * The insertion of duplicate entries in an EM table is not permitted.	If a
  * TruFlow application can guarantee that it will never insert duplicates, it
@@ -1490,4 +1516,5 @@ int tf_delete_em_entry(struct tf *tfp,
  */
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
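Note (illustrative, not part of the patch): to tie the renamed device enum back to the session API, a minimal ULP-side open of a Whitney+ session could look like the sketch below; the PCI address string is a placeholder and error handling is trimmed.

#include <string.h>

static int example_open_wh_session(struct tf *tfp)
{
	struct tf_open_session_parms oparms;

	memset(&oparms, 0, sizeof(oparms));
	/* Control channel name; parsed by tf_open_session() as a
	 * domain:bus:slot.device PCI address (placeholder value here).
	 */
	strncpy(oparms.ctrl_chan_name, "0000:af:00.0",
		TF_SESSION_NAME_MAX - 1);
	oparms.shadow_copy = false;
	oparms.device_type = TF_DEVICE_TYPE_WH;

	return tf_open_session(tfp, &oparms);
}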
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 6aeb6fe..1501b20 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -366,6 +366,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_tcam)(struct tf *tfp,
 			       struct tf_tcam_get_parms *parms);
+
+	/**
+	 * Insert EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_em_entry)(struct tf *tfp,
+				      struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM delete parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_em_entry)(struct tf *tfp,
+				      struct tf_delete_em_entry_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c235976..f4bd95f 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
+#include "tf_em.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -89,4 +90,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_insert_em_entry = tf_em_insert_entry,
+	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index da1f4d4..7b430fa 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -17,11 +17,6 @@
 
 #include "bnxt.h"
 
-/* Enable EEM table dump
- */
-#define TF_EEM_DUMP
-
-static struct tf_eem_64b_entry zero_key_entry;
 
 static uint32_t tf_em_get_key_mask(int num_entries)
 {
@@ -36,326 +31,22 @@ static uint32_t tf_em_get_key_mask(int num_entries)
 	return mask;
 }
 
-/* CRC32i support for Key0 hash */
-#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
-#define crc32(x, y) crc32i(~0, x, y)
-
-static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
-0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
-0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
-0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
-0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
-0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
-0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
-0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
-0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
-0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
-0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
-0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
-0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
-0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
-0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
-0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
-0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
-0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
-0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
-0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
-0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
-0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
-0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
-0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
-0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
-0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
-0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
-0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
-0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
-0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
-0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
-0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
-0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
-0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
-0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
-0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
-0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
-0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
-0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
-0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
-0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
-0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
-0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
-0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
-0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
-0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
-0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
-0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
-0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
-0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
-0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
-0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
-0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
-0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
-0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
-0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
-0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
-0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
-0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
-0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
-0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
-0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
-0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
-0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
-0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
-};
-
-static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
-{
-	int l;
-
-	for (l = (len - 1); l >= 0; l--)
-		crc = ucrc32(buf[l], crc);
-
-	return ~crc;
-}
-
-static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
-					  uint8_t *key,
-					  enum tf_dir dir)
-{
-	int i;
-	uint32_t index;
-	uint32_t val1, val2;
-	uint8_t temp[4];
-	uint8_t *kptr = key;
-
-	/* Do byte-wise XOR of the 52-byte HASH key first. */
-	index = *key;
-	kptr--;
-
-	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
-		index = index ^ *kptr;
-		kptr--;
-	}
-
-	/* Get seeds */
-	val1 = session->lkup_em_seed_mem[dir][index * 2];
-	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
-
-	temp[3] = (uint8_t)(val1 >> 24);
-	temp[2] = (uint8_t)(val1 >> 16);
-	temp[1] = (uint8_t)(val1 >> 8);
-	temp[0] = (uint8_t)(val1 & 0xff);
-	val1 = 0;
-
-	/* Start with seed */
-	if (!(val2 & 0x1))
-		val1 = crc32i(~val1, temp, 4);
-
-	val1 = crc32i(~val1,
-		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
-		      TF_HW_EM_KEY_MAX_SIZE);
-
-	/* End with seed */
-	if (val2 & 0x1)
-		val1 = crc32i(~val1, temp, 4);
-
-	return val1;
-}
-
-static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
-					    uint8_t *in_key)
-{
-	uint32_t val1;
-
-	val1 = hashword(((uint32_t *)in_key) + 1,
-			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
-			 lookup3_init_value);
-
-	return val1;
-}
-
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum tf_em_table_type table_type)
-{
-	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
-	void *addr = NULL;
-	struct tf_em_ctx_mem_info *ctx = &tbl_scope_cb->em_ctx_info[dir];
-
-	if (ctx == NULL)
-		return NULL;
-
-	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
-		return NULL;
-
-	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
-		return NULL;
-
-	/*
-	 * Use the level according to the num_level of page table
-	 */
-	level = ctx->em_tables[table_type].num_lvl - 1;
-
-	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
-
-	return addr;
-}
-
-/** Read Key table entry
- *
- * Entry is read in to entry
- */
-static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
-	return 0;
-}
-
-static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
-
-	return 0;
-}
-
-static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_eem_64b_entry *entry,
-			       uint32_t index,
-			       enum tf_em_table_type table_type,
-			       enum tf_dir dir)
-{
-	int rc;
-	struct tf_eem_64b_entry table_entry;
-
-	rc = tf_em_read_entry(tbl_scope_cb,
-			      &table_entry,
-			      TF_EM_KEY_RECORD_SIZE,
-			      index,
-			      table_type,
-			      dir);
-
-	if (rc != 0)
-		return -EINVAL;
-
-	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
-		if (entry != NULL) {
-			if (memcmp(&table_entry,
-				   entry,
-				   TF_EM_KEY_RECORD_SIZE) == 0)
-				return -EEXIST;
-		} else {
-			return -EEXIST;
-		}
-
-		return -EBUSY;
-	}
-
-	return 0;
-}
-
-static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t *in_key,
-				    struct tf_eem_64b_entry *key_entry)
+static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+				   uint8_t	       *in_key,
+				   struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
 
-	if (result->word1 & TF_LKUP_RECORD_ACT_REC_INT_MASK)
+	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
 		key_entry->hdr.pointer = result->pointer;
 	else
 		key_entry->hdr.pointer = result->pointer;
 
 	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-}
-
-/* tf_em_select_inject_table
- *
- * Returns:
- * 0 - Key does not exist in either table and can be inserted
- *		at "index" in table "table".
- * EEXIST  - Key does exist in table at "index" in table "table".
- * TF_ERR     - Something went horribly wrong.
- */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
-					  enum tf_dir dir,
-					  struct tf_eem_64b_entry *entry,
-					  uint32_t key0_hash,
-					  uint32_t key1_hash,
-					  uint32_t *index,
-					  enum tf_em_table_type *table)
-{
-	int key0_entry;
-	int key1_entry;
-
-	/*
-	 * Check KEY0 table.
-	 */
-	key0_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key0_hash,
-					 TF_KEY0_TABLE,
-					 dir);
 
-	/*
-	 * Check KEY1 table.
-	 */
-	key1_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key1_hash,
-					 TF_KEY1_TABLE,
-					 dir);
-
-	if (key0_entry == -EEXIST) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return -EEXIST;
-	} else if (key1_entry == -EEXIST) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return -EEXIST;
-	} else if (key0_entry == 0) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return 0;
-	} else if (key1_entry == 0) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return 0;
-	}
-
-	return -EINVAL;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
 }
 
 /** insert EEM entry API
@@ -368,20 +59,24 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms)
+static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			       struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
 	uint32_t	   key0_hash;
 	uint32_t	   key1_hash;
 	uint32_t	   key0_index;
 	uint32_t	   key1_index;
-	struct tf_eem_64b_entry key_entry;
+	struct cfa_p4_eem_64b_entry key_entry;
 	uint32_t	   index;
-	enum tf_em_table_type table_type;
+	enum hcapi_cfa_em_table_type table_type;
 	uint32_t	   gfid;
-	int		   num_of_entry;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
 
 	/* Get mask to use on hash */
 	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
@@ -389,72 +84,84 @@ int tf_insert_eem_entry(struct tf_session *session,
 	if (!mask)
 		return -EINVAL;
 
-	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
 
-	key0_hash = tf_em_lkup_get_crc32_hash(session,
-				      &parms->key[num_of_entry] - 1,
-				      parms->dir);
-	key0_index = key0_hash & mask;
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
 
-	key1_hash =
-	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
-					parms->key);
+	key0_index = key0_hash & mask;
 	key1_index = key1_hash & mask;
 
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
 	/*
 	 * Use the "result" arg to populate all of the key entry then
 	 * store the byte swapped "raw" entry in a local copy ready
 	 * for insertion in to the table.
 	 */
-	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
 				((uint8_t *)parms->key),
 				&key_entry);
 
 	/*
-	 * Find which table to use
+	 * Try to add to the KEY0 table; if that fails, fall back
+	 * to the KEY1 table.
 	 */
-	if (tf_em_select_inject_table(tbl_scope_cb,
-				      parms->dir,
-				      &key_entry,
-				      key0_index,
-				      key1_index,
-				      &index,
-				      &table_type) == 0) {
-		if (table_type == TF_KEY0_TABLE) {
-			TF_SET_GFID(gfid,
-				    key0_index,
-				    TF_KEY0_TABLE);
-		} else {
-			TF_SET_GFID(gfid,
-				    key1_index,
-				    TF_KEY1_TABLE);
-		}
-
-		/*
-		 * Inject
-		 */
-		if (tf_em_write_entry(tbl_scope_cb,
-				      &key_entry,
-				      TF_EM_KEY_RECORD_SIZE,
-				      index,
-				      table_type,
-				      parms->dir) == 0) {
-			TF_SET_FLOW_ID(parms->flow_id,
-				       gfid,
-				       TF_GFID_TABLE_EXTERNAL,
-				       parms->dir);
-			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-						     0,
-						     0,
-						     0,
-						     index,
-						     0,
-						     table_type);
-			return 0;
-		}
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 =
+			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
 	}
 
-	return -EINVAL;
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
 }
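
For reference, a minimal standalone sketch of the indexing math this function now relies on. The 64-bit hash value is a stand-in (hcapi_cfa_key_hash() itself is not reproduced), the mask is assumed to be num_entries - 1 for a power-of-two table (tf_em_get_key_mask() is not shown in this hunk), and the 2M backing page size is purely illustrative:

    #include <stdint.h>
    #include <stdio.h>

    #define EM_KEY_RECORD_SIZE 64          /* one EEM entry is 64 bytes */
    #define EM_PAGE_SIZE       (1u << 21)  /* illustrative 2M backing page */

    int main(void)
    {
        uint64_t big_hash    = 0x1234abcd9876ef01ULL; /* stand-in for hcapi_cfa_key_hash() */
        uint32_t num_entries = 1u << 20;              /* table size, power of two */
        uint32_t mask        = num_entries - 1;       /* assumed key mask for a power-of-two table */

        /* Upper 32 bits select the KEY0 bucket, lower 32 bits the KEY1 bucket */
        uint32_t key0_index = (uint32_t)(big_hash >> 32) & mask;
        uint32_t key1_index = (uint32_t)(big_hash & 0xFFFFFFFF) & mask;

        /* Byte offset of the 64-byte record inside its backing page */
        uint32_t offset = (key0_index * EM_KEY_RECORD_SIZE) % EM_PAGE_SIZE;

        printf("key0:%u key1:%u page offset:%u\n", key0_index, key1_index, offset);
        return 0;
    }

The same offset expression appears in both the KEY0 and KEY1 branches above, and again in the delete path.
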
 
 /**
@@ -463,8 +170,8 @@ int tf_insert_eem_entry(struct tf_session *session,
  *  returns:
  *     0 - Success
  */
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms)
+static int tf_insert_em_internal_entry(struct tf                       *tfp,
+				       struct tf_insert_em_entry_parms *parms)
 {
 	int       rc;
 	uint32_t  gfid;
@@ -494,7 +201,8 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	TFP_DRV_LOG(INFO,
+	PMD_DRV_LOG(
+		   ERR,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
@@ -527,8 +235,8 @@ int tf_insert_em_internal_entry(struct tf *tfp,
  * 0
  * -EINVAL
  */
-int tf_delete_em_internal_entry(struct tf *tfp,
-				struct tf_delete_em_entry_parms *parms)
+static int tf_delete_em_internal_entry(struct tf                       *tfp,
+				       struct tf_delete_em_entry_parms *parms)
 {
 	int rc;
 	struct tf_session *session =
@@ -558,46 +266,95 @@ int tf_delete_em_internal_entry(struct tf *tfp,
  *   0
  *   TF_NO_EM_MATCH - entry not found
  */
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms)
+static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_session	   *session;
-	struct tf_tbl_scope_cb	   *tbl_scope_cb;
-	enum tf_em_table_type hash_type;
+	enum hcapi_cfa_em_table_type hash_type;
 	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
 
-	if (parms == NULL)
+	if (parms->flow_handle == 0)
 		return -EINVAL;
 
-	session = (struct tf_session *)tfp->session->core_data;
-	if (session == NULL)
-		return -EINVAL;
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
 
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[
+			(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
-	if (parms->flow_handle == 0)
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (!rc)
+		return rc;
+
+	return 0;
+}
+
+/** Insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
+	}
 
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+	/* Process the EM entry per Table Scope type */
+	if (parms->mem == TF_MEM_EXTERNAL)
+		/* External EEM */
+		return tf_insert_eem_entry
+			(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp, parms);
 
-	if (tf_em_entry_exists(tbl_scope_cb,
-			       NULL,
-			       index,
-			       hash_type,
-			       parms->dir) == -EEXIST) {
-		tf_em_write_entry(tbl_scope_cb,
-				  &zero_key_entry,
-				  TF_EM_KEY_RECORD_SIZE,
-				  index,
-				  hash_type,
-				  parms->dir);
+	return -EINVAL;
+}
 
-		return 0;
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
 	}
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		return tf_delete_em_internal_entry(tfp, parms);
 
 	return -EINVAL;
 }
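
With tf_insert_eem_entry() and tf_insert_em_internal_entry() made static, tf_em_insert_entry() and tf_em_delete_entry() become the only exported entry points and choose the external (EEM) or internal (EM) path from parms->mem. A stripped-down sketch of that dispatch shape, using stand-in types rather than the driver's (the real code also resolves the table scope control block first):

    #include <errno.h>
    #include <stdio.h>

    enum mem_type { MEM_EXTERNAL, MEM_INTERNAL };   /* stand-in for TF_MEM_* */

    static int insert_external(void) { puts("EEM insert"); return 0; }
    static int insert_internal(void) { puts("EM insert");  return 0; }

    /* Single public entry point dispatching on the memory type */
    static int em_insert(enum mem_type mem)
    {
        if (mem == MEM_EXTERNAL)
            return insert_external();
        else if (mem == MEM_INTERNAL)
            return insert_internal();
        return -EINVAL;
    }

    int main(void)
    {
        return em_insert(MEM_EXTERNAL);
    }
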
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index c1805df..2262ae7 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,13 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+#define SUPPORT_CFA_HW_P4 1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+#define SUPPORT_CFA_HW_ALL 0
+
+#include "hcapi/hcapi_cfa_defs.h"
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -26,56 +33,15 @@
 #define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
 #define TF_EM_INTERNAL_ENTRY_MASK  0x3
 
-/** EEM Entry header
- *
- */
-struct tf_eem_entry_hdr {
-	uint32_t pointer;
-	uint32_t word1;  /*
-			  * The header is made up of two words,
-			  * this is the first word. This field has multiple
-			  * subfields, there is no suitable single name for
-			  * it so just going with word1.
-			  */
-#define TF_LKUP_RECORD_VALID_SHIFT 31
-#define TF_LKUP_RECORD_VALID_MASK 0x80000000
-#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
-#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
-#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
-#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
-#define TF_LKUP_RECORD_RESERVED_SHIFT 17
-#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
-#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
-#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
-#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
-#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
-#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
-#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
-#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
-#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
-};
-
-/** EEM Entry
- *  Each EEM entry is 512-bit (64-bytes)
- */
-struct tf_eem_64b_entry {
-	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
-	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
-};
-
 /** EM Entry
  *  Each EM entry is 512-bit (64-bytes) but ordered differently to
  *  EEM.
  */
 struct tf_em_64b_entry {
 	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
+	struct cfa_p4_eem_entry_hdr hdr;
 	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
 /**
@@ -127,22 +93,14 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
 					  uint32_t tbl_scope_id);
 
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms);
-
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms);
-
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms);
-
-int tf_delete_em_internal_entry(struct tf                       *tfp,
-				struct tf_delete_em_entry_parms *parms);
-
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
-			   enum tf_em_table_type table_type);
+			   enum hcapi_cfa_em_table_type table_type);
+
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
 
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
 #endif /* _TF_EM_H_ */
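
On the tf_em.h side, the record layout can be sanity-checked in isolation. Assuming cfa_p4_eem_entry_hdr keeps the same two 32-bit words (pointer, word1) as the removed tf_eem_entry_hdr, the header is 8 bytes and the key fills the remaining 56 bytes of the 64-byte record, which also matches the TF_HW_EM_KEY_MAX_SIZE + 4 bytes copied in tf_em_create_key_entry():

    #include <assert.h>
    #include <stdint.h>

    #define EM_KEY_RECORD_SIZE 64   /* TF_EM_KEY_RECORD_SIZE */

    /* Assumed to mirror the driver's two-word EEM entry header */
    struct eem_entry_hdr {
        uint32_t pointer;
        uint32_t word1;
    };

    /* EM ordering: header first, then key (EEM reverses the two fields) */
    struct em_64b_entry {
        struct eem_entry_hdr hdr;
        uint8_t key[EM_KEY_RECORD_SIZE - sizeof(struct eem_entry_hdr)];
    };

    static_assert(sizeof(struct em_64b_entry) == 64, "EM record must be 64 bytes");

    int main(void) { return 0; }

The 56-byte key is the "448 bits" called out in the key field comment.
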
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index e08a96f..90e1acf 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -184,6 +184,10 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 }
 
 /**
+ * NEW HWRM direct messages
+ */
+
+/**
  * Sends session open request to TF Firmware
  */
 int
@@ -1259,8 +1263,8 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
 	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
-		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.strength = (em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
@@ -1436,22 +1440,20 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 }
 
 int
-tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *params)
+tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *params)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_bulk_input req = { 0 };
-	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_tbl_type_bulk_get_input req = { 0 };
+	struct tf_tbl_type_bulk_get_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	int data_size = 0;
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16((params->dir) |
-		((params->clear_on_read) ?
-		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.flags = tfp_cpu_to_le_16(params->dir);
 	req.type = tfp_cpu_to_le_32(params->type);
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
@@ -1462,7 +1464,7 @@ tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 	MSG_PREP(parms,
 		 TF_KONG_MB,
 		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 HWRM_TFT_TBL_TYPE_BULK_GET,
 		 req,
 		 resp);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 06f52ef..1dad2b9 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -338,7 +338,7 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  * Returns:
  *  0 on Success else internal Truflow error
  */
-int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 9b7f5a0..b7b4451 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -23,29 +23,27 @@
 					    * IDs
 					    */
 #define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        256      /*  Number slices per row in WC
-					    * TCAM. A slices is a WC TCAM entry.
-					    */
+#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
 #define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
 #define TF_NUM_METER             1024      /* < Number of meter instances */
 #define TF_NUM_MIRROR               2      /* < Number of mirror instances */
 #define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
 
-/* Wh+/Brd2 specific HW resources */
+/* Wh+/SR specific HW resources */
 #define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
 					    * entries
 					    */
 
-/* Brd2/Brd4 specific HW resources */
+/* SR/SR2 specific HW resources */
 #define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
 
 
-/* Brd3, Brd4 common HW resources */
+/* Thor, SR2 common HW resources */
 #define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
 					    * templates
 					    */
 
-/* Brd4 specific HW resources */
+/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
 #define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
 #define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
@@ -149,10 +147,11 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-#define TF_RSVD_MIRROR_RX                         1
+/* Not yet fully supported in the infra */
+#define TF_RSVD_MIRROR_RX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         1
+#define TF_RSVD_MIRROR_TX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
@@ -501,13 +500,13 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_METER_INST,
 	TF_RESC_TYPE_HW_MIRROR,
 	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/Brd2 specific HW resources */
+	/* Wh+/SR specific HW resources */
 	TF_RESC_TYPE_HW_SP_TCAM,
-	/* Brd2/Brd4 specific HW resources */
+	/* SR/SR2 specific HW resources */
 	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Brd3, Brd4 common HW resources */
+	/* Thor, SR2 common HW resources */
 	TF_RESC_TYPE_HW_FKB,
-	/* Brd4 specific HW resources */
+	/* SR2 specific HW resources */
 	TF_RESC_TYPE_HW_TBL_SCOPE,
 	TF_RESC_TYPE_HW_EPOCH0,
 	TF_RESC_TYPE_HW_EPOCH1,
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 2264704..d6739b3 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -14,6 +14,7 @@
 #include "tf_resources.h"
 #include "tf_msg.h"
 #include "bnxt.h"
+#include "tfp.h"
 
 /**
  * Internal macro to perform HW resource allocation check between what
@@ -329,13 +330,13 @@ tf_rm_print_hw_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors HW\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_hw_2_str(i),
 				    hw_query->hw_query[i].max,
 				    tf_rm_rsvd_hw_value(dir, i));
@@ -359,13 +360,13 @@ tf_rm_print_sram_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_sram_2_str(i),
 				    sram_query->sram_query[i].max,
 				    tf_rm_rsvd_sram_value(dir, i));
@@ -1700,7 +1701,7 @@ tf_rm_hw_alloc_validate(enum tf_dir dir,
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed id:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1727,7 +1728,7 @@ tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed idx:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1820,19 +1821,22 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, HW QCAPS validation failed,"
+			" error_flag:0x%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
 		goto cleanup;
 	}
@@ -1845,9 +1849,10 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1857,15 +1862,17 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW Resource validation failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
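
Most of the tf_rm.c churn is the PMD_DRV_LOG -> TFP_DRV_LOG conversion plus reporting the return code through strerror(-rc). The sign flip matters because rc is a negative errno while strerror() expects the positive value. A plain-stdio stand-in of the same pattern (TFP_DRV_LOG and tf_dir_2_str are driver macros/helpers and are not reproduced here):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for TFP_DRV_LOG(ERR, ...) */
    #define DRV_LOG(fmt, ...) fprintf(stderr, fmt, __VA_ARGS__)

    static const char *dir_2_str(int dir)   /* stand-in for tf_dir_2_str() */
    {
        return dir == 0 ? "RX" : "TX";
    }

    int main(void)
    {
        int rc = -ENOMEM;   /* a typical negative return code */

        if (rc)
            DRV_LOG("%s, HW qcaps message send failed, rc:%s\n",
                    dir_2_str(0), strerror(-rc));
        return 0;
    }
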
 
@@ -1903,19 +1910,22 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed,"
+			" error_flag:%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
 		goto cleanup;
 	}
@@ -1931,9 +1941,10 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 					    sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1943,15 +1954,18 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed,"
+			    " rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
 
@@ -2177,7 +2191,7 @@ tf_rm_hw_to_flush(struct tf_session *tfs,
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
 	} else {
-		PMD_DRV_LOG(ERR, "%s: TBL_SCOPE free_cnt:%d, entries:%d\n",
+		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
 			    tf_dir_2_str(dir),
 			    free_cnt,
 			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
@@ -2538,8 +2552,8 @@ tf_rm_log_hw_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_hw_2_str(i));
 	}
@@ -2564,8 +2578,8 @@ tf_rm_log_sram_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_sram_2_str(i));
 	}
@@ -2777,9 +2791,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering HW resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering HW resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
@@ -2789,9 +2804,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, HW flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2805,9 +2821,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering SRAM resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
@@ -2818,9 +2835,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, SRAM flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2828,18 +2846,20 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, HW free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, HW free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 
 		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, SRAM free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, SRAM free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 	}
 
@@ -2890,14 +2910,14 @@ tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Tcam type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "%s:, Tcam type lookup failed, type:%d\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type lookup failed, type:%d\n",
 			    tf_dir_2_str(dir),
 			    type);
 		return rc;
@@ -3057,15 +3077,15 @@ tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type lookup failed, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type lookup failed, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	}
@@ -3166,6 +3186,13 @@ tf_rm_convert_tbl_type(enum tf_tbl_type type,
 	return rc;
 }
 
+#if 0
+enum tf_rm_convert_type {
+	TF_RM_CONVERT_ADD_BASE,
+	TF_RM_CONVERT_RM_BASE
+};
+#endif
+
 int
 tf_rm_convert_index(struct tf_session *tfs,
 		    enum tf_dir dir,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 07c3469..7f37f4d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "stack.h"
 #include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
@@ -53,14 +54,14 @@
  *   Pointer to the page table to free
  */
 static void
-tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
 {
 	uint32_t i;
 
 	for (i = 0; i < tp->pg_count; i++) {
 		if (!tp->pg_va_tbl[i]) {
-			PMD_DRV_LOG(WARNING,
-				    "No map for page %d table %016" PRIu64 "\n",
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
 				    i,
 				    (uint64_t)(uintptr_t)tp);
 			continue;
@@ -84,15 +85,14 @@ tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
  *   Pointer to the EM table to free
  */
 static void
-tf_em_free_page_table(struct tf_em_table *tbl)
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int i;
 
 	for (i = 0; i < tbl->num_lvl; i++) {
 		tp = &tbl->pg_tbl[i];
-
-		PMD_DRV_LOG(INFO,
+		TFP_DRV_LOG(INFO,
 			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
 			   TF_EM_PAGE_SIZE,
 			    i,
@@ -124,7 +124,7 @@ tf_em_free_page_table(struct tf_em_table *tbl)
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
 		   uint32_t pg_count,
 		   uint32_t pg_size)
 {
@@ -183,9 +183,9 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_page_table(struct tf_em_table *tbl)
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int rc = 0;
 	int i;
 	uint32_t j;
@@ -197,14 +197,15 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
 					tbl->page_cnt[i],
 					TF_EM_PAGE_SIZE);
 		if (rc) {
-			PMD_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d\n",
-				i);
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
 			goto cleanup;
 		}
 
 		for (j = 0; j < tp->pg_count; j++) {
-			PMD_DRV_LOG(INFO,
+			TFP_DRV_LOG(INFO,
 				"EEM: Allocated page table: size %u lvl %d cnt"
 				" %u VA:%p PA:%p\n",
 				TF_EM_PAGE_SIZE,
@@ -234,8 +235,8 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
  *   Flag controlling if the page table is last
  */
 static void
-tf_em_link_page_table(struct tf_em_page_tbl *tp,
-		      struct tf_em_page_tbl *tp_next,
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
 		      bool set_pte_last)
 {
 	uint64_t *pg_pa = tp_next->pg_pa_tbl;
@@ -270,10 +271,10 @@ tf_em_link_page_table(struct tf_em_page_tbl *tp,
  *   Pointer to EM page table
  */
 static void
-tf_em_setup_page_table(struct tf_em_table *tbl)
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
 	bool set_pte_last = 0;
 	int i;
 
@@ -415,7 +416,7 @@ tf_em_size_page_tbls(int max_lvl,
  *   - ENOMEM - Out of memory
  */
 static int
-tf_em_size_table(struct tf_em_table *tbl)
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
 {
 	uint64_t num_data_pages;
 	uint32_t *page_cnt;
@@ -456,11 +457,10 @@ tf_em_size_table(struct tf_em_table *tbl)
 					  tbl->num_entries,
 					  &num_data_pages);
 	if (max_lvl < 0) {
-		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		PMD_DRV_LOG(WARNING,
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
 			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type,
-			    (uint64_t)num_entries * tbl->entry_size,
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
 			    TF_EM_PAGE_SIZE);
 		return -ENOMEM;
 	}
@@ -474,8 +474,8 @@ tf_em_size_table(struct tf_em_table *tbl)
 	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
 				page_cnt);
 
-	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
 		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
@@ -504,8 +504,8 @@ tf_em_ctx_unreg(struct tf *tfp,
 		struct tf_tbl_scope_cb *tbl_scope_cb,
 		int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 
 	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
@@ -539,8 +539,8 @@ tf_em_ctx_reg(struct tf *tfp,
 	      struct tf_tbl_scope_cb *tbl_scope_cb,
 	      int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int rc = 0;
 	int i;
 
@@ -601,7 +601,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 					TF_MEGABYTE) / (key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
 				    "%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
@@ -613,7 +613,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
 				    "%u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -625,7 +625,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx flows "
 				    "requested:%u max:%u\n",
 				    parms->rx_num_flows_in_k * TF_KILOBYTE,
@@ -642,7 +642,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx requested: %u\n",
 				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -658,7 +658,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			(key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Insufficient memory requested:%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
@@ -670,7 +670,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -682,7 +682,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx flows "
 				    "requested:%u max:%u\n",
 				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
@@ -696,24 +696,24 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
 		}
 	}
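
As a rough worked example of the sizing arithmetic in tf_em_validate_num_entries() (TF_MEGABYTE is assumed here to be 2^20, the 64-byte key and action record sizes are purely illustrative, and the power-of-two rounding mirrors what the cnt loop appears to do; the exact rounding rule is outside this hunk):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t mem_size_in_mb = 48;          /* requested backing memory */
        uint32_t key_b = 64, action_b = 64;    /* illustrative record sizes */

        /* num_entries = (size in bytes) / (key + action record bytes) */
        uint32_t num_entries = (mem_size_in_mb * (1u << 20)) / (key_b + action_b);

        /* Reduce to a power of two for the EM table size */
        uint32_t cnt = 1;
        while ((cnt << 1) <= num_entries)
            cnt <<= 1;

        printf("%u MB -> %u entries -> %u (power of two)\n",
               mem_size_in_mb, num_entries, cnt);
        return 0;
    }

The result is then checked against the TF_EM_MIN_ENTRIES / TF_EM_MAX_ENTRIES bounds as in the code above.
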
 
-	if (parms->rx_num_flows_in_k != 0 &&
+	if ((parms->rx_num_flows_in_k != 0) &&
 	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Rx key size required: %u\n",
 			    (parms->rx_max_key_sz_in_bits));
 		return -EINVAL;
 	}
 
-	if (parms->tx_num_flows_in_k != 0 &&
+	if ((parms->tx_num_flows_in_k != 0) &&
 	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Tx key size required: %u\n",
 			    (parms->tx_max_key_sz_in_bits));
 		return -EINVAL;
@@ -795,11 +795,10 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -817,9 +816,9 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -833,11 +832,11 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Set failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -891,9 +890,9 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -907,11 +906,11 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Get failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -932,8 +931,8 @@ tf_get_tbl_entry_internal(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 static int
-tf_get_bulk_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc;
 	int id;
@@ -975,7 +974,7 @@ tf_get_bulk_tbl_entry_internal(struct tf *tfp,
 	}
 
 	/* Get the entry */
-	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1006,10 +1005,9 @@ static int
 tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
 			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Alloc with search not supported\n",
-		    parms->dir);
-
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Alloc with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1032,9 +1030,9 @@ static int
 tf_free_tbl_entry_shadow(struct tf_session *tfs,
 			 struct tf_free_tbl_entry_parms *parms)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Free with search not supported\n",
-		    parms->dir);
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Free with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1074,8 +1072,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	parms.alignment = 0;
 
 	if (tfp_calloc(&parms) != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
-			    dir, strerror(-ENOMEM));
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
 		return -ENOMEM;
 	}
 
@@ -1084,8 +1082,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	rc = stack_init(num_entries, parms.mem_va, pool);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1101,13 +1099,13 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	for (i = 0; i < num_entries; i++) {
 		rc = stack_push(pool, j);
 		if (rc != 0) {
-			PMD_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
 				    tf_dir_2_str(dir), strerror(-rc));
 			goto cleanup;
 		}
 
 		if (j < 0) {
-			PMD_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
+			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
 				    dir, j);
 			goto cleanup;
 		}
@@ -1116,8 +1114,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 	return 0;
@@ -1168,18 +1166,7 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1188,9 +1175,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-					"%s, table scope not allocated\n",
-					tf_dir_2_str(parms->dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1200,9 +1187,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type);
 		return rc;
 	}
@@ -1233,18 +1220,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	struct bitalloc *session_pool;
 	struct tf_session *tfs;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1254,11 +1230,10 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1276,9 +1251,9 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	if (id == -1) {
 		free_cnt = ba_free_count(session_pool);
 
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d, free:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d, free:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   free_cnt);
 		return -ENOMEM;
@@ -1323,18 +1298,7 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1343,9 +1307,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1355,9 +1319,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_push(pool, index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 	}
@@ -1386,18 +1350,7 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	struct tf_session *tfs;
 	uint32_t index;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1408,9 +1361,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1439,9 +1392,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	/* Check if element was indeed allocated */
 	id = ba_inuse_free(session_pool, index);
 	if (id == -1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -ENOMEM;
@@ -1485,8 +1438,10 @@ tf_free_eem_tbl_scope_cb(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 parms->tbl_scope_id);
 
-	if (tbl_scope_cb == NULL)
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
 		return -EINVAL;
+	}
 
 	/* Free Table control block */
 	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
@@ -1516,23 +1471,17 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 	int rc;
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_table *em_tables;
+	struct hcapi_cfa_em_table *em_tables;
 	int index;
 	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 
-	/* check parameters */
-	if (parms == NULL || tfp->session == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
-
 	session = (struct tf_session *)tfp->session->core_data;
 
 	/* Get Table Scope control block from the session pool */
 	index = ba_alloc(session->tbl_scope_pool_rx);
 	if (index == -1) {
-		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
 			    "Control Block\n");
 		return -ENOMEM;
 	}
@@ -1547,8 +1496,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				     dir,
 				     &tbl_scope_cb->em_caps[dir]);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"EEM: Unable to query for EEM capability\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 	}
@@ -1565,8 +1516,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 */
 		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 
@@ -1580,8 +1533,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"TBL: Unable to configure EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1590,8 +1545,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
 
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1604,9 +1561,9 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 					    em_tables[TF_RECORD_TABLE].num_entries,
 					    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "%d TBL: Unable to allocate idx pools %s\n",
-				    dir,
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup_full;
 		}
@@ -1634,13 +1591,12 @@ tf_set_tbl_entry(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct tf_session *session;
 
-	if (tfp == NULL || parms == NULL || parms->data == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 
@@ -1654,9 +1610,9 @@ tf_set_tbl_entry(struct tf *tfp,
 		tbl_scope_id = parms->tbl_scope_id;
 
 		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Table scope not allocated\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Table scope not allocated\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1665,18 +1621,22 @@ tf_set_tbl_entry(struct tf *tfp,
 		 */
 		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
 
-		if (tbl_scope_cb == NULL)
-			return -EINVAL;
+		if (tbl_scope_cb == NULL) {
+			TFP_DRV_LOG(ERR,
+				    "%s, table scope error\n",
+				    tf_dir_2_str(parms->dir));
+			return -EINVAL;
+		}
 
 		/* External table, implicitly the Action table */
-		base_addr = tf_em_get_table_page(tbl_scope_cb,
-						 parms->dir,
-						 offset,
-						 TF_RECORD_TABLE);
+		base_addr = (void *)(uintptr_t)hcapi_get_table_page(
+			&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE],
+			offset);
+
 		if (base_addr == NULL) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Base address lookup failed\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Base address lookup failed\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1688,11 +1648,11 @@ tf_set_tbl_entry(struct tf *tfp,
 		/* Internal table type processing */
 		rc = tf_set_tbl_entry_internal(tfp, parms);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Set failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Set failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 		}
 	}
 
@@ -1706,31 +1666,24 @@ tf_get_tbl_entry(struct tf *tfp,
 {
 	int rc = 0;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	if (parms->type == TF_TBL_TYPE_EXT) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, External table type not supported\n",
-			    parms->dir);
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
 
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
 		rc = tf_get_tbl_entry_internal(tfp, parms);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Get failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 	}
 
 	return rc;
@@ -1738,8 +1691,8 @@ tf_get_tbl_entry(struct tf *tfp,
 
 /* API defined in tf_core.h */
 int
-tf_get_bulk_tbl_entry(struct tf *tfp,
-		 struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc = 0;
 
@@ -1754,7 +1707,7 @@ tf_get_bulk_tbl_entry(struct tf *tfp,
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
-		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
 		if (rc)
 			TFP_DRV_LOG(ERR,
 				    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1773,11 +1726,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	rc = tf_alloc_eem_tbl_scope(tfp, parms);
 
@@ -1791,11 +1740,7 @@ tf_free_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	/* free table scope and all associated resources */
 	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
@@ -1813,11 +1758,7 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	/*
 	 * No shadow copy support for external tables, allocate and return
 	 */
@@ -1827,13 +1768,6 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1849,9 +1783,9 @@ tf_alloc_tbl_entry(struct tf *tfp,
 
 	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 
 	return rc;
 }
@@ -1866,11 +1800,8 @@ tf_free_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
 	/*
 	 * No shadow of external tables so just free the entry
 	 */
@@ -1880,13 +1811,6 @@ tf_free_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1903,16 +1827,16 @@ tf_free_tbl_entry(struct tf *tfp,
 	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
 
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 	return rc;
 }
 
 
 static void
-tf_dump_link_page_table(struct tf_em_page_tbl *tp,
-			struct tf_em_page_tbl *tp_next)
+tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+			struct hcapi_cfa_em_page_tbl *tp_next)
 {
 	uint64_t *pg_va;
 	uint32_t i;
@@ -1951,9 +1875,9 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 {
 	struct tf_session      *session;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_page_tbl *tp;
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 	int j;
 	int dir;
@@ -1967,7 +1891,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		TFP_DRV_LOG(ERR, "No table scope\n");
+		PMD_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index d78e4fe..2b7456f 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -13,45 +13,6 @@
 
 struct tf_session;
 
-enum tf_pg_tbl_lvl {
-	TF_PT_LVL_0,
-	TF_PT_LVL_1,
-	TF_PT_LVL_2,
-	TF_PT_LVL_MAX
-};
-
-enum tf_em_table_type {
-	TF_KEY0_TABLE,
-	TF_KEY1_TABLE,
-	TF_RECORD_TABLE,
-	TF_EFC_TABLE,
-	TF_MAX_TABLE
-};
-
-struct tf_em_page_tbl {
-	uint32_t	pg_count;
-	uint32_t	pg_size;
-	void		**pg_va_tbl;
-	uint64_t	*pg_pa_tbl;
-};
-
-struct tf_em_table {
-	int				type;
-	uint32_t			num_entries;
-	uint16_t			ctx_id;
-	uint32_t			entry_size;
-	int				num_lvl;
-	uint32_t			page_cnt[TF_PT_LVL_MAX];
-	uint64_t			num_data_pages;
-	void				*l0_addr;
-	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
-};
-
-struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[TF_MAX_TABLE];
-};
-
 /** table scope control block content */
 struct tf_em_caps {
 	uint32_t flags;
@@ -74,18 +35,14 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps          em_caps[TF_DIR_MAX];
 	struct stack               ext_act_pool[TF_DIR_MAX];
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/**
- * Hardware Page sizes supported for EEM:
- *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- *
- * Round-down other page sizes to the lower hardware page
- * size supported.
+/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ * Round-down other page sizes to the lower hardware page size supported.
  */
 #define TF_EM_PAGE_SIZE_4K 12
 #define TF_EM_PAGE_SIZE_8K 13
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 17/50] net/bnxt: implement support for TCAM access
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (15 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 16/50] net/bnxt: add core changes for EM and EEM lookups Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 18/50] net/bnxt: multiple device implementation Somnath Kotur
                   ` (33 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Jay Ding <jay.ding@broadcom.com>

Implement TCAM alloc, free, bind, and unbind functions.
Update tf_core, tf_msg, etc. to route TCAM access through the new
device operations.
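
A minimal caller sketch of the resulting flow is below (not part of the
patch): the parameter field names are taken from the structures touched
here, while the helper name, key/result widths and priority value are
purely illustrative.

#include <stdint.h>
#include "tf_core.h"

static int example_wc_tcam_entry(struct tf *tfp, uint8_t *key,
				 uint8_t *mask, uint8_t *result)
{
	struct tf_alloc_tcam_entry_parms aparms = { 0 };
	struct tf_set_tcam_entry_parms sparms = { 0 };
	struct tf_free_tcam_entry_parms fparms = { 0 };
	int rc;

	/* Reserve a WC TCAM row; allocation is now routed through the
	 * device ops rather than the session pools directly.
	 */
	aparms.dir = TF_DIR_RX;
	aparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	aparms.key_sz_in_bits = 160;		/* illustrative width */
	aparms.priority = 0;			/* illustrative priority */
	rc = tf_alloc_tcam_entry(tfp, &aparms);
	if (rc)
		return rc;

	/* Program key/mask/result into the allocated row */
	sparms.dir = TF_DIR_RX;
	sparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	sparms.idx = aparms.idx;
	sparms.key = key;
	sparms.mask = mask;
	sparms.key_sz_in_bits = 160;
	sparms.result = result;
	sparms.result_sz_in_bits = 64;		/* illustrative width */
	rc = tf_set_tcam_entry(tfp, &sparms);
	if (rc == 0)
		return 0;

	/* Release the row again if programming failed */
	fparms.dir = TF_DIR_RX;
	fparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	fparms.idx = aparms.idx;
	tf_free_tcam_entry(tfp, &fparms);
	return rc;
}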

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c      | 258 +++++++++++++----------------
 drivers/net/bnxt/tf_core/tf_device.h    |  14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |  25 +--
 drivers/net/bnxt/tf_core/tf_msg.c       |  31 ++--
 drivers/net/bnxt/tf_core/tf_msg.h       |   4 +-
 drivers/net/bnxt/tf_core/tf_tcam.c      | 285 +++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tcam.h      |  66 ++++++--
 7 files changed, 480 insertions(+), 203 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 648d0d1..29522c6 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,43 +19,6 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
-			       enum tf_device_type device,
-			       uint16_t key_sz_in_bits,
-			       uint16_t *num_slice_per_row)
-{
-	uint16_t key_bytes;
-	uint16_t slice_sz = 0;
-
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
-#define CFA_P4_WC_TCAM_SLICE_SIZE     12
-
-	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
-		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
-		if (device == TF_DEVICE_TYPE_WH) {
-			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
-			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
-		} else {
-			TFP_DRV_LOG(ERR,
-				    "Unsupported device type %d\n",
-				    device);
-			return -ENOTSUP;
-		}
-
-		if (key_bytes > *num_slice_per_row * slice_sz) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Key size %d is not supported\n",
-				    tf_tcam_tbl_2_str(tcam_tbl_type),
-				    key_bytes);
-			return -ENOTSUP;
-		}
-	} else { /* for other type of tcam */
-		*num_slice_per_row = 1;
-	}
-
-	return 0;
-}
-
 /**
  * Create EM Tbl pool of memory indexes.
  *
@@ -918,49 +881,56 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_alloc_parms aparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_alloc_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+	aparms.dir = parms->dir;
+	aparms.type = parms->tcam_tbl_type;
+	aparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	aparms.priority = parms->priority;
+	rc = dev->ops->tf_dev_alloc_tcam(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+			    strerror(-rc));
+		return rc;
 	}
 
-	index *= num_slice_per_row;
+	parms->idx = aparms.idx;
 
-	parms->idx = index;
 	return 0;
 }
 
@@ -969,55 +939,60 @@ tf_set_tcam_entry(struct tf *tfp,
 		  struct tf_set_tcam_entry_parms *parms)
 {
 	int rc;
-	int id;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_set_parms sparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_set_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	/* Verify that the entry has been previously allocated */
-	index = parms->idx / num_slice_per_row;
+	sparms.dir = parms->dir;
+	sparms.type = parms->tcam_tbl_type;
+	sparms.idx = parms->idx;
+	sparms.key = parms->key;
+	sparms.mask = parms->mask;
+	sparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	sparms.result = parms->result;
+	sparms.result_size = TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	rc = dev->ops->tf_dev_set_tcam(tfp, &sparms);
+	if (rc) {
 		TFP_DRV_LOG(ERR,
-		   "%s: %s: Invalid or not allocated index, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-		   parms->idx);
-		return -EINVAL;
+			    "%s: TCAM set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
-	rc = tf_msg_tcam_entry_set(tfp, parms);
-
-	return rc;
+	return 0;
 }
 
 int
@@ -1033,59 +1008,52 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row = 1;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_free_parms fparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 0,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = parms->idx / num_slice_per_row;
-
-	rc = ba_inuse(session_pool, index);
-	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+	if (dev->ops->tf_dev_free_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    index);
-		return -EINVAL;
+			    strerror(-rc));
+		return rc;
 	}
 
-	ba_free(session_pool, index);
-
-	rc = tf_msg_tcam_entry_free(tfp, parms);
+	fparms.dir = parms->dir;
+	fparms.type = parms->tcam_tbl_type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 1501b20..32d9a54 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -116,8 +116,11 @@ struct tf_dev_ops {
 	 * [in] tfp
 	 *   Pointer to TF handle
 	 *
-	 * [out] slice_size
-	 *   Pointer to slice size the device supports
+	 * [in] type
+	 *   TCAM table type
+	 *
+	 * [in] key_sz
+	 *   Key size
 	 *
 	 * [out] num_slices_per_row
 	 *   Pointer to number of slices per row the device supports
@@ -126,9 +129,10 @@ struct tf_dev_ops {
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
-					 uint16_t *slice_size,
-					 uint16_t *num_slices_per_row);
+	int (*tf_dev_get_tcam_slice_info)(struct tf *tfp,
+					  enum tf_tcam_tbl_type type,
+					  uint16_t key_sz,
+					  uint16_t *num_slices_per_row);
 
 	/**
 	 * Allocation of an identifier element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index f4bd95f..77fb693 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -56,18 +56,21 @@ tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
  *   - (-EINVAL) on failure.
  */
 static int
-tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
-			     uint16_t *slice_size,
-			     uint16_t *num_slices_per_row)
+tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
+			      enum tf_tcam_tbl_type type,
+			      uint16_t key_sz,
+			      uint16_t *num_slices_per_row)
 {
-#define CFA_P4_WC_TCAM_SLICE_SIZE       12
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
 
-	if (slice_size == NULL || num_slices_per_row == NULL)
-		return -EINVAL;
-
-	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
-	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+	if (type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
+			return -ENOTSUP;
+	} else { /* for other type of tcam */
+		*num_slices_per_row = 1;
+	}
 
 	return 0;
 }
@@ -77,7 +80,7 @@ tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
  */
 const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
-	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 90e1acf..f223850 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -9,6 +9,7 @@
 #include <string.h>
 
 #include "tf_msg_common.h"
+#include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
 #include "tf_session.h"
@@ -1479,27 +1480,19 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 int
 tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_set_tcam_entry_parms *parms)
+		      struct tf_tcam_set_parms *parms)
 {
 	int rc;
 	struct tfp_send_msg_parms mparms = { 0 };
 	struct hwrm_tf_tcam_set_input req = { 0 };
 	struct hwrm_tf_tcam_set_output resp = { 0 };
-	uint16_t key_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
-	uint16_t result_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
 	if (rc != 0)
 		return rc;
 
@@ -1507,11 +1500,11 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	if (parms->dir == TF_DIR_TX)
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	req.key_size = key_bytes;
-	req.mask_offset = key_bytes;
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
 	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * key_bytes;
-	req.result_size = result_bytes;
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
 	data_size = 2 * req.key_size + req.result_size;
 
 	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
@@ -1529,9 +1522,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 			   sizeof(buf.pa_addr));
 	}
 
-	tfp_memcpy(&data[0], parms->key, key_bytes);
-	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
-	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1553,7 +1546,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 int
 tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_free_tcam_entry_parms *in_parms)
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
 	struct hwrm_tf_tcam_free_input req =  { 0 };
@@ -1561,7 +1554,7 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
 	if (rc != 0)
 		return rc;
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1dad2b9..a3e0f7b 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -247,7 +247,7 @@ int tf_msg_em_op(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_set(struct tf *tfp,
-			  struct tf_set_tcam_entry_parms *parms);
+			  struct tf_tcam_set_parms *parms);
 
 /**
  * Sends tcam entry 'free' to the Firmware.
@@ -262,7 +262,7 @@ int tf_msg_tcam_entry_set(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_free(struct tf *tfp,
-			   struct tf_free_tcam_entry_parms *parms);
+			   struct tf_tcam_free_parms *parms);
 
 /**
  * Sends Set message of a Table Type element to the firmware.
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 3ad99dd..b9dba53 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -3,16 +3,24 @@
  * All rights reserved.
  */
 
+#include <string.h>
 #include <rte_common.h>
 
 #include "tf_tcam.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_rm_new.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_session.h"
+#include "tf_msg.h"
 
 struct tf;
 
 /**
  * TCAM DBs.
  */
-/* static void *tcam_db[TF_DIR_MAX]; */
+static void *tcam_db[TF_DIR_MAX];
 
 /**
  * TCAM Shadow DBs
@@ -22,7 +30,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -33,19 +41,131 @@ int
 tf_tcam_bind(struct tf *tfp __rte_unused,
 	     struct tf_tcam_cfg_parms *parms __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
+		db_cfg.rm_db = tcam_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: TCAM DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_tcam_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; done silently to
+	 * allow for creation cleanup.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tcam_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tcam_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
-tf_tcam_alloc(struct tf *tfp __rte_unused,
-	      struct tf_tcam_alloc_parms *parms __rte_unused)
+tf_tcam_alloc(struct tf *tfp,
+	      struct tf_tcam_alloc_parms *parms)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Allocate requested element */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = (uint32_t *)&parms->idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	parms->idx *= num_slice_per_row;
+
 	return 0;
 }
 
@@ -53,6 +173,92 @@ int
 tf_tcam_free(struct tf *tfp __rte_unused,
 	     struct tf_tcam_free_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  0,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tcam_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx / num_slice_per_row;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	rc = tf_msg_tcam_entry_free(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
@@ -67,6 +273,77 @@ int
 tf_tcam_set(struct tf *tfp __rte_unused,
 	    struct tf_tcam_set_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry is not allocated, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	rc = tf_msg_tcam_entry_set(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d set failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 68c25eb..67c3bcb 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -51,9 +51,17 @@ struct tf_tcam_alloc_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint16_t idx;
 };
 
 /**
@@ -71,7 +79,7 @@ struct tf_tcam_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t idx;
+	uint16_t idx;
 	/**
 	 * [out] Reference count after free, only valid if session has been
 	 * created with shadow_copy.
@@ -90,7 +98,7 @@ struct tf_tcam_alloc_search_parms {
 	/**
 	 * [in] TCAM table type
 	 */
-	enum tf_tcam_tbl_type tcam_tbl_type;
+	enum tf_tcam_tbl_type type;
 	/**
 	 * [in] Enable search for matching entry
 	 */
@@ -100,9 +108,9 @@ struct tf_tcam_alloc_search_parms {
 	 */
 	uint8_t *key;
 	/**
-	 * [in] key size in bits (if search)
+	 * [in] key size (if search)
 	 */
-	uint16_t key_sz_in_bits;
+	uint16_t key_size;
 	/**
 	 * [in] Mask data to match on (if search)
 	 */
@@ -139,17 +147,29 @@ struct tf_tcam_set_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [in] Entry data
+	 * [in] Entry index to write to
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [in] Entry size
+	 * [in] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to write to
+	 * [in] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [in] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
@@ -165,17 +185,29 @@ struct tf_tcam_get_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [out] Entry data
+	 * [in] Entry index to read
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [out] Entry size
+	 * [out] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to read
+	 * [out] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [out] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [out] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [out] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 18/50] net/bnxt: multiple device implementation
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (16 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 17/50] net/bnxt: implement support for TCAM access Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 19/50] net/bnxt: update identifier with remap support Somnath Kotur
                   ` (32 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Michael Wildt <michael.wildt@broadcom.com>

Implement the Identifier, Table Type, and Resource Manager modules.
Integrate the Resource Manager with HCAPI.
Update the open/close session handling.
Move to direct messages for the qcaps and resv exchanges.
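
A rough illustration of the reworked per-direction resource request
layout follows (not part of the patch): the counts and the table/EM
indices are made up, and how the structure is attached to the
open-session parameters is not shown in this excerpt.

#include "tf_core.h"

static void example_fill_resources(struct tf_session_resources *res)
{
	/* Request a few RX identifiers and WC TCAM rows; counts are
	 * illustrative only.
	 */
	res->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
	res->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 64;

	/* tbl_cnt/em_cnt are indexed by tf_tbl_type/tf_em_tbl_type;
	 * index 0 is used here purely for illustration.
	 */
	res->tbl_cnt[TF_DIR_TX].cnt[0] = 16;
	res->em_cnt[TF_DIR_TX].cnt[0] = 1024;
}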

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c       | 751 ++++++++++---------------------
 drivers/net/bnxt/tf_core/tf_core.h       |  97 ++--
 drivers/net/bnxt/tf_core/tf_device.c     |  10 +-
 drivers/net/bnxt/tf_core/tf_device.h     |   1 +
 drivers/net/bnxt/tf_core/tf_device_p4.c  |  26 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |  29 +-
 drivers/net/bnxt/tf_core/tf_identifier.h |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  45 +-
 drivers/net/bnxt/tf_core/tf_msg.h        |   1 +
 drivers/net/bnxt/tf_core/tf_rm.c         |   7 -
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 225 +++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  11 +-
 drivers/net/bnxt/tf_core/tf_session.c    |  52 +--
 drivers/net/bnxt/tf_core/tf_session.h    |   2 +-
 drivers/net/bnxt/tf_core/tf_tbl.c        | 610 -------------------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c   | 252 ++++++++++-
 drivers/net/bnxt/tf_core/tf_tbl_type.h   |   2 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |  12 +-
 drivers/net/bnxt/tf_core/tf_util.h       |  45 +-
 drivers/net/bnxt/tf_core/tfp.c           |   4 +-
 20 files changed, 879 insertions(+), 1307 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 29522c6..3e23d05 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,284 +19,15 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- * [in] num_entries
- *   number of entries to write
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i, j;
-	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
-			    strerror(-ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Fill pool with indexes
-	 */
-	j = num_entries - 1;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-		j--;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
-
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- *
- * Return:
- */
-static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
-{
-	struct stack *pool = &session->em_pool[dir];
-	uint32_t *ptr;
-
-	ptr = stack_items(pool);
-
-	tfp_free(ptr);
-}
-
 int
-tf_open_session(struct tf                    *tfp,
+tf_open_session(struct tf *tfp,
 		struct tf_open_session_parms *parms)
 {
 	int rc;
-	struct tf_session *session;
-	struct tfp_calloc_parms alloc_parms;
-	unsigned int domain, bus, slot, device;
-	uint8_t fw_session_id;
-	int dir;
-
-	TF_CHECK_PARMS(tfp, parms);
-
-	/* Filter out any non-supported device types on the Core
-	 * side. It is assumed that the Firmware will be supported if
-	 * firmware open session succeeds.
-	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH) {
-		TFP_DRV_LOG(ERR,
-			    "Unsupported device type %d\n",
-			    parms->device_type);
-		return -ENOTSUP;
-	}
-
-	/* Build the beginning of session_id */
-	rc = sscanf(parms->ctrl_chan_name,
-		    "%x:%x:%x.%d",
-		    &domain,
-		    &bus,
-		    &slot,
-		    &device);
-	if (rc != 4) {
-		TFP_DRV_LOG(ERR,
-			    "Failed to scan device ctrl_chan_name\n");
-		return -EINVAL;
-	}
-
-	/* open FW session and get a new session_id */
-	rc = tf_msg_session_open(tfp,
-				 parms->ctrl_chan_name,
-				 &fw_session_id);
-	if (rc) {
-		/* Log error */
-		if (rc == -EEXIST)
-			TFP_DRV_LOG(ERR,
-				    "Session is already open, rc:%s\n",
-				    strerror(-rc));
-		else
-			TFP_DRV_LOG(ERR,
-				    "Open message send failed, rc:%s\n",
-				    strerror(-rc));
-
-		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
-		return rc;
-	}
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session_info);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
-
-	/* Allocate core data for the session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session->core_data = alloc_parms.mem_va;
-
-	session = (struct tf_session *)tfp->session->core_data;
-	tfp_memcpy(session->ctrl_chan_name,
-		   parms->ctrl_chan_name,
-		   TF_SESSION_NAME_MAX);
-
-	/* Initialize Session */
-	session->dev = NULL;
-	tf_rm_init(tfp);
-
-	/* Construct the Session ID */
-	session->session_id.internal.domain = domain;
-	session->session_id.internal.bus = bus;
-	session->session_id.internal.device = device;
-	session->session_id.internal.fw_session_id = fw_session_id;
-
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Shadow DB configuration */
-	if (parms->shadow_copy) {
-		/* Ignore shadow_copy setting */
-		session->shadow_copy = 0;/* parms->shadow_copy; */
-#if (TF_SHADOW == 1)
-		rc = tf_rm_shadow_db_init(tfs);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%s",
-				    strerror(-rc));
-		/* Add additional processing */
-#endif /* TF_SHADOW */
-	}
-
-	/* Adjust the Session with what firmware allowed us to get */
-	rc = tf_rm_allocate_validate(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Rm allocate validate failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Initialize EM pool */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session,
-				       (enum tf_dir)dir,
-				       TF_SESSION_EM_POOL_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EM Pool initialization failed\n");
-			goto cleanup_close;
-		}
-	}
-
-	session->ref_count++;
-
-	/* Return session ID */
-	parms->session_id = session->session_id;
-
-	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    parms->session_id.internal.domain,
-		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
-
-	return 0;
-
- cleanup:
-	tfp_free(tfp->session->core_data);
-	tfp_free(tfp->session);
-	tfp->session = NULL;
-	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
-}
-
-int
-tf_open_session_new(struct tf *tfp,
-		    struct tf_open_session_parms *parms)
-{
-	int rc;
 	unsigned int domain, bus, slot, device;
 	struct tf_session_open_session_parms oparms;
 
-	TF_CHECK_PARMS(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
@@ -347,33 +78,8 @@ tf_open_session_new(struct tf *tfp,
 }
 
 int
-tf_attach_session(struct tf *tfp __rte_unused,
-		  struct tf_attach_session_parms *parms __rte_unused)
-{
-#if (TF_SHARED == 1)
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/* - Open the shared memory for the attach_chan_name
-	 * - Point to the shared session for this Device instance
-	 * - Check that session is valid
-	 * - Attach to the firmware so it can record there is more
-	 *   than one client of the session.
-	 */
-
-	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_attach(tfp,
-					   parms->ctrl_chan_name,
-					   parms->session_id);
-	}
-#endif /* TF_SHARED */
-	return -1;
-}
-
-int
-tf_attach_session_new(struct tf *tfp,
-		      struct tf_attach_session_parms *parms)
+tf_attach_session(struct tf *tfp,
+		  struct tf_attach_session_parms *parms)
 {
 	int rc;
 	unsigned int domain, bus, slot, device;
@@ -438,65 +144,6 @@ int
 tf_close_session(struct tf *tfp)
 {
 	int rc;
-	int rc_close = 0;
-	struct tf_session *tfs;
-	union tf_session_id session_id;
-	int dir;
-
-	TF_CHECK_TFP_SESSION(tfp);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Cleanup if we're last user of the session */
-	if (tfs->ref_count == 1) {
-		/* Cleanup any outstanding resources */
-		rc_close = tf_rm_close(tfp);
-	}
-
-	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Message send failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		/* Update the ref_count */
-		tfs->ref_count--;
-	}
-
-	session_id = tfs->session_id;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Free EM pool */
-		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, (enum tf_dir)dir);
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
-
-	TFP_DRV_LOG(INFO,
-		    "Session closed, session_id:%d\n",
-		    session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    session_id.internal.domain,
-		    session_id.internal.bus,
-		    session_id.internal.device,
-		    session_id.internal.fw_session_id);
-
-	return rc_close;
-}
-
-int
-tf_close_session_new(struct tf *tfp)
-{
-	int rc;
 	struct tf_session_close_session_parms cparms = { 0 };
 	union tf_session_id session_id = { 0 };
 	uint8_t ref_count;
@@ -620,76 +267,9 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
-int tf_alloc_identifier(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	int id;
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	if (rc) {
-		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	id = ba_alloc(session_pool);
-
-	if (id == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		return -ENOMEM;
-	}
-	parms->id = id;
-	return 0;
-}
-
 int
-tf_alloc_identifier_new(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
+tf_alloc_identifier(struct tf *tfp,
+		    struct tf_alloc_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -732,7 +312,7 @@ tf_alloc_identifier_new(struct tf *tfp,
 	}
 
 	aparms.dir = parms->dir;
-	aparms.ident_type = parms->ident_type;
+	aparms.type = parms->ident_type;
 	aparms.id = &id;
 	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
 	if (rc) {
@@ -748,79 +328,9 @@ tf_alloc_identifier_new(struct tf *tfp,
 	return 0;
 }
 
-int tf_free_identifier(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	int rc;
-	int ba_rc;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: %s Identifier pool access failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	ba_rc = ba_inuse(session_pool, (int)parms->id);
-
-	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    parms->id);
-		return -EINVAL;
-	}
-
-	ba_free(session_pool, (int)parms->id);
-
-	return 0;
-}
-
 int
-tf_free_identifier_new(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
+tf_free_identifier(struct tf *tfp,
+		   struct tf_free_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -862,12 +372,12 @@ tf_free_identifier_new(struct tf *tfp,
 	}
 
 	fparms.dir = parms->dir;
-	fparms.ident_type = parms->ident_type;
+	fparms.type = parms->ident_type;
 	fparms.id = parms->id;
 	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Identifier allocation failed, rc:%s\n",
+			    "%s: Identifier free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
@@ -1057,3 +567,242 @@ tf_free_tcam_entry(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_alloc_parms aparms;
+	uint32_t idx;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_tbl_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.type = parms->type;
+	aparms.idx = &idx;
+	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_tbl_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.type = parms->type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_set_parms sparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&sparms, 0, sizeof(struct tf_tbl_set_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.data = parms->data;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_parms gparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&gparms, 0, sizeof(struct tf_tbl_get_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.data = parms->data;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_get_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index bb456bb..a7a7bd3 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -385,33 +385,86 @@ struct tf {
 };
 
 /**
+ * Identifier resource definition
+ */
+struct tf_identifier_resources {
+	/**
+	 * Array of TF Identifiers where each entry is expected to be
+	 * set to the requested resource number of that specific type.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t cnt[TF_IDENT_TYPE_MAX];
+};
+
+/**
+ * Table type resource definition
+ */
+struct tf_tbl_resources {
+	/**
+	 * Array of TF Table types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tbl_type.
+	 */
+	uint16_t cnt[TF_TBL_TYPE_MAX];
+};
+
+/**
+ * TCAM type resource definition
+ */
+struct tf_tcam_resources {
+	/**
+	 * Array of TF TCAM types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t cnt[TF_TCAM_TBL_TYPE_MAX];
+};
+
+/**
+ * EM type resource definition
+ */
+struct tf_em_resources {
+	/**
+	 * Array of TF EM table types where each entry is expected to
+	 * be set to the requested resource number of that specific
+	 * type. The index used is tf_em_tbl_type.
+	 */
+	uint16_t cnt[TF_EM_TBL_TYPE_MAX];
+};
+
+/**
  * tf_session_resources parameter definition.
  */
 struct tf_session_resources {
-	/** [in] Requested Identifier Resources
+	/**
+	 * [in] Requested Identifier Resources
 	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
+	 * Number of identifier resources requested for the
+	 * session.
 	 */
-	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested Index Table resource counts
+	struct tf_identifier_resources ident_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested Index Table resource counts
 	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
+	 * The number of index table resources requested for the
+	 * session.
 	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested TCAM Table resource counts
+	struct tf_tbl_resources tbl_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested TCAM Table resource counts
 	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
+	 * The number of TCAM table resources requested for the
+	 * session.
 	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested EM resource counts
+
+	struct tf_tcam_resources tcam_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested EM resource counts
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * The number of internal EM table resources requested for the
+	 * session.
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+	struct tf_em_resources em_cnt[TF_DIR_MAX];
 };
 
 /**
@@ -497,9 +550,6 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
-int tf_open_session_new(struct tf *tfp,
-			struct tf_open_session_parms *parms);
-
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -565,8 +615,6 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
-int tf_attach_session_new(struct tf *tfp,
-			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -576,7 +624,6 @@ int tf_attach_session_new(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
-int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -631,8 +678,6 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
-int tf_alloc_identifier_new(struct tf *tfp,
-			    struct tf_alloc_identifier_parms *parms);
 
 /**
  * free identifier resource
@@ -645,8 +690,6 @@ int tf_alloc_identifier_new(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
-int tf_free_identifier_new(struct tf *tfp,
-			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
@@ -1277,7 +1320,7 @@ struct tf_bulk_get_tbl_entry_parms {
  * provided data buffer is too small for the data type requested.
  */
 int tf_bulk_get_tbl_entry(struct tf *tfp,
-		     struct tf_bulk_get_tbl_entry_parms *parms);
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 /**
  * @page exact_match Exact Match Table
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 4c46cad..b474e8c 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -43,6 +43,14 @@ dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Initial function initialization */
+	dev_handle->ops = &tf_dev_ops_p4_init;
+
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Initial function initialization */
+	dev_handle->ops = &tf_dev_ops_p4_init;
+
 	/* Initialize the modules */
 
 	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
@@ -78,7 +86,7 @@ dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
-	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 32d9a54..c31bf23 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -407,6 +407,7 @@ struct tf_dev_ops {
 /**
  * Supported device operation structures
  */
+extern const struct tf_dev_ops tf_dev_ops_p4_init;
 extern const struct tf_dev_ops tf_dev_ops_p4;
 
 #endif /* _TF_DEVICE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 77fb693..9e332c5 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -78,6 +78,26 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 /**
  * Truflow P4 device specific functions
  */
+const struct tf_dev_ops tf_dev_ops_p4_init = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
+	.tf_dev_alloc_ident = NULL,
+	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_tbl = NULL,
+	.tf_dev_alloc_search_tbl = NULL,
+	.tf_dev_set_tbl = NULL,
+	.tf_dev_get_tbl = NULL,
+	.tf_dev_alloc_tcam = NULL,
+	.tf_dev_free_tcam = NULL,
+	.tf_dev_alloc_search_tcam = NULL,
+	.tf_dev_set_tcam = NULL,
+	.tf_dev_get_tcam = NULL,
+};
+
+/**
+ * Truflow P4 device specific functions - full operation set
+ */
 const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
 	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
@@ -85,14 +105,14 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
-	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
-	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
-	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_get_tcam = NULL,
 	.tf_dev_insert_em_entry = tf_em_insert_entry,
 	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index e89f976..ee07a6a 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -45,19 +45,22 @@ tf_ident_bind(struct tf *tfp,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
-		db_cfg.rm_db = ident_db[i];
+		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
+		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "%s: Identifier DB creation failed\n",
 				    tf_dir_2_str(i));
+
 			return rc;
 		}
 	}
 
 	init = 1;
 
+	printf("Identifier - initialized\n");
+
 	return 0;
 }
 
@@ -73,8 +76,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 	/* Bail if nothing has been initialized done silent as to
 	 * allow for creation cleanup.
 	 */
-	if (!init)
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Identifier DBs created\n");
 		return -EINVAL;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
@@ -96,6 +102,7 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 	       struct tf_ident_alloc_parms *parms)
 {
 	int rc;
+	uint32_t id;
 	struct tf_rm_allocate_parms aparms = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -109,17 +116,19 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 
 	/* Allocate requested element */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
-	aparms.index = (uint32_t *)&parms->id;
+	aparms.db_index = parms->type;
+	aparms.index = &id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Failed allocate, type:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type);
+			    parms->type);
 		return rc;
 	}
 
+	*parms->id = id;
+
 	return 0;
 }
 
@@ -143,7 +152,7 @@ tf_ident_free(struct tf *tfp __rte_unused,
 
 	/* Check if element is in use */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
+	aparms.db_index = parms->type;
 	aparms.index = parms->id;
 	aparms.allocated = &allocated;
 	rc = tf_rm_is_allocated(&aparms);
@@ -154,21 +163,21 @@ tf_ident_free(struct tf *tfp __rte_unused,
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
 
 	/* Free requested element */
 	fparms.rm_db = ident_db[parms->dir];
-	fparms.db_index = parms->ident_type;
+	fparms.db_index = parms->type;
 	fparms.index = parms->id;
 	rc = tf_rm_free(&fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Free failed, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index 1c5319b..6e36c52 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -43,7 +43,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [out] Identifier allocated
 	 */
@@ -61,7 +61,7 @@ struct tf_ident_free_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [in] ID to free
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index f223850..5ec2f83 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -12,6 +12,7 @@
 #include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
+#include "tf_common.h"
 #include "tf_session.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -935,13 +936,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	struct tf_rm_resc_req_entry *data;
 	int dma_size;
 
-	if (size == 0 || query == NULL || resv_strategy == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Resource QCAPS parameter error, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-EINVAL));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS3(tfp, query, resv_strategy);
 
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
@@ -962,7 +957,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.qcaps_size = size;
-	req.qcaps_addr = qcaps_buf.pa_addr;
+	req.qcaps_addr = tfp_cpu_to_le_64(qcaps_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
 	parms.req_data = (uint32_t *)&req;
@@ -980,18 +975,29 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: QCAPS message error, rc:%s\n",
+			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+
+	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_cpu_to_le_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
+
+		printf("type: %d(0x%x) %d %d\n",
+		       query[i].type,
+		       query[i].type,
+		       query[i].min,
+		       query[i].max);
+
 	}
 
 	*resv_strategy = resp.flags &
@@ -1021,6 +1027,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	struct tf_rm_resc_entry *resv_data;
 	int dma_size;
 
+	TF_CHECK_PARMS3(tfp, request, resv);
+
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1053,8 +1061,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
 	}
 
-	req.req_addr = req_buf.pa_addr;
-	req.resp_addr = resv_buf.pa_addr;
+	req.req_addr = tfp_cpu_to_le_64(req_buf.pa_addr);
+	req.resc_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
 	parms.req_data = (uint32_t *)&req;
@@ -1072,18 +1080,28 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Alloc message error, rc:%s\n",
+			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("\nRESV\n");
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
 		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
 		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
 		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+
+		printf("%d type: %d(0x%x) %d %d\n",
+		       i,
+		       resv[i].type,
+		       resv[i].type,
+		       resv[i].start,
+		       resv[i].stride);
 	}
 
 	tf_msg_free_dma_buf(&req_buf);
@@ -1459,7 +1477,8 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
 
-	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	data_size = params->num_entries * params->entry_sz_in_bytes;
+
 	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
 
 	MSG_PREP(parms,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index a3e0f7b..fb635f6 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_rm_new.h"
+#include "tf_tcam.h"
 
 struct tf;
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index d6739b3..b6fe2f1 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -3186,13 +3186,6 @@ tf_rm_convert_tbl_type(enum tf_tbl_type type,
 	return rc;
 }
 
-#if 0
-enum tf_rm_convert_type {
-	TF_RM_CONVERT_ADD_BASE,
-	TF_RM_CONVERT_RM_BASE
-};
-#endif
-
 int
 tf_rm_convert_index(struct tf_session *tfs,
 		    enum tf_dir dir,
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 7cadb23..6abf79a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -10,6 +10,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_rm_new.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
 #include "tf_device.h"
@@ -65,6 +66,46 @@ struct tf_rm_new_db {
 	struct tf_rm_element *db;
 };
 
+/**
+ * Count the number of HCAPI controlled elements that carry a
+ * non-zero reservation request.
+ *
+ * Only elements configured as TF_RM_ELEM_CFG_HCAPI and with a
+ * requested allocation greater than 0 contribute to the count.
+ *
+ * [in] cfg
+ *   Pointer to the DB configuration
+ *
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
+ *
+ * [in] count
+ *   Number of DB configuration elements
+ *
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
+ *
+ * Returns:
+ *   Void; the result is reported through the valid_count output
+ *   parameter.
+ */
+static void
+tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
+{
+	int i;
+	uint16_t cnt = 0;
+
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
+	}
+
+	*valid_count = cnt;
+}
 
 /**
  * Resource Manager Adjust of base index definitions.
@@ -132,6 +173,7 @@ tf_rm_create_db(struct tf *tfp,
 {
 	int rc;
 	int i;
+	int j;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	uint16_t max_types;
@@ -143,6 +185,9 @@ tf_rm_create_db(struct tf *tfp,
 	struct tf_rm_new_db *rm_db;
 	struct tf_rm_element *db;
 	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -177,10 +222,19 @@ tf_rm_create_db(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/* Process capabilities against db requirements */
+	/* Process capabilities against DB requirements. A DB can hold
+	 * elements that are not HCAPI, so those are dropped from the
+	 * request message while the DB keeps all elements for fast
+	 * lookup. Elements with no requested allocation are likewise
+	 * left out of the request.
+	 */
+	tf_rm_count_hcapi_reservations(parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
 
 	/* Alloc request, alignment already set */
-	cparms.nitems = parms->num_elements;
+	cparms.nitems = (size_t)hcapi_items;
 	cparms.size = sizeof(struct tf_rm_resc_req_entry);
 	rc = tfp_calloc(&cparms);
 	if (rc)
@@ -195,15 +249,24 @@ tf_rm_create_db(struct tf *tfp,
 	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
 
 	/* Build the request */
-	for (i = 0; i < parms->num_elements; i++) {
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
 		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			req[i].type = parms->cfg[i].hcapi_type;
-			/* Check that we can get the full amount allocated */
-			if (parms->alloc_num[i] <=
+			/* Only perform reservation for entries that
+			 * have been requested
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
 			    query[parms->cfg[i].hcapi_type].max) {
-				req[i].min = parms->alloc_num[i];
-				req[i].max = parms->alloc_num[i];
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
 			} else {
 				TFP_DRV_LOG(ERR,
 					    "%s: Resource failure, type:%d\n",
@@ -211,19 +274,16 @@ tf_rm_create_db(struct tf *tfp,
 					    parms->cfg[i].hcapi_type);
 				TFP_DRV_LOG(ERR,
 					"req:%d, avail:%d\n",
-					parms->alloc_num[i],
+					parms->alloc_cnt[i],
 					query[parms->cfg[i].hcapi_type].max);
 				return -EINVAL;
 			}
-		} else {
-			/* Skip the element */
-			req[i].type = CFA_RESOURCE_TYPE_INVALID;
 		}
 	}
 
 	rc = tf_msg_session_resc_alloc(tfp,
 				       parms->dir,
-				       parms->num_elements,
+				       hcapi_items,
 				       req,
 				       resv);
 	if (rc)
@@ -246,42 +306,74 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
 	db = rm_db->db;
-	for (i = 0; i < parms->num_elements; i++) {
-		/* If allocation failed for a single entry the DB
-		 * creation is considered a failure.
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+
+		/* Skip any non HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
+
+		/* If the element didn't request an allocation no need
+		 * to create a pool nor verify if we got a reservation.
 		 */
-		if (parms->alloc_num[i] != resv[i].stride) {
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element had requested an allocation and that
+		 * allocation was a success (full amount) then
+		 * allocate the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
 				    "%s: Alloc failed, type:%d\n",
 				    tf_dir_2_str(parms->dir),
-				    i);
+				    db[i].cfg_type);
 			TFP_DRV_LOG(ERR,
 				    "req:%d, alloc:%d\n",
-				    parms->alloc_num[i],
-				    resv[i].stride);
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
 			goto fail;
 		}
-
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-		db[i].alloc.entry.start = resv[i].start;
-		db[i].alloc.entry.stride = resv[i].stride;
-
-		/* Create pool */
-		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
-			     sizeof(struct bitalloc));
-		/* Alloc request, alignment already set */
-		cparms.nitems = pool_size;
-		cparms.size = sizeof(struct bitalloc);
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-		db[i].pool = (struct bitalloc *)cparms.mem_va;
 	}
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
-	parms->rm_db = (void *)rm_db;
+	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
@@ -307,13 +399,15 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 	int i;
 	struct tf_rm_new_db *rm_db;
 
+	TF_CHECK_PARMS1(parms);
+
 	/* Traverse the DB and clear each pool.
 	 * NOTE:
 	 *   Firmware is not cleared. It will be cleared on close only.
 	 */
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db->pool);
+		tfp_free((void *)rm_db->db[i].pool);
 
 	tfp_free((void *)parms->rm_db);
 
@@ -325,11 +419,11 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc = 0;
 	int id;
+	uint32_t index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -339,6 +433,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		TFP_DRV_LOG(ERR,
@@ -353,15 +458,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 				TF_RM_ADJUST_ADD_BASE,
 				parms->db_index,
 				id,
-				parms->index);
+				&index);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc adjust of base index failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -1;
+		return -EINVAL;
 	}
 
+	*parms->index = index;
+
 	return rc;
 }
 
@@ -373,8 +480,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -384,6 +490,17 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -409,8 +526,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -420,6 +536,17 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -442,8 +569,7 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -465,8 +591,7 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 6d8234d..ebf38c4 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -135,13 +135,16 @@ struct tf_rm_create_db_parms {
 	 */
 	struct tf_rm_element_cfg *cfg;
 	/**
-	 * Allocation number array. Array size is num_elements.
+	 * Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
 	 */
-	uint16_t *alloc_num;
+	uint16_t *alloc_cnt;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *rm_db;
+	void **rm_db;
 };
 
 /**
@@ -382,7 +385,7 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
  * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI type.
+ * requested HCAPI RM type.
  *
  * [in] parms
  *   Pointer to get hcapi parameters
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 2f769d8..bac9c76 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -95,21 +95,11 @@ tf_session_open_session(struct tf *tfp,
 		      parms->open_cfg->device_type,
 		      session->shadow_copy,
 		      &parms->open_cfg->resources,
-		      session->dev);
+		      &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
 
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
 	session->ref_count++;
 
 	return 0;
@@ -119,40 +109,12 @@ tf_session_open_session(struct tf *tfp,
 	tfp_free(tfp->session);
 	tfp->session = NULL;
 	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
 }
 
 int
 tf_session_attach_session(struct tf *tfp __rte_unused,
 			  struct tf_session_attach_session_parms *parms __rte_unused)
 {
-#if 0
-
-	/* A shared session is similar to single session. It consists
-	 * of two parts the tf_session_info element which remains
-	 * private to the caller and the session within this element
-	 * which is shared. The session it self holds the dynamic
-	 * data, i.e. the device and its sub modules.
-	 *
-	 * Firmware side is updated about any sharing as well.
-	 */
-
-	/* - Open the shared memory for the attach_chan_name
-	 * - Point to the shared session for this Device instance
-	 * - Check that session is valid
-	 * - Attach to the firmware so it can record there is more
-	 *   than one client of the session.
-	 */
-
-	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_attach(tfp,
-					   parms->ctrl_chan_name,
-					   parms->session_id);
-	}
-#endif /* 0 */
 	int rc = -EOPNOTSUPP;
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -255,17 +217,7 @@ int
 tf_session_get_device(struct tf_session *tfs,
 		      struct tf_dev_info **tfd)
 {
-	int rc;
-
-	if (tfs->dev == NULL) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR,
-			    "Device not created, rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	*tfd = tfs->dev;
+	*tfd = &tfs->dev;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 9279251..705bb09 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -97,7 +97,7 @@ struct tf_session {
 	uint8_t ref_count;
 
 	/** Device handle */
-	struct tf_dev_info *dev;
+	struct tf_dev_info dev;
 
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 7f37f4d..b0a932b 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -760,163 +760,6 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 }
 
 /**
- * Internal function to set a Table Entry. Supports all internal Table Types
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_set_tbl_entry_internal(struct tf *tfp,
-			  struct tf_set_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
  *
@@ -1143,266 +986,6 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
-/**
- * Allocate External Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_external(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	uint32_t index;
-	struct tf_session *tfs;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	/* Allocate an element
-	 */
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type);
-		return rc;
-	}
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Allocate Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	int free_cnt;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	id = ba_alloc(session_pool);
-	if (id == -1) {
-		free_cnt = ba_free_count(session_pool);
-
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d, free:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   free_cnt);
-		return -ENOMEM;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_ADD_BASE,
-			    id,
-			    &index);
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Free External Tbl entry to the session pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry freed
- *
- * - Failure, entry not successfully freed for these reasons
- *  -ENOMEM
- *  -EOPNOTSUPP
- *  -EINVAL
- */
-static int
-tf_free_tbl_entry_pool_external(struct tf *tfp,
-				struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_session *tfs;
-	uint32_t index;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	index = parms->idx;
-
-	rc = stack_push(pool, index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, consistency error, stack full, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-	}
-	return rc;
-}
-
-/**
- * Free Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_free_tbl_entry_pool_internal(struct tf *tfp,
-		       struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	int id;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	uint32_t index;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Check if element was indeed allocated */
-	id = ba_inuse_free(session_pool, index);
-	if (id == -1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -ENOMEM;
-	}
-
-	return rc;
-}
-
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
 tbl_scope_cb_find(struct tf_session *session,
@@ -1584,113 +1167,6 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 
 /* API defined in tf_core.h */
 int
-tf_set_tbl_entry(struct tf *tfp,
-		 struct tf_set_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		void *base_addr;
-		uint32_t offset = parms->idx;
-		uint32_t tbl_scope_id;
-
-		session = (struct tf_session *)(tfp->session->core_data);
-
-		tbl_scope_id = parms->tbl_scope_id;
-
-		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			TFP_DRV_LOG(ERR,
-				    "%s, Table scope not allocated\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		/* Get the table scope control block associated with the
-		 * external pool
-		 */
-		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
-
-		if (tbl_scope_cb == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, table scope error\n",
-				    tf_dir_2_str(parms->dir));
-				return -EINVAL;
-		}
-
-		/* External table, implicitly the Action table */
-		base_addr = (void *)(uintptr_t)hcapi_get_table_page(
-			&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE],
-			offset);
-
-		if (base_addr == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Base address lookup failed\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		offset %= TF_EM_PAGE_SIZE;
-		rte_memcpy((char *)base_addr + offset,
-			   parms->data,
-			   parms->data_sz_in_bytes);
-	} else {
-		/* Internal table type processing */
-		rc = tf_set_tbl_entry_internal(tfp, parms);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Set failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-		}
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_get_tbl_entry(struct tf *tfp,
-		 struct tf_get_tbl_entry_parms *parms)
-{
-	int rc = 0;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
-		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
-			    tf_dir_2_str(parms->dir));
-
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_get_tbl_entry_internal(tfp, parms);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s, Get failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
 tf_bulk_get_tbl_entry(struct tf *tfp,
 		 struct tf_bulk_get_tbl_entry_parms *parms)
 {
@@ -1748,92 +1224,6 @@ tf_free_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_entry(struct tf *tfp,
-		   struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	/*
-	 * No shadow copy support for external tables, allocate and return
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
-		/* Entry found and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_entry(struct tf *tfp,
-		  struct tf_free_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/*
-	 * No shadow of external tables so just free the entry
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_free_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_free_tbl_entry_shadow(tfs, parms);
-		/* Entry free'ed and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
-
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-	return rc;
-}
-
-
 static void
 tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
 			struct hcapi_cfa_em_page_tbl *tp_next)
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index b79706f..51f8f07 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -6,13 +6,18 @@
 #include <rte_common.h>
 
 #include "tf_tbl_type.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Table DBs.
  */
-/* static void *tbl_db[TF_DIR_MAX]; */
+static void *tbl_db[TF_DIR_MAX];
 
 /**
  * Table Shadow DBs
@@ -22,7 +27,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -30,29 +35,164 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_bind(struct tf *tfp __rte_unused,
-	    struct tf_tbl_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
 	return 0;
 }
 
 int
 tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; log the error to aid
+	 * debugging of creation cleanup.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Table DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms __rte_unused)
+	     struct tf_tbl_alloc_parms *parms)
 {
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
 	return 0;
 }
 
 int
 tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms __rte_unused)
+	    struct tf_tbl_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
 	return 0;
 }
 
@@ -64,15 +204,107 @@ tf_tbl_alloc_search(struct tf *tfp __rte_unused,
 }
 
 int
-tf_tbl_set(struct tf *tfp __rte_unused,
-	   struct tf_tbl_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
 	return 0;
 }
 
 int
-tf_tbl_get(struct tf *tfp __rte_unused,
-	   struct tf_tbl_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index 11f2aa3..3474489 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -55,7 +55,7 @@ struct tf_tbl_alloc_parms {
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint32_t *idx;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b9dba53..e0fac31 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -38,8 +38,8 @@ static uint8_t init;
 /* static uint8_t shadow_init; */
 
 int
-tf_tcam_bind(struct tf *tfp __rte_unused,
-	     struct tf_tcam_cfg_parms *parms __rte_unused)
+tf_tcam_bind(struct tf *tfp,
+	     struct tf_tcam_cfg_parms *parms)
 {
 	int rc;
 	int i;
@@ -59,8 +59,8 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
-		db_cfg.rm_db = tcam_db[i];
+		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
+		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
@@ -72,11 +72,13 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 
 	init = 1;
 
+	printf("TCAM - initialized\n");
+
 	return 0;
 }
 
 int
-tf_tcam_unbind(struct tf *tfp __rte_unused)
+tf_tcam_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index 4099629..ad8edaf 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -10,32 +10,57 @@
 
 /**
  * Helper function converting direction to text string
+ *
+ * [in] dir
+ *   Receive or transmit direction identifier
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the direction
  */
-const char
-*tf_dir_2_str(enum tf_dir dir);
+const char *tf_dir_2_str(enum tf_dir dir);
 
 /**
  * Helper function converting identifier to text string
+ *
+ * [in] id_type
+ *   Identifier type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the identifier
  */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
+const char *tf_ident_2_str(enum tf_identifier_type id_type);
 
 /**
  * Helper function converting tcam type to text string
+ *
+ * [in] tcam_type
+ *   TCAM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the tcam
  */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+const char *tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
 
 /**
  * Helper function converting tbl type to text string
+ *
+ * [in] tbl_type
+ *   Table type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the table type
  */
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
 
 /**
  * Helper function converting em tbl type to text string
+ *
+ * [in] em_type
+ *   EM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the EM type
  */
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
 #endif /* _TF_UTIL_H_ */
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 3bce3ad..69d1c9a 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -102,13 +102,13 @@ tfp_calloc(struct tfp_calloc_parms *parms)
 				    (parms->nitems * parms->size),
 				    parms->alignment);
 	if (parms->mem_va == NULL) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_va\n");
 		return -ENOMEM;
 	}
 
 	parms->mem_pa = (void *)((uintptr_t)rte_mem_virt2iova(parms->mem_va));
 	if (parms->mem_pa == (void *)((uintptr_t)RTE_BAD_IOVA)) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_pa\n");
 		return -ENOMEM;
 	}
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread
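
The identifier, table and TCAM changes in the patch above all follow the
same two-step pattern against the reworked resource manager: build a
per-direction RM DB from the session resource counts at bind time, then
allocate through the returned opaque handle. Below is a minimal sketch of
that pattern in C, based on the parameter structures shown in the diff;
the example_* names and the source of the count array are illustrative
and not part of the patch.

#include "tf_core.h"
#include "tf_rm_new.h"	/* tf_rm_create_db()/tf_rm_allocate() */

/* Per-direction DB handles for a hypothetical module; the identifier,
 * table and TCAM modules above each keep their own static array.
 */
static void *example_db[TF_DIR_MAX];

static int
example_bind(struct tf *tfp,
	     struct tf_rm_element_cfg *cfg,	/* element layout of the module */
	     uint16_t num_elements,
	     uint16_t *alloc_cnt)	/* requested counts from session open */
{
	struct tf_rm_create_db_parms db_cfg = { 0 };
	int rc;
	int i;

	for (i = 0; i < TF_DIR_MAX; i++) {
		db_cfg.dir = i;
		db_cfg.num_elements = num_elements;
		db_cfg.cfg = cfg;
		db_cfg.alloc_cnt = alloc_cnt;
		db_cfg.rm_db = &example_db[i];	/* handle returned via out pointer */
		rc = tf_rm_create_db(tfp, &db_cfg);
		if (rc)
			return rc;
	}

	return 0;
}

static int
example_alloc(enum tf_dir dir, uint16_t type, uint32_t *id)
{
	struct tf_rm_allocate_parms aparms = { 0 };

	aparms.rm_db = example_db[dir];
	aparms.db_index = type;		/* module-local element index */
	aparms.index = id;		/* base-adjusted index returned here */

	return tf_rm_allocate(&aparms);
}

Error logging and the unbind/free paths are omitted here for brevity; the
real modules additionally guard against double bind via their init flags.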

* [dpdk-dev] [PATCH 19/50] net/bnxt: update identifier with remap support
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (17 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 18/50] net/bnxt: multiple device implementation Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 20/50] net/bnxt: update RM with residual checker Somnath Kotur
                   ` (31 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Michael Wildt <michael.wildt@broadcom.com>

- Add Identifier L2 CTXT Remap to the P4 device and update
  cfa_resource_types.h to provide the new resource type.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
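Because the new REMAP value is inserted immediately after each L2 Context
TCAM value, every subsequent CFA_RESOURCE_TYPE_* constant for a given
device generation shifts up by one, which is why most of this diff is
renumbering. The tf_device_p4.h change listed below is a one-line mapping
update; a rough, hypothetical sketch of what such an entry looks like
(field names come from tf_rm_new.h, while the table name and the
TF_IDENT_TYPE_L2_CTXT index are assumed for illustration and not taken
from the patch):

/* Illustration only - not the actual tf_device_p4.h hunk */
const struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
	[TF_IDENT_TYPE_L2_CTXT] = {
		.cfg_type   = TF_RM_ELEM_CFG_HCAPI,
		/* was CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM (0x15) */
		.hcapi_type = CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP
	},
	/* remaining identifier types unchanged */
};
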
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 110 ++++++++++++++------------
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   2 +-
 2 files changed, 60 insertions(+), 52 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 11e8892..058d8cc 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -20,46 +20,48 @@
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_REMAP   0x1UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x2UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x3UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x4UL
 /* Wildcard TCAM Profile Id */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x5UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x6UL
 /* Meter Profile */
-#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x7UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+#define CFA_RESOURCE_TYPE_P59_METER           0x8UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x9UL
 /* Source Properties TCAM */
-#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0xaUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xbUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xcUL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
-#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
 /* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
 /* Link Aggrigation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
@@ -105,30 +107,32 @@
 #define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
 #define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
@@ -176,26 +180,28 @@
 #define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
 #define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
@@ -243,24 +249,26 @@
 #define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
 #define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 5cd02b2..235d81f 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,7 +12,7 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 20/50] net/bnxt: update RM with residual checker
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (18 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 19/50] net/bnxt: update identifier with remap support Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 21/50] net/bnxt: support two level priority for TCAMs Somnath Kotur
                   ` (30 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Michael Wildt <michael.wildt@broadcom.com>

- Add a residual checker to the TF Host RM as well as new RM APIs. On
  close it scans the DB and checks for any remaining elements. If any
  are found, they are logged and a FW message is sent so that FW can
  scrub those specific resource types (see the sketch below).
- Update the module bind to be aware of the module type, for each of
  the modules.
- Add additional type-to-string utility functions.
- Fix the device naming to comply with the TF naming convention.
- Update the device unbind order to ensure TCAMs get flushed
  first.
- Update the close functionality so that the session is
  closed after the device is unbound.
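
A minimal standalone sketch of the residual-scan idea (not the driver
code itself; the 'res_entry' struct and 'count_residuals' helper are
hypothetical names used only for illustration). It shows how per-pool
in-use counts are condensed into a reduced reservation array that
could then be handed to FW for a flush:

#include <stdint.h>
#include <stdlib.h>

struct res_entry {
	uint16_t type;    /* HCAPI resource type                 */
	uint16_t start;   /* first index reserved to the session */
	uint16_t stride;  /* number of reserved entries          */
};

/*
 * Walk 'num' pools; collect every pool with a non-zero in-use count
 * into a newly allocated, reduced array. Returns the number of
 * residual entries found, or -1 on allocation failure.
 */
int
count_residuals(const uint16_t *inuse, const struct res_entry *all,
		uint16_t num, struct res_entry **out)
{
	uint16_t i, found = 0;

	for (i = 0; i < num; i++)
		if (inuse[i])
			found++;

	*out = NULL;
	if (!found)
		return 0;

	*out = calloc(found, sizeof(**out));
	if (*out == NULL)
		return -1;

	for (i = 0, found = 0; i < num; i++)
		if (inuse[i])
			(*out)[found++] = all[i];

	/* The reduced array is what a flush message would carry. */
	return found;
}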

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device.c     |  53 +++---
 drivers/net/bnxt/tf_core/tf_device.h     |  25 ++-
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   1 -
 drivers/net/bnxt/tf_core/tf_identifier.c |  10 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  67 +++++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |   7 +
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 287 +++++++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  45 ++++-
 drivers/net/bnxt/tf_core/tf_session.c    |  58 ++++---
 drivers/net/bnxt/tf_core/tf_tbl_type.c   |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.h       |   4 +
 drivers/net/bnxt/tf_core/tf_util.c       |  55 ++++--
 drivers/net/bnxt/tf_core/tf_util.h       |  32 ++++
 14 files changed, 561 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index b474e8c..441d0c6 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -10,7 +10,7 @@
 struct tf;
 
 /* Forward declarations */
-static int dev_unbind_p4(struct tf *tfp);
+static int tf_dev_unbind_p4(struct tf *tfp);
 
 /**
  * Device specific bind function, WH+
@@ -32,10 +32,10 @@ static int dev_unbind_p4(struct tf *tfp);
  *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp,
-	    bool shadow_copy,
-	    struct tf_session_resources *resources,
-	    struct tf_dev_info *dev_handle)
+tf_dev_bind_p4(struct tf *tfp,
+	       bool shadow_copy,
+	       struct tf_session_resources *resources,
+	       struct tf_dev_info *dev_handle)
 {
 	int rc;
 	int frc;
@@ -93,7 +93,7 @@ dev_bind_p4(struct tf *tfp,
 
  fail:
 	/* Cleanup of already created modules */
-	frc = dev_unbind_p4(tfp);
+	frc = tf_dev_unbind_p4(tfp);
 	if (frc)
 		return frc;
 
@@ -111,7 +111,7 @@ dev_bind_p4(struct tf *tfp,
  *   - (-EINVAL) on failure.
  */
 static int
-dev_unbind_p4(struct tf *tfp)
+tf_dev_unbind_p4(struct tf *tfp)
 {
 	int rc = 0;
 	bool fail = false;
@@ -119,25 +119,28 @@ dev_unbind_p4(struct tf *tfp)
 	/* Unbind all the support modules. As this is only done on
 	 * close we only report errors as everything has to be cleaned
 	 * up regardless.
+	 *
+	 * In case of residuals TCAMs are cleaned up first as to
+	 * invalidate the pipeline in a clean manner.
 	 */
-	rc = tf_ident_unbind(tfp);
+	rc = tf_tcam_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Identifier\n");
+			    "Device unbind failed, TCAM\n");
 		fail = true;
 	}
 
-	rc = tf_tbl_unbind(tfp);
+	rc = tf_ident_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Table Type\n");
+			    "Device unbind failed, Identifier\n");
 		fail = true;
 	}
 
-	rc = tf_tcam_unbind(tfp);
+	rc = tf_tbl_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, TCAM\n");
+			    "Device unbind failed, Table Type\n");
 		fail = true;
 	}
 
@@ -148,18 +151,18 @@ dev_unbind_p4(struct tf *tfp)
 }
 
 int
-dev_bind(struct tf *tfp __rte_unused,
-	 enum tf_device_type type,
-	 bool shadow_copy,
-	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_handle)
+tf_dev_bind(struct tf *tfp __rte_unused,
+	    enum tf_device_type type,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_bind_p4(tfp,
-				   shadow_copy,
-				   resources,
-				   dev_handle);
+		return tf_dev_bind_p4(tfp,
+				      shadow_copy,
+				      resources,
+				      dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
@@ -168,12 +171,12 @@ dev_bind(struct tf *tfp __rte_unused,
 }
 
 int
-dev_unbind(struct tf *tfp,
-	   struct tf_dev_info *dev_handle)
+tf_dev_unbind(struct tf *tfp,
+	      struct tf_dev_info *dev_handle)
 {
 	switch (dev_handle->type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_unbind_p4(tfp);
+		return tf_dev_unbind_p4(tfp);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c31bf23..c8feac5 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -15,6 +15,17 @@ struct tf;
 struct tf_session;
 
 /**
+ *
+ */
+enum tf_device_module_type {
+	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	TF_DEVICE_MODULE_TYPE_TABLE,
+	TF_DEVICE_MODULE_TYPE_TCAM,
+	TF_DEVICE_MODULE_TYPE_EM,
+	TF_DEVICE_MODULE_TYPE_MAX
+};
+
+/**
  * The Device module provides a general device template. A supported
  * device type should implement one or more of the listed function
  * pointers according to its capabilities.
@@ -60,11 +71,11 @@ struct tf_dev_info {
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_bind(struct tf *tfp,
-	     enum tf_device_type type,
-	     bool shadow_copy,
-	     struct tf_session_resources *resources,
-	     struct tf_dev_info *dev_handle);
+int tf_dev_bind(struct tf *tfp,
+		enum tf_device_type type,
+		bool shadow_copy,
+		struct tf_session_resources *resources,
+		struct tf_dev_info *dev_handle);
 
 /**
  * Device release handles cleanup of the device specific information.
@@ -80,8 +91,8 @@ int dev_bind(struct tf *tfp,
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_unbind(struct tf *tfp,
-	       struct tf_dev_info *dev_handle);
+int tf_dev_unbind(struct tf *tfp,
+		  struct tf_dev_info *dev_handle);
 
 /**
  * Truflow device specific function hooks structure
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 235d81f..411e216 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -77,5 +77,4 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
-
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index ee07a6a..b197bb2 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -39,12 +39,12 @@ tf_ident_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_IDENTIFIER;
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
 		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
@@ -86,8 +86,10 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 		fparms.dir = i;
 		fparms.rm_db = ident_db[i];
 		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "rm free failed on unbind\n");
+		}
 
 		ident_db[i] = NULL;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 5ec2f83..5020433 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1110,6 +1110,69 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	return rc;
 }
 
+int
+tf_msg_session_resc_flush(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_flush_input req = { 0 };
+	struct hwrm_tf_session_resc_flush_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	TF_CHECK_PARMS2(tfp, resv);
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.flush_size = size;
+
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv_data[i].type = tfp_cpu_to_le_32(resv[i].type);
+		resv_data[i].start = tfp_cpu_to_le_16(resv[i].start);
+		resv_data[i].stride = tfp_cpu_to_le_16(resv[i].stride);
+	}
+
+	req.flush_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_FLUSH;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1511,9 +1574,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
-	if (rc != 0)
-		return rc;
+	req.type = parms->type;
 
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index fb635f6..1ff1044 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -182,6 +182,13 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 			      struct tf_rm_resc_entry *resv);
 
 /**
+ * Sends session resource flush request to TF Firmware
+ */
+int tf_msg_session_resc_flush(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_entry *resv);
+/**
  * Sends EM internal insert request to Firmware
  */
 int tf_msg_insert_em_internal_entry(struct tf *tfp,
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 6abf79a..02b4b5c 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -61,6 +61,11 @@ struct tf_rm_new_db {
 	enum tf_dir dir;
 
 	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+
+	/**
 	 * The DB consists of an array of elements
 	 */
 	struct tf_rm_element *db;
@@ -167,6 +172,178 @@ tf_rm_adjust_index(struct tf_rm_element *db,
 	return rc;
 }
 
+/**
+ * Logs an array of found residual entries to the console.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] type
+ *   Type of Device Module
+ *
+ * [in] count
+ *   Number of entries in the residual array
+ *
+ * [in] residuals
+ *   Pointer to an array of residual entries. Array is index same as
+ *   the DB in which this function is used. Each entry holds residual
+ *   value for that entry.
+ */
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
+{
+	int i;
+
+	/* Walk the residual array and log the types that wasn't
+	 * cleaned up to the console.
+	 */
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
+	}
+}
+
+/**
+ * Performs a check of the passed in DB for any lingering elements. If
+ * a resource type was found to not have been cleaned up by the caller
+ * then its residual values are recorded, logged and passed back in an
+ * allocate reservation array that the caller can pass to the FW for
+ * cleanup.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
+ *
+ * [in/out] resv
+ *   Pointer Pointer to a reservation array. The reservation array is
+ *   allocated after the residual scan and holds any found residual
+ *   entries. Thus it can be smaller than the DB that the check was
+ *   performed on. Array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating if residual was present in the
+ *   DB
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
+{
+	int rc;
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
+	}
+
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			return rc;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
+		}
+		*resv_size = found;
+	}
+
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
+
 int
 tf_rm_create_db(struct tf *tfp,
 		struct tf_rm_create_db_parms *parms)
@@ -373,6 +550,7 @@ tf_rm_create_db(struct tf *tfp,
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
@@ -392,20 +570,69 @@ tf_rm_create_db(struct tf *tfp,
 }
 
 int
-tf_rm_free_db(struct tf *tfp __rte_unused,
+tf_rm_free_db(struct tf *tfp,
 	      struct tf_rm_free_db_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
 	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
 
-	TF_CHECK_PARMS1(parms);
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	/* Traverse the DB and clear each pool.
-	 * NOTE:
-	 *   Firmware is not cleared. It will be cleared on close only.
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will cleanup each of
+	 * its support modules, i.e. Identifier, thus we're ending up
+	 * here to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in the 'cleanup checking' the DB is checked for any
+	 * remaining elements and logged if found to be the case.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previous allocated elements to the RM and then close the
+	 * HCAPI RM registration. That then saves several 'free' msgs
+	 * from being required.
 	 */
+
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
+	 */
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to cleanup so we can only
+		 * log that FW failed.
+		 */
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
+	}
+
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -417,7 +644,7 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 int
 tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int id;
 	uint32_t index;
 	struct tf_rm_new_db *rm_db;
@@ -446,11 +673,12 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
+		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
 			    "%s: Allocation failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -ENOMEM;
+		return rc;
 	}
 
 	/* Adjust for any non zero start value */
@@ -475,7 +703,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 int
 tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -521,7 +749,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 int
 tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -565,7 +793,6 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 int
 tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -579,15 +806,16 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
-	parms->info = &rm_db->db[parms->db_index].alloc;
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
-	return rc;
+	return 0;
 }
 
 int
 tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -603,5 +831,36 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
 
+	return 0;
+}
+
+int
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
+{
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail silently (no logging), if the pool is not valid there
+	 * was no elements allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
+	}
+
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
 	return rc;
+
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index ebf38c4..a40296e 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -8,6 +8,7 @@
 
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
 
@@ -57,9 +58,9 @@ struct tf_rm_new_entry {
 enum tf_rm_elem_cfg_type {
 	/** No configuration */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled' */
+	/** HCAPI 'controlled', uses a Pool for internal storage */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled' */
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
 	TF_RM_ELEM_CFG_PRIVATE,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
@@ -123,7 +124,11 @@ struct tf_rm_alloc_info {
  */
 struct tf_rm_create_db_parms {
 	/**
-	 * [in] Receive or transmit direction
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
 	 */
 	enum tf_dir dir;
 	/**
@@ -264,6 +269,25 @@ struct tf_rm_get_hcapi_parms {
 };
 
 /**
+ * Get InUse count parameters for single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
+/**
  * @page rm Resource Manager
  *
  * @ref tf_rm_create_db
@@ -279,6 +303,8 @@ struct tf_rm_get_hcapi_parms {
  * @ref tf_rm_get_info
  *
  * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
 
 /**
@@ -396,4 +422,17 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
+/**
+ * Performs a lookup in the Resource Manager DB and retrives the
+ * requested HCAPI RM type inuse count.
+ *
+ * [in] parms
+ *   Pointer to get inuse parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
+
 #endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index bac9c76..ab9e05f 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -91,11 +91,11 @@ tf_session_open_session(struct tf *tfp,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
-	rc = dev_bind(tfp,
-		      parms->open_cfg->device_type,
-		      session->shadow_copy,
-		      &parms->open_cfg->resources,
-		      &session->dev);
+	rc = tf_dev_bind(tfp,
+			 parms->open_cfg->device_type,
+			 session->shadow_copy,
+			 &parms->open_cfg->resources,
+			 &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
@@ -151,6 +151,8 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	tfs->ref_count--;
+
 	/* Record the session we're closing so the caller knows the
 	 * details.
 	 */
@@ -164,6 +166,32 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	if (tfs->ref_count > 0) {
+		/* In case we're attached only the session client gets
+		 * closed.
+		 */
+		rc = tf_msg_session_close(tfp);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "FW Session close failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		return 0;
+	}
+
+	/* Final cleanup as we're last user of the session */
+
+	/* Unbind the device */
+	rc = tf_dev_unbind(tfp, tfd);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
 	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
@@ -173,23 +201,9 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	tfs->ref_count--;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Unbind the device */
-		rc = dev_unbind(tfp, tfd);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Device unbind failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index 51f8f07..bdf7d20 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -51,11 +51,12 @@ tf_tbl_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
 		db_cfg.rm_db = &tbl_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index e0fac31..2f4441d 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -54,11 +54,12 @@ tf_tcam_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
 		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 67c3bcb..5090dfd 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -147,6 +147,10 @@ struct tf_tcam_set_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
 	 * [in] Entry index to write to
 	 */
 	uint32_t idx;
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index a901054..16c43eb 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -7,8 +7,8 @@
 
 #include "tf_util.h"
 
-const char
-*tf_dir_2_str(enum tf_dir dir)
+const char *
+tf_dir_2_str(enum tf_dir dir)
 {
 	switch (dir) {
 	case TF_DIR_RX:
@@ -20,8 +20,8 @@ const char
 	}
 }
 
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
+const char *
+tf_ident_2_str(enum tf_identifier_type id_type)
 {
 	switch (id_type) {
 	case TF_IDENT_TYPE_L2_CTXT:
@@ -39,8 +39,8 @@ const char
 	}
 }
 
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+const char *
+tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
 {
 	switch (tcam_type) {
 	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
@@ -60,8 +60,8 @@ const char
 	}
 }
 
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+const char *
+tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 {
 	switch (tbl_type) {
 	case TF_TBL_TYPE_FULL_ACT_RECORD:
@@ -131,8 +131,8 @@ const char
 	}
 }
 
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+const char *
+tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
 {
 	switch (em_type) {
 	case TF_EM_TBL_TYPE_EM_RECORD:
@@ -143,3 +143,38 @@ const char
 		return "Invalid EM type";
 	}
 }
+
+const char *
+tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
+				    uint16_t mod_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return tf_ident_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return tf_tcam_tbl_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return tf_em_tbl_type_2_str(mod_type);
+	default:
+		return "Invalid Device Module type";
+	}
+}
+
+const char *
+tf_device_module_type_2_str(enum tf_device_module_type dm_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return "Identifer";
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return "Table";
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return "TCAM";
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return "EM";
+	default:
+		return "Invalid Device Module type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index ad8edaf..c97e2a6 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -7,6 +7,7 @@
 #define _TF_UTIL_H_
 
 #include "tf_core.h"
+#include "tf_device.h"
 
 /**
  * Helper function converting direction to text string
@@ -63,4 +64,35 @@ const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
  */
 const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
+/**
+ * Helper function converting device module type and module type to
+ * text string.
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * [in] mod_type
+ *   Module specific type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the EM type
+ */
+const char *tf_device_module_type_subtype_2_str
+					(enum tf_device_module_type dm_type,
+					 uint16_t mod_type);
+
+/**
+ * Helper function converting device module type to text string
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * [in] mod_type
+ *   Module specific type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the EM type
+ */
+const char *tf_device_module_type_2_str(enum tf_device_module_type dm_type);
+
 #endif /* _TF_UTIL_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 21/50] net/bnxt: support two level priority for TCAMs
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (19 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 20/50] net/bnxt: update RM with residual checker Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 22/50] net/bnxt: support EM and TCAM lookup with table scope Somnath Kotur
                   ` (29 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Jay Ding <jay.ding@broadcom.com>

Allow TCAM indexes to be allocated from either the top or the bottom
of the range. If the priority is set to 0, allocate from the lowest
TCAM indexes, i.e. from the top; for any other value, allocate from
the highest TCAM indexes, i.e. from the bottom.
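
The reverse (highest-index-first) allocation relies on a "find last
set" bit-scan helper. As a minimal sanity-check sketch (the 'fls32'
name is hypothetical and only mirrors the idea), the same value can
be computed portably from the GCC/Clang __builtin_clz builtin:

#include <assert.h>
#include <stdint.h>

/* Last set bit plus 1; 0 for an all-zero word. */
static inline int
fls32(uint32_t v)
{
	return v ? 32 - __builtin_clz(v) : 0;
}

int
main(void)
{
	/* A reverse allocator starts scanning at bit fls32(word) - 1. */
	assert(fls32(0x00000000u) == 0);
	assert(fls32(0x00000001u) == 1);
	assert(fls32(0x00f00000u) == 24);
	assert(fls32(0x80000000u) == 32);
	return 0;
}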

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/bitalloc.c  | 107 +++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/bitalloc.h  |   5 ++
 drivers/net/bnxt/tf_core/tf_rm_new.c |   9 ++-
 drivers/net/bnxt/tf_core/tf_rm_new.h |   8 +++
 drivers/net/bnxt/tf_core/tf_tcam.c   |   1 +
 5 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
index fb4df9a..918cabf 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.c
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -7,6 +7,40 @@
 
 #define BITALLOC_MAX_LEVELS 6
 
+
+/* Finds the last bit set plus 1, equivalent to gcc __builtin_fls */
+static int
+ba_fls(bitalloc_word_t v)
+{
+	int c = 32;
+
+	if (!v)
+		return 0;
+
+	if (!(v & 0xFFFF0000u)) {
+		v <<= 16;
+		c -= 16;
+	}
+	if (!(v & 0xFF000000u)) {
+		v <<= 8;
+		c -= 8;
+	}
+	if (!(v & 0xF0000000u)) {
+		v <<= 4;
+		c -= 4;
+	}
+	if (!(v & 0xC0000000u)) {
+		v <<= 2;
+		c -= 2;
+	}
+	if (!(v & 0x80000000u)) {
+		v <<= 1;
+		c -= 1;
+	}
+
+	return c;
+}
+
 /* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */
 static int
 ba_ffs(bitalloc_word_t v)
@@ -120,6 +154,79 @@ ba_alloc(struct bitalloc *pool)
 	return ba_alloc_helper(pool, 0, 1, 32, 0, &clear);
 }
 
+/**
+ * Help function to alloc entry from highest available index
+ *
+ * Searching the pool from highest index for the empty entry.
+ *
+ * [in] pool
+ *   Pointer to the resource pool
+ *
+ * [in] offset
+ *   Offset of the storage in the pool
+ *
+ * [in] words
+ *   Number of words in this level
+ *
+ * [in] size
+ *   Number of entries in this level
+ *
+ * [in] index
+ *   Index of words that has the entry
+ *
+ * [in] clear
+ *   Indicate if a bit needs to be clear due to the entry is allocated
+ *
+ * Returns:
+ *     0 - Success
+ *    -1 - Failure
+ */
+static int
+ba_alloc_reverse_helper(struct bitalloc *pool,
+			int offset,
+			int words,
+			unsigned int size,
+			int index,
+			int *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int loc = ba_fls(storage[index]);
+	int r;
+
+	if (loc == 0)
+		return -1;
+
+	loc--;
+
+	if (pool->size > size) {
+		r = ba_alloc_reverse_helper(pool,
+					    offset + words + 1,
+					    storage[words],
+					    size * 32,
+					    index * 32 + loc,
+					    clear);
+	} else {
+		r = index * 32 + loc;
+		*clear = 1;
+		pool->free_count--;
+	}
+
+	if (*clear) {
+		storage[index] &= ~(1 << loc);
+		*clear = (storage[index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc_reverse(struct bitalloc *pool)
+{
+	int clear = 0;
+
+	return ba_alloc_reverse_helper(pool, 0, 1, 32, 0, &clear);
+}
+
 static int
 ba_alloc_index_helper(struct bitalloc *pool,
 		      int              offset,
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
index 563c853..2825bb3 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.h
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -73,6 +73,11 @@ int ba_alloc(struct bitalloc *pool);
 int ba_alloc_index(struct bitalloc *pool, int index);
 
 /**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc_reverse(struct bitalloc *pool);
+
+/**
  * Query a particular index in a pool to check if its in use.
  *
  * Returns -1 on invalid index, 1 if the index is allocated, 0 if it
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 02b4b5c..de8f119 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -671,7 +671,14 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 		return rc;
 	}
 
-	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	/*
+	 * priority  0: allocate from top of the tcam i.e. high
+	 * priority !0: allocate index from bottom i.e lowest
+	 */
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index a40296e..5cb6889 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -185,6 +185,14 @@ struct tf_rm_allocate_parms {
 	 * i.e. Full Action Record offsets.
 	 */
 	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the prority of the entry
+	 * priority  0: allocate from top of the tcam (from index 0
+	 *              or lowest available index)
+	 * priority !0: allocate from bottom of the tcam (from highest
+	 *              available index)
+	 */
+	uint32_t priority;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 2f4441d..260fb15 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -157,6 +157,7 @@ tf_tcam_alloc(struct tf *tfp,
 	/* Allocate requested element */
 	aparms.rm_db = tcam_db[parms->dir];
 	aparms.db_index = parms->type;
+	aparms.priority = parms->priority;
 	aparms.index = (uint32_t *)&parms->idx;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 22/50] net/bnxt: support EM and TCAM lookup with table scope
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (20 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 21/50] net/bnxt: support two level priority for TCAMs Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 23/50] net/bnxt: update table get to use new design Somnath Kotur
                   ` (28 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add support for table scope within the EM module.
- Add support for host and system memory backed EM tables (see the
  sketch below).
- Update TCAM set/free.
- Replace the TF device type with the HCAPI RM type.
- Update TCAM set and free to use the HCAPI RM type.
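
The host/system split can be pictured as a per-table-scope backend
selection. The snippet below is only a schematic illustration with
hypothetical names ('em_backend', 'em_insert_entry'); it is not the
driver's actual dispatch code:

#include <errno.h>
#include <stdint.h>

/* Hypothetical memory backend owning an EM table scope. */
enum em_backend {
	EM_BACKEND_INTERNAL,	/* on-chip EM records       */
	EM_BACKEND_HOST,	/* host-memory backed EEM   */
	EM_BACKEND_SYSTEM	/* system-memory backed EEM */
};

struct em_table_scope {
	enum em_backend backend;
	uint32_t tbl_scope_id;
};

/* Route an insert to whichever backend owns the table scope. */
int
em_insert_entry(const struct em_table_scope *ts, const void *key, void *rec)
{
	(void)key;
	(void)rec;

	switch (ts->backend) {
	case EM_BACKEND_INTERNAL:
		return 0;	/* internal EM insert path would run here  */
	case EM_BACKEND_HOST:
		return 0;	/* host-memory EEM insert would run here   */
	case EM_BACKEND_SYSTEM:
		return 0;	/* system-memory EEM insert would run here */
	default:
		return -EINVAL;
	}
}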

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |    5 +-
 drivers/net/bnxt/tf_core/Makefile             |    5 +-
 drivers/net/bnxt/tf_core/cfa_resource_types.h |    8 +-
 drivers/net/bnxt/tf_core/hwrm_tf.h            |  864 +----------------
 drivers/net/bnxt/tf_core/tf_core.c            |  100 +-
 drivers/net/bnxt/tf_core/tf_device.c          |   50 +-
 drivers/net/bnxt/tf_core/tf_device.h          |   86 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |   14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   20 +-
 drivers/net/bnxt/tf_core/tf_em.c              |  360 -------
 drivers/net/bnxt/tf_core/tf_em.h              |  310 +++++-
 drivers/net/bnxt/tf_core/tf_em_common.c       |  281 ++++++
 drivers/net/bnxt/tf_core/tf_em_common.h       |  107 +++
 drivers/net/bnxt/tf_core/tf_em_host.c         | 1149 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_em_internal.c     |  312 +++++++
 drivers/net/bnxt/tf_core/tf_em_system.c       |  118 +++
 drivers/net/bnxt/tf_core/tf_msg.c             | 1247 +++++--------------------
 drivers/net/bnxt/tf_core/tf_msg.h             |  233 +++--
 drivers/net/bnxt/tf_core/tf_rm.c              |   89 +-
 drivers/net/bnxt/tf_core/tf_rm_new.c          |   40 +-
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1133 ----------------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |   39 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |   25 +-
 drivers/net/bnxt/tf_core/tf_tcam.h            |    4 +
 drivers/net/bnxt/tf_core/tf_util.c            |    4 +-
 25 files changed, 3033 insertions(+), 3570 deletions(-)
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 33e6ebd..35038dc 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -28,7 +28,10 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_msg.c',
 	'tf_core/rand.c',
 	'tf_core/stack.c',
-	'tf_core/tf_em.c',
+        'tf_core/tf_em_common.c',
+        'tf_core/tf_em_host.c',
+        'tf_core/tf_em_internal.c',
+        'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index ecd5aac..6ae5c34 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -12,8 +12,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 058d8cc..6e79fac 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -202,7 +202,9 @@
 #define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
 #define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
@@ -269,7 +271,9 @@
 #define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
 #define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 1e78296..26836e4 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -13,20 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_SESSION_ATTACH = 712,
-	HWRM_TFT_SESSION_HW_RESC_QCAPS = 721,
-	HWRM_TFT_SESSION_HW_RESC_ALLOC = 722,
-	HWRM_TFT_SESSION_HW_RESC_FREE = 723,
-	HWRM_TFT_SESSION_HW_RESC_FLUSH = 724,
-	HWRM_TFT_SESSION_SRAM_RESC_QCAPS = 725,
-	HWRM_TFT_SESSION_SRAM_RESC_ALLOC = 726,
-	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
-	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
-	HWRM_TFT_TBL_SCOPE_CFG = 731,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
-	HWRM_TFT_TBL_TYPE_SET = 823,
-	HWRM_TFT_TBL_TYPE_GET = 824,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
@@ -66,858 +54,8 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
-struct tf_session_attach_input;
-struct tf_session_hw_resc_qcaps_input;
-struct tf_session_hw_resc_qcaps_output;
-struct tf_session_hw_resc_alloc_input;
-struct tf_session_hw_resc_alloc_output;
-struct tf_session_hw_resc_free_input;
-struct tf_session_hw_resc_flush_input;
-struct tf_session_sram_resc_qcaps_input;
-struct tf_session_sram_resc_qcaps_output;
-struct tf_session_sram_resc_alloc_input;
-struct tf_session_sram_resc_alloc_output;
-struct tf_session_sram_resc_free_input;
-struct tf_session_sram_resc_flush_input;
-struct tf_tbl_type_set_input;
-struct tf_tbl_type_get_input;
-struct tf_tbl_type_get_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
-/* Input params for session attach */
-typedef struct tf_session_attach_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* Session Name */
-	char				 session_name[TF_SESSION_NAME_MAX];
-} tf_session_attach_input_t, *ptf_session_attach_input_t;
-
-/* Input params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_hw_resc_qcaps_input_t, *ptf_session_hw_resc_qcaps_input_t;
-
-/* Output params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_output {
-	/* Control Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Unused */
-	uint8_t			  unused[4];
-	/* Minimum guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_min;
-	/* Maximum non-guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_max;
-	/* Minimum guaranteed number of profile functions */
-	uint16_t			 prof_func_min;
-	/* Maximum non-guaranteed number of profile functions */
-	uint16_t			 prof_func_max;
-	/* Minimum guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_min;
-	/* Maximum non-guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_max;
-	/* Minimum guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_min;
-	/* Maximum non-guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_max;
-	/* Minimum guaranteed number of EM records entries */
-	uint16_t			 em_record_entries_min;
-	/* Maximum non-guaranteed number of EM record entries */
-	uint16_t			 em_record_entries_max;
-	/* Minimum guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_min;
-	/* Maximum non-guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_max;
-	/* Minimum guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_min;
-	/* Maximum non-guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_max;
-	/* Minimum guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_min;
-	/* Maximum non-guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_max;
-	/* Minimum guaranteed number of meter instances */
-	uint16_t			 meter_inst_min;
-	/* Maximum non-guaranteed number of meter instances */
-	uint16_t			 meter_inst_max;
-	/* Minimum guaranteed number of mirrors */
-	uint16_t			 mirrors_min;
-	/* Maximum non-guaranteed number of mirrors */
-	uint16_t			 mirrors_max;
-	/* Minimum guaranteed number of UPAR */
-	uint16_t			 upar_min;
-	/* Maximum non-guaranteed number of UPAR */
-	uint16_t			 upar_max;
-	/* Minimum guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_min;
-	/* Maximum non-guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_max;
-	/* Minimum guaranteed number of L2 Functions */
-	uint16_t			 l2_func_min;
-	/* Maximum non-guaranteed number of L2 Functions */
-	uint16_t			 l2_func_max;
-	/* Minimum guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_min;
-	/* Maximum non-guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_max;
-	/* Minimum guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_min;
-	/* Maximum non-guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_max;
-	/* Minimum guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_min;
-	/* Maximum non-guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_max;
-	/* Minimum guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_min;
-	/* Maximum non-guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_max;
-	/* Minimum guaranteed number of metadata */
-	uint16_t			 metadata_min;
-	/* Maximum non-guaranteed number of metadata */
-	uint16_t			 metadata_max;
-	/* Minimum guaranteed number of CT states */
-	uint16_t			 ct_state_min;
-	/* Maximum non-guaranteed number of CT states */
-	uint16_t			 ct_state_max;
-	/* Minimum guaranteed number of range profiles */
-	uint16_t			 range_prof_min;
-	/* Maximum non-guaranteed number range profiles */
-	uint16_t			 range_prof_max;
-	/* Minimum guaranteed number of range entries */
-	uint16_t			 range_entries_min;
-	/* Maximum non-guaranteed number of range entries */
-	uint16_t			 range_entries_max;
-	/* Minimum guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_min;
-	/* Maximum non-guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_max;
-} tf_session_hw_resc_qcaps_output_t, *ptf_session_hw_resc_qcaps_output_t;
-
-/* Input params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of L2 CTX TCAM entries to be allocated */
-	uint16_t			 num_l2_ctx_tcam_entries;
-	/* Number of profile functions to be allocated */
-	uint16_t			 num_prof_func_entries;
-	/* Number of profile TCAM entries to be allocated */
-	uint16_t			 num_prof_tcam_entries;
-	/* Number of EM profile ids to be allocated */
-	uint16_t			 num_em_prof_id;
-	/* Number of EM records entries to be allocated */
-	uint16_t			 num_em_record_entries;
-	/* Number of WC profiles ids to be allocated */
-	uint16_t			 num_wc_tcam_prof_id;
-	/* Number of WC TCAM entries to be allocated */
-	uint16_t			 num_wc_tcam_entries;
-	/* Number of meter profiles to be allocated */
-	uint16_t			 num_meter_profiles;
-	/* Number of meter instances to be allocated */
-	uint16_t			 num_meter_inst;
-	/* Number of mirrors to be allocated */
-	uint16_t			 num_mirrors;
-	/* Number of UPAR to be allocated */
-	uint16_t			 num_upar;
-	/* Number of SP TCAM entries to be allocated */
-	uint16_t			 num_sp_tcam_entries;
-	/* Number of L2 functions to be allocated */
-	uint16_t			 num_l2_func;
-	/* Number of flexible key templates to be allocated */
-	uint16_t			 num_flex_key_templ;
-	/* Number of table scopes to be allocated */
-	uint16_t			 num_tbl_scope;
-	/* Number of epoch0 entries to be allocated */
-	uint16_t			 num_epoch0_entries;
-	/* Number of epoch1 entries to be allocated */
-	uint16_t			 num_epoch1_entries;
-	/* Number of metadata to be allocated */
-	uint16_t			 num_metadata;
-	/* Number of CT states to be allocated */
-	uint16_t			 num_ct_state;
-	/* Number of range profiles to be allocated */
-	uint16_t			 num_range_prof;
-	/* Number of range Entries to be allocated */
-	uint16_t			 num_range_entries;
-	/* Number of LAG table entries to be allocated */
-	uint16_t			 num_lag_tbl_entries;
-} tf_session_hw_resc_alloc_input_t, *ptf_session_hw_resc_alloc_input_t;
-
-/* Output params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_output {
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_alloc_output_t, *ptf_session_hw_resc_alloc_output_t;
-
-/* Input params for session resource HW free */
-typedef struct tf_session_hw_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_free_input_t, *ptf_session_hw_resc_free_input_t;
-
-/* Input params for session resource HW flush */
-typedef struct tf_session_hw_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_flush_input_t, *ptf_session_hw_resc_flush_input_t;
-
-/* Input params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_sram_resc_qcaps_input_t, *ptf_session_sram_resc_qcaps_input_t;
-
-/* Output params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_output {
-	/* Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Minimum guaranteed number of Full Action */
-	uint16_t			 full_action_min;
-	/* Maximum non-guaranteed number of Full Action */
-	uint16_t			 full_action_max;
-	/* Minimum guaranteed number of MCG */
-	uint16_t			 mcg_min;
-	/* Maximum non-guaranteed number of MCG */
-	uint16_t			 mcg_max;
-	/* Minimum guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_min;
-	/* Maximum non-guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_max;
-	/* Minimum guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_min;
-	/* Maximum non-guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_max;
-	/* Minimum guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_min;
-	/* Maximum non-guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_max;
-	/* Minimum guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_min;
-	/* Maximum non-guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_max;
-	/* Minimum guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_max;
-	/* Minimum guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_max;
-	/* Minimum guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_min;
-	/* Maximum non-guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_max;
-	/* Minimum guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_min;
-	/* Maximum non-guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_max;
-	/* Minimum guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_min;
-	/* Maximum non-guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_max;
-	/* Minimum guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_min;
-	/* Maximum non-guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_max;
-	/* Minimum guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_min;
-	/* Maximum non-guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_max;
-} tf_session_sram_resc_qcaps_output_t, *ptf_session_sram_resc_qcaps_output_t;
-
-/* Input params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_input {
-	/* FW Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of full action SRAM entries to be allocated */
-	uint16_t			 num_full_action;
-	/* Number of multicast groups to be allocated */
-	uint16_t			 num_mcg;
-	/* Number of Encap 8B entries to be allocated */
-	uint16_t			 num_encap_8b;
-	/* Number of Encap 16B entries to be allocated */
-	uint16_t			 num_encap_16b;
-	/* Number of Encap 64B entries to be allocated */
-	uint16_t			 num_encap_64b;
-	/* Number of SP SMAC entries to be allocated */
-	uint16_t			 num_sp_smac;
-	/* Number of SP SMAC IPv4 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv4;
-	/* Number of SP SMAC IPv6 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv6;
-	/* Number of Counter 64B entries to be allocated */
-	uint16_t			 num_counter_64b;
-	/* Number of NAT source ports to be allocated */
-	uint16_t			 num_nat_sport;
-	/* Number of NAT destination ports to be allocated */
-	uint16_t			 num_nat_dport;
-	/* Number of NAT source iPV4 addresses to be allocated */
-	uint16_t			 num_nat_s_ipv4;
-	/* Number of NAT destination IPV4 addresses to be allocated */
-	uint16_t			 num_nat_d_ipv4;
-} tf_session_sram_resc_alloc_input_t, *ptf_session_sram_resc_alloc_input_t;
-
-/* Output params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_output {
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_alloc_output_t, *ptf_session_sram_resc_alloc_output_t;
-
-/* Input params for session resource SRAM free */
-typedef struct tf_session_sram_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_free_input_t, *ptf_session_sram_resc_free_input_t;
-
-/* Input params for session resource SRAM flush */
-typedef struct tf_session_sram_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
-
-/* Input params for table type set */
-typedef struct tf_tbl_type_set_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Size of the data to set in bytes */
-	uint16_t			 size;
-	/* Data to set */
-	uint8_t			  data[TF_BULK_SEND];
-	/* Index to set */
-	uint32_t			 index;
-} tf_tbl_type_set_input_t, *ptf_tbl_type_set_input_t;
-
-/* Input params for table type get */
-typedef struct tf_tbl_type_get_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Index to get */
-	uint32_t			 index;
-} tf_tbl_type_get_input_t, *ptf_tbl_type_get_input_t;
-
-/* Output params for table type get */
-typedef struct tf_tbl_type_get_output {
-	/* Size of the data read in bytes */
-	uint16_t			 size;
-	/* Data read */
-	uint8_t			  data[TF_BULK_RECV];
-} tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 3e23d05..8b3e15c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -208,7 +208,15 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL &&
+		dev->ops->tf_dev_insert_ext_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL &&
+		dev->ops->tf_dev_insert_int_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM insert failed, rc:%s\n",
@@ -217,7 +225,7 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	return -EINVAL;
+	return 0;
 }
 
 /** Delete EM hash entry API
@@ -255,7 +263,13 @@ int tf_delete_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		rc = dev->ops->tf_dev_delete_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		rc = dev->ops->tf_dev_delete_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM delete failed, rc:%s\n",
@@ -806,3 +820,83 @@ tf_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl_scope != NULL) {
+		rc = dev->ops->tf_dev_alloc_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Alloc table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl_scope) {
+		rc = dev->ops->tf_dev_free_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Free table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
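
For illustration, a minimal caller sketch of the new per-memory-type dispatch
(not part of the patch; the function name is hypothetical, only parms members
referenced elsewhere in this series are set, and their exact types are
assumptions):

/* Insert an internal EM entry, then delete it again.  With
 * parms.mem == TF_MEM_INTERNAL the core now routes the calls to the
 * tf_dev_insert_int_em_entry/tf_dev_delete_int_em_entry hooks; with
 * TF_MEM_EXTERNAL it routes to the new ext hooks instead, and any
 * other value is rejected with -EINVAL.
 */
static int em_int_entry_example(struct tf *tfp)
{
	struct tf_insert_em_entry_parms iparms = { 0 };
	struct tf_delete_em_entry_parms dparms = { 0 };
	int rc;

	iparms.dir = TF_DIR_RX;
	iparms.mem = TF_MEM_INTERNAL;
	/* key and em_record setup elided */

	rc = tf_insert_em_entry(tfp, &iparms);
	if (rc)
		return rc;

	/* The insert path fills in flow_handle, which identifies the
	 * entry on delete.
	 */
	dparms.dir = TF_DIR_RX;
	dparms.mem = TF_MEM_INTERNAL;
	dparms.flow_handle = iparms.flow_handle;

	return tf_delete_em_entry(tfp, &dparms);
}
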
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 441d0c6..20b0c59 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,6 +6,7 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
+#include "tf_em.h"
 
 struct tf;
 
@@ -42,10 +43,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_ident_cfg_parms ident_cfg;
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
-
-	dev_handle->type = TF_DEVICE_TYPE_WH;
-	/* Initial function initialization */
-	dev_handle->ops = &tf_dev_ops_p4_init;
+	struct tf_em_cfg_parms em_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -86,6 +84,36 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * EEM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_ext_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
+
+	rc = tf_em_ext_common_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EEM initialization failure\n");
+		goto fail;
+	}
+
+	/*
+	 * EM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_int_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = 0; /* Not used by EM */
+
+	rc = tf_em_int_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EM initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -144,6 +172,20 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_em_ext_common_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EEM\n");
+		fail = true;
+	}
+
+	rc = tf_em_int_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EM\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
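
The same common EEM bind is intended to cover system memory as well; a minimal
sketch of that variant, reusing the rc, tfp, resources and em_cfg locals of
tf_dev_bind_p4() (an assumption based on the tf_mem_type enum added in
tf_em.h, not something this patch wires up):

	/* Only mem_type changes for system-memory EEM; cfg and
	 * resources are configured exactly as in the host case above.
	 */
	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
	em_cfg.cfg = tf_em_ext_p4;
	em_cfg.resources = resources;
	em_cfg.mem_type = TF_EEM_MEM_TYPE_SYSTEM;

	rc = tf_em_ext_common_bind(tfp, &em_cfg);
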
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c8feac5..2712d10 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -15,12 +15,24 @@ struct tf;
 struct tf_session;
 
 /**
- *
+ * Device module types
  */
 enum tf_device_module_type {
+	/**
+	 * Identifier module
+	 */
 	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	/**
+	 * Table type module
+	 */
 	TF_DEVICE_MODULE_TYPE_TABLE,
+	/**
+	 * TCAM module
+	 */
 	TF_DEVICE_MODULE_TYPE_TCAM,
+	/**
+	 * EM module
+	 */
 	TF_DEVICE_MODULE_TYPE_EM,
 	TF_DEVICE_MODULE_TYPE_MAX
 };
@@ -395,8 +407,8 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_insert_em_entry)(struct tf *tfp,
-				      struct tf_insert_em_entry_parms *parms);
+	int (*tf_dev_insert_int_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
 
 	/**
 	 * Delete EM hash entry API
@@ -411,8 +423,72 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_delete_em_entry)(struct tf *tfp,
-				      struct tf_delete_em_entry_parms *parms);
+	int (*tf_dev_delete_int_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Insert EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_ext_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM delete parameters
+	 *
+	 *    Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_ext_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Allocate EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope alloc parameters
+	 *
+	 *    Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_alloc_tbl_scope)(struct tf *tfp,
+				      struct tf_alloc_tbl_scope_parms *parms);
+
+	/**
+	 * Free EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope free parameters
+	 *
+	 *    Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
+				     struct tf_free_tbl_scope_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9e332c5..127c655 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -93,6 +93,12 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = NULL,
 	.tf_dev_get_tcam = NULL,
+	.tf_dev_insert_int_em_entry = NULL,
+	.tf_dev_delete_int_em_entry = NULL,
+	.tf_dev_insert_ext_em_entry = NULL,
+	.tf_dev_delete_ext_em_entry = NULL,
+	.tf_dev_alloc_tbl_scope = NULL,
+	.tf_dev_free_tbl_scope = NULL,
 };
 
 /**
@@ -113,6 +119,10 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = NULL,
-	.tf_dev_insert_em_entry = tf_em_insert_entry,
-	.tf_dev_delete_em_entry = tf_em_delete_entry,
+	.tf_dev_insert_int_em_entry = tf_em_insert_int_entry,
+	.tf_dev_delete_int_em_entry = tf_em_delete_int_entry,
+	.tf_dev_insert_ext_em_entry = tf_em_insert_ext_entry,
+	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
+	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
+	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 411e216..da6dd65 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -36,13 +36,12 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
@@ -77,4 +76,17 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
+
+struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
+	/* CFA_RESOURCE_TYPE_P4_EM_REC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+};
+
+struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_REC },
+	/* CFA_RESOURCE_TYPE_P4_TBL_SCOPE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
deleted file mode 100644
index 7b430fa..0000000
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ /dev/null
@@ -1,360 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-#include <rte_common.h>
-#include <rte_errno.h>
-#include <rte_log.h>
-
-#include "tf_core.h"
-#include "tf_em.h"
-#include "tf_msg.h"
-#include "tfp.h"
-#include "lookup3.h"
-#include "tf_ext_flow_handle.h"
-
-#include "bnxt.h"
-
-
-static uint32_t tf_em_get_key_mask(int num_entries)
-{
-	uint32_t mask = num_entries - 1;
-
-	if (num_entries & 0x7FFF)
-		return 0;
-
-	if (num_entries > (128 * 1024 * 1024))
-		return 0;
-
-	return mask;
-}
-
-static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
-				   uint8_t	       *in_key,
-				   struct cfa_p4_eem_64b_entry *key_entry)
-{
-	key_entry->hdr.word1 = result->word1;
-
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
-
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
-#endif
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
-			       struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t	   mask;
-	uint32_t	   key0_hash;
-	uint32_t	   key1_hash;
-	uint32_t	   key0_index;
-	uint32_t	   key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t	   index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t	   gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 =
-		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 =
-			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/**
- * Insert EM internal entry API
- *
- *  returns:
- *     0 - Success
- */
-static int tf_insert_em_internal_entry(struct tf                       *tfp,
-				       struct tf_insert_em_entry_parms *parms)
-{
-	int       rc;
-	uint32_t  gfid;
-	uint16_t  rptr_index = 0;
-	uint8_t   rptr_entry = 0;
-	uint8_t   num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-	uint32_t index;
-
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
-		return rc;
-	}
-
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
-	rc = tf_msg_insert_em_internal_entry(tfp,
-					     parms,
-					     &rptr_index,
-					     &rptr_entry,
-					     &num_of_entries);
-	if (rc != 0)
-		return -1;
-
-	PMD_DRV_LOG(
-		   ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
-		   rptr_index,
-		   rptr_entry,
-		   num_of_entries);
-
-	TF_SET_GFID(gfid,
-		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
-		     rptr_entry),
-		    0); /* N/A for internal table */
-
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_INTERNAL,
-		       parms->dir);
-
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     num_of_entries,
-				     0,
-				     0,
-				     rptr_index,
-				     rptr_entry,
-				     0);
-	return 0;
-}
-
-/** Delete EM internal entry API
- *
- * returns:
- * 0
- * -EINVAL
- */
-static int tf_delete_em_internal_entry(struct tf                       *tfp,
-				       struct tf_delete_em_entry_parms *parms)
-{
-	int rc;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-
-	rc = tf_msg_delete_em_entry(tfp, parms);
-
-	/* Return resource to pool */
-	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
-
-	return rc;
-}
-
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 =
-		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[
-			(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL)
-		/* External EEM */
-		return tf_insert_eem_entry
-			(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
-
-	return -EINVAL;
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		return tf_delete_em_internal_entry(tfp, parms);
-
-	return -EINVAL;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 2262ae7..cf799c2 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
@@ -19,6 +20,9 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+#define TF_EM_MAX_MASK 0x7FFF
+#define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
+
 /*
  * Used to build GFID:
  *
@@ -44,6 +48,47 @@ struct tf_em_64b_entry {
 	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
+/** EEM Memory Type
+ *
+ */
+enum tf_mem_type {
+	TF_EEM_MEM_TYPE_INVALID,
+	TF_EEM_MEM_TYPE_HOST,
+	TF_EEM_MEM_TYPE_SYSTEM
+};
+
+/**
+ * tf_em_cfg_parms definition
+ */
+struct tf_em_cfg_parms {
+	/**
+	 * [in] Num entries in resource config
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Resource config
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+	/**
+	 * [in] Memory type.
+	 */
+	enum tf_mem_type mem_type;
+};
+
+/**
+ * @page table Table
+ *
+ * @ref tf_alloc_eem_tbl_scope
+ *
+ * @ref tf_free_eem_tbl_scope_cb
+ *
+ * @ref tbl_scope_cb_find
+ */
+
 /**
  * Allocates EEM Table scope
  *
@@ -78,29 +123,258 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 			     struct tf_free_tbl_scope_parms *parms);
 
 /**
- * Function to search for table scope control block structure
- * with specified table scope ID.
+ * Insert record into internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_int_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_int_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
+
+/**
+ * Insert record into external EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external EEM table
  *
- * [in] session
- *   Session to use for the search of the table scope control block
- * [in] tbl_scope_id
- *   Table scope ID to search for
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
  *
  * Returns:
- *  Pointer to the found table scope control block struct or NULL if
- *  table scope control block struct not found
+ *   0       - Success
+ *   -EINVAL - Parameter error
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+int tf_em_delete_ext_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
 
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum hcapi_cfa_em_table_type table_type);
+/**
+ * Insert record into external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_sys_entry(struct tf *tfp,
+			       struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_ext_sys_entry(struct tf *tfp,
+			       struct tf_delete_em_entry_parms *parms);
 
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms);
+/**
+ * Bind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_bind(struct tf *tfp,
+		   struct tf_em_cfg_parms *parms);
 
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms);
+/**
+ * Unbind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_unbind(struct tf *tfp);
+
+/**
+ * Common bind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_bind(struct tf *tfp,
+			  struct tf_em_cfg_parms *parms);
+
+/**
+ * Common unbind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_unbind(struct tf *tfp);
+
+/**
+ * Alloc for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_alloc(struct tf *tfp,
+			 struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_free(struct tf *tfp,
+			struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Alloc for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_alloc(struct tf *tfp,
+			 struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_free(struct tf *tfp,
+			struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common free for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_free(struct tf *tfp,
+			  struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common alloc for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_alloc(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
 #endif /* _TF_EM_H_ */
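
The legal EEM table sizes implied by TF_EM_MAX_MASK and TF_EM_MAX_ENTRY are
easiest to see with a concrete value; a worked example against the
tf_em_get_key_mask() helper that moves into tf_em_common.c below (the size
used here is purely illustrative):

	/* 1M entries: the low 15 bits are clear and the size is below
	 * the 128M cap, so the helper returns num_entries - 1 and each
	 * 32-bit half of the 64-bit key hash is masked with 0xFFFFF to
	 * pick the KEY0/KEY1 bucket index.  A size that is not a
	 * multiple of 32K (or is above 128M) yields a mask of 0, and
	 * the EEM insert then fails with -EINVAL.
	 */
	uint32_t mask = tf_em_get_key_mask(1024 * 1024);	/* 0x000fffff */
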
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
new file mode 100644
index 0000000..ba6aa7a
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -0,0 +1,281 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_device.h"
+#include "tf_ext_flow_handle.h"
+#include "cfa_resource_types.h"
+
+#include "bnxt.h"
+
+
+/**
+ * EEM DBs, one per direction.
+ */
+void *eem_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Host or system
+ */
+static enum tf_mem_type mem_type;
+
+/* API defined in tf_em.h */
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+	struct tf_rm_is_allocated_parms parms;
+	int allocated;
+
+	/* Check that id is valid */
+	parms.rm_db = eem_db[TF_DIR_RX];
+	parms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.allocated = &allocated;
+
+	i = tf_rm_is_allocated(&parms);
+
+	if (i < 0 || !allocated)
+		return NULL;
+
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+int
+tf_create_tbl_pool_external(enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i;
+	int32_t j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the allocated memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
+
+	/* Fill pool with indexes in reverse
+	 */
+	j = (num_entries - 1) * entry_sz_bytes;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+				    tf_dir_2_str(dir), strerror(-rc));
+			goto cleanup;
+		}
+
+		if (j < 0) {
+			rc = -EINVAL;
+			TFP_DRV_LOG(ERR, "%s TBL: invalid offset (%d)\n",
+				    tf_dir_2_str(dir), j);
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
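+
+/*
+ * Illustrative walk-through of the reverse fill above (editorial note, not
+ * part of the driver logic): with num_entries = 4 and entry_sz_bytes = 8
+ * the loop pushes the byte offsets 24, 16, 8, 0.  Because the stack is
+ * LIFO, the first stack_pop() returns offset 0, so external action record
+ * offsets are handed out in ascending order:
+ *
+ *   push order: 24, 16, 8, 0   ->   pop order: 0, 8, 16, 24
+ */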
+
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ */
+void
+tf_destroy_tbl_pool_external(enum tf_dir dir,
+			     struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_act_pool_mem =
+		tbl_scope_cb->ext_act_pool_mem[dir];
+
+	tfp_free(ext_act_pool_mem);
+}
+
+uint32_t
+tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
+	if (num_entries & TF_EM_MAX_MASK)
+		return 0;
+
+	if (num_entries > TF_EM_MAX_ENTRY)
+		return 0;
+
+	return mask;
+}
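+
+/*
+ * Example (illustrative only, values assumed): for a supported power-of-two
+ * table size such as 512K entries, tf_em_get_key_mask() returns
+ * num_entries - 1 = 0x7FFFF.  Callers such as tf_insert_eem_entry() AND
+ * this mask with the 32-bit key hashes to keep the bucket index inside the
+ * table; an unsupported size yields a mask of 0 and the insert is rejected
+ * with -EINVAL.
+ */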
+
+void
+tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+		       uint8_t *in_key,
+		       struct cfa_p4_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
+		key_entry->hdr.pointer = result->pointer;
+	else
+		key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
+}
+
+int
+tf_em_ext_common_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+		db_cfg.rm_db = &eem_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	mem_type = parms->mem_type;
+	init = 1;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; this allows unbind
+	 * to be called safely during creation cleanup.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = eem_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		eem_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_alloc(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_alloc(tfp, parms);
+	else
+		return tf_em_ext_system_alloc(tfp, parms);
+}
+
+int
+tf_em_ext_common_free(struct tf *tfp,
+		      struct tf_free_tbl_scope_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_free(tfp, parms);
+	else
+		return tf_em_ext_system_free(tfp, parms);
+}
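+
+/*
+ * Usage sketch (hypothetical, not part of this patch): callers use only the
+ * common entry points; the host/system split is decided by the mem_type
+ * latched in tf_em_ext_common_bind().
+ *
+ *   struct tf_alloc_tbl_scope_parms aparms = { 0 };
+ *   struct tf_free_tbl_scope_parms fparms = { 0 };
+ *
+ *   rc = tf_em_ext_common_alloc(tfp, &aparms);  // host or system path
+ *   ...
+ *   fparms.tbl_scope_id = aparms.tbl_scope_id;
+ *   rc = tf_em_ext_common_free(tfp, &fparms);
+ */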
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
new file mode 100644
index 0000000..45699a7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_COMMON_H_
+#define _TF_EM_COMMON_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *   table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+/**
+ * Create and initialize a stack to use for action entries
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_cb
+ *   Pointer to the table scope control block
+ * [in] num_entries
+ *   Number of EEM entries
+ * [in] entry_sz_bytes
+ *   Size of the entry
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ *   -EINVAL - Failure
+ */
+int tf_create_tbl_pool_external(enum tf_dir dir,
+				struct tf_tbl_scope_cb *tbl_scope_cb,
+				uint32_t num_entries,
+				uint32_t entry_sz_bytes);
+
+/**
+ * Delete and cleanup action record allocation stack
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_cb
+ *   Pointer to the table scope control block
+ *
+ */
+void tf_destroy_tbl_pool_external(enum tf_dir dir,
+				  struct tf_tbl_scope_cb *tbl_scope_cb);
+
+/**
+ * Get hash mask for current EEM table size
+ *
+ * [in] num_entries
+ *   Number of EEM entries
+ *
+ * Returns:
+ *   Hash mask (num_entries - 1), or 0 if num_entries is not a
+ *   supported table size
+ */
+uint32_t tf_em_get_key_mask(int num_entries);
+
+/**
+ * Populate key_entry
+ *
+ * [in] result
+ *   Entry data
+ * [in] in_key
+ *   Key data
+ * [out] key_entry
+ *   Completed key record
+ */
+void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+			    uint8_t	       *in_key,
+			    struct cfa_p4_eem_64b_entry *key_entry);
+
+/**
+ * Find base page address for offset into specified table type
+ *
+ * [in] tbl_scope_cb
+ *   Table scope
+ * [in] dir
+ *   Direction
+ * [in] offset
+ *   Offset into the table
+ * [in] table_type
+ *   Table type
+ *
+ * Returns:
+ *
+ * NULL                              - Failure
+ * Void pointer to page base address - Success
+ */
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum hcapi_cfa_em_table_type table_type);
+
+#endif /* _TF_EM_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
new file mode 100644
index 0000000..11899c6
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -0,0 +1,1149 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+#define PTU_PTE_VALID          0x1UL
+#define PTU_PTE_LAST           0x2UL
+#define PTU_PTE_NEXT_TO_LAST   0x4UL
+
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
+
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
+/**
+ * EM DBs.
+ */
+extern void *eem_db[TF_DIR_MAX];
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
+				    i,
+				    (uint64_t)(uintptr_t)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+		TFP_DRV_LOG(INFO,
+			    "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			    TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocation of page tables
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			TFP_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(void *)(uintptr_t)tp->pg_va_tbl[j],
+				(void *)(uintptr_t)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
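+
+/*
+ * Illustrative PTE layout (editorial example): linking a level whose next
+ * level has three pages, with set_pte_last == true, produces
+ *
+ *   pte[0] = pa[0] | PTU_PTE_VALID
+ *   pte[1] = pa[1] | PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID
+ *   pte[2] = pa[2] | PTU_PTE_LAST | PTU_PTE_VALID
+ *
+ * presumably so the PTU can detect the end of the chain while walking the
+ * table.
+ */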
+
+/**
+ * Setup a EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
+	bool set_pte_last = 0;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = 1;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
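+
+/*
+ * Worked example (assuming a 4KB TF_EM_PAGE_SIZE and 8-byte pointers, so
+ * MAX_PAGE_PTRS(4KB) = 512): a level-0 page holds 4KB of data, level 1
+ * covers 512 * 4KB = 2MB and level 2 covers 512 * 512 * 4KB = 1GB.  For
+ * 16M entries of 16B each (256MB of data) the loop stops at TF_PT_LVL_2
+ * and *num_data_pages = 256MB / 4KB = 65536.
+ */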
+
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
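+
+/*
+ * Continuing the example above (illustrative, 4KB pages / 512 pointers per
+ * page): with max_lvl = TF_PT_LVL_2 and 65536 data pages,
+ *
+ *   page_cnt[TF_PT_LVL_2] = 65536
+ *   page_cnt[TF_PT_LVL_1] = roundup(65536, 512) / 512 = 128
+ *   page_cnt[TF_PT_LVL_0] = roundup(128, 512) / 512   = 1
+ */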
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   -EINVAL  - Parameter error
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int rc = 0;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+				    "%u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if ((parms->rx_num_flows_in_k != 0) &&
+	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if ((parms->tx_num_flows_in_k != 0) &&
+	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries
+		= parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size
+		= parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries
+		= 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries
+		= parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size
+		= parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries
+		= 0;
+
+	return 0;
+}
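+
+/*
+ * Worked example for the memory-size path above (all parameter values are
+ * hypothetical): with rx_mem_size_in_mb = 64, rx_max_key_sz_in_bits = 448
+ * and rx_max_action_entry_sz_in_bits = 512,
+ *
+ *   key_b       = 2 * (448 / 8 + 1) = 114
+ *   action_b    = 512 / 8 + 1       = 65
+ *   num_entries = 64MB / 179        ~ 374909
+ *
+ * The loop then rounds up to the next supported power of two, 524288
+ * entries, and reports rx_num_flows_in_k = 512.
+ */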
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 =
+			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
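+
+/*
+ * Illustrative example (numbers made up): with a 512K-entry table the mask
+ * is 0x7FFFF.  If hcapi_cfa_key_hash() returned
+ * big_hash = 0x1234ABCD9876FEDC then
+ *
+ *   key0_hash  = 0x1234ABCD              key1_hash  = 0x9876FEDC
+ *   key0_index = key0_hash & 0x7FFFF = 0x4ABCD
+ *   key1_index = key1_hash & 0x7FFFF = 0x6FEDC
+ *
+ * The entry is first offered to the KEY0 table at key0_index; only if that
+ * hardware op fails is the KEY1 table tried at key1_index.
+ */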
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[
+			(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (!rc)
+		return rc;
+
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry(
+		tbl_scope_cb,
+		parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_host_alloc(struct tf *tfp,
+		     struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *em_tables;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* Get Table Scope control block from the session pool */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
+	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
+				   parms->hw_flow_cache_flush_timer,
+				   dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(dir,
+					    tbl_scope_cb,
+					    em_tables[TF_RECORD_TABLE].num_entries,
+					    em_tables[TF_RECORD_TABLE].entry_size);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_host_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_host_free(struct tf *tfp,
+		    struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
+		return -EINVAL;
+	}
+
+	/* Free Table control block */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	/* free table scope locks */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
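+
+/*
+ * Usage sketch for the host path (all field values hypothetical): a table
+ * scope sized by flow count rather than by memory footprint could be
+ * requested roughly as follows.
+ *
+ *   struct tf_alloc_tbl_scope_parms parms = { 0 };
+ *
+ *   parms.rx_num_flows_in_k = 512;              // 512K Rx flows
+ *   parms.rx_max_key_sz_in_bits = 448;
+ *   parms.rx_max_action_entry_sz_in_bits = 512;
+ *   parms.tx_num_flows_in_k = 512;              // 512K Tx flows
+ *   parms.tx_max_key_sz_in_bits = 448;
+ *   parms.tx_max_action_entry_sz_in_bits = 512;
+ *   parms.hw_flow_cache_flush_timer = 0;
+ *
+ *   rc = tf_em_ext_host_alloc(tfp, &parms);
+ *   // on success, parms.tbl_scope_id identifies the new table scope
+ */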
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
new file mode 100644
index 0000000..9be91ad
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -0,0 +1,312 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/**
+ * EM DBs.
+ */
+static void *em_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   Number of entries in the pool
+ *
+ * Return:
+ *   0       - Success, pool created and filled
+ *   -ENOMEM - Failure, out of memory
+ *   -EINVAL - Failure, pool could not be initialized
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	rc = tfp_calloc(&parms);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	if (ptr != NULL)
+		tfp_free(ptr);
+}
+
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success
+ */
+int
+tf_em_insert_int_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	int rc;
+	uint32_t gfid;
+	uint16_t rptr_index = 0;
+	uint8_t rptr_entry = 0;
+	uint8_t num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc) {
+		PMD_DRV_LOG
+		  (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc)
+		return -1;
+
+	PMD_DRV_LOG
+		  (ERR,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     (uint32_t)num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0
+ * -EINVAL
+ */
+int
+tf_em_delete_int_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
+
+int
+tf_em_int_bind(struct tf *tfp,
+	       struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier already initialized\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		tf_create_em_pool(session,
+				  i,
+				  TF_SESSION_EM_POOL_SIZE);
+	}
+
+	/*
+	 * It is not clear that this code is needed;
+	 * leaving it in place until resolved.
+	 */
+	if (parms->num_elements) {
+		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+
+		for (i = 0; i < TF_DIR_MAX; i++) {
+			db_cfg.dir = i;
+			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+			db_cfg.rm_db = &em_db[i];
+			rc = tf_rm_create_db(tfp, &db_cfg);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: EM DB creation failed\n",
+					    tf_dir_2_str(i));
+
+				return rc;
+			}
+		}
+	}
+
+	init = 1;
+	return 0;
+}
+
+int
+tf_em_int_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; this allows unbind
+	 * to be called safely during creation cleanup.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		tf_free_em_pool(session, i);
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = em_db[i];
+		if (em_db[i] != NULL) {
+			rc = tf_rm_free_db(tfp, &fparms);
+			if (rc)
+				return rc;
+		}
+
+		em_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
new file mode 100644
index 0000000..ee18a0c
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_insert_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_delete_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_sys_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry
+		(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_sys_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
+		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_em_ext_system_free(struct tf *tfp __rte_unused,
+		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 5020433..d8b80bc 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -19,82 +19,6 @@
 #include "tf_em.h"
 
 /**
- * Endian converts min and max values from the HW response to the query
- */
-#define TF_HW_RESP_TO_QUERY(query, index, response, element) do {            \
-	(query)->hw_query[index].min =                                       \
-		tfp_le_to_cpu_16(response. element ## _min);                 \
-	(query)->hw_query[index].max =                                       \
-		tfp_le_to_cpu_16(response. element ## _max);                 \
-} while (0)
-
-/**
- * Endian converts the number of entries from the alloc to the request
- */
-#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element)                   \
-	(request. num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do {            \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(hw_entry[index].start);                     \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(hw_entry[index].stride);                    \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do {         \
-	hw_entry[index].start =                                              \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	hw_entry[index].stride =                                             \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
-/**
- * Endian converts min and max values from the SRAM response to the
- * query
- */
-#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do {          \
-	(query)->sram_query[index].min =                                     \
-		tfp_le_to_cpu_16(response.element ## _min);                  \
-	(query)->sram_query[index].max =                                     \
-		tfp_le_to_cpu_16(response.element ## _max);                  \
-} while (0)
-
-/**
- * Endian converts the number of entries from the action (alloc) to
- * the request
- */
-#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element)                \
-	(request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do {        \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(sram_entry[index].start);                   \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(sram_entry[index].stride);                  \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do {     \
-	sram_entry[index].start =                                            \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	sram_entry[index].stride =                                           \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
-/**
  * This is the MAX data we can transport across regular HWRM
  */
 #define TF_PCI_BUF_SIZE_MAX 88
@@ -107,39 +31,6 @@ struct tf_msg_dma_buf {
 	uint64_t pa_addr;
 };
 
-static int
-tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
-		   uint32_t *hwrm_type)
-{
-	int rc = 0;
-
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_WC_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	return rc;
-}
-
 /**
  * Allocates a DMA buffer that can be used for message transfer.
  *
@@ -185,13 +76,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 	tfp_free(buf->va_addr);
 }
 
-/**
- * NEW HWRM direct messages
- */
+/* HWRM Direct messages */
 
-/**
- * Sends session open request to TF Firmware
- */
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
@@ -222,9 +108,6 @@ tf_msg_session_open(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends session attach request to TF Firmware
- */
 int
 tf_msg_session_attach(struct tf *tfp __rte_unused,
 		      char *ctrl_chan_name __rte_unused,
@@ -233,9 +116,6 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 	return -1;
 }
 
-/**
- * Sends session close request to TF Firmware
- */
 int
 tf_msg_session_close(struct tf *tfp)
 {
@@ -261,14 +141,11 @@ tf_msg_session_close(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session query config request to TF Firmware
- */
 int
 tf_msg_session_qcfg(struct tf *tfp)
 {
 	int rc;
-	struct hwrm_tf_session_qcfg_input  req = { 0 };
+	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
@@ -289,636 +166,6 @@ tf_msg_session_qcfg(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session HW resource query capability request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_query *query)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_qcaps_input req = { 0 };
-	struct tf_session_hw_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(query, 0, sizeof(*query));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST,
-			    resp, meter_inst);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_alloc(struct tf *tfp __rte_unused,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_alloc *hw_alloc __rte_unused,
-			     struct tf_rm_entry *hw_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_alloc_input req = { 0 };
-	struct tf_session_hw_resc_alloc_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			   l2_ctx_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			   prof_func_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			   prof_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			   em_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req,
-			   em_record_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			   wc_tcam_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req,
-			   wc_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req,
-			   meter_profiles);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req,
-			   meter_inst);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req,
-			   mirrors);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req,
-			   upar);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req,
-			   sp_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req,
-			   l2_func);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req,
-			   flex_key_templ);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			   tbl_scope);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req,
-			   epoch0_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req,
-			   epoch1_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req,
-			   metadata);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req,
-			   ct_state);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			   range_prof);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			   range_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			   lag_tbl_entries);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp,
-			    meter_inst);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_free(struct tf *tfp,
-			    enum tf_dir dir,
-			    struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_HW_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_flush(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_HW_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_qcaps(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_query *query __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_qcaps_input req = { 0 };
-	struct tf_session_sram_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_FULL_ACTION, resp,
-			      full_action);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, resp,
-			      sp_smac_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, resp,
-			      sp_smac_ipv6);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_alloc(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_alloc *sram_alloc __rte_unused,
-			       struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_alloc_input req = { 0 };
-	struct tf_session_sram_resc_alloc_output resp;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(&resp, 0, sizeof(resp));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			     full_action);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_MCG, req,
-			     mcg);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			     encap_8b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			     encap_16b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			     encap_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			     sp_smac);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			     req, sp_smac_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			     req, sp_smac_ipv6);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_COUNTER_64B,
-			     req, counter_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			     nat_sport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			     nat_dport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			     nat_s_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			     nat_d_ipv4);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION,
-			      resp, full_action);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			      resp, sp_smac_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			      resp, sp_smac_ipv6);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_free(struct tf *tfp __rte_unused,
-			      enum tf_dir dir,
-			      struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_SRAM_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_flush(struct tf *tfp,
-			       enum tf_dir dir,
-			       struct tf_rm_entry *sram_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_SRAM_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
 int
 tf_msg_session_resc_qcaps(struct tf *tfp,
 			  enum tf_dir dir,
@@ -973,7 +220,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -981,14 +228,14 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
 	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
-		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
@@ -1078,7 +325,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -1087,14 +334,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	}
 
 	printf("\nRESV\n");
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
-		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
-		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
-		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+		resv[i].type = tfp_le_to_cpu_32(resv_data[i].type);
+		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
+		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
@@ -1173,24 +420,112 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	return rc;
 }
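Note on the conversions in the hunks above: the firmware returns response fields in little-endian byte order, so they are translated with tfp_le_to_cpu_*() before being compared or stored, rather than with the cpu-to-le direction used previously. A minimal stand-alone sketch of what such a conversion does (le32_to_host is a hypothetical helper, not part of the driver):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for tfp_le_to_cpu_32(): assemble a host-order
 * value from a little-endian byte sequence, independent of host
 * endianness.
 */
static uint32_t le32_to_host(const uint8_t *p)
{
	return (uint32_t)p[0] |
	       ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) |
	       ((uint32_t)p[3] << 24);
}

int main(void)
{
	/* 0x00000002 as it would appear in a little-endian response buffer */
	const uint8_t resp_size_le[4] = { 0x02, 0x00, 0x00, 0x00 };
	uint32_t size = 2;	/* expected entry count */

	if (le32_to_host(resp_size_le) != size)
		printf("QCAPS size mismatch\n");
	else
		printf("QCAPS size ok: %u\n", le32_to_host(resp_size_le));
	return 0;
}
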
 
-/**
- * Sends EM mem register request to Firmware
- */
-int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id)
+int
+tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
 {
 	int rc;
-	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
-	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_insert_input req = { 0 };
+	struct hwrm_tf_em_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.strength = (em_result->hdr.word1 &
+			CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+			CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return 0;
+}
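Two small pieces of arithmetic in the insert path above are easy to gloss over: the key length is given in bits and rounded up to whole bytes before the key copy (e.g. 173 bits becomes 22 bytes), and the entry strength is extracted from word1 with a mask/shift pair. A hedged sketch with placeholder mask and shift values (the real CFA_P4_EEM_ENTRY_STRENGTH_* constants live elsewhere in the driver):

#include <stdint.h>

/* Placeholder values, illustrative only. */
#define EXAMPLE_STRENGTH_MASK  0xc0000000UL
#define EXAMPLE_STRENGTH_SHIFT 30

/* Round a key length in bits up to whole bytes: (173 + 7) / 8 == 22 */
static uint32_t key_bits_to_bytes(uint32_t key_sz_in_bits)
{
	return (key_sz_in_bits + 7) / 8;
}

/* Pull the strength field out of word1 with mask then shift */
static uint32_t strength_from_word1(uint32_t word1)
{
	return (word1 & EXAMPLE_STRENGTH_MASK) >> EXAMPLE_STRENGTH_SHIFT;
}
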
+
+int
+tf_msg_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_delete_input req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return 0;
+}
+
+int
+tf_msg_em_mem_rgtr(struct tf *tfp,
+		   int page_lvl,
+		   int page_size,
+		   uint64_t dma_addr,
+		   uint16_t *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
 
-	req.page_level = page_lvl;
-	req.page_size = page_size;
-	req.page_dir = tfp_cpu_to_le_64(dma_addr);
-
 	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
@@ -1208,11 +543,9 @@ int tf_msg_em_mem_rgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM mem unregister request to Firmware
- */
-int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t  *ctx_id)
+int
+tf_msg_em_mem_unrgtr(struct tf *tfp,
+		     uint16_t *ctx_id)
 {
 	int rc;
 	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
@@ -1233,12 +566,10 @@ int tf_msg_em_mem_unrgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM qcaps request to Firmware
- */
-int tf_msg_em_qcaps(struct tf *tfp,
-		    int dir,
-		    struct tf_em_caps *em_caps)
+int
+tf_msg_em_qcaps(struct tf *tfp,
+		int dir,
+		struct tf_em_caps *em_caps)
 {
 	int rc;
 	struct hwrm_tf_ext_em_qcaps_input  req = {0};
@@ -1273,17 +604,15 @@ int tf_msg_em_qcaps(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM config request to Firmware
- */
-int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t   num_entries,
-		  uint16_t   key0_ctx_id,
-		  uint16_t   key1_ctx_id,
-		  uint16_t   record_ctx_id,
-		  uint16_t   efc_ctx_id,
-		  uint8_t    flush_interval,
-		  int        dir)
+int
+tf_msg_em_cfg(struct tf *tfp,
+	      uint32_t num_entries,
+	      uint16_t key0_ctx_id,
+	      uint16_t key1_ctx_id,
+	      uint16_t record_ctx_id,
+	      uint16_t efc_ctx_id,
+	      uint8_t flush_interval,
+	      int dir)
 {
 	int rc;
 	struct hwrm_tf_ext_em_cfg_input  req = {0};
@@ -1317,41 +646,23 @@ int tf_msg_em_cfg(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM internal insert request to Firmware
- */
-int tf_msg_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *em_parms,
-				uint16_t *rptr_index,
-				uint8_t *rptr_entry,
-				uint8_t *num_of_entries)
+int
+tf_msg_em_op(struct tf *tfp,
+	     int dir,
+	     uint16_t op)
 {
-	int                         rc;
-	struct tfp_send_msg_parms        parms = { 0 };
-	struct hwrm_tf_em_insert_input   req = { 0 };
-	struct hwrm_tf_em_insert_output  resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_em_64b_entry *em_result =
-		(struct tf_em_64b_entry *)em_parms->em_record;
+	int rc;
+	struct hwrm_tf_ext_em_op_input req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	tfp_memcpy(req.em_key,
-		   em_parms->key,
-		   ((em_parms->key_sz_in_bits + 7) / 8));
-
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength = (em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
-		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
-	req.em_key_bitlen = em_parms->key_sz_in_bits;
-	req.action_ptr = em_result->hdr.pointer;
-	req.em_record_idx = *rptr_index;
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
 
-	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1360,75 +671,86 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &parms);
-	if (rc)
-		return rc;
-
-	*rptr_entry = resp.rptr_entry;
-	*rptr_index = resp.rptr_index;
-	*num_of_entries = resp.num_of_entries;
-
-	return 0;
+	return rc;
 }
 
-/**
- * Sends EM delete insert request to Firmware
- */
-int tf_msg_delete_em_entry(struct tf *tfp,
-			   struct tf_delete_em_entry_parms *em_parms)
+int
+tf_msg_tcam_entry_set(struct tf *tfp,
+		      struct tf_tcam_set_parms *parms)
 {
-	int                             rc;
-	struct tfp_send_msg_parms       parms = { 0 };
-	struct hwrm_tf_em_delete_input  req = { 0 };
-	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	int rc;
+	struct tfp_send_msg_parms mparms = { 0 };
+	struct hwrm_tf_tcam_set_input req = { 0 };
+	struct hwrm_tf_tcam_set_output resp = { 0 };
+	struct tf_msg_dma_buf buf = { 0 };
+	uint8_t *data = NULL;
+	int data_size = 0;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.type = parms->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(parms->idx);
+	if (parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
+	/* Result follows after key and mask, thus multiply by 2 */
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
+	data_size = 2 * req.key_size + req.result_size;
 
-	parms.tf_type = HWRM_TF_EM_DELETE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
+	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
+		/* use pci buffer */
+		data = &req.dev_data[0];
+	} else {
+		/* use dma buffer */
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
+		data = buf.va_addr;
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
+	}
+
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
+
+	mparms.tf_type = HWRM_TF_TCAM_SET;
+	mparms.req_data = (uint32_t *)&req;
+	mparms.req_size = sizeof(req);
+	mparms.resp_data = (uint32_t *)&resp;
+	mparms.resp_size = sizeof(resp);
+	mparms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp,
-				 &parms);
+				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
-	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+cleanup:
+	tf_msg_free_dma_buf(&buf);
 
-	return 0;
+	return rc;
 }
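The set message above packs key, mask and result into one flat payload: the mask starts right after the key (mask_offset = key_size) and the result after both (result_offset = 2 * key_size); when the total exceeds TF_PCI_BUF_SIZE_MAX the payload goes into a DMA buffer instead of the mailbox area. A small illustrative helper showing only the layout (pack_tcam_payload is not a driver function):

#include <stdint.h>
#include <string.h>

/* Pack key, mask and result the way the TCAM set message lays them
 * out; buf must hold 2 * key_size + result_size bytes.
 */
static void pack_tcam_payload(uint8_t *buf,
			      const uint8_t *key, uint16_t key_size,
			      const uint8_t *mask,
			      const uint8_t *result, uint16_t result_size)
{
	memcpy(&buf[0], key, key_size);
	memcpy(&buf[key_size], mask, key_size);		  /* mask_offset */
	memcpy(&buf[2 * key_size], result, result_size);  /* result_offset */
}
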
 
-/**
- * Sends EM operation request to Firmware
- */
-int tf_msg_em_op(struct tf *tfp,
-		 int dir,
-		 uint16_t op)
+int
+tf_msg_tcam_entry_free(struct tf *tfp,
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input req = {0};
-	struct hwrm_tf_ext_em_op_output resp = {0};
-	uint32_t flags;
+	struct hwrm_tf_tcam_free_input req =  { 0 };
+	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
 
-	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_32(flags);
-	req.op = tfp_cpu_to_le_16(op);
+	req.type = in_parms->hcapi_type;
+	req.count = 1;
+	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
+	if (in_parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
 
-	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.tf_type = HWRM_TF_TCAM_FREE;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1443,21 +765,32 @@ int tf_msg_em_op(struct tf *tfp,
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_set_input req = { 0 };
+	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_set_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
 	req.index = tfp_cpu_to_le_32(index);
 
@@ -1465,13 +798,15 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 		   data,
 		   size);
 
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_TBL_TYPE_SET,
-			 req);
+	parms.tf_type = HWRM_TF_TBL_TYPE_SET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1481,32 +816,43 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 int
 tf_msg_get_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_get_input req = { 0 };
+	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_input req = { 0 };
-	struct tf_tbl_type_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_TBL_TYPE_GET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1521,6 +867,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/* HWRM Tunneled messages */
+
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  struct tf_bulk_get_tbl_entry_parms *params)
@@ -1561,96 +909,3 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
-
-int
-tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_tcam_set_parms *parms)
-{
-	int rc;
-	struct tfp_send_msg_parms mparms = { 0 };
-	struct hwrm_tf_tcam_set_input req = { 0 };
-	struct hwrm_tf_tcam_set_output resp = { 0 };
-	struct tf_msg_dma_buf buf = { 0 };
-	uint8_t *data = NULL;
-	int data_size = 0;
-
-	req.type = parms->type;
-
-	req.idx = tfp_cpu_to_le_16(parms->idx);
-	if (parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
-
-	req.key_size = parms->key_size;
-	req.mask_offset = parms->key_size;
-	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * parms->key_size;
-	req.result_size = parms->result_size;
-	data_size = 2 * req.key_size + req.result_size;
-
-	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
-		/* use pci buffer */
-		data = &req.dev_data[0];
-	} else {
-		/* use dma buffer */
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_alloc_dma_buf(&buf, data_size);
-		if (rc)
-			goto cleanup;
-		data = buf.va_addr;
-		tfp_memcpy(&req.dev_data[0],
-			   &buf.pa_addr,
-			   sizeof(buf.pa_addr));
-	}
-
-	tfp_memcpy(&data[0], parms->key, parms->key_size);
-	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
-	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
-
-	mparms.tf_type = HWRM_TF_TCAM_SET;
-	mparms.req_data = (uint32_t *)&req;
-	mparms.req_size = sizeof(req);
-	mparms.resp_data = (uint32_t *)&resp;
-	mparms.resp_size = sizeof(resp);
-	mparms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &mparms);
-	if (rc)
-		goto cleanup;
-
-cleanup:
-	tf_msg_free_dma_buf(&buf);
-
-	return rc;
-}
-
-int
-tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_tcam_free_parms *in_parms)
-{
-	int rc;
-	struct hwrm_tf_tcam_free_input req =  { 0 };
-	struct hwrm_tf_tcam_free_output resp = { 0 };
-	struct tfp_send_msg_parms parms = { 0 };
-
-	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
-	if (rc != 0)
-		return rc;
-
-	req.count = 1;
-	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
-	if (in_parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
-
-	parms.tf_type = HWRM_TF_TCAM_FREE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &parms);
-	return rc;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1ff1044..8e276d4 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -16,6 +16,8 @@
 
 struct tf;
 
+/* HWRM Direct messages */
+
 /**
  * Sends session open request to Firmware
  *
@@ -29,7 +31,7 @@ struct tf;
  *   Pointer to the fw_session_id that is allocated on firmware side
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
@@ -46,7 +48,7 @@ int tf_msg_session_open(struct tf *tfp,
  *   time of session open
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_attach(struct tf *tfp,
 			  char *ctrl_channel_name,
@@ -59,75 +61,23 @@ int tf_msg_session_attach(struct tf *tfp,
  *   Pointer to session handle
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_close(struct tf *tfp);
 
 /**
  * Sends session query config request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_qcfg(struct tf *tfp);
 
 /**
  * Sends session HW resource query capability request to TF Firmware
- */
-int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_query *hw_query);
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int tf_msg_session_hw_resc_alloc(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_alloc *hw_alloc,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int tf_msg_session_hw_resc_free(struct tf *tfp,
-				enum tf_dir dir,
-				struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int tf_msg_session_hw_resc_flush(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int tf_msg_session_sram_resc_qcaps(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_query *sram_query);
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int tf_msg_session_sram_resc_alloc(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_alloc *sram_alloc,
-				   struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int tf_msg_session_sram_resc_free(struct tf *tfp,
-				  enum tf_dir dir,
-				  struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int tf_msg_session_sram_resc_flush(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session HW resource query capability request to TF Firmware
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -183,6 +133,21 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 
 /**
  * Sends session resource flush request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the resv array
+ *
+ * [in] resv
+ *   Pointer to an array of reserved elements that need to be flushed
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_resc_flush(struct tf *tfp,
 			      enum tf_dir dir,
@@ -190,6 +155,24 @@ int tf_msg_session_resc_flush(struct tf *tfp,
 			      struct tf_rm_resc_entry *resv);
 /**
  * Sends EM internal insert request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to em insert parameter list
+ *
+ * [in/out] rptr_index
+ *   Record pointer index; used for the insert and updated from the
+ *   firmware response
+ *
+ * [out] rptr_entry
+ *   Record pointer entry returned by the firmware
+ *
+ * [out] num_of_entries
+ *   Number of entries the firmware used for the insert
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    struct tf_insert_em_entry_parms *params,
@@ -198,26 +181,75 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    uint8_t *num_of_entries);
 /**
  * Sends EM internal delete request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] em_parms
+ *   Pointer to em delete parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms);
+
 /**
  * Sends EM mem register request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] page_lvl
+ *   Page level
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] dma_addr
+ *   DMA Address for the memory page
+ *
+ * [out] ctx_id
+ *   Pointer to the context id returned by the firmware
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id);
+		       int page_lvl,
+		       int page_size,
+		       uint64_t dma_addr,
+		       uint16_t *ctx_id);
 
 /**
  * Sends EM mem unregister request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctx_id
+ *   Context id
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t     *ctx_id);
+			 uint16_t *ctx_id);
 
 /**
  * Sends EM qcaps request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [out] em_caps
+ *   Pointer to EM capabilities returned by the firmware
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_qcaps(struct tf *tfp,
 		    int dir,
@@ -225,22 +257,63 @@ int tf_msg_em_qcaps(struct tf *tfp,
 
 /**
  * Sends EM config request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] num_entries
+ *   EM Table, key 0, number of entries to configure
+ *
+ * [in] key0_ctx_id
+ *   EM Table, Key 0 context id
+ *
+ * [in] key1_ctx_id
+ *   EM Table, Key 1 context id
+ *
+ * [in] record_ctx_id
+ *   EM Table, Record context id
+ *
+ * [in] efc_ctx_id
+ *   EM Table, EFC Table context id
+ *
+ * [in] flush_interval
+ *   Flush pending HW cached flows every 1/10th of the value set, in
+ *   seconds; both idle and active flows are flushed from the HW
+ *   cache. If set to 0, this feature is disabled.
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t      num_entries,
-		  uint16_t      key0_ctx_id,
-		  uint16_t      key1_ctx_id,
-		  uint16_t      record_ctx_id,
-		  uint16_t      efc_ctx_id,
-		  uint8_t       flush_interval,
-		  int           dir);
+		  uint32_t num_entries,
+		  uint16_t key0_ctx_id,
+		  uint16_t key1_ctx_id,
+		  uint16_t record_ctx_id,
+		  uint16_t efc_ctx_id,
+		  uint8_t flush_interval,
+		  int dir);
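Assuming flush_interval is expressed in 1/10 second units, as the comment above describes, a caller wanting a particular flush period could derive the value as below (flush_interval_from_ms is purely illustrative, not a driver API):

#include <stdint.h>

/* Convert a desired flush period in milliseconds to flush_interval
 * ticks, assuming one tick is 100 ms; e.g. 500 ms -> 5.
 */
static uint8_t flush_interval_from_ms(uint32_t period_ms)
{
	uint32_t ticks = period_ms / 100;

	return ticks > UINT8_MAX ? UINT8_MAX : (uint8_t)ticks;
}
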
 
 /**
  * Sends EM operation request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] op
+ *   CFA Operator
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op);
+		 int dir,
+		 uint16_t op);
 
 /**
  * Sends tcam entry 'set' to the Firmware.
@@ -281,7 +354,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to set
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to set
  *
  * [in] size
@@ -298,7 +371,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  */
 int tf_msg_set_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
@@ -312,7 +385,7 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to get
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to get
  *
  * [in] size
@@ -329,11 +402,13 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  */
 int tf_msg_get_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
 
+/* HWRM Tunneled messages */
+
 /**
  * Sends bulk get message of a Table Type element to the firmware.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index b6fe2f1..e0a84e6 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -1818,16 +1818,8 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_entries = tfs->resc.tx.hw_entry;
 
 	/* Query for Session HW Resources */
-	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
 
+	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1846,16 +1838,6 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
 
 	/* Allocate Session HW Resources */
-	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform HW allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -1906,17 +1888,7 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	else
 		sram_entries = tfs->resc.tx.sram_entry;
 
-	/* Query for Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
+	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1934,20 +1906,6 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
 		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
 
-	/* Allocate Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_alloc(tfp,
-					    dir,
-					    &sram_alloc,
-					    sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform SRAM allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -2798,17 +2756,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
-			rc = tf_msg_session_hw_resc_flush(tfp,
-							  i,
-							  hw_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
 		}
 
 		/* Check for any not previously freed SRAM resources
@@ -2828,38 +2775,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
-
-			rc = tf_msg_session_sram_resc_flush(tfp,
-							    i,
-							    sram_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
-		}
-
-		rc = tf_msg_session_hw_resc_free(tfp, i, hw_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, HW free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
-		}
-
-		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, SRAM free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
 		}
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index de8f119..2d9be65 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -95,7 +95,9 @@ struct tf_rm_new_db {
  *   - EOPNOTSUPP - Operation not supported
  */
 static void
-tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
 			       uint16_t *reservations,
 			       uint16_t count,
 			       uint16_t *valid_count)
@@ -107,6 +109,26 @@ tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
 		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
 		    reservations[i] > 0)
 			cnt++;
+
+		/* Only log a msg if a type is being reserved but is
+		 * not supported. We ignore the EM module as it uses a
+		 * split configuration array and would thus fail this
+		 * type of check.
+		 */
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
+			TFP_DRV_LOG(ERR,
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+		}
 	}
 
 	*valid_count = cnt;
@@ -405,7 +427,9 @@ tf_rm_create_db(struct tf *tfp,
 	 * the DB holds them all as to give a fast lookup. We can also
 	 * remove entries where there are no request for elements.
 	 */
-	tf_rm_count_hcapi_reservations(parms->cfg,
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
 				       parms->alloc_cnt,
 				       parms->num_elements,
 				       &hcapi_items);
@@ -507,6 +531,11 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
 			/* Create pool */
 			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
 				     sizeof(struct bitalloc));
@@ -548,11 +577,16 @@ tf_rm_create_db(struct tf *tfp,
 		}
 	}
 
-	rm_db->num_entries = i;
+	rm_db->num_entries = parms->num_elements;
 	rm_db->dir = parms->dir;
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index b0a932b..b5ce860 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -26,739 +26,6 @@
 #include "stack.h"
 #include "tf_common.h"
 
-#define PTU_PTE_VALID          0x1UL
-#define PTU_PTE_LAST           0x2UL
-#define PTU_PTE_NEXT_TO_LAST   0x4UL
-
-/* Number of pointers per page_size */
-#define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
-
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define	TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
-/**
- * Function to free a page table
- *
- * [in] tp
- *   Pointer to the page table to free
- */
-static void
-tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
-{
-	uint32_t i;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		if (!tp->pg_va_tbl[i]) {
-			TFP_DRV_LOG(WARNING,
-				    "No mapping for page: %d table: %016" PRIu64 "\n",
-				    i,
-				    (uint64_t)(uintptr_t)tp);
-			continue;
-		}
-
-		tfp_free(tp->pg_va_tbl[i]);
-		tp->pg_va_tbl[i] = NULL;
-	}
-
-	tp->pg_count = 0;
-	tfp_free(tp->pg_va_tbl);
-	tp->pg_va_tbl = NULL;
-	tfp_free(tp->pg_pa_tbl);
-	tp->pg_pa_tbl = NULL;
-}
-
-/**
- * Function to free an EM table
- *
- * [in] tbl
- *   Pointer to the EM table to free
- */
-static void
-tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-		TFP_DRV_LOG(INFO,
-			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
-			   TF_EM_PAGE_SIZE,
-			    i,
-			    tp->pg_count);
-
-		tf_em_free_pg_tbl(tp);
-	}
-
-	tbl->l0_addr = NULL;
-	tbl->l0_dma_addr = 0;
-	tbl->num_lvl = 0;
-	tbl->num_data_pages = 0;
-}
-
-/**
- * Allocation of page tables
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] pg_count
- *   Page count to allocate
- *
- * [in] pg_size
- *   Size of each page
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
-		   uint32_t pg_count,
-		   uint32_t pg_size)
-{
-	uint32_t i;
-	struct tfp_calloc_parms parms;
-
-	parms.nitems = pg_count;
-	parms.size = sizeof(void *);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0)
-		return -ENOMEM;
-
-	tp->pg_va_tbl = parms.mem_va;
-
-	if (tfp_calloc(&parms) != 0) {
-		tfp_free(tp->pg_va_tbl);
-		return -ENOMEM;
-	}
-
-	tp->pg_pa_tbl = parms.mem_va;
-
-	tp->pg_count = 0;
-	tp->pg_size = pg_size;
-
-	for (i = 0; i < pg_count; i++) {
-		parms.nitems = 1;
-		parms.size = pg_size;
-		parms.alignment = TF_EM_PAGE_ALIGNMENT;
-
-		if (tfp_calloc(&parms) != 0)
-			goto cleanup;
-
-		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
-		tp->pg_va_tbl[i] = parms.mem_va;
-
-		memset(tp->pg_va_tbl[i], 0, pg_size);
-		tp->pg_count++;
-	}
-
-	return 0;
-
-cleanup:
-	tf_em_free_pg_tbl(tp);
-	return -ENOMEM;
-}
-
-/**
- * Allocates EM page tables
- *
- * [in] tbl
- *   Table to allocate pages for
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int rc = 0;
-	int i;
-	uint32_t j;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-
-		rc = tf_em_alloc_pg_tbl(tp,
-					tbl->page_cnt[i],
-					TF_EM_PAGE_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d, rc:%s\n",
-				i,
-				strerror(-rc));
-			goto cleanup;
-		}
-
-		for (j = 0; j < tp->pg_count; j++) {
-			TFP_DRV_LOG(INFO,
-				"EEM: Allocated page table: size %u lvl %d cnt"
-				" %u VA:%p PA:%p\n",
-				TF_EM_PAGE_SIZE,
-				i,
-				tp->pg_count,
-				(uint32_t *)tp->pg_va_tbl[j],
-				(uint32_t *)(uintptr_t)tp->pg_pa_tbl[j]);
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_free_page_table(tbl);
-	return rc;
-}
-
-/**
- * Links EM page tables
- *
- * [in] tp
- *   Pointer to page table
- *
- * [in] tp_next
- *   Pointer to the next page table
- *
- * [in] set_pte_last
- *   Flag controlling if the page table is last
- */
-static void
-tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-		      struct hcapi_cfa_em_page_tbl *tp_next,
-		      bool set_pte_last)
-{
-	uint64_t *pg_pa = tp_next->pg_pa_tbl;
-	uint64_t *pg_va;
-	uint64_t valid;
-	uint32_t k = 0;
-	uint32_t i;
-	uint32_t j;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			if (k == tp_next->pg_count - 2 && set_pte_last)
-				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
-			else if (k == tp_next->pg_count - 1 && set_pte_last)
-				valid = PTU_PTE_LAST | PTU_PTE_VALID;
-			else
-				valid = PTU_PTE_VALID;
-
-			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
-			if (++k >= tp_next->pg_count)
-				return;
-		}
-	}
-}
-
-/**
- * Setup a EM page table
- *
- * [in] tbl
- *   Pointer to EM page table
- */
-static void
-tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_page_tbl *tp;
-	bool set_pte_last = 0;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl - 1; i++) {
-		tp = &tbl->pg_tbl[i];
-		tp_next = &tbl->pg_tbl[i + 1];
-		if (i == tbl->num_lvl - 2)
-			set_pte_last = 1;
-		tf_em_link_page_table(tp, tp_next, set_pte_last);
-	}
-
-	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
-}
-
-/**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
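For reference, the sizing logic being removed here derives the number of page-table levels from the page size, entry size and entry count: a single page of 8-byte pointers references 512 pages, so with 4 KiB pages one indirection level covers 512 * 4 KiB = 2 MiB of data and two levels cover 512 * 512 * 4 KiB = 1 GiB. A stand-alone restatement under those assumptions (PTRS_PER_PAGE and em_page_tbl_levels are illustrative names, not the driver's):

#include <stdint.h>

#define PTRS_PER_PAGE(page_size) ((page_size) / sizeof(void *))

/* Return how many indirection levels are needed, or -1 if the data
 * does not fit within two levels.
 */
static int em_page_tbl_levels(uint32_t page_size,
			      uint32_t entry_size,
			      uint32_t num_entries)
{
	uint64_t data_size = (uint64_t)num_entries * entry_size;
	uint64_t covered = page_size;
	int lvl = 0;

	while (covered < data_size) {
		lvl++;
		if (lvl == 1)
			covered = (uint64_t)PTRS_PER_PAGE(page_size) *
				  page_size;
		else if (lvl == 2)
			covered = (uint64_t)PTRS_PER_PAGE(page_size) *
				  PTRS_PER_PAGE(page_size) * page_size;
		else
			return -1;
	}
	return lvl;
}
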
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
-/**
- * Unregisters EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- */
-static void
-tf_em_ctx_unreg(struct tf *tfp,
-		struct tf_tbl_scope_cb *tbl_scope_cb,
-		int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
-			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
-			tf_em_free_page_table(tbl);
-		}
-	}
-}
-
-/**
- * Registers EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of Memory
- */
-static int
-tf_em_ctx_reg(struct tf *tfp,
-	      struct tf_tbl_scope_cb *tbl_scope_cb,
-	      int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int rc = 0;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
-
-			if (rc)
-				goto cleanup;
-
-			rc = tf_em_alloc_page_table(tbl);
-			if (rc)
-				goto cleanup;
-
-			tf_em_setup_page_table(tbl);
-			rc = tf_msg_em_mem_rgtr(tfp,
-						tbl->num_lvl - 1,
-						TF_EM_PAGE_SIZE_ENUM,
-						tbl->l0_dma_addr,
-						&tbl->ctx_id);
-			if (rc)
-				goto cleanup;
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	return rc;
-}
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if ((parms->rx_num_flows_in_k != 0) &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if ((parms->tx_num_flows_in_k != 0) &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries
-		= parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size
-		= parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries
-		= 0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries
-		= parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size
-		= parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries
-		= 0;
-
-	return 0;
-}
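/* Side note (not from the patch): the rounding applied above when a memory
 * size is given reduces to a next-power-of-two search bounded by the 32K
 * and 128M limits mentioned in the comment (assumed here to match
 * TF_EM_MIN_ENTRIES/TF_EM_MAX_ENTRIES). A minimal standalone version:
 */
#include <stdint.h>
#include <stdio.h>

#define MIN_ENTRIES (32 * 1024)
#define MAX_ENTRIES (128 * 1024 * 1024)

static uint32_t
round_up_flows(uint32_t requested)
{
	uint32_t cnt = MIN_ENTRIES;

	if (requested < MIN_ENTRIES)
		return MIN_ENTRIES;

	while (requested > cnt && cnt <= MAX_ENTRIES)
		cnt *= 2;

	return cnt;	/* above MAX_ENTRIES means the request is invalid */
}

int
main(void)
{
	/* A 100K-flow request rounds up to 131072 (128K). */
	printf("%u\n", round_up_flows(100 * 1024));
	return 0;
}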
-
 /**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
@@ -881,290 +148,6 @@ tf_free_tbl_entry_shadow(struct tf_session *tfs,
 }
 #endif /* TF_SHADOW */
 
-/**
- * Create External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- * [in] num_entries
- *   number of entries to write
- * [in] entry_sz_bytes
- *   size of each entry
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_tbl_pool_external(enum tf_dir dir,
-			    struct tf_tbl_scope_cb *tbl_scope_cb,
-			    uint32_t num_entries,
-			    uint32_t entry_sz_bytes)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i;
-	int32_t j;
-	int rc = 0;
-	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
-			    tf_dir_2_str(dir), strerror(ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Save the  malloced memory address so that it can
-	 * be freed when the table scope is freed.
-	 */
-	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
-
-	/* Fill pool with indexes in reverse
-	 */
-	j = (num_entries - 1) * entry_sz_bytes;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-				    tf_dir_2_str(dir), strerror(-rc));
-			goto cleanup;
-		}
-
-		if (j < 0) {
-			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
-				    dir, j);
-			goto cleanup;
-		}
-		j -= entry_sz_bytes;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
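/* For illustration only: because the offsets above are pushed in reverse,
 * the lowest offset ends up on top of the stack and is the first external
 * action record handed out. A toy array stands in for the stack_*()
 * helpers; the sizes are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t num_entries = 4;
	int32_t entry_sz = 64;
	uint32_t pool[4];
	int top = -1;
	int32_t off = (num_entries - 1) * entry_sz;
	uint32_t i;

	for (i = 0; i < num_entries; i++) {	/* pushes 192, 128, 64, 0 */
		pool[++top] = off;
		off -= entry_sz;
	}

	printf("first allocation -> offset %u\n", pool[top--]);	/* 0 */
	return 0;
}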
-
-/**
- * Destroy External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- *
- */
-static void
-tf_destroy_tbl_pool_external(enum tf_dir dir,
-			     struct tf_tbl_scope_cb *tbl_scope_cb)
-{
-	uint32_t *ext_act_pool_mem =
-		tbl_scope_cb->ext_act_pool_mem[dir];
-
-	tfp_free(ext_act_pool_mem);
-}
-
-/* API defined in tf_em.h */
-struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
-{
-	int i;
-
-	/* Check that id is valid */
-	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
-	if (i < 0)
-		return NULL;
-
-	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
-	}
-
-	return NULL;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			 struct tf_free_tbl_scope_parms *parms)
-{
-	int rc = 0;
-	enum tf_dir  dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Table scope error\n");
-		return -EINVAL;
-	}
-
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-
-	/* free table scope locks */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/* Free associated external pools
-		 */
-		tf_destroy_tbl_pool_external(dir,
-					     tbl_scope_cb);
-		tf_msg_em_op(tfp,
-			     dir,
-			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
-
-		/* free table scope and all associated resources */
-		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	}
-
-	return rc;
-}
-
-/* API defined in tf_em.h */
-int
-tf_alloc_eem_tbl_scope(struct tf *tfp,
-		       struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-	enum tf_dir dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_table *em_tables;
-	int index;
-	struct tf_session *session;
-	struct tf_free_tbl_scope_parms free_parms;
-
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* Get Table Scope control block from the session pool */
-	index = ba_alloc(session->tbl_scope_pool_rx);
-	if (index == -1) {
-		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
-			    "Control Block\n");
-		return -ENOMEM;
-	}
-
-	tbl_scope_cb = &session->tbl_scopes[index];
-	tbl_scope_cb->index = index;
-	tbl_scope_cb->tbl_scope_id = index;
-	parms->tbl_scope_id = index;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_msg_em_qcaps(tfp,
-				     dir,
-				     &tbl_scope_cb->em_caps[dir]);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to query for EEM capability,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-	}
-
-	/*
-	 * Validate and setup table sizes
-	 */
-	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
-		goto cleanup;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/*
-		 * Allocate tables and signal configuration to FW
-		 */
-		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-
-		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
-		rc = tf_msg_em_cfg(tfp,
-				   em_tables[TF_KEY0_TABLE].num_entries,
-				   em_tables[TF_KEY0_TABLE].ctx_id,
-				   em_tables[TF_KEY1_TABLE].ctx_id,
-				   em_tables[TF_RECORD_TABLE].ctx_id,
-				   em_tables[TF_EFC_TABLE].ctx_id,
-				   parms->hw_flow_cache_flush_timer,
-				   dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "TBL: Unable to configure EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		rc = tf_msg_em_op(tfp,
-				  dir,
-				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
-
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		/* Allocate the pool of offsets of the external memory.
-		 * Initially, this is a single fixed size pool for all external
-		 * actions related to a single table scope.
-		 */
-		rc = tf_create_tbl_pool_external(dir,
-					    tbl_scope_cb,
-					    em_tables[TF_RECORD_TABLE].num_entries,
-					    em_tables[TF_RECORD_TABLE].entry_size);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s TBL: Unable to allocate idx pools %s\n",
-				    tf_dir_2_str(dir),
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-	}
-
-	return 0;
-
-cleanup_full:
-	free_parms.tbl_scope_id = index;
-	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
-	return -EINVAL;
-
-cleanup:
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-	return -EINVAL;
-}
-
 /* API defined in tf_core.h */
 int
 tf_bulk_get_tbl_entry(struct tf *tfp,
@@ -1194,119 +177,3 @@ tf_bulk_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
-
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_scope(struct tf *tfp,
-		   struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	rc = tf_alloc_eem_tbl_scope(tfp, parms);
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_scope(struct tf *tfp,
-		  struct tf_free_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	/* free table scope and all associated resources */
-	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
-
-	return rc;
-}
-
-static void
-tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-			struct hcapi_cfa_em_page_tbl *tp_next)
-{
-	uint64_t *pg_va;
-	uint32_t i;
-	uint32_t j;
-	uint32_t k = 0;
-
-	printf("pg_count:%d pg_size:0x%x\n",
-	       tp->pg_count,
-	       tp->pg_size);
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-		printf("\t%p\n", (void *)pg_va);
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
-			if (((pg_va[j] & 0x7) ==
-			     tfp_cpu_to_le_64(PTU_PTE_LAST |
-					      PTU_PTE_VALID)))
-				return;
-
-			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
-				printf("** Invalid entry **\n");
-				return;
-			}
-
-			if (++k >= tp_next->pg_count) {
-				printf("** Shouldn't get here **\n");
-				return;
-			}
-		}
-	}
-}
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
-{
-	struct tf_session      *session;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_page_tbl *tp;
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-	int j;
-	int dir;
-
-	printf("called %s\n", __func__);
-
-	/* find session struct */
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* find control block for table scope */
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-
-		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
-			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
-			printf
-	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
-			       j,
-			       tbl->type,
-			       tbl->num_entries,
-			       tbl->entry_size,
-			       tbl->num_lvl);
-			if (tbl->pg_tbl[0].pg_va_tbl &&
-			    tbl->pg_tbl[0].pg_pa_tbl)
-				printf("%p %p\n",
-			       tbl->pg_tbl[0].pg_va_tbl[0],
-			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
-			for (i = 0; i < tbl->num_lvl - 1; i++) {
-				printf("Level:%d\n", i);
-				tp = &tbl->pg_tbl[i];
-				tp_next = &tbl->pg_tbl[i + 1];
-				tf_dump_link_page_table(tp, tp_next);
-			}
-			printf("\n");
-		}
-	}
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index bdf7d20..2f5af60 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -209,8 +209,10 @@ tf_tbl_set(struct tf *tfp,
 	   struct tf_tbl_set_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
 	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -240,9 +242,22 @@ tf_tbl_set(struct tf *tfp,
 	}
 
 	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	rc = tf_msg_set_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
@@ -262,8 +277,10 @@ tf_tbl_get(struct tf *tfp,
 	   struct tf_tbl_get_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
+	uint16_t hcapi_type;
 	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -292,10 +309,24 @@ tf_tbl_get(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Get the entry */
 	rc = tf_msg_get_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
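/* Aside (not in the patch): the TF-type to HCAPI-type translation added
 * above is repeated verbatim in the set and get paths, and again for the
 * TCAM free/set paths below. If it were ever factored out, a helper along
 * these lines would do; it assumes tf_rm.h is included and that the RM DB
 * handle is carried as a void pointer.
 */
static int
tf_tbl_hcapi_type(void *rm_db, uint16_t db_index, uint16_t *hcapi_type)
{
	struct tf_rm_get_hcapi_parms hparms = { 0 };

	hparms.rm_db = rm_db;
	hparms.db_index = db_index;
	hparms.hcapi_type = hcapi_type;

	return tf_rm_get_hcapi_type(&hparms);
}
/* e.g. tf_tbl_hcapi_type(tbl_db[parms->dir], parms->type, &hcapi_type) */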
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 260fb15..a1761ad 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -53,7 +53,6 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	db_cfg.num_elements = parms->num_elements;
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -174,14 +173,15 @@ tf_tcam_alloc(struct tf *tfp,
 }
 
 int
-tf_tcam_free(struct tf *tfp __rte_unused,
-	     struct tf_tcam_free_parms *parms __rte_unused)
+tf_tcam_free(struct tf *tfp,
+	     struct tf_tcam_free_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -253,6 +253,15 @@ tf_tcam_free(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
@@ -281,6 +290,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -338,6 +348,15 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 5090dfd..ee5bacc 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -77,6 +77,10 @@ struct tf_tcam_free_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
 	 * [in] Index to free
 	 */
 	uint16_t idx;
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 16c43eb..5472a9a 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -152,9 +152,9 @@ tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
 	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
 		return tf_ident_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_TABLE:
-		return tf_tcam_tbl_2_str(mod_type);
-	case TF_DEVICE_MODULE_TYPE_TCAM:
 		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tcam_tbl_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_EM:
 		return tf_em_tbl_type_2_str(mod_type);
 	default:
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 23/50] net/bnxt: update table get to use new design
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (21 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 22/50] net/bnxt: support EM and TCAM lookup with table scope Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 24/50] net/bnxt: update RM to support HCAPI only Somnath Kotur
                   ` (27 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Michael Wildt <michael.wildt@broadcom.com>

- Move the bulk table get implementation to the new Tbl Module design
  (see the caller-side sketch after the sign-offs below).
- Update the messages used for bulk table get.
- Retrieve the specified table element using the bulk mechanism.
- Remove deprecated resource definitions.
- Update the device type configuration for P4.
- Update the RM DB HCAPI count check and fix the EM internal and host
  code so that EM DBs are created correctly.
- Downgrade unbind logging from error to info in the different modules.
- Move the RTE RSVD defines out of tf_resources.h.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
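
For context (not part of the change itself), here is a minimal caller-side
sketch of the relocated bulk table get. The values are illustrative: a 64b
action-statistics read on the Rx direction, with the starting index and the
IOVA of a suitably sized DMA buffer supplied by the caller.

#include <stdint.h>
#include <string.h>

#include "tf_core.h"

static int
fetch_counters(struct tf *tfp, uint32_t start_idx, uint64_t dma_addr)
{
	struct tf_bulk_get_tbl_entry_parms bparms;

	memset(&bparms, 0, sizeof(bparms));	/* zero all fields */
	bparms.dir = TF_DIR_RX;
	bparms.type = TF_TBL_TYPE_ACT_STATS_64;	/* 64b counter table */
	bparms.starting_idx = start_idx;
	bparms.num_entries = 64;
	bparms.entry_sz_in_bytes = sizeof(uint64_t);
	bparms.physical_mem_addr = dma_addr;	/* room for 64 * 8 bytes */

	return tf_bulk_get_tbl_entry(tfp, &bparms);
}

Internally this resolves to dev->ops->tf_dev_get_bulk_tbl(), which the P4
device binds to tf_tbl_bulk_get().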
---
 drivers/net/bnxt/hcapi/cfa_p40_hw.h       |    2 +-
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h      |    2 +-
 drivers/net/bnxt/hcapi/hcapi_cfa.h        |    2 +
 drivers/net/bnxt/meson.build              |    3 +-
 drivers/net/bnxt/tf_core/Makefile         |    2 -
 drivers/net/bnxt/tf_core/tf_common.h      |   55 +-
 drivers/net/bnxt/tf_core/tf_core.c        |   86 +-
 drivers/net/bnxt/tf_core/tf_device.h      |   24 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c   |    4 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h   |    5 +-
 drivers/net/bnxt/tf_core/tf_em.h          |   88 +-
 drivers/net/bnxt/tf_core/tf_em_common.c   |   29 +-
 drivers/net/bnxt/tf_core/tf_em_internal.c |   59 +-
 drivers/net/bnxt/tf_core/tf_identifier.c  |   14 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |   31 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |    8 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  529 -----
 drivers/net/bnxt/tf_core/tf_rm.c          | 3695 ++++++-----------------------
 drivers/net/bnxt/tf_core/tf_rm.h          |  539 +++--
 drivers/net/bnxt/tf_core/tf_rm_new.c      |  907 -------
 drivers/net/bnxt/tf_core/tf_rm_new.h      |  446 ----
 drivers/net/bnxt/tf_core/tf_session.h     |  214 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  478 +++-
 drivers/net/bnxt/tf_core/tf_tbl.h         |  436 +++-
 drivers/net/bnxt/tf_core/tf_tbl_type.c    |  342 ---
 drivers/net/bnxt/tf_core/tf_tbl_type.h    |  318 ---
 drivers/net/bnxt/tf_core/tf_tcam.c        |   15 +-
 27 files changed, 2088 insertions(+), 6245 deletions(-)
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_hw.h b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
index 1c51da8..efaf607 100644
--- a/drivers/net/bnxt/hcapi/cfa_p40_hw.h
+++ b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
@@ -5,7 +5,7 @@
 /*
  * Name:  cfa_p40_hw.h
  *
- * Description: header for SWE based on TFLIB2.0
+ * Description: header for SWE based on Truflow
  *
  * Date:  taken from 12/16/19 17:18:12
  *
diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
index 4238561..c30e4f4 100644
--- a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -5,7 +5,7 @@
 /*
  * Name:  cfa_p40_tbl.h
  *
- * Description: header for SWE based on TFLIB2.0
+ * Description: header for SWE based on Truflow
  *
  * Date:  12/16/19 17:18:12
  *
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index a27c749..d2a494e 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -244,6 +244,8 @@ int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
 			       struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
+			     struct hcapi_cfa_data *mirror);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 35038dc..7f3ec62 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -41,10 +41,9 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
 	'tf_core/tf_shadow_tcam.c',
-	'tf_core/tf_tbl_type.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm_new.c',
+	'tf_core/tf_rm.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 6ae5c34..b31ed60 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -23,10 +23,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
 
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index ec3bca8..b982203 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -6,52 +6,11 @@
 #ifndef _TF_COMMON_H_
 #define _TF_COMMON_H_
 
-/* Helper to check the parms */
-#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "%s: session error\n", \
-				    tf_dir_2_str((parms)->dir)); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_TFP_SESSION(tfp) do { \
-		if ((tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
+/* Helpers to performs parameter check */
 
+/**
+ * Checks 1 parameter against NULL.
+ */
 #define TF_CHECK_PARMS1(parms) do {					\
 		if ((parms) == NULL) {					\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -59,6 +18,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 2 parameters against NULL.
+ */
 #define TF_CHECK_PARMS2(parms1, parms2) do {				\
 		if ((parms1) == NULL || (parms2) == NULL) {		\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -66,6 +28,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 3 parameters against NULL.
+ */
 #define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
 		if ((parms1) == NULL ||					\
 		    (parms2) == NULL ||					\
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8b3e15c..8727900 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -186,7 +186,7 @@ int tf_insert_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -241,7 +241,7 @@ int tf_delete_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -523,7 +523,7 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 	return -EOPNOTSUPP;
 }
 
@@ -821,7 +821,80 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
+int
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_bulk_parms bparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&bparms, 0, sizeof(struct tf_tbl_get_bulk_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+
+		return rc;
+	}
+
+	/* Internal table type processing */
+
+	if (dev->ops->tf_dev_get_bulk_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	bparms.dir = parms->dir;
+	bparms.type = parms->type;
+	bparms.starting_idx = parms->starting_idx;
+	bparms.num_entries = parms->num_entries;
+	bparms.entry_sz_in_bytes = parms->entry_sz_in_bytes;
+	bparms.physical_mem_addr = parms->physical_mem_addr;
+	rc = dev->ops->tf_dev_get_bulk_tbl(tfp, &bparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get bulk failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_tbl_scope(struct tf *tfp,
 		   struct tf_alloc_tbl_scope_parms *parms)
@@ -830,7 +903,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -861,7 +934,6 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
 int
 tf_free_tbl_scope(struct tf *tfp,
 		  struct tf_free_tbl_scope_parms *parms)
@@ -870,7 +942,7 @@ tf_free_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 2712d10..93f3627 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -8,7 +8,7 @@
 
 #include "tf_core.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -293,7 +293,27 @@ struct tf_dev_ops {
 	 *   - (-EINVAL) on failure.
 	 */
 	int (*tf_dev_get_tbl)(struct tf *tfp,
-			       struct tf_tbl_get_parms *parms);
+			      struct tf_tbl_get_parms *parms);
+
+	/**
+	 * Retrieves the specified table type element using 'bulk'
+	 * mechanism.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get bulk parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_bulk_tbl)(struct tf *tfp,
+				   struct tf_tbl_get_bulk_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 127c655..e352667 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -8,7 +8,7 @@
 
 #include "tf_device.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
 
@@ -88,6 +88,7 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
+	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
 	.tf_dev_free_tcam = NULL,
 	.tf_dev_alloc_search_tcam = NULL,
@@ -114,6 +115,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
+	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = NULL,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index da6dd65..473e4ea 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -9,7 +9,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_core.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -41,8 +41,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index cf799c2..7042f44 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -23,6 +23,56 @@
 #define TF_EM_MAX_MASK 0x7FFF
 #define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
 
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Other page sizes are rounded down to the nearest lower supported
+ * hardware page size.
+ */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define PAGE_SIZE TF_EM_PAGE_SIZE_2M
+
+#if (PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+
 /*
  * Used to build GFID:
  *
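/* With the default PAGE_SIZE selection of TF_EM_PAGE_SIZE_2M above, the
 * chain resolves to TF_EM_PAGE_SHIFT = 21, TF_EM_PAGE_SIZE = 2 MB and a
 * 2 MB TF_EM_PAGE_ALIGNMENT. Selecting another supported size is a
 * one-line edit of this header, for example (hypothetical):
 *
 *   #define PAGE_SIZE TF_EM_PAGE_SIZE_64K   // TF_EM_PAGE_SIZE becomes 64 KB
 */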
@@ -80,13 +130,43 @@ struct tf_em_cfg_parms {
 };
 
 /**
- * @page table Table
+ * @page em EM
  *
  * @ref tf_alloc_eem_tbl_scope
  *
  * @ref tf_free_eem_tbl_scope_cb
  *
- * @ref tbl_scope_cb_find
+ * @ref tf_em_insert_int_entry
+ *
+ * @ref tf_em_delete_int_entry
+ *
+ * @ref tf_em_insert_ext_entry
+ *
+ * @ref tf_em_delete_ext_entry
+ *
+ * @ref tf_em_insert_ext_sys_entry
+ *
+ * @ref tf_em_delete_ext_sys_entry
+ *
+ * @ref tf_em_int_bind
+ *
+ * @ref tf_em_int_unbind
+ *
+ * @ref tf_em_ext_common_bind
+ *
+ * @ref tf_em_ext_common_unbind
+ *
+ * @ref tf_em_ext_host_alloc
+ *
+ * @ref tf_em_ext_host_free
+ *
+ * @ref tf_em_ext_system_alloc
+ *
+ * @ref tf_em_ext_system_free
+ *
+ * @ref tf_em_ext_common_free
+ *
+ * @ref tf_em_ext_common_alloc
  */
 
 /**
@@ -328,7 +408,7 @@ int tf_em_ext_host_free(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+			   struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using system memory
@@ -344,7 +424,7 @@ int tf_em_ext_system_alloc(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
+			  struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index ba6aa7a..d0d80da 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -194,12 +194,13 @@ tf_em_ext_common_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Ext DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -210,19 +211,29 @@ tf_em_ext_common_bind(struct tf *tfp,
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
 		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Check if we got any request to support EEM, if so
+		 * we build an EM Ext DB holding Table Scopes.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_TBL_SCOPE] == 0)
+			continue;
+
 		db_cfg.rm_db = &eem_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
-				    "%s: EM DB creation failed\n",
+				    "%s: EM Ext DB creation failed\n",
 				    tf_dir_2_str(i));
 
 			return rc;
 		}
+		db_exists = 1;
 	}
 
-	mem_type = parms->mem_type;
-	init = 1;
+	if (db_exists) {
+		mem_type = parms->mem_type;
+		init = 1;
+	}
 
 	return 0;
 }
@@ -236,13 +247,11 @@ tf_em_ext_common_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Ext DBs created\n");
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 9be91ad..1c51474 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -225,12 +225,13 @@ tf_em_int_bind(struct tf *tfp,
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 	struct tf_session *session;
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Int DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -242,31 +243,35 @@ tf_em_int_bind(struct tf *tfp,
 				  TF_SESSION_EM_POOL_SIZE);
 	}
 
-	/*
-	 * I'm not sure that this code is needed.
-	 * leaving for now until resolved
-	 */
-	if (parms->num_elements) {
-		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
-
-		for (i = 0; i < TF_DIR_MAX; i++) {
-			db_cfg.dir = i;
-			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
-			db_cfg.rm_db = &em_db[i];
-			rc = tf_rm_create_db(tfp, &db_cfg);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: EM DB creation failed\n",
-					    tf_dir_2_str(i));
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
-				return rc;
-			}
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Check if we got any request for internal EM records,
+		 * if so we build an EM Int DB.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
+			continue;
+
+		db_cfg.rm_db = &em_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM Int DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
 		}
+		db_exists = 1;
 	}
 
-	init = 1;
+	if (db_exists)
+		init = 1;
+
 	return 0;
 }
 
@@ -280,13 +285,11 @@ tf_em_int_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Int DBs created\n");
+		return 0;
 	}
 
 	session = (struct tf_session *)tfp->session->core_data;
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index b197bb2..2113710 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -7,7 +7,7 @@
 
 #include "tf_identifier.h"
 #include "tf_common.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_util.h"
 #include "tfp.h"
 
@@ -35,7 +35,7 @@ tf_ident_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "Identifier DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -65,7 +65,7 @@ tf_ident_bind(struct tf *tfp,
 }
 
 int
-tf_ident_unbind(struct tf *tfp __rte_unused)
+tf_ident_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
@@ -73,13 +73,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
+		TFP_DRV_LOG(INFO,
 			    "No Identifier DBs created\n");
-		return -EINVAL;
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index d8b80bc..02d8a49 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -871,26 +871,41 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *params)
+			  enum tf_dir dir,
+			  uint16_t hcapi_type,
+			  uint32_t starting_idx,
+			  uint16_t num_entries,
+			  uint16_t entry_sz_in_bytes,
+			  uint64_t physical_mem_addr)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
 	int data_size = 0;
 
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(params->dir);
-	req.type = tfp_cpu_to_le_32(params->type);
-	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
-	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
+	req.start_index = tfp_cpu_to_le_32(starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
 
-	data_size = params->num_entries * params->entry_sz_in_bytes;
+	data_size = num_entries * entry_sz_in_bytes;
 
-	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+	req.host_addr = tfp_cpu_to_le_64(physical_mem_addr);
 
 	MSG_PREP(parms,
 		 TF_KONG_MB,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8e276d4..7432873 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -11,7 +11,6 @@
 
 #include "tf_tbl.h"
 #include "tf_rm.h"
-#include "tf_rm_new.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -422,6 +421,11 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms);
+			      enum tf_dir dir,
+			      uint16_t hcapi_type,
+			      uint32_t starting_idx,
+			      uint16_t num_entries,
+			      uint16_t entry_sz_in_bytes,
+			      uint64_t physical_mem_addr);
 
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index b7b4451..4688514 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,535 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
-/* Common HW resources for all chip variants */
-#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
-					    * entries
-					    */
-#define TF_NUM_PROF_FUNC          128      /* < Number prof_func ID */
-#define TF_NUM_PROF_TCAM         1024      /* < Number entries in profile
-					    * TCAM
-					    */
-#define TF_NUM_EM_PROF_ID          64      /* < Number software EM Profile
-					    * IDs
-					    */
-#define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
-#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
-#define TF_NUM_METER             1024      /* < Number of meter instances */
-#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
-#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
-
-/* Wh+/SR specific HW resources */
-#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
-					    * entries
-					    */
-
-/* SR/SR2 specific HW resources */
-#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
-
-
-/* Thor, SR2 common HW resources */
-#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
-					    * templates
-					    */
-
-/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
-#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
-#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
-#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
-#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
-					    * States
-					    */
-#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
-#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
-#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
-
-/*
- * Common for the Reserved Resource defines below:
- *
- * - HW Resources
- *   For resources where a priority level plays a role, i.e. l2 ctx
- *   tcam entries, both a number of resources and a begin/end pair is
- *   required. The begin/end is used to assure TFLIB gets the correct
- *   priority setting for that resource.
- *
- *   For EM records there is no priority required thus a number of
- *   resources is sufficient.
- *
- *   Example, TCAM:
- *     64 L2 CTXT TCAM entries would in a max 1024 pool be entry
- *     0-63 as HW presents 0 as the highest priority entry.
- *
- * - SRAM Resources
- *   Handled as regular resources as there is no priority required.
- *
- * Common for these resources is that they are handled per direction,
- * rx/tx.
- */
-
-/* HW Resources */
-
-/* L2 CTX */
-#define TF_RSVD_L2_CTXT_TCAM_RX                   64
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_RX - 1)
-#define TF_RSVD_L2_CTXT_TCAM_TX                   960
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TX - 1)
-
-/* Profiler */
-#define TF_RSVD_PROF_FUNC_RX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
-#define TF_RSVD_PROF_FUNC_TX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
-
-#define TF_RSVD_PROF_TCAM_RX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
-#define TF_RSVD_PROF_TCAM_TX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
-
-/* EM Profiles IDs */
-#define TF_RSVD_EM_PROF_ID_RX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ then SR */
-#define TF_RSVD_EM_PROF_ID_TX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ then SR */
-
-/* EM Records */
-#define TF_RSVD_EM_REC_RX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
-#define TF_RSVD_EM_REC_TX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
-
-/* Wildcard */
-#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
-#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
-
-#define TF_RSVD_WC_TCAM_RX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_WC_TCAM_END_IDX_RX                63
-#define TF_RSVD_WC_TCAM_TX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_WC_TCAM_END_IDX_TX                63
-
-#define TF_RSVD_METER_PROF_RX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_PROF_END_IDX_RX             0
-#define TF_RSVD_METER_PROF_TX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_PROF_END_IDX_TX             0
-
-#define TF_RSVD_METER_INST_RX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_INST_END_IDX_RX             0
-#define TF_RSVD_METER_INST_TX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_INST_END_IDX_TX             0
-
-/* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
-#define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
-#define TF_RSVD_MIRROR_END_IDX_TX                 0
-
-/* UPAR */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_UPAR_RX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
-#define TF_RSVD_UPAR_END_IDX_RX                   0
-#define TF_RSVD_UPAR_TX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
-#define TF_RSVD_UPAR_END_IDX_TX                   0
-
-/* Source Properties */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SP_TCAM_RX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_SP_TCAM_END_IDX_RX                0
-#define TF_RSVD_SP_TCAM_TX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_SP_TCAM_END_IDX_TX                0
-
-/* L2 Func */
-#define TF_RSVD_L2_FUNC_RX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
-#define TF_RSVD_L2_FUNC_END_IDX_RX                0
-#define TF_RSVD_L2_FUNC_TX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
-#define TF_RSVD_L2_FUNC_END_IDX_TX                0
-
-/* FKB */
-#define TF_RSVD_FKB_RX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
-#define TF_RSVD_FKB_END_IDX_RX                    0
-#define TF_RSVD_FKB_TX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
-#define TF_RSVD_FKB_END_IDX_TX                    0
-
-/* TBL Scope */
-#define TF_RSVD_TBL_SCOPE_RX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
-#define TF_RSVD_TBL_SCOPE_TX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
-
-/* EPOCH0 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH0_RX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH0_END_IDX_RX                 0
-#define TF_RSVD_EPOCH0_TX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH0_END_IDX_TX                 0
-
-/* EPOCH1 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH1_RX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH1_END_IDX_RX                 0
-#define TF_RSVD_EPOCH1_TX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH1_END_IDX_TX                 0
-
-/* METADATA */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_METADATA_RX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
-#define TF_RSVD_METADATA_END_IDX_RX               0
-#define TF_RSVD_METADATA_TX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
-#define TF_RSVD_METADATA_END_IDX_TX               0
-
-/* CT_STATE */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_CT_STATE_RX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
-#define TF_RSVD_CT_STATE_END_IDX_RX               0
-#define TF_RSVD_CT_STATE_TX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
-#define TF_RSVD_CT_STATE_END_IDX_TX               0
-
-/* RANGE_PROF */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_PROF_RX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
-#define TF_RSVD_RANGE_PROF_TX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
-
-/* RANGE_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_ENTRY_RX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
-#define TF_RSVD_RANGE_ENTRY_TX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
-
-/* LAG_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_LAG_ENTRY_RX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
-#define TF_RSVD_LAG_ENTRY_TX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
-
-
-/* SRAM - Resources
- * Limited to the types that CFA provides.
- */
-#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
-
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SRAM_MCG_RX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
-/* Multicast Group on TX is not supported */
-#define TF_RSVD_SRAM_MCG_TX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
-
-/* First encap of 8B RX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
-/* First encap of 8B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
-
-#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
-/* First encap of 16B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
-
-/* Encap of 64B on RX is not supported */
-#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
-/* First encap of 64B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_SP_SMAC_RX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
-#define TF_RSVD_SRAM_SP_SMAC_TX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
-
-/* SRAM SP IPV4 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
-
-/* SRAM SP IPV6 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
-/* Not yet supported fully in infra */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
-
-#define TF_RSVD_SRAM_COUNTER_64B_RX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_COUNTER_64B_TX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
-
-#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
-
-#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
-
-/* HW Resource Pool names */
-
-#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
-#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
-#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
-
-#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
-#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
-#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
-
-#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
-#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
-#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
-
-#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
-#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
-#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
-
-#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
-
-#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
-#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
-#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
-
-#define TF_METER_PROF_POOL_NAME           meter_prof_pool
-#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
-#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
-
-#define TF_METER_INST_POOL_NAME           meter_inst_pool
-#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
-#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
-
-#define TF_MIRROR_POOL_NAME               mirror_pool
-#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
-#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
-
-#define TF_UPAR_POOL_NAME                 upar_pool
-#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
-#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
-
-#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
-#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
-#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
-
-#define TF_FKB_POOL_NAME                  fkb_pool
-#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
-#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
-
-#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
-#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
-#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
-
-#define TF_L2_FUNC_POOL_NAME              l2_func_pool
-#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
-#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
-
-#define TF_EPOCH0_POOL_NAME               epoch0_pool
-#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
-#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
-
-#define TF_EPOCH1_POOL_NAME               epoch1_pool
-#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
-#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
-
-#define TF_METADATA_POOL_NAME             metadata_pool
-#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
-#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
-
-#define TF_CT_STATE_POOL_NAME             ct_state_pool
-#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
-#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
-
-#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
-#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
-#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
-
-#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
-#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
-#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
-
-#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
-#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
-#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
-
-/* SRAM Resource Pool names */
-#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
-#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
-#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
-
-#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
-#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
-#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
-
-#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
-#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
-#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
-
-#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
-#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
-#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
-
-#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
-#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
-#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
-
-#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
-#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
-#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
-
-#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
-#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
-#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
-
-#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
-#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
-#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
-
-#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
-#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
-#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
-
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
-
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
-
-/* Sw Resource Pool Names */
-
-#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
-#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
-#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
-
-
-/** HW Resource types
- */
-enum tf_resource_type_hw {
-	/* Common HW resources for all chip variants */
-	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-	TF_RESC_TYPE_HW_PROF_FUNC,
-	TF_RESC_TYPE_HW_PROF_TCAM,
-	TF_RESC_TYPE_HW_EM_PROF_ID,
-	TF_RESC_TYPE_HW_EM_REC,
-	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-	TF_RESC_TYPE_HW_WC_TCAM,
-	TF_RESC_TYPE_HW_METER_PROF,
-	TF_RESC_TYPE_HW_METER_INST,
-	TF_RESC_TYPE_HW_MIRROR,
-	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/SR specific HW resources */
-	TF_RESC_TYPE_HW_SP_TCAM,
-	/* SR/SR2 specific HW resources */
-	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Thor, SR2 common HW resources */
-	TF_RESC_TYPE_HW_FKB,
-	/* SR2 specific HW resources */
-	TF_RESC_TYPE_HW_TBL_SCOPE,
-	TF_RESC_TYPE_HW_EPOCH0,
-	TF_RESC_TYPE_HW_EPOCH1,
-	TF_RESC_TYPE_HW_METADATA,
-	TF_RESC_TYPE_HW_CT_STATE,
-	TF_RESC_TYPE_HW_RANGE_PROF,
-	TF_RESC_TYPE_HW_RANGE_ENTRY,
-	TF_RESC_TYPE_HW_LAG_ENTRY,
-	TF_RESC_TYPE_HW_MAX
-};
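
For illustration only (not part of the patch), a short sketch of how the
removed enum above is used by the reservation helpers deleted later in
this patch: the enum value doubles as the index into the per-direction
hw_entry[] arrays of the session resource info.

	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;

	/* Start and stride of the range firmware carved out for this type */
	uint32_t start  = tfs->resc.rx.hw_entry[index].start;
	uint32_t stride = tfs->resc.rx.hw_entry[index].stride;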
-
-/** HW Resource types
- */
-enum tf_resource_type_sram {
-	TF_RESC_TYPE_SRAM_FULL_ACTION,
-	TF_RESC_TYPE_SRAM_MCG,
-	TF_RESC_TYPE_SRAM_ENCAP_8B,
-	TF_RESC_TYPE_SRAM_ENCAP_16B,
-	TF_RESC_TYPE_SRAM_ENCAP_64B,
-	TF_RESC_TYPE_SRAM_SP_SMAC,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-	TF_RESC_TYPE_SRAM_COUNTER_64B,
-	TF_RESC_TYPE_SRAM_NAT_SPORT,
-	TF_RESC_TYPE_SRAM_NAT_DPORT,
-	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-	TF_RESC_TYPE_SRAM_MAX
-};
 
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0a84e6..e0469b6 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -7,3171 +7,916 @@
 
 #include <rte_common.h>
 
+#include <cfa_resource_types.h>
+
 #include "tf_rm.h"
-#include "tf_core.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
-#include "tf_resources.h"
-#include "tf_msg.h"
-#include "bnxt.h"
+#include "tf_device.h"
 #include "tfp.h"
+#include "tf_msg.h"
 
 /**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
- *   enum tf_dir               dir         - Direction to process
- *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
- *                                           in the hw query structure
- *   define                    def_value   - Define value to check against
- *   uint32_t                 *eflag       - Result of the check
- */
-#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
-	if ((dir) == TF_DIR_RX) {					      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	} else {							      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	}								      \
-} while (0)
-
-/**
- * Internal macro to perform SRAM resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
- *   enum tf_dir                dir         - Direction to process
- *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
- *                                            in the sram query structure
- *   define                     def_value   - Define value to check against
- *   uint32_t                  *eflag       - Result of the check
- */
-#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
-	if ((dir) == TF_DIR_RX) {					       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	} else {							       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	}								       \
-} while (0)
-
-/**
- * Internal macro to convert a reserved resource define name to be
- * direction specific.
- *
- * Parameters:
- *   enum tf_dir    dir         - Direction to process
- *   string         type        - Type name to append RX or TX to
- *   string         dtype       - Direction specific type
- *
- *
+ * Generic RM Element data type that an RM DB is built upon.
  */
-#define TF_RESC_RSVD(dir, type, dtype) do {	\
-		if ((dir) == TF_DIR_RX)		\
-			(dtype) = type ## _RX;	\
-		else				\
-			(dtype) = type ## _TX;	\
-	} while (0)
-
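
A usage sketch for the deleted TF_RESC_RSVD() helper above (illustration
only): the macro simply appends the direction suffix to the reserved
define and assigns the result.

	uint32_t value;

	/* Expands to: value = TF_RSVD_PROF_FUNC_RX; */
	TF_RESC_RSVD(TF_DIR_RX, TF_RSVD_PROF_FUNC, value);

	/* Expands to: value = TF_RSVD_PROF_FUNC_TX; */
	TF_RESC_RSVD(TF_DIR_TX, TF_RSVD_PROF_FUNC, value);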
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
-{
-	switch (hw_type) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		return "L2 ctxt tcam";
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		return "Profile Func";
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		return "Profile tcam";
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		return "EM profile id";
-	case TF_RESC_TYPE_HW_EM_REC:
-		return "EM record";
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		return "WC tcam profile id";
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		return "WC tcam";
-	case TF_RESC_TYPE_HW_METER_PROF:
-		return "Meter profile";
-	case TF_RESC_TYPE_HW_METER_INST:
-		return "Meter instance";
-	case TF_RESC_TYPE_HW_MIRROR:
-		return "Mirror";
-	case TF_RESC_TYPE_HW_UPAR:
-		return "UPAR";
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		return "Source properties tcam";
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		return "L2 Function";
-	case TF_RESC_TYPE_HW_FKB:
-		return "FKB";
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		return "Table scope";
-	case TF_RESC_TYPE_HW_EPOCH0:
-		return "EPOCH0";
-	case TF_RESC_TYPE_HW_EPOCH1:
-		return "EPOCH1";
-	case TF_RESC_TYPE_HW_METADATA:
-		return "Metadata";
-	case TF_RESC_TYPE_HW_CT_STATE:
-		return "Connection tracking state";
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		return "Range profile";
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		return "Range entry";
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		return "LAG";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
-{
-	switch (sram_type) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		return "Full action";
-	case TF_RESC_TYPE_SRAM_MCG:
-		return "MCG";
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		return "Encap 8B";
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		return "Encap 16B";
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		return "Encap 64B";
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		return "Source properties SMAC";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		return "Source properties SMAC IPv4";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		return "Source properties IPv6";
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		return "Counter 64B";
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		return "NAT source port";
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		return "NAT destination port";
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		return "NAT source IPv4";
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		return "NAT destination IPv4";
-	default:
-		return "Invalid identifier";
-	}
-}
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
 
-/**
- * Helper function to perform a HW HCAPI resource type lookup against
- * the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
- */
-static int
-tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
-{
-	uint32_t value = -EOPNOTSUPP;
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
 
-	switch (index) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_REC:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_INST:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
-		break;
-	case TF_RESC_TYPE_HW_MIRROR:
-		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
-		break;
-	case TF_RESC_TYPE_HW_UPAR:
-		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
-		break;
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_FKB:
-		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
-		break;
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH0:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH1:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
-		break;
-	case TF_RESC_TYPE_HW_METADATA:
-		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
-		break;
-	case TF_RESC_TYPE_HW_CT_STATE:
-		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
-		break;
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
-		break;
-	default:
-		break;
-	}
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
 
-	return value;
-}
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
 
 /**
- * Helper function to perform a SRAM HCAPI resource type lookup
- * against the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
+ * TF RM DB definition
  */
-static int
-tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
-{
-	uint32_t value = -EOPNOTSUPP;
-
-	switch (index) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
-		break;
-	case TF_RESC_TYPE_SRAM_MCG:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
-		break;
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
-		break;
-	default:
-		break;
-	}
-
-	return value;
-}
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
 
-/**
- * Helper function to print all the HW resource qcaps errors reported
- * in the error_flag.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] error_flag
- *   Pointer to the hw error flags created at time of the query check
- */
-static void
-tf_rm_print_hw_qcaps_error(enum tf_dir dir,
-			   struct tf_rm_hw_query *hw_query,
-			   uint32_t *error_flag)
-{
-	int i;
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
 
-	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
 
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_hw_2_str(i),
-				    hw_query->hw_query[i].max,
-				    tf_rm_rsvd_hw_value(dir, i));
-	}
-}
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
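
A minimal lookup sketch over the two new structures above (illustration
only; the function name is hypothetical and TF_RM_ELEM_CFG_NULL is
assumed from the cfg_type comment): return the HCAPI type recorded for a
DB element, or an error if the element is not valid for the device.

static int
tf_rm_db_hcapi_type(struct tf_rm_new_db *rm_db,
		    uint16_t db_index,
		    uint16_t *hcapi_type)
{
	/* Out of range or not valid for this device */
	if (db_index >= rm_db->num_entries ||
	    rm_db->db[db_index].cfg_type == TF_RM_ELEM_CFG_NULL)
		return -EOPNOTSUPP;

	*hcapi_type = rm_db->db[db_index].hcapi_type;
	return 0;
}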
 
 /**
- * Helper function to print all the SRAM resource qcaps errors
- * reported in the error_flag.
+ * Adjust an index according to the allocation information.
  *
- * [in] dir
- *   Receive or transmit direction
+ * All resources are controlled in a 0 based pool. Some resources, by
+ * design, are not 0 based, e.g. Full Action Records (SRAM); thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] error_flag
- *   Pointer to the sram error flags created at time of the query check
- */
-static void
-tf_rm_print_sram_qcaps_error(enum tf_dir dir,
-			     struct tf_rm_sram_query *sram_query,
-			     uint32_t *error_flag)
-{
-	int i;
-
-	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_sram_2_str(i),
-				    sram_query->sram_query[i].max,
-				    tf_rm_rsvd_sram_value(dir, i));
-	}
-}
-
-/**
- * Performs a HW resource check between what firmware capability
- * reports and what the core expects is available.
+ * [in] cfg
+ *   Pointer to the DB configuration
  *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
  *
- * [in] query
- *   Pointer to HW Query data structure. Query holds what the firmware
- *   offers of the HW resources.
+ * [in] count
+ *   Number of DB configuration elements
  *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
  *
  * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
-			    enum tf_dir dir,
-			    uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-			     TF_RSVD_L2_CTXT_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_FUNC,
-			     TF_RSVD_PROF_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_TCAM,
-			     TF_RSVD_PROF_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_PROF_ID,
-			     TF_RSVD_EM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_REC,
-			     TF_RSVD_EM_REC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-			     TF_RSVD_WC_TCAM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM,
-			     TF_RSVD_WC_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_PROF,
-			     TF_RSVD_METER_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_INST,
-			     TF_RSVD_METER_INST,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_MIRROR,
-			     TF_RSVD_MIRROR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_UPAR,
-			     TF_RSVD_UPAR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_SP_TCAM,
-			     TF_RSVD_SP_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_FUNC,
-			     TF_RSVD_L2_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_FKB,
-			     TF_RSVD_FKB,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_TBL_SCOPE,
-			     TF_RSVD_TBL_SCOPE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH0,
-			     TF_RSVD_EPOCH0,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH1,
-			     TF_RSVD_EPOCH1,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METADATA,
-			     TF_RSVD_METADATA,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_CT_STATE,
-			     TF_RSVD_CT_STATE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_PROF,
-			     TF_RSVD_RANGE_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_ENTRY,
-			     TF_RSVD_RANGE_ENTRY,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_LAG_ENTRY,
-			     TF_RSVD_LAG_ENTRY,
-			     error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Performs a SRAM resource check between what firmware capability
- * reports and what the core expects is available.
- *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
- *
- * [in] query
- *   Pointer to SRAM Query data structure. Query holds what the
- *   firmware offers of the SRAM resources.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
- *
- * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
-			      enum tf_dir dir,
-			      uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_FULL_ACTION,
-			       TF_RSVD_SRAM_FULL_ACTION,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_MCG,
-			       TF_RSVD_SRAM_MCG,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_8B,
-			       TF_RSVD_SRAM_ENCAP_8B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_16B,
-			       TF_RSVD_SRAM_ENCAP_16B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_64B,
-			       TF_RSVD_SRAM_ENCAP_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC,
-			       TF_RSVD_SRAM_SP_SMAC,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			       TF_RSVD_SRAM_SP_SMAC_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			       TF_RSVD_SRAM_SP_SMAC_IPV6,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_COUNTER_64B,
-			       TF_RSVD_SRAM_COUNTER_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_SPORT,
-			       TF_RSVD_SRAM_NAT_SPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_DPORT,
-			       TF_RSVD_SRAM_NAT_DPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-			       TF_RSVD_SRAM_NAT_S_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-			       TF_RSVD_SRAM_NAT_D_IPV4,
-			       error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Internal function to mark pool entries used.
+ *   0           - Success
+ *   -EOPNOTSUPP - Operation not supported
  */
 static void
-tf_rm_reserve_range(uint32_t count,
-		    uint32_t rsv_begin,
-		    uint32_t rsv_end,
-		    uint32_t max,
-		    struct bitalloc *pool)
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
 {
-	uint32_t i;
+	int i;
+	uint16_t cnt = 0;
 
-	/* If no resources have been requested we mark everything
-	 * 'used'
-	 */
-	if (count == 0)	{
-		for (i = 0; i < max; i++)
-			ba_alloc_index(pool, i);
-	} else {
-		/* Support 2 main modes
-		 * Reserved range starts from bottom up (with
-		 * pre-reserved value or not)
-		 * - begin = 0 to end xx
-		 * - begin = 1 to end xx
-		 *
-		 * Reserved range starts from top down
-		 * - begin = yy to end max
-		 */
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
 
-		/* Bottom up check, start from 0 */
-		if (rsv_begin == 0) {
-			for (i = rsv_end + 1; i < max; i++)
-				ba_alloc_index(pool, i);
-		}
-
-		/* Bottom up check, start from 1 or higher OR
-		 * Top Down
+		/* Only log a msg if a type is requested for reservation
+		 * but not supported. The EM module is ignored as it uses
+		 * a split configuration array and would therefore fail
+		 * this type of check.
 		 */
-		if (rsv_begin >= 1) {
-			/* Allocate from 0 until start */
-			for (i = 0; i < rsv_begin; i++)
-				ba_alloc_index(pool, i);
-
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
-}
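
A worked example of the "bottom up, begin = 0" case handled by the
deleted helper above (illustration only): with max = 16, rsv_begin = 0
and rsv_end = 3, every index outside the reserved window is marked used.

	uint32_t i;
	uint32_t max = 16, rsv_end = 3;

	/* Marks indices 4..15 used; 0..3 stay free for TruFlow */
	for (i = rsv_end + 1; i < max; i++)
		ba_alloc_index(pool, i);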
-
-/**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
- */
-static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
-	uint32_t end = 0;
-
-	/* l2 ctxt rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
-
-	/* l2 ctxt tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the profile tcam and profile func
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
-	uint32_t end = 0;
-
-	/* profile func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
-
-	/* profile func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_PROF_TCAM;
-
-	/* profile tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
-
-	/* profile tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the em profile id allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_em_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
-	uint32_t end = 0;
-
-	/* em prof id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
-
-	/* em prof id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the wildcard tcam and profile id
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_wc(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
-	uint32_t end = 0;
-
-	/* wc profile id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
-
-	/* wc profile id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_WC_TCAM;
-
-	/* wc tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_RX);
-
-	/* wc tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the meter resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_meter(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
-	uint32_t end = 0;
-
-	/* meter profiles rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_RX);
-
-	/* meter profiles tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_METER_INST;
-
-	/* meter rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_RX);
-
-	/* meter tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the mirror resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_mirror(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
-	uint32_t end = 0;
-
-	/* mirror rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_RX);
-
-	/* mirror tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the upar resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_upar(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_UPAR;
-	uint32_t end = 0;
-
-	/* upar rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_RX);
-
-	/* upar tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp tcam resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
-	uint32_t end = 0;
-
-	/* sp tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_RX);
-
-	/* sp tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 func resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
-	uint32_t end = 0;
-
-	/* l2 func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
-
-	/* l2 func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the fkb resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_fkb(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_FKB;
-	uint32_t end = 0;
-
-	/* fkb rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_RX);
-
-	/* fkb tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the tbl scope resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
-	uint32_t end = 0;
-
-	/* tbl scope rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
-
-	/* tbl scope tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 epoch resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_epoch(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
-	uint32_t end = 0;
-
-	/* epoch0 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_RX);
-
-	/* epoch0 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_EPOCH1;
-
-	/* epoch1 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_RX);
-
-	/* epoch1 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the metadata resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_metadata(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METADATA;
-	uint32_t end = 0;
-
-	/* metadata rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_RX);
-
-	/* metadata tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the ct state resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_ct_state(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
-	uint32_t end = 0;
-
-	/* ct state rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_RX);
-
-	/* ct state tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the range resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_range(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
-	uint32_t end = 0;
-
-	/* range profile rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
-
-	/* range profile tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
-
-	/* range entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
-
-	/* range entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the lag resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_lag_entry(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
-	uint32_t end = 0;
-
-	/* lag entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
-
-	/* lag entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the full action resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
-	uint16_t end = 0;
-
-	/* full action rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_RX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
-
-	/* full action tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_TX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the multicast group resources
- * allocated that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
-	uint16_t end = 0;
-
-	/* multicast group rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_MCG_RX,
-			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
-
-	/* Multicast Group on TX is not supported */
-}
-
-/**
- * Internal function to mark all the encap resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_encap(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
-	uint16_t end = 0;
-
-	/* encap 8b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_RX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
-
-	/* encap 8b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_TX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
-
-	/* encap 16b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_RX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
-
-	/* encap 16b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_TX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
-
-	/* Encap 64B not supported on RX */
-
-	/* Encap 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_64B_TX,
-			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_sp(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
-	uint16_t end = 0;
-
-	/* sp smac rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_RX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
-
-	/* sp smac tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_TX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-
-	/* SP SMAC IPv4 not supported on RX */
-
-	/* sp smac ipv4 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-
-	/* SP SMAC IPv6 not supported on RX */
-
-	/* sp smac ipv6 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the stat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_stats(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
-	uint16_t end = 0;
-
-	/* counter 64b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_RX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
-
-	/* counter 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_TX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the nat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_nat(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
-	uint16_t end = 0;
-
-	/* nat source port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_RX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
-
-	/* nat source port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_TX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
-
-	/* nat destination port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_RX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
-
-	/* nat destination port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_TX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-
-	/* nat source port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
-
-	/* nat source ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-
-	/* nat destination port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
-
-	/* nat destination ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
-}
-
-/**
- * Internal function used to validate the HW allocated resources
- * against the requested values.
- */
-static int
-tf_rm_hw_alloc_validate(enum tf_dir dir,
-			struct tf_rm_hw_alloc *hw_alloc,
-			struct tf_rm_entry *hw_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				hw_alloc->hw_num[i],
-				hw_entry[i].stride);
-			error = -1;
-		}
-	}
-
-	return error;
-}
-
-/**
- * Internal function used to validate the SRAM allocated resources
- * against the requested values.
- */
-static int
-tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
-			  struct tf_rm_sram_alloc *sram_alloc,
-			  struct tf_rm_entry *sram_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				sram_alloc->sram_num[i],
-				sram_entry[i].stride);
-			error = -1;
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+
 		}
 	}
 
-	return error;
+	*valid_count = cnt;
 }
 
 /**
- * Internal function used to mark all the HW resources allocated that
- * Truflow does not own.
+ * Resource Manager base-index adjustment definitions.
  */
-static void
-tf_rm_reserve_hw(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_l2_ctxt(tfs);
-	tf_rm_rsvd_prof(tfs);
-	tf_rm_rsvd_em_prof(tfs);
-	tf_rm_rsvd_wc(tfs);
-	tf_rm_rsvd_mirror(tfs);
-	tf_rm_rsvd_meter(tfs);
-	tf_rm_rsvd_upar(tfs);
-	tf_rm_rsvd_sp_tcam(tfs);
-	tf_rm_rsvd_l2_func(tfs);
-	tf_rm_rsvd_fkb(tfs);
-	tf_rm_rsvd_tbl_scope(tfs);
-	tf_rm_rsvd_epoch(tfs);
-	tf_rm_rsvd_metadata(tfs);
-	tf_rm_rsvd_ct_state(tfs);
-	tf_rm_rsvd_range(tfs);
-	tf_rm_rsvd_lag_entry(tfs);
-}
-
-/**
- * Internal function used to mark all the SRAM resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_reserve_sram(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_sram_full_action(tfs);
-	tf_rm_rsvd_sram_mcg(tfs);
-	tf_rm_rsvd_sram_encap(tfs);
-	tf_rm_rsvd_sram_sp(tfs);
-	tf_rm_rsvd_sram_stats(tfs);
-	tf_rm_rsvd_sram_nat(tfs);
-}
-
-/**
- * Internal function used to allocate and validate all HW resources.
- */
-static int
-tf_rm_allocate_validate_hw(struct tf *tfp,
-			   enum tf_dir dir)
-{
-	int rc;
-	int i;
-	struct tf_rm_hw_query hw_query;
-	struct tf_rm_hw_alloc hw_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *hw_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		hw_entries = tfs->resc.rx.hw_entry;
-	else
-		hw_entries = tfs->resc.tx.hw_entry;
-
-	/* Query for Session HW Resources */
-
-	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
-	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed,"
-			"error_flag:0x%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
-		goto cleanup;
-	}
-
-	/* Post process HW capability */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
-		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
-
-	/* Allocate Session HW Resources */
-	/* Perform HW allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-
- cleanup:
-
-	return -1;
-}
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
 
 /**
- * Internal function used to allocate and validate all SRAM resources.
+ * Adjust an index according to the allocation information.
  *
- * [in] tfp
- *   Pointer to TF handle
+ * All resources are controlled in a 0-based pool. Some resources, by
+ * design, are not 0-based, e.g. Full Action Records (SRAM); thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
  *
  * Returns:
- *   0  - Success
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_allocate_validate_sram(struct tf *tfp,
-			     enum tf_dir dir)
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
 {
-	int rc;
-	int i;
-	struct tf_rm_sram_query sram_query;
-	struct tf_rm_sram_alloc sram_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *sram_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
-
-	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed,"
-			"error_flag:%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
-	}
+	int rc = 0;
+	uint32_t base_index;
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	base_index = db[db_index].alloc.entry.start;
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed,"
-			    " rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
 	}
 
-	return 0;
-
- cleanup:
-
-	return -1;
+	return rc;
 }
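
As a concrete illustration of the adjustment (the start value of 64 below is hypothetical, not taken from any real reservation): if an element's reserved range begins at HCAPI index 64, the bit allocator hands out 0-based ids and tf_rm_adjust_index translates between the two views inside this file.

	/* Sketch only: assumes db[db_index].alloc.entry.start == 64 */
	uint32_t hcapi_idx, pool_id;

	/* 0-based pool id 5 -> HCAPI index 69 */
	tf_rm_adjust_index(db, TF_RM_ADJUST_ADD_BASE, db_index, 5, &hcapi_idx);

	/* HCAPI index 69 -> 0-based pool id 5 */
	tf_rm_adjust_index(db, TF_RM_ADJUST_RM_BASE, db_index, 69, &pool_id);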
 
 /**
- * Helper function used to prune a HW resource array to only hold
- * elements that needs to be flushed.
- *
- * [in] tfs
- *   Session handle
+ * Logs an array of found residual entries to the console.
  *
  * [in] dir
  *   Receive or transmit direction
  *
- * [in] hw_entries
- *   Master HW Resource database
+ * [in] type
+ *   Type of Device Module
  *
- * [in/out] flush_entries
- *   Pruned HW Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master HW
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in] count
+ *   Number of entries in the residual array
  *
- * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ * [in] residuals
+ *   Pointer to an array of residual entries. The array is indexed the
+ *   same as the DB in which this function is used. Each entry holds the
+ *   residual value for that entry.
  */
-static int
-tf_rm_hw_to_flush(struct tf_session *tfs,
-		  enum tf_dir dir,
-		  struct tf_rm_entry *hw_entries,
-		  struct tf_rm_entry *flush_entries)
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
 {
-	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
+	int i;
 
-	/* Check all the hw resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
+	/* Walk the residual array and log to the console the types that
+	 * were not cleaned up.
 	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_CTXT_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_INST_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_MIRROR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_UPAR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SP_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_FKB_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
-		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_TBL_SCOPE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
-	} else {
-		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
-			    tf_dir_2_str(dir),
-			    free_cnt,
-			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH0_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH1_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METADATA_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_CT_STATE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_LAG_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
 	}
-
-	return flush_rc;
 }
 
 /**
- * Helper function used to prune a SRAM resource array to only hold
- * elements that needs to be flushed.
+ * Performs a check of the passed-in DB for any lingering elements. If
+ * a resource type is found not to have been cleaned up by the caller,
+ * its residual values are recorded, logged and passed back in an
+ * allocated reservation array that the caller can pass to the FW for
+ * cleanup.
  *
- * [in] tfs
- *   Session handle
- *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
  *
- * [in] hw_entries
- *   Master SRAM Resource data base
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
  *
- * [in/out] flush_entries
- *   Pruned SRAM Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master SRAM
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in/out] resv
+ *   Pointer to a pointer to a reservation array. The reservation array
+ *   is allocated after the residual scan and holds any found residual
+ *   entries. Thus it can be smaller than the DB that the check was
+ *   performed on. The array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating whether residuals were present
+ *   in the DB
  *
  * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_sram_to_flush(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    struct tf_rm_entry *sram_entries,
-		    struct tf_rm_entry *flush_entries)
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
 {
 	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
-
-	/* Check all the sram resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
-	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_FULL_ACTION_POOL_NAME,
-			rc);
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
-	} else {
-		flush_rc = 1;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
 	}
 
-	/* Only pools for RX direction */
-	if (dir == TF_DIR_RX) {
-		TF_RM_GET_POOLS_RX(tfs, &pool,
-				   TF_SRAM_MCG_POOL_NAME);
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
 		if (rc)
 			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
-		} else {
-			flush_rc = 1;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
 		}
-	} else {
-		/* Always prune TX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		*resv_size = found;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_8B_POOL_NAME,
-			rc);
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
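
To illustrate the shape of the data this helper produces (the counts below are made up): for a four-element DB whose in-use counts come back as 0, 3, 0 and 1, the full-size residuals array mirrors the DB while the returned reservation array is compressed to the two offending elements.

	/* Hypothetical outcome of a residual scan on a 4-element DB */
	uint16_t residuals[4] = { 0, 3, 0, 1 }; /* indexed like the DB */
	uint16_t resv_size = 2;                 /* two elements had residuals */
	bool residuals_present = true;
	/* resv[0] and resv[1] carry the hcapi type, start and stride of DB
	 * elements 1 and 3 respectively, ready for tf_msg_session_resc_flush().
	 */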
+
+int
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
+{
+	int rc;
+	int i;
+	int j;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_16B_POOL_NAME,
-			rc);
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-	}
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_SP_SMAC_POOL_NAME,
-			rc);
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
-	}
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against the DB requirements. However, as a
+	 * DB can hold elements that are not HCAPI, we can reduce the
+	 * request message content by leaving those out of the request,
+	 * while the DB still holds them all to allow fast lookup. We can
+	 * also drop entries for which no elements were requested.
+	 */
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
+
+	/* Handle the case where a DB create request really ends up
+	 * being empty. An unsupported, if not rare, case; it is possible
+	 * that no resources are necessary for a 'direction'.
+	 */
+	if (hcapi_items == 0) {
+		TFP_DRV_LOG(ERR,
+			"%s: DB create request for Zero elements, DB Type:%s\n",
+			tf_dir_2_str(parms->dir),
+			tf_device_module_type_2_str(parms->type));
+
+		parms->rm_db = NULL;
+		return -ENOMEM;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_STATS_64B_POOL_NAME,
-			rc);
+	/* Alloc request, alignment already set */
+	cparms.nitems = (size_t)hcapi_items;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_SPORT_POOL_NAME,
-			rc);
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
-	} else {
-		flush_rc = 1;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		/* Skip any non HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			/* Only perform reservation for entries that
+			 * have been requested
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_cnt[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		}
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_DPORT_POOL_NAME,
-			rc);
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       hcapi_items,
+				       req,
+				       resv);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_S_IPV4_POOL_NAME,
-			rc);
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db = (void *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_D_IPV4_POOL_NAME,
-			rc);
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
-	return flush_rc;
-}
+	db = rm_db->db;
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
 
-/**
- * Helper function used to generate an error log for the HW types that
- * needs to be flushed. The types should have been cleaned up ahead of
- * invoking tf_close_session.
- *
- * [in] hw_entries
- *   HW Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_hw_flush(enum tf_dir dir,
-		   struct tf_rm_entry *hw_entries)
-{
-	int i;
+		/* Skip any non HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
 
-	/* Walk the hw flush array and log the types that wasn't
-	 * cleaned up.
-	 */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entries[i].stride != 0)
+		/* If the element didn't request an allocation there is no
+		 * need to create a pool nor to verify that we got a
+		 * reservation.
+		 */
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element had requested an allocation and that
+		 * allocation was a success (full amount) then
+		 * allocate the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc pool storage; alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_hw_2_str(i));
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    db[i].cfg_type);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
+			goto fail;
+		}
 	}
+
+	rm_db->num_entries = parms->num_elements;
+	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
+	*parms->rm_db = (void *)rm_db;
+
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
+	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
 
-/**
- * Helper function used to generate an error log for the SRAM types
- * that needs to be flushed. The types should have been cleaned up
- * ahead of invoking tf_close_session.
- *
- * [in] sram_entries
- *   SRAM Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_sram_flush(enum tf_dir dir,
-		     struct tf_rm_entry *sram_entries)
+int
+tf_rm_free_db(struct tf *tfp,
+	      struct tf_rm_free_db_parms *parms)
 {
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will clean up each of
+	 * its support modules, e.g. Identifier, which is how we end up
+	 * here to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in this 'cleanup checking' the DB is checked for any
+	 * remaining elements, and any found are logged.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previously allocated elements to the RM and then close the
+	 * HCAPI RM registration. That then saves several 'free' msgs
+	 * from being required.
+	 */
 
-	/* Walk the sram flush array and log the types that wasn't
-	 * cleaned up.
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
 	 */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entries[i].stride != 0)
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to clean up, so we can only
+		 * log that FW failed.
+		 */
+		if (rc)
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_sram_2_str(i));
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
 	}
-}
 
-void
-tf_rm_init(struct tf *tfp __rte_unused)
-{
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db[i].pool);
 
-	/* This version is host specific and should be checked against
-	 * when attaching as there is no guarantee that a secondary
-	 * would run from same image version.
-	 */
-	tfs->ver.major = TF_SESSION_VER_MAJOR;
-	tfs->ver.minor = TF_SESSION_VER_MINOR;
-	tfs->ver.update = TF_SESSION_VER_UPDATE;
-
-	tfs->session_id.id = 0;
-	tfs->ref_count = 0;
-
-	/* Initialization of Table Scopes */
-	/* ll_init(&tfs->tbl_scope_ll); */
-
-	/* Initialization of HW and SRAM resource DB */
-	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
-
-	/* Initialization of HW Resource Pools */
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records should not be controlled by way of a pool */
-
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Initialization of SRAM Resource Pools
-	 * These pools are set to the TFLIB defined MAX sizes not
-	 * AFM's HW max as to limit the memory consumption
-	 */
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		TF_RSVD_SRAM_FULL_ACTION_RX);
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		TF_RSVD_SRAM_FULL_ACTION_TX);
-	/* Only Multicast Group on RX is supported */
-	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
-		TF_RSVD_SRAM_MCG_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_8B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_8B_TX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_16B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_16B_TX);
-	/* Only Encap 64B on TX is supported */
-	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_64B_TX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		TF_RSVD_SRAM_SP_SMAC_RX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_TX);
-	/* Only SP SMAC IPv4 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-	/* Only SP SMAC IPv6 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
-		TF_RSVD_SRAM_COUNTER_64B_RX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_COUNTER_64B_TX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_SPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_SPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_DPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_DPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_S_IPV4_TX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/* Initialization of pools local to TF Core */
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
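
A matching teardown sketch (error handling elided); rm_db is the handle returned through tf_rm_create_db in the earlier sketch.

	/* Sketch: free the DB; any residuals are flushed to FW first */
	struct tf_rm_free_db_parms fparms = {
		.dir = TF_DIR_RX,
		.rm_db = rm_db,
	};

	rc = tf_rm_free_db(tfp, &fparms);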
 
 int
-tf_rm_allocate_validate(struct tf *tfp)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc;
-	int i;
+	int id;
+	uint32_t index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		rc = tf_rm_allocate_validate_hw(tfp, i);
-		if (rc)
-			return rc;
-		rc = tf_rm_allocate_validate_sram(tfp, i);
-		if (rc)
-			return rc;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	/* With both HW and SRAM allocated and validated we can
-	 * 'scrub' the reservation on the pools.
+	/*
+	 * priority  0: allocate from the top of the tcam, i.e. high
+	 * priority !0: allocate the index from the bottom, i.e. lowest
 	 */
-	tf_rm_reserve_hw(tfp);
-	tf_rm_reserve_sram(tfp);
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		rc = -ENOMEM;
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Adjust for any non-zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				&index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -EINVAL;
+	}
+
+	*parms->index = index;
 
 	return rc;
 }
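
As a usage sketch (names are illustrative, error handling elided): allocating one entry from element 0 of a previously created DB returns the HCAPI index, already adjusted for any non-zero base.

	/* Sketch: allocate one entry from DB element 0 */
	uint32_t idx;
	struct tf_rm_allocate_parms aparms = {
		.rm_db = rm_db, /* handle from tf_rm_create_db */
		.db_index = 0,
		.priority = 0,  /* see the priority comment in tf_rm_allocate */
		.index = &idx,
	};

	rc = tf_rm_allocate(&aparms);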
 
 int
-tf_rm_close(struct tf *tfp)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
 	int rc;
-	int rc_close = 0;
-	int i;
-	struct tf_rm_entry *hw_entries;
-	struct tf_rm_entry *hw_flush_entries;
-	struct tf_rm_entry *sram_entries;
-	struct tf_rm_entry *sram_flush_entries;
-	struct tf_session *tfs __rte_unused =
-		(struct tf_session *)(tfp->session->core_data);
-
-	struct tf_rm_db flush_resc = tfs->resc;
-
-	/* On close it is assumed that the session has already cleaned
-	 * up all its resources, individually, while destroying its
-	 * flows. No checking is performed thus the behavior is as
-	 * follows.
-	 *
-	 * Session RM will signal FW to release session resources. FW
-	 * will perform invalidation of all the allocated entries
-	 * (assures any outstanding resources has been cleared, then
-	 * free the FW RM instance.
-	 *
-	 * Session will then be freed by tf_close_session() thus there
-	 * is no need to clean each resource pool as the whole session
-	 * is going away.
-	 */
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		if (i == TF_DIR_RX) {
-			hw_entries = tfs->resc.rx.hw_entry;
-			hw_flush_entries = flush_resc.rx.hw_entry;
-			sram_entries = tfs->resc.rx.sram_entry;
-			sram_flush_entries = flush_resc.rx.sram_entry;
-		} else {
-			hw_entries = tfs->resc.tx.hw_entry;
-			hw_flush_entries = flush_resc.tx.hw_entry;
-			sram_entries = tfs->resc.tx.sram_entry;
-			sram_flush_entries = flush_resc.tx.sram_entry;
-		}
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-		/* Check for any not previously freed HW resources and
-		 * flush if required.
-		 */
-		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering HW resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-			/* Log the entries to be flushed */
-			tf_rm_log_hw_flush(i, hw_flush_entries);
-		}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-		/* Check for any not previously freed SRAM resources
-		 * and flush if required.
-		 */
-		rc = tf_rm_sram_to_flush(tfs,
-					 i,
-					 sram_entries,
-					 sram_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
 
-			/* Log the entries to be flushed */
-			tf_rm_log_sram_flush(i, sram_flush_entries);
-		}
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	return rc_close;
-}
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
 
-#if (TF_SHADOW == 1)
-int
-tf_rm_shadow_db_init(struct tf_session *tfs)
-{
-	rc = 1;
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; direction matters and that is not available here */
+	if (rc)
+		return rc;
 
 	return rc;
 }
-#endif /* TF_SHADOW */
 
 int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	int rc;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_L2_CTXT_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_PROF_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_WC_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
 		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
 		return rc;
 	}
 
-	return 0;
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_FULL_ACTION_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_TX)
-			break;
-		TF_RM_GET_POOLS_RX(tfs, pool,
-				   TF_SRAM_MCG_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_8B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_16B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		/* No pools for RX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_SP_SMAC_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_STATS_64B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_SPORT_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_S_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_D_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_INST_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_MIRROR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_UPAR:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_UPAR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH0_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH1_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METADATA:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METADATA_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_CT_STATE_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_ENTRY_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_LAG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_LAG_ENTRY_POOL_NAME,
-				rc);
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-		break;
-	/* No bitalloc pools for these types */
-	case TF_TBL_TYPE_EXT:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	}
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
 	return 0;
 }
 
 int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
-		break;
-	case TF_TBL_TYPE_UPAR:
-		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
-		break;
-	case TF_TBL_TYPE_METADATA:
-		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
-		break;
-	case TF_TBL_TYPE_LAG:
-		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		*hcapi_type = -1;
-		rc = -EOPNOTSUPP;
-	}
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	return rc;
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return 0;
 }
 
 int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index)
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 {
-	int rc;
-	struct tf_rm_resc *resc;
-	uint32_t hcapi_type;
-	uint32_t base_index;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (dir == TF_DIR_RX)
-		resc = &tfs->resc.rx;
-	else if (dir == TF_DIR_TX)
-		resc = &tfs->resc.tx;
-	else
-		return -EOPNOTSUPP;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
-	if (rc)
-		return -1;
-
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-	case TF_TBL_TYPE_MCAST_GROUPS:
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-	case TF_TBL_TYPE_ACT_STATS_64:
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		base_index = resc->sram_entry[hcapi_type].start;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-	case TF_TBL_TYPE_METER_PROF:
-	case TF_TBL_TYPE_METER_INST:
-	case TF_TBL_TYPE_UPAR:
-	case TF_TBL_TYPE_EPOCH0:
-	case TF_TBL_TYPE_EPOCH1:
-	case TF_TBL_TYPE_METADATA:
-	case TF_TBL_TYPE_CT_STATE:
-	case TF_TBL_TYPE_RANGE_PROF:
-	case TF_TBL_TYPE_RANGE_ENTRY:
-	case TF_TBL_TYPE_LAG:
-		base_index = resc->hw_entry[hcapi_type].start;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		return -EOPNOTSUPP;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	switch (c_type) {
-	case TF_RM_CONVERT_RM_BASE:
-		*convert_index = index - base_index;
-		break;
-	case TF_RM_CONVERT_ADD_BASE:
-		*convert_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out silently (no logging); if the pool is not valid then
+	 * no elements were allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
 	}
 
-	return 0;
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
+	return rc;
+
 }
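
For reference, a minimal caller sketch of the new parms-based allocate/free
API above. This is illustration only and not part of the patch; the helper
name is an assumption and the RM DB handle is expected to come from
tf_rm_create_db() (see tf_rm.h below).

#include "tf_rm.h"

/*
 * Illustrative sketch: allocate one element from an existing RM DB and
 * free it again. The db_index is assumed to be the TF type enum value
 * the DB was built with, per the "DBs are indexed using TF types" note
 * in tf_rm.h.
 */
static int
example_rm_alloc_free(void *rm_db, uint16_t db_index)
{
	int rc;
	uint32_t index;
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_free_parms fparms = { 0 };

	aparms.rm_db = rm_db;
	aparms.db_index = db_index;
	aparms.priority = 0;	/* 0: allocate from the top of the pool */
	aparms.index = &index;	/* returned index is already base adjusted */

	rc = tf_rm_allocate(&aparms);
	if (rc)
		return rc;	/* -ENOTSUP, -ENOMEM or -EINVAL */

	fparms.rm_db = rm_db;
	fparms.db_index = db_index;
	fparms.index = (uint16_t)index;	/* tf_rm_free removes the base again */

	return tf_rm_free(&fparms);
}

Note the allocated index handed back to the caller is already adjusted with
TF_RM_ADJUST_ADD_BASE, and tf_rm_free() strips the base again with
TF_RM_ADJUST_RM_BASE, so callers only ever see base adjusted values.
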
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 1a09f13..5cb6889 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -3,301 +3,444 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
-#include "tf_resources.h"
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
-struct tf_session;
 
-/* Internal macro to determine appropriate allocation pools based on
- * DIRECTION parm, also performs error checking for DIRECTION parm. The
- * SESSION_POOL and SESSION pointers are set appropriately upon
- * successful return (the GLOBAL_POOL is used to globally manage
- * resource allocation and the SESSION_POOL is used to track the
- * resources that have been allocated to the session)
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exist within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
  *
- * parameters:
- *   struct tfp        *tfp
- *   enum tf_dir        direction
- *   struct bitalloc  **session_pool
- *   string             base_pool_name - used to form pointers to the
- *					 appropriate bit allocation
- *					 pools, both directions of the
- *					 session pools must have same
- *					 base name, for example if
- *					 POOL_NAME is feat_pool: - the
- *					 ptr's to the session pools
- *					 are feat_pool_rx feat_pool_tx
+ * The RM DB works on its initially allocated sizes, so dynamically
+ * growing a particular resource is not possible. If this capability
+ * later becomes a requirement then the MAX pool size of the Chip
+ * needs to be added to the tf_rm_elem_info structure and several new
+ * APIs would need to be added to allow for growth of a single TF
+ * resource type.
  *
- *  int                  rc - return code
- *			      0 - Success
- *			     -1 - invalid DIRECTION parm
+ * The access functions do not check for NULL pointers as this is a
+ * support module, not called directly.
  */
-#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
-		(rc) = 0;						\
-		if ((direction) == TF_DIR_RX) {				\
-			*(session_pool) = (tfs)->pool_name ## _RX;	\
-		} else if ((direction) == TF_DIR_TX) {			\
-			*(session_pool) = (tfs)->pool_name ## _TX;	\
-		} else {						\
-			rc = -1;					\
-		}							\
-	} while (0)
 
-#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _RX)
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_new_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
 
-#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _TX)
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements the DB consists of are to be
+ * configured at time of DB creation. The TF may present types to the
+ * ULP layer that are not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled', uses a Pool for internal storage */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
+	TF_RM_TYPE_MAX
+};
 
 /**
- * Resource query single entry
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
  */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
 };
 
 /**
- * Resource single entry
+ * RM Element configuration structure, used by the Device to specify
+ * how an individual TF type is configured in regard to the HCAPI RM
+ * of the same type.
  */
-struct tf_rm_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
 };
 
 /**
- * Resource query array of HW entities
+ * Allocation information for a single element.
  */
-struct tf_rm_hw_query {
-	/** array of HW resource entries */
-	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_new_entry entry;
 };
 
 /**
- * Resource allocation array of HW entities
+ * Create RM DB parameters
  */
-struct tf_rm_hw_alloc {
-	/** array of HW resource entries */
-	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements.
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
+	 */
+	uint16_t *alloc_cnt;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void **rm_db;
 };
 
 /**
- * Resource query array of SRAM entities
+ * Free RM DB parameters
  */
-struct tf_rm_sram_query {
-	/** array of SRAM resource entries */
-	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
 };
 
 /**
- * Resource allocation array of SRAM entities
+ * Allocate RM parameters for a single element
  */
-struct tf_rm_sram_alloc {
-	/** array of SRAM resource entries */
-	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the allocated index in normalized
+	 * form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the priority of the entry
+	 * priority  0: allocate from top of the tcam (from index 0
+	 *              or lowest available index)
+	 * priority !0: allocate from bottom of the tcam (from highest
+	 *              available index)
+	 */
+	uint32_t priority;
 };
 
 /**
- * Resource Manager arrays for a single direction
+ * Free RM parameters for a single element
  */
-struct tf_rm_resc {
-	/** array of HW resource entries */
-	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
-	/** array of SRAM resource entries */
-	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t index;
 };
 
 /**
- * Resource Manager Database
+ * Is Allocated parameters for a single element
  */
-struct tf_rm_db {
-	struct tf_rm_resc rx;
-	struct tf_rm_resc tx;
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [out] Pointer to a flag that indicates the state of the query
+	 */
+	int *allocated;
 };
 
 /**
- * Helper function used to convert HW HCAPI resource type to a string.
+ * Get Allocation information for a single element
  */
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
 
 /**
- * Helper function used to convert SRAM HCAPI resource type to a string.
+ * Get HCAPI type parameters for a single element
  */
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
 
 /**
- * Initializes the Resource Manager and the associated database
- * entries for HW and SRAM resources. Must be called before any other
- * Resource Manager functions.
+ * Get InUse count parameters for single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
+/**
+ * @page rm Resource Manager
  *
- * [in] tfp
- *   Pointer to TF handle
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
-void tf_rm_init(struct tf *tfp);
 
 /**
- * Allocates and validates both HW and SRAM resources per the NVM
- * configuration. If any allocation fails all resources for the
- * session is deallocated.
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_rm_allocate_validate(struct tf *tfp);
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if DB already exists
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
 
 /**
- * Closes the Resource Manager and frees all allocated resources per
- * the associated database.
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
- *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
-int tf_rm_close(struct tf *tfp);
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
 
-#if (TF_SHADOW == 1)
 /**
- * Initializes Shadow DB of configuration elements
+ * Allocates a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to allocate parameters
  *
- * Returns:
- *  0  - Success
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
-int tf_rm_shadow_db_init(struct tf_session *tfs);
-#endif /* TF_SHADOW */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Tcam table type.
- *
- * Function will print error msg if tcam type is unsupported or lookup
- * failed.
+ * Frees a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to free parameters
  *
- * [in] type
- *   Type of the object
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool);
+int tf_rm_free(struct tf_rm_free_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Table type.
- *
- * Function will print error msg if table type is unsupported or
- * lookup failed.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] type
- *   Type of the object
+ * Performs an allocation verification check on a specified element.
  *
- * [in] dir
- *    Receive or transmit direction
+ * [in] parms
+ *   Pointer to is allocated parameters
  *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool);
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
 
 /**
- * Converts the TF Table Type to internal HCAPI_TYPE
- *
- * [in] type
- *   Type to be converted
+ * Retrieves an element's allocation information from the Resource
+ * Manager (RM) DB.
  *
- * [in/out] hcapi_type
- *   Converted type
+ * [in] parms
+ *   Pointer to get info parameters
  *
- * Returns:
- *  0           - Success will set the *hcapi_type
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type);
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
- * TF RM Convert of index methods.
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-enum tf_rm_convert_type {
-	/** Adds the base of the Session Pool to the index */
-	TF_RM_CONVERT_ADD_BASE,
-	/** Removes the Session Pool base from the index */
-	TF_RM_CONVERT_RM_BASE
-};
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
 /**
- * Provides conversion of the Table Type index in relation to the
- * Session Pool base.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in] type
- *   Type of the object
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type inuse count.
  *
- * [in] c_type
- *   Type of conversion to perform
+ * [in] parms
+ *   Pointer to get inuse parameters
  *
- * [in] index
- *   Index to be converted
- *
- * [in/out]  convert_index
- *   Pointer to the converted index
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index);
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
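
To show how a device module is expected to drive the new DB lifecycle, a
minimal sketch follows. It is illustration only and not part of the patch;
the two element configuration, the hcapi_type values and the helper name
are placeholder assumptions, while in the driver the cfg array and the
alloc_cnt array come from the device specific module and from the
tf_session_resources passed in on session open.

#include "tf_rm.h"

static int
example_rm_db_lifecycle(struct tf *tfp,
			enum tf_device_module_type type,
			enum tf_dir dir,
			uint16_t *alloc_cnt)
{
	int rc;
	void *rm_db;
	struct tf_rm_create_db_parms cparms = { 0 };
	struct tf_rm_free_db_parms fparms = { 0 };
	struct tf_rm_element_cfg cfg[] = {
		/* db_index 0: HCAPI controlled, placeholder HCAPI type */
		{ .cfg_type = TF_RM_ELEM_CFG_HCAPI, .hcapi_type = 0 },
		/* db_index 1: not supported on this device */
		{ .cfg_type = TF_RM_ELEM_CFG_NULL, .hcapi_type = 0 },
	};

	cparms.type = type;		/* used for logging only */
	cparms.dir = dir;
	cparms.num_elements = 2;	/* matches the cfg array above */
	cparms.cfg = cfg;
	cparms.alloc_cnt = alloc_cnt;	/* two requested counts */
	cparms.rm_db = &rm_db;

	rc = tf_rm_create_db(tfp, &cparms);
	if (rc)
		return rc;	/* parameter, qcaps or reservation failure */

	/* ... tf_rm_allocate()/tf_rm_free() against rm_db ... */

	fparms.dir = dir;
	fparms.rm_db = rm_db;
	return tf_rm_free_db(tfp, &fparms);
}

Keeping the DB behind a void * handle, as above, is what lets the accessor
functions hide the tf_rm_new_db layout from the calling modules.
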
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
deleted file mode 100644
index 2d9be65..0000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ /dev/null
@@ -1,907 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-
-#include <rte_common.h>
-
-#include <cfa_resource_types.h>
-
-#include "tf_rm_new.h"
-#include "tf_common.h"
-#include "tf_util.h"
-#include "tf_session.h"
-#include "tf_device.h"
-#include "tfp.h"
-#include "tf_msg.h"
-
-/**
- * Generic RM Element data type that an RM DB is build upon.
- */
-struct tf_rm_element {
-	/**
-	 * RM Element configuration type. If Private then the
-	 * hcapi_type can be ignored. If Null then the element is not
-	 * valid for the device.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/**
-	 * HCAPI RM Type for the element.
-	 */
-	uint16_t hcapi_type;
-
-	/**
-	 * HCAPI RM allocated range information for the element.
-	 */
-	struct tf_rm_alloc_info alloc;
-
-	/**
-	 * Bit allocator pool for the element. Pool size is controlled
-	 * by the struct tf_session_resources at time of session creation.
-	 * Null indicates that the element is not used for the device.
-	 */
-	struct bitalloc *pool;
-};
-
-/**
- * TF RM DB definition
- */
-struct tf_rm_new_db {
-	/**
-	 * Number of elements in the DB
-	 */
-	uint16_t num_entries;
-
-	/**
-	 * Direction this DB controls.
-	 */
-	enum tf_dir dir;
-
-	/**
-	 * Module type, used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-
-	/**
-	 * The DB consists of an array of elements
-	 */
-	struct tf_rm_element *db;
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] cfg
- *   Pointer to the DB configuration
- *
- * [in] reservations
- *   Pointer to the allocation values associated with the module
- *
- * [in] count
- *   Number of DB configuration elements
- *
- * [out] valid_count
- *   Number of HCAPI entries with a reservation value greater than 0
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static void
-tf_rm_count_hcapi_reservations(enum tf_dir dir,
-			       enum tf_device_module_type type,
-			       struct tf_rm_element_cfg *cfg,
-			       uint16_t *reservations,
-			       uint16_t count,
-			       uint16_t *valid_count)
-{
-	int i;
-	uint16_t cnt = 0;
-
-	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
-		    reservations[i] > 0)
-			cnt++;
-
-		/* Only log msg if a type is attempted reserved and
-		 * not supported. We ignore EM module as its using a
-		 * split configuration array thus it would fail for
-		 * this type of check.
-		 */
-		if (type != TF_DEVICE_MODULE_TYPE_EM &&
-		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
-		    reservations[i] > 0) {
-			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-		}
-	}
-
-	*valid_count = cnt;
-}
-
-/**
- * Resource Manager Adjust of base index definitions.
- */
-enum tf_rm_adjust_type {
-	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
-	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [in] action
- *   Adjust action
- *
- * [in] db_index
- *   DB index for the element type
- *
- * [in] index
- *   Index to convert
- *
- * [out] adj_index
- *   Adjusted index
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_adjust_index(struct tf_rm_element *db,
-		   enum tf_rm_adjust_type action,
-		   uint32_t db_index,
-		   uint32_t index,
-		   uint32_t *adj_index)
-{
-	int rc = 0;
-	uint32_t base_index;
-
-	base_index = db[db_index].alloc.entry.start;
-
-	switch (action) {
-	case TF_RM_ADJUST_RM_BASE:
-		*adj_index = index - base_index;
-		break;
-	case TF_RM_ADJUST_ADD_BASE:
-		*adj_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	return rc;
-}
-
-/**
- * Logs an array of found residual entries to the console.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] type
- *   Type of Device Module
- *
- * [in] count
- *   Number of entries in the residual array
- *
- * [in] residuals
- *   Pointer to an array of residual entries. Array is index same as
- *   the DB in which this function is used. Each entry holds residual
- *   value for that entry.
- */
-static void
-tf_rm_log_residuals(enum tf_dir dir,
-		    enum tf_device_module_type type,
-		    uint16_t count,
-		    uint16_t *residuals)
-{
-	int i;
-
-	/* Walk the residual array and log the types that wasn't
-	 * cleaned up to the console.
-	 */
-	for (i = 0; i < count; i++) {
-		if (residuals[i] != 0)
-			TFP_DRV_LOG(ERR,
-				"%s, %s was not cleaned up, %d outstanding\n",
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i),
-				residuals[i]);
-	}
-}
-
-/**
- * Performs a check of the passed in DB for any lingering elements. If
- * a resource type was found to not have been cleaned up by the caller
- * then its residual values are recorded, logged and passed back in an
- * allocate reservation array that the caller can pass to the FW for
- * cleanup.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [out] resv_size
- *   Pointer to the reservation size of the generated reservation
- *   array.
- *
- * [in/out] resv
- *   Pointer Pointer to a reservation array. The reservation array is
- *   allocated after the residual scan and holds any found residual
- *   entries. Thus it can be smaller than the DB that the check was
- *   performed on. Array must be freed by the caller.
- *
- * [out] residuals_present
- *   Pointer to a bool flag indicating if residual was present in the
- *   DB
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
-		      uint16_t *resv_size,
-		      struct tf_rm_resc_entry **resv,
-		      bool *residuals_present)
-{
-	int rc;
-	int i;
-	int f;
-	uint16_t count;
-	uint16_t found;
-	uint16_t *residuals = NULL;
-	uint16_t hcapi_type;
-	struct tf_rm_get_inuse_count_parms iparms;
-	struct tf_rm_get_alloc_info_parms aparms;
-	struct tf_rm_get_hcapi_parms hparms;
-	struct tf_rm_alloc_info info;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_entry *local_resv = NULL;
-
-	/* Create array to hold the entries that have residuals */
-	cparms.nitems = rm_db->num_entries;
-	cparms.size = sizeof(uint16_t);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	residuals = (uint16_t *)cparms.mem_va;
-
-	/* Traverse the DB and collect any residual elements */
-	iparms.rm_db = rm_db;
-	iparms.count = &count;
-	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
-		iparms.db_index = i;
-		rc = tf_rm_get_inuse_count(&iparms);
-		/* Not a device supported entry, just skip */
-		if (rc == -ENOTSUP)
-			continue;
-		if (rc)
-			goto cleanup_residuals;
-
-		if (count) {
-			found++;
-			residuals[i] = count;
-			*residuals_present = true;
-		}
-	}
-
-	if (*residuals_present) {
-		/* Populate a reduced resv array with only the entries
-		 * that have residuals.
-		 */
-		cparms.nitems = found;
-		cparms.size = sizeof(struct tf_rm_resc_entry);
-		cparms.alignment = 0;
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-
-		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-		aparms.rm_db = rm_db;
-		hparms.rm_db = rm_db;
-		hparms.hcapi_type = &hcapi_type;
-		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
-			if (residuals[i] == 0)
-				continue;
-			aparms.db_index = i;
-			aparms.info = &info;
-			rc = tf_rm_get_info(&aparms);
-			if (rc)
-				goto cleanup_all;
-
-			hparms.db_index = i;
-			rc = tf_rm_get_hcapi_type(&hparms);
-			if (rc)
-				goto cleanup_all;
-
-			local_resv[f].type = hcapi_type;
-			local_resv[f].start = info.entry.start;
-			local_resv[f].stride = info.entry.stride;
-			f++;
-		}
-		*resv_size = found;
-	}
-
-	tf_rm_log_residuals(rm_db->dir,
-			    rm_db->type,
-			    rm_db->num_entries,
-			    residuals);
-
-	tfp_free((void *)residuals);
-	*resv = local_resv;
-
-	return 0;
-
- cleanup_all:
-	tfp_free((void *)local_resv);
-	*resv = NULL;
- cleanup_residuals:
-	tfp_free((void *)residuals);
-
-	return rc;
-}
-
-int
-tf_rm_create_db(struct tf *tfp,
-		struct tf_rm_create_db_parms *parms)
-{
-	int rc;
-	int i;
-	int j;
-	struct tf_session *tfs;
-	struct tf_dev_info *dev;
-	uint16_t max_types;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_req_entry *query;
-	enum tf_rm_resc_resv_strategy resv_strategy;
-	struct tf_rm_resc_req_entry *req;
-	struct tf_rm_resc_entry *resv;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_element *db;
-	uint32_t pool_size;
-	uint16_t hcapi_items;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
-	if (rc)
-		return rc;
-
-	/* Retrieve device information */
-	rc = tf_session_get_device(tfs, &dev);
-	if (rc)
-		return rc;
-
-	/* Need device max number of elements for the RM QCAPS */
-	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
-	if (rc)
-		return rc;
-
-	cparms.nitems = max_types;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Get Firmware Capabilities */
-	rc = tf_msg_session_resc_qcaps(tfp,
-				       parms->dir,
-				       max_types,
-				       query,
-				       &resv_strategy);
-	if (rc)
-		return rc;
-
-	/* Process capabilities against DB requirements. However, as a
-	 * DB can hold elements that are not HCAPI we can reduce the
-	 * req msg content by removing those out of the request yet
-	 * the DB holds them all as to give a fast lookup. We can also
-	 * remove entries where there are no request for elements.
-	 */
-	tf_rm_count_hcapi_reservations(parms->dir,
-				       parms->type,
-				       parms->cfg,
-				       parms->alloc_cnt,
-				       parms->num_elements,
-				       &hcapi_items);
-
-	/* Alloc request, alignment already set */
-	cparms.nitems = (size_t)hcapi_items;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Alloc reservation, alignment and nitems already set */
-	cparms.size = sizeof(struct tf_rm_resc_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-	/* Build the request */
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			/* Only perform reservation for entries that
-			 * has been requested
-			 */
-			if (parms->alloc_cnt[i] == 0)
-				continue;
-
-			/* Verify that we can get the full amount
-			 * allocated per the qcaps availability.
-			 */
-			if (parms->alloc_cnt[i] <=
-			    query[parms->cfg[i].hcapi_type].max) {
-				req[j].type = parms->cfg[i].hcapi_type;
-				req[j].min = parms->alloc_cnt[i];
-				req[j].max = parms->alloc_cnt[i];
-				j++;
-			} else {
-				TFP_DRV_LOG(ERR,
-					    "%s: Resource failure, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    parms->cfg[i].hcapi_type);
-				TFP_DRV_LOG(ERR,
-					"req:%d, avail:%d\n",
-					parms->alloc_cnt[i],
-					query[parms->cfg[i].hcapi_type].max);
-				return -EINVAL;
-			}
-		}
-	}
-
-	rc = tf_msg_session_resc_alloc(tfp,
-				       parms->dir,
-				       hcapi_items,
-				       req,
-				       resv);
-	if (rc)
-		return rc;
-
-	/* Build the RM DB per the request */
-	cparms.nitems = 1;
-	cparms.size = sizeof(struct tf_rm_new_db);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db = (void *)cparms.mem_va;
-
-	/* Build the DB within RM DB */
-	cparms.nitems = parms->num_elements;
-	cparms.size = sizeof(struct tf_rm_element);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
-
-	db = rm_db->db;
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-
-		/* Skip any non HCAPI types as we didn't include them
-		 * in the reservation request.
-		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
-			continue;
-
-		/* If the element didn't request an allocation no need
-		 * to create a pool nor verify if we got a reservation.
-		 */
-		if (parms->alloc_cnt[i] == 0)
-			continue;
-
-		/* If the element had requested an allocation and that
-		 * allocation was a success (full amount) then
-		 * allocate the pool.
-		 */
-		if (parms->alloc_cnt[i] == resv[j].stride) {
-			db[i].alloc.entry.start = resv[j].start;
-			db[i].alloc.entry.stride = resv[j].stride;
-
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			j++;
-		} else {
-			/* Bail out as we want what we requested for
-			 * all elements, not any less.
-			 */
-			TFP_DRV_LOG(ERR,
-				    "%s: Alloc failed, type:%d\n",
-				    tf_dir_2_str(parms->dir),
-				    db[i].cfg_type);
-			TFP_DRV_LOG(ERR,
-				    "req:%d, alloc:%d\n",
-				    parms->alloc_cnt[i],
-				    resv[j].stride);
-			goto fail;
-		}
-	}
-
-	rm_db->num_entries = parms->num_elements;
-	rm_db->dir = parms->dir;
-	rm_db->type = parms->type;
-	*parms->rm_db = (void *)rm_db;
-
-	printf("%s: type:%d num_entries:%d\n",
-	       tf_dir_2_str(parms->dir),
-	       parms->type,
-	       i);
-
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-
-	return 0;
-
- fail:
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-	tfp_free((void *)db->pool);
-	tfp_free((void *)db);
-	tfp_free((void *)rm_db);
-	parms->rm_db = NULL;
-
-	return -EINVAL;
-}
-
-int
-tf_rm_free_db(struct tf *tfp,
-	      struct tf_rm_free_db_parms *parms)
-{
-	int rc;
-	int i;
-	uint16_t resv_size = 0;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_resc_entry *resv;
-	bool residuals_found = false;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	/* Device unbind happens when the TF Session is closed and the
-	 * session ref count is 0. Device unbind will cleanup each of
-	 * its support modules, i.e. Identifier, thus we're ending up
-	 * here to close the DB.
-	 *
-	 * On TF Session close it is assumed that the session has already
-	 * cleaned up all its resources, individually, while
-	 * destroying its flows.
-	 *
-	 * To assist in the 'cleanup checking' the DB is checked for any
-	 * remaining elements and logged if found to be the case.
-	 *
-	 * Any such elements will need to be 'cleared' ahead of
-	 * returning the resources to the HCAPI RM.
-	 *
-	 * RM will signal FW to flush the DB resources. FW will
-	 * perform the invalidation. TF Session close will return the
-	 * previous allocated elements to the RM and then close the
-	 * HCAPI RM registration. That then saves several 'free' msgs
-	 * from being required.
-	 */
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-
-	/* Check for residuals that the client didn't clean up */
-	rc = tf_rm_check_residuals(rm_db,
-				   &resv_size,
-				   &resv,
-				   &residuals_found);
-	if (rc)
-		return rc;
-
-	/* Invalidate any residuals followed by a DB traversal for
-	 * pool cleanup.
-	 */
-	if (residuals_found) {
-		rc = tf_msg_session_resc_flush(tfp,
-					       parms->dir,
-					       resv_size,
-					       resv);
-		tfp_free((void *)resv);
-		/* On failure we still have to cleanup so we can only
-		 * log that FW failed.
-		 */
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s: Internal Flush error, module:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_device_module_type_2_str(rm_db->type));
-	}
-
-	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db[i].pool);
-
-	tfp_free((void *)parms->rm_db);
-
-	return rc;
-}
-
-int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority)
-		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
-	else
-		id = ba_alloc(rm_db->db[parms->db_index].pool);
-	if (id == BA_FAIL) {
-		rc = -ENOMEM;
-		TFP_DRV_LOG(ERR,
-			    "%s: Allocation failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_ADD_BASE,
-				parms->db_index,
-				id,
-				&index);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Alloc adjust of base index failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return -EINVAL;
-	}
-
-	*parms->index = index;
-
-	return rc;
-}
-
-int
-tf_rm_free(struct tf_rm_free_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
-	/* No logging direction matters and that is not available here */
-	if (rc)
-		return rc;
-
-	return rc;
-}
-
-int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
-				     adj_index);
-
-	return rc;
-}
-
-int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	memcpy(parms->info,
-	       &rm_db->db[parms->db_index].alloc,
-	       sizeof(struct tf_rm_alloc_info));
-
-	return 0;
-}
-
-int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
-
-	return 0;
-}
-
-int
-tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
-{
-	int rc = 0;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail silently (no logging), if the pool is not valid there
-	 * was no elements allocated for it.
-	 */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		*parms->count = 0;
-		return 0;
-	}
-
-	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
-
-	return rc;
-
-}
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
deleted file mode 100644
index 5cb6889..0000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ /dev/null
@@ -1,446 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_RM_NEW_H_
-#define TF_RM_NEW_H_
-
-#include "tf_core.h"
-#include "bitalloc.h"
-#include "tf_device.h"
-
-struct tf;
-
-/**
- * The Resource Manager (RM) module provides basic DB handling for
- * internal resources. These resources exists within the actual device
- * and are controlled by the HCAPI Resource Manager running on the
- * firmware.
- *
- * The RM DBs are all intended to be indexed using TF types there for
- * a lookup requires no additional conversion. The DB configuration
- * specifies the TF Type to HCAPI Type mapping and it becomes the
- * responsibility of the DB initialization to handle this static
- * mapping.
- *
- * Accessor functions are providing access to the DB, thus hiding the
- * implementation.
- *
- * The RM DB will work on its initial allocated sizes so the
- * capability of dynamically growing a particular resource is not
- * possible. If this capability later becomes a requirement then the
- * MAX pool size of the Chip œneeds to be added to the tf_rm_elem_info
- * structure and several new APIs would need to be added to allow for
- * growth of a single TF resource type.
- *
- * The access functions does not check for NULL pointers as it's a
- * support module, not called directly.
- */
-
-/**
- * Resource reservation single entry result. Used when accessing HCAPI
- * RM on the firmware.
- */
-struct tf_rm_new_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
-};
-
-/**
- * RM Element configuration enumeration. Used by the Device to
- * indicate how the RM elements the DB consists off, are to be
- * configured at time of DB creation. The TF may present types to the
- * ULP layer that is not controlled by HCAPI within the Firmware.
- */
-enum tf_rm_elem_cfg_type {
-	/** No configuration */
-	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
-	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
-	/**
-	 * Shared element thus it belongs to a shared FW Session and
-	 * is not controlled by the Host.
-	 */
-	TF_RM_ELEM_CFG_SHARED,
-	TF_RM_TYPE_MAX
-};
-
-/**
- * RM Reservation strategy enumeration. Type of strategy comes from
- * the HCAPI RM QCAPS handshake.
- */
-enum tf_rm_resc_resv_strategy {
-	TF_RM_RESC_RESV_STATIC_PARTITION,
-	TF_RM_RESC_RESV_STRATEGY_1,
-	TF_RM_RESC_RESV_STRATEGY_2,
-	TF_RM_RESC_RESV_STRATEGY_3,
-	TF_RM_RESC_RESV_MAX
-};
-
-/**
- * RM Element configuration structure, used by the Device to configure
- * how an individual TF type is configured in regard to the HCAPI RM
- * of same type.
- */
-struct tf_rm_element_cfg {
-	/**
-	 * RM Element config controls how the DB for that element is
-	 * processed.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/* If a HCAPI to TF type conversion is required then TF type
-	 * can be added here.
-	 */
-
-	/**
-	 * HCAPI RM Type for the element. Used for TF to HCAPI type
-	 * conversion.
-	 */
-	uint16_t hcapi_type;
-};
-
-/**
- * Allocation information for a single element.
- */
-struct tf_rm_alloc_info {
-	/**
-	 * HCAPI RM allocated range information.
-	 *
-	 * NOTE:
-	 * In case of dynamic allocation support this would have
-	 * to be changed to linked list of tf_rm_entry instead.
-	 */
-	struct tf_rm_new_entry entry;
-};
-
-/**
- * Create RM DB parameters
- */
-struct tf_rm_create_db_parms {
-	/**
-	 * [in] Device module type. Used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-	/**
-	 * [in] Receive or transmit direction.
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Number of elements.
-	 */
-	uint16_t num_elements;
-	/**
-	 * [in] Parameter structure array. Array size is num_elements.
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Resource allocation count array. This array content
-	 * originates from the tf_session_resources that is passed in
-	 * on session open.
-	 * Array size is num_elements.
-	 */
-	uint16_t *alloc_cnt;
-	/**
-	 * [out] RM DB Handle
-	 */
-	void **rm_db;
-};
-
-/**
- * Free RM DB parameters
- */
-struct tf_rm_free_db_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-};
-
-/**
- * Allocate RM parameters for a single element
- */
-struct tf_rm_allocate_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Pointer to the allocated index in normalized
-	 * form. Normalized means the index has been adjusted,
-	 * i.e. Full Action Record offsets.
-	 */
-	uint32_t *index;
-	/**
-	 * [in] Priority, indicates the priority of the entry
-	 * priority  0: allocate from top of the tcam (from index 0
-	 *              or lowest available index)
-	 * priority !0: allocate from bottom of the tcam (from highest
-	 *              available index)
-	 */
-	uint32_t priority;
-};
-
-/**
- * Free RM parameters for a single element
- */
-struct tf_rm_free_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint16_t index;
-};
-
-/**
- * Is Allocated parameters for a single element
- */
-struct tf_rm_is_allocated_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t index;
-	/**
-	 * [in] Pointer to flag that indicates the state of the query
-	 */
-	int *allocated;
-};
-
-/**
- * Get Allocation information for a single element
- */
-struct tf_rm_get_alloc_info_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the requested allocation information for
-	 * the specified db_index
-	 */
-	struct tf_rm_alloc_info *info;
-};
-
-/**
- * Get HCAPI type parameters for a single element
- */
-struct tf_rm_get_hcapi_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the hcapi type for the specified db_index
-	 */
-	uint16_t *hcapi_type;
-};
-
-/**
- * Get InUse count parameters for single element
- */
-struct tf_rm_get_inuse_count_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the inuse count for the specified db_index
-	 */
-	uint16_t *count;
-};
-
-/**
- * @page rm Resource Manager
- *
- * @ref tf_rm_create_db
- *
- * @ref tf_rm_free_db
- *
- * @ref tf_rm_allocate
- *
- * @ref tf_rm_free
- *
- * @ref tf_rm_is_allocated
- *
- * @ref tf_rm_get_info
- *
- * @ref tf_rm_get_hcapi_type
- *
- * @ref tf_rm_get_inuse_count
- */
-
-/**
- * Creates and fills a Resource Manager (RM) DB with requested
- * elements. The DB is indexed per the parms structure.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to create parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- * - Fail on parameter check
- * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
- * - Fail on DB creation if DB already exist
- *
- * - Allocs local DB
- * - Does hcapi qcaps
- * - Does hcapi reservation
- * - Populates the pool with allocated elements
- * - Returns handle to the created DB
- */
-int tf_rm_create_db(struct tf *tfp,
-		    struct tf_rm_create_db_parms *parms);
-
-/**
- * Closes the Resource Manager (RM) DB and frees all allocated
- * resources per the associated database.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free_db(struct tf *tfp,
-		  struct tf_rm_free_db_parms *parms);
-
-/**
- * Allocates a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to allocate parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- *   - (-ENOMEM) if pool is empty
- */
-int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
-
-/**
- * Frees a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free(struct tf_rm_free_parms *parms);
-
-/**
- * Performs an allocation verification check on a specified element.
- *
- * [in] parms
- *   Pointer to is allocated parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- *  - If pool is set to Chip MAX, then the query index must be checked
- *    against the allocated range and query index must be allocated as well.
- *  - If pool is allocated size only, then check if query index is allocated.
- */
-int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
-
-/**
- * Retrieves an elements allocation information from the Resource
- * Manager (RM) DB.
- *
- * [in] parms
- *   Pointer to get info parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrieves the
- * requested HCAPI RM type.
- *
- * [in] parms
- *   Pointer to get hcapi parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrieves the
- * requested HCAPI RM type inuse count.
- *
- * [in] parms
- *   Pointer to get inuse parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
-
-#endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 705bb09..e4472ed 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -14,6 +14,7 @@
 #include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "tf_resources.h"
 #include "stack.h"
 
 /**
@@ -43,7 +44,8 @@
 #define TF_SESSION_EM_POOL_SIZE \
 	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
 
-/** Session
+/**
+ * Session
  *
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
@@ -99,216 +101,6 @@ struct tf_session {
 	/** Device handle */
 	struct tf_dev_info dev;
 
-	/** Session HW and SRAM resources */
-	struct tf_rm_db resc;
-
-	/* Session HW resource pools */
-
-	/** RX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** RX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	/** TX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-
-	/** RX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	/** TX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-
-	/** RX EM Profile ID Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	/** TX EM Key Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/** RX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	/** TX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records are not controlled by way of a pool */
-
-	/** RX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	/** TX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-
-	/** RX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	/** TX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-
-	/** RX Meter Instance Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	/** TX Meter Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-
-	/** RX Mirror Configuration Pool*/
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	/** RX Mirror Configuration Pool */
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-
-	/** RX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	/** TX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	/** RX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	/** TX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	/** RX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	/** TX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	/** RX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	/** TX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-
-	/** RX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	/** TX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-
-	/** RX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	/** TX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-
-	/** RX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	/** TX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-
-	/** RX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	/** TX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-
-	/** RX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	/** TX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-
-	/** RX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	/** TX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-
-	/** RX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	/** TX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Session SRAM pools */
-
-	/** RX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		      TF_RSVD_SRAM_FULL_ACTION_RX);
-	/** TX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		      TF_RSVD_SRAM_FULL_ACTION_TX);
-
-	/** RX Multicast Group Pool, only RX is supported */
-	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
-		      TF_RSVD_SRAM_MCG_RX);
-
-	/** RX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_8B_RX);
-	/** TX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_8B_TX);
-
-	/** RX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_16B_RX);
-	/** TX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_16B_TX);
-
-	/** TX Encap 64B Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_64B_TX);
-
-	/** RX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		      TF_RSVD_SRAM_SP_SMAC_RX);
-	/** TX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_TX);
-
-	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-
-	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-
-	/** RX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_COUNTER_64B_RX);
-	/** TX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_COUNTER_64B_TX);
-
-	/** RX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_SPORT_RX);
-	/** TX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_SPORT_TX);
-
-	/** RX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_DPORT_RX);
-	/** TX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_DPORT_TX);
-
-	/** RX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	/** TX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
-
-	/** RX NAT Destination IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	/** TX NAT IPv4 Destination Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/**
-	 * Pools not allocated from HCAPI RM
-	 */
-
-	/** RX L2 Ctx Remap ID  Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 Ctx Remap ID Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** CRC32 seed table */
-#define TF_LKUP_SEED_MEM_SIZE 512
-	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
-
-	/** Lookup3 init values */
-	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index b5ce860..6303033 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -3,176 +3,412 @@
  * All rights reserved.
  */
 
-/* Truflow Table APIs and supporting code */
-
-#include <stdio.h>
-#include <string.h>
-#include <stdbool.h>
-#include <math.h>
-#include <sys/param.h>
 #include <rte_common.h>
-#include <rte_errno.h>
-#include "hsi_struct_def_dpdk.h"
 
-#include "tf_core.h"
+#include "tf_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
 #include "tf_util.h"
-#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
-#include "hwrm_tf.h"
-#include "bnxt.h"
-#include "tf_resources.h"
-#include "tf_rm.h"
-#include "stack.h"
-#include "tf_common.h"
+
+struct tf;
+
+/**
+ * Table DBs.
+ */
+static void *tbl_db[TF_DIR_MAX];
+
+/**
+ * Table Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
 
 /**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
+ * Shadow init flag, set on bind and cleared on unbind
  */
-static int
-tf_bulk_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms)
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table DB already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
+	return 0;
+}
+
+int
+tf_tbl_unbind(struct tf *tfp)
 {
 	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms)
+{
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
 	if (rc)
 		return rc;
 
-	index = parms->starting_idx;
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
 
-	/*
-	 * Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->starting_idx,
-			    &index);
+			    parms->idx);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
+{
+	int rc;
+	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
 		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   parms->type,
-		   index);
+		   parms->idx);
 		return -EINVAL;
 	}
 
-	/* Get the entry */
-	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
+	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
 }
 
-#if (TF_SHADOW == 1)
-/**
- * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is incremented and
- * returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count incremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
-			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+int
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Alloc with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	int rc;
+	uint16_t hcapi_type;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	return -EOPNOTSUPP;
-}
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
-/**
- * Free Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is decremented and
- * new ref_count returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_free_tbl_entry_shadow(struct tf_session *tfs,
-			 struct tf_free_tbl_entry_parms *parms)
-{
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Free with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Get the entry's HCAPI type */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
 
-	return -EOPNOTSUPP;
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
 }
-#endif /* TF_SHADOW */
 
-/* API defined in tf_core.h */
 int
-tf_bulk_get_tbl_entry(struct tf *tfp,
-		 struct tf_bulk_get_tbl_entry_parms *parms)
+tf_tbl_bulk_get(struct tf *tfp,
+		struct tf_tbl_get_bulk_parms *parms)
 {
-	int rc = 0;
+	int rc;
+	int i;
+	uint16_t hcapi_type;
+	uint32_t idx;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
+	if (!init) {
 		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
+			    "%s: No Table DBs created\n",
 			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
+	/* Verify that the entries have been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.allocated = &allocated;
+	idx = parms->starting_idx;
+	for (i = 0; i < parms->num_entries; i++) {
+		aparms.index = idx;
+		rc = tf_rm_is_allocated(&aparms);
 		if (rc)
+			return rc;
+
+		if (!allocated) {
 			TFP_DRV_LOG(ERR,
-				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    strerror(-rc));
+				    idx);
+			return -EINVAL;
+		}
+		idx++;
+	}
+
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entries */
+	rc = tf_msg_bulk_get_tbl_entry(tfp,
+				       parms->dir,
+				       hcapi_type,
+				       parms->starting_idx,
+				       parms->num_entries,
+				       parms->entry_sz_in_bytes,
+				       parms->physical_mem_addr);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 2b7456f..eb560ff 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -3,17 +3,21 @@
  * All rights reserved.
  */
 
-#ifndef _TF_TBL_H_
-#define _TF_TBL_H_
-
-#include <stdint.h>
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
 
 #include "tf_core.h"
 #include "stack.h"
 
-struct tf_session;
+struct tf;
+
+/**
+ * The Table module provides processing of Internal TF table types.
+ */
 
-/** table scope control block content */
+/**
+ * Table scope control block content
+ */
 struct tf_em_caps {
 	uint32_t flags;
 	uint32_t supported;
@@ -35,66 +39,364 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
-	struct tf_em_caps          em_caps[TF_DIR_MAX];
-	struct stack               ext_act_pool[TF_DIR_MAX];
-	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps em_caps[TF_DIR_MAX];
+	struct stack ext_act_pool[TF_DIR_MAX];
+	uint32_t *ext_act_pool_mem[TF_DIR_MAX];
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+};
+
+/**
+ * Table allocation parameters
+ */
+struct tf_tbl_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t *idx;
+};
+
+/**
+ * Table free parameters
+ */
+struct tf_tbl_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
- */
-#define TF_EM_PAGE_SIZE_4K 12
-#define TF_EM_PAGE_SIZE_8K 13
-#define TF_EM_PAGE_SIZE_64K 16
-#define TF_EM_PAGE_SIZE_256K 18
-#define TF_EM_PAGE_SIZE_1M 20
-#define TF_EM_PAGE_SIZE_2M 21
-#define TF_EM_PAGE_SIZE_4M 22
-#define TF_EM_PAGE_SIZE_1G 30
-
-/* Set page size */
-#define PAGE_SIZE TF_EM_PAGE_SIZE_2M
-
-#if (PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
-#else
-#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
-#endif
-
-#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
-#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
-
-/**
- * Initialize table pool structure to indicate
- * no table scope has been associated with the
- * external pool of indexes.
- *
- * [in] session
- */
-void
-tf_init_tbl_pool(struct tf_session *session);
-
-#endif /* _TF_TBL_H_ */
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table set parameters
+ */
+struct tf_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get parameters
+ */
+struct tf_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get bulk parameters
+ */
+struct tf_tbl_get_bulk_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [out] Host physical address where the data
+	 * will be copied to by the firmware.
+	 * Use the tfp_calloc() API and the mem_pa
+	 * member of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * @page tbl Table
+ *
+ * @ref tf_tbl_bind
+ *
+ * @ref tf_tbl_unbind
+ *
+ * @ref tf_tbl_alloc
+ *
+ * @ref tf_tbl_free
+ *
+ * @ref tf_tbl_alloc_search
+ *
+ * @ref tf_tbl_set
+ *
+ * @ref tf_tbl_get
+ *
+ * @ref tf_tbl_bulk_get
+ */
+
+/**
+ * Initializes the Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table allocation parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the shadow
+ * DB is enabled it is searched first and, if found, the element refcount
+ * is decremented. If the refcount goes to 0 the element is returned to the
+ * table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
+
+/**
+ * Retrieves bulk block of elements by sending a firmware request to
+ * get the elements.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get bulk parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bulk_get(struct tf *tfp,
+		    struct tf_tbl_get_bulk_parms *parms);
+
+#endif /* TF_TBL_TYPE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
deleted file mode 100644
index 2f5af60..0000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ /dev/null
@@ -1,342 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <rte_common.h>
-
-#include "tf_tbl_type.h"
-#include "tf_common.h"
-#include "tf_rm_new.h"
-#include "tf_util.h"
-#include "tf_msg.h"
-#include "tfp.h"
-
-struct tf;
-
-/**
- * Table DBs.
- */
-static void *tbl_db[TF_DIR_MAX];
-
-/**
- * Table Shadow DBs
- */
-/* static void *shadow_tbl_db[TF_DIR_MAX]; */
-
-/**
- * Init flag, set on bind and cleared on unbind
- */
-static uint8_t init;
-
-/**
- * Shadow init flag, set on bind and cleared on unbind
- */
-/* static uint8_t shadow_init; */
-
-int
-tf_tbl_bind(struct tf *tfp,
-	    struct tf_tbl_cfg_parms *parms)
-{
-	int rc;
-	int i;
-	struct tf_rm_create_db_parms db_cfg = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (init) {
-		TFP_DRV_LOG(ERR,
-			    "Table already initialized\n");
-		return -EINVAL;
-	}
-
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.cfg = parms->cfg;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		db_cfg.dir = i;
-		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
-		db_cfg.rm_db = &tbl_db[i];
-		rc = tf_rm_create_db(tfp, &db_cfg);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Table DB creation failed\n",
-				    tf_dir_2_str(i));
-
-			return rc;
-		}
-	}
-
-	init = 1;
-
-	printf("Table Type - initialized\n");
-
-	return 0;
-}
-
-int
-tf_tbl_unbind(struct tf *tfp __rte_unused)
-{
-	int rc;
-	int i;
-	struct tf_rm_free_db_parms fparms = { 0 };
-
-	TF_CHECK_PARMS1(tfp);
-
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No Table DBs created\n");
-		return -EINVAL;
-	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		fparms.dir = i;
-		fparms.rm_db = tbl_db[i];
-		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
-
-		tbl_db[i] = NULL;
-	}
-
-	init = 0;
-
-	return 0;
-}
-
-int
-tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms)
-{
-	int rc;
-	uint32_t idx;
-	struct tf_rm_allocate_parms aparms = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Allocate requested element */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = &idx;
-	rc = tf_rm_allocate(&aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed allocate, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return rc;
-	}
-
-	*parms->idx = idx;
-
-	return 0;
-}
-
-int
-tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms)
-{
-	int rc;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_free_parms fparms = { 0 };
-	int allocated = 0;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Check if element is in use */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Entry already free, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	/* Free requested element */
-	fparms.rm_db = tbl_db[parms->dir];
-	fparms.db_index = parms->type;
-	fparms.index = parms->idx;
-	rc = tf_rm_free(&fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Free failed, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_alloc_search(struct tf *tfp __rte_unused,
-		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
-{
-	return 0;
-}
-
-int
-tf_tbl_set(struct tf *tfp,
-	   struct tf_tbl_set_parms *parms)
-{
-	int rc;
-	int allocated = 0;
-	uint16_t hcapi_type;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_get(struct tf *tfp,
-	   struct tf_tbl_get_parms *parms)
-{
-	int rc;
-	uint16_t hcapi_type;
-	int allocated = 0;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
deleted file mode 100644
index 3474489..0000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ /dev/null
@@ -1,318 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_TBL_TYPE_H_
-#define TF_TBL_TYPE_H_
-
-#include "tf_core.h"
-
-struct tf;
-
-/**
- * The Table module provides processing of Internal TF table types.
- */
-
-/**
- * Table configuration parameters
- */
-struct tf_tbl_cfg_parms {
-	/**
-	 * Number of table types in each of the configuration arrays
-	 */
-	uint16_t num_elements;
-	/**
-	 * Table Type element configuration array
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Shadow table type configuration array
-	 */
-	struct tf_shadow_tbl_cfg *shadow_cfg;
-	/**
-	 * Boolean controlling the request shadow copy.
-	 */
-	bool shadow_copy;
-	/**
-	 * Session resource allocations
-	 */
-	struct tf_session_resources *resources;
-};
-
-/**
- * Table allocation parameters
- */
-struct tf_tbl_alloc_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t *idx;
-};
-
-/**
- * Table free parameters
- */
-struct tf_tbl_free_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation type
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t idx;
-	/**
-	 * [out] Reference count after free, only valid if session has been
-	 * created with shadow_copy.
-	 */
-	uint16_t ref_cnt;
-};
-
-/**
- * Table allocate search parameters
- */
-struct tf_tbl_alloc_search_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
-	 */
-	uint32_t tbl_scope_id;
-	/**
-	 * [in] Enable search for matching entry. If the table type is
-	 * internal the shadow copy will be searched before
-	 * alloc. Session must be configured with shadow copy enabled.
-	 */
-	uint8_t search_enable;
-	/**
-	 * [in] Result data to search for (if search_enable)
-	 */
-	uint8_t *result;
-	/**
-	 * [in] Result data size in bytes (if search_enable)
-	 */
-	uint16_t result_sz_in_bytes;
-	/**
-	 * [out] If search_enable, set if matching entry found
-	 */
-	uint8_t hit;
-	/**
-	 * [out] Current ref count after allocation (if search_enable)
-	 */
-	uint16_t ref_cnt;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table set parameters
- */
-struct tf_tbl_set_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to set
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [in] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to write to
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table get parameters
- */
-struct tf_tbl_get_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to get
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [out] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to read
-	 */
-	uint32_t idx;
-};
-
-/**
- * @page tbl Table
- *
- * @ref tf_tbl_bind
- *
- * @ref tf_tbl_unbind
- *
- * @ref tf_tbl_alloc
- *
- * @ref tf_tbl_free
- *
- * @ref tf_tbl_alloc_search
- *
- * @ref tf_tbl_set
- *
- * @ref tf_tbl_get
- */
-
-/**
- * Initializes the Table module with the requested DBs. Must be
- * invoked as the first thing before any of the access functions.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table configuration parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_bind(struct tf *tfp,
-		struct tf_tbl_cfg_parms *parms);
-
-/**
- * Cleans up the private DBs and releases all the data.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_unbind(struct tf *tfp);
-
-/**
- * Allocates the requested table type from the internal RM DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table allocation parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc(struct tf *tfp,
-		 struct tf_tbl_alloc_parms *parms);
-
-/**
- * Free's the requested table type and returns it to the DB. If shadow
- * DB is enabled its searched first and if found the element refcount
- * is decremented. If refcount goes to 0 then its returned to the
- * table type DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_free(struct tf *tfp,
-		struct tf_tbl_free_parms *parms);
-
-/**
- * Supported if Shadow DB is configured. Searches the Shadow DB for
- * any matching element. If found the refcount in the shadow DB is
- * updated accordingly. If not found a new element is allocated and
- * installed into the shadow DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc_search(struct tf *tfp,
-			struct tf_tbl_alloc_search_parms *parms);
-
-/**
- * Configures the requested element by sending a firmware request which
- * then installs it into the device internal structures.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_set(struct tf *tfp,
-	       struct tf_tbl_set_parms *parms);
-
-/**
- * Retrieves the requested element by sending a firmware request to get
- * the element.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table get parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_get(struct tf *tfp,
-	       struct tf_tbl_get_parms *parms);
-
-#endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index a1761ad..fc047f8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -9,7 +9,7 @@
 #include "tf_tcam.h"
 #include "tf_common.h"
 #include "tf_util.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_device.h"
 #include "tfp.h"
 #include "tf_session.h"
@@ -49,7 +49,7 @@ tf_tcam_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "TCAM already initialized\n");
+			    "TCAM DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -86,11 +86,12 @@ tf_tcam_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init)
-		return -EINVAL;
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No TCAM DBs created\n");
+		return 0;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
-- 
2.7.4
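
For reference, a minimal usage sketch of the reworked Table module API as
declared in tf_tbl.h above (bind once, then allocate and program an entry).
The table type value, entry payload and error handling are illustrative
placeholders only and are not part of this patch; TF_DIR_RX is the receive
direction from tf_core.h:

/* Sketch only: exercises tf_tbl_bind/tf_tbl_alloc/tf_tbl_set as declared
 * above. The cfg array, resource counts and table type are placeholders.
 */
static int tbl_usage_sketch(struct tf *tfp,
			    struct tf_rm_element_cfg *cfg,
			    uint16_t num_elements,
			    struct tf_session_resources *res)
{
	struct tf_tbl_cfg_parms cparms = { 0 };
	struct tf_tbl_alloc_parms aparms = { 0 };
	struct tf_tbl_set_parms sparms = { 0 };
	uint8_t data[8] = { 0 };	/* entry payload, size is type specific */
	uint32_t idx;
	int rc;

	/* Bind creates the per-direction Table RM DBs */
	cparms.num_elements = num_elements;
	cparms.cfg = cfg;
	cparms.resources = res;
	rc = tf_tbl_bind(tfp, &cparms);
	if (rc)
		return rc;

	/* Allocate an index for an internal table type, then write it */
	aparms.dir = TF_DIR_RX;
	aparms.type = 0;	/* placeholder enum tf_tbl_type value */
	aparms.idx = &idx;
	rc = tf_tbl_alloc(tfp, &aparms);
	if (rc)
		return rc;

	sparms.dir = TF_DIR_RX;
	sparms.type = aparms.type;
	sparms.data = data;
	sparms.data_sz_in_bytes = sizeof(data);
	sparms.idx = idx;
	return tf_tbl_set(tfp, &sparms);
}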


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 24/50] net/bnxt: update RM to support HCAPI only
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (22 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 23/50] net/bnxt: update table get to use new design Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 25/50] net/bnxt: remove table scope from session Somnath Kotur
                   ` (26 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- For the EM module, only the EM record allocation needs to go through
  HCAPI RM; storage control is kept outside of the RM DB.
- Add TF_RM_ELEM_CFG_HCAPI_BA (an illustrative configuration sketch
  follows this list).
- Return an error from tf_tcam_bind when the number of reserved WC TCAM
  entries is odd (a sketch of such a check follows the diffstat below).
- Remove em_pool from session
- Use RM provided start offset and size
- HCAPI returns entry index instead of row index for WC TCAM.
- Move resource type conversion to hwrm set/free tcam functions.
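
As an illustration of the new configuration type, a sketch of a device
element configuration array mixing the two HCAPI cfg types. Treating
TF_RM_ELEM_CFG_HCAPI_BA as "HCAPI-managed with a local bit-allocator pool
in the RM DB" and plain TF_RM_ELEM_CFG_HCAPI as "HCAPI-managed only, no
local pool" is an interpretation of the log above, not text from this
patch; the exact EM record setting in tf_em_int_p4 is likewise assumed:

/* Sketch only: per-device element configuration mixing the two HCAPI
 * cfg types. Pool creation is assumed to depend on the cfg type.
 */
struct tf_rm_element_cfg example_cfg[] = {
	/* EM records: allocated through HCAPI RM, storage controlled
	 * outside the RM DB, so no local bit-allocator pool (assumed).
	 */
	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_REC },
	/* Table scopes: allocated through HCAPI RM and tracked locally
	 * with a bit-allocator pool inside the RM DB.
	 */
	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
};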

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   2 +
 drivers/net/bnxt/tf_core/tf_device_p4.h   |  54 ++++++------
 drivers/net/bnxt/tf_core/tf_em_internal.c | 131 +++++++++++++++++++-----------
 drivers/net/bnxt/tf_core/tf_msg.c         |   6 +-
 drivers/net/bnxt/tf_core/tf_rm.c          |  81 +++++++++---------
 drivers/net/bnxt/tf_core/tf_rm.h          |  14 +++-
 drivers/net/bnxt/tf_core/tf_session.h     |   5 --
 drivers/net/bnxt/tf_core/tf_tcam.c        |  21 +++++
 8 files changed, 190 insertions(+), 124 deletions(-)
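
The tf_tcam.c change counted above adds the odd-count validation called out
in the log; a minimal sketch of such a check at bind time follows. The
tf_tcam_cfg_parms structure, its resources->tcam_cnt array and the
TF_TCAM_TBL_TYPE_WC_TCAM enum value are assumed names mirroring the table
module pattern shown earlier, not guaranteed to match the patch exactly:

/* Sketch only: reject an odd WC TCAM reservation before the per-direction
 * TCAM RM DBs are created.
 */
static int wc_tcam_cnt_check(struct tf_tcam_cfg_parms *parms)
{
	int d;

	for (d = 0; d < TF_DIR_MAX; d++) {
		uint16_t cnt =
			parms->resources->tcam_cnt[d].cnt[TF_TCAM_TBL_TYPE_WC_TCAM];

		if (cnt % 2) {
			TFP_DRV_LOG(ERR,
				    "%s: odd number of WC TCAM entries %u\n",
				    tf_dir_2_str(d), cnt);
			return -EINVAL;
		}
	}

	return 0;
}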

diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index e352667..1eaf182 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -68,6 +68,8 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
 		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
 			return -ENOTSUP;
+
+		*num_slices_per_row = 1;
 	} else { /* for other type of tcam */
 		*num_slices_per_row = 1;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 473e4ea..8fae180 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,19 +12,19 @@
 #include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
 	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_TCAM },
 	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
@@ -32,26 +32,26 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
 	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
 	/* CFA_RESOURCE_TYPE_P4_UPAR */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_EPOC */
@@ -79,7 +79,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EM_REC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
 };
 
 struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 1c51474..3129fbe 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -23,20 +23,28 @@
  */
 static void *em_db[TF_DIR_MAX];
 
+#define TF_EM_DB_EM_REC 0
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
 static uint8_t init;
 
+
+/**
+ * EM Pool
+ */
+static struct stack em_pool[TF_DIR_MAX];
+
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  * [in] num_entries
  *   number of entries to write
+ * [in] start
+ *   starting offset
  *
  * Return:
  *  0       - Success, entry allocated - no search support
@@ -44,54 +52,66 @@ static uint8_t init;
  *          - Failure, entry not allocated, out of resources
  */
 static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
+tf_create_em_pool(enum tf_dir dir,
+		  uint32_t num_entries,
+		  uint32_t start)
 {
 	struct tfp_calloc_parms parms;
 	uint32_t i, j;
 	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 
-	parms.nitems = num_entries;
+	/* Assumes that num_entries has been checked before we get here */
+	parms.nitems = num_entries / TF_SESSION_EM_ENTRY_SIZE;
 	parms.size = sizeof(uint32_t);
 	parms.alignment = 0;
 
 	rc = tfp_calloc(&parms);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool allocation failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+	rc = stack_init(num_entries / TF_SESSION_EM_ENTRY_SIZE,
+			(uint32_t *)parms.mem_va,
+			pool);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack init failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
 
 	/* Fill pool with indexes
 	 */
-	j = num_entries - 1;
+	j = start + num_entries - TF_SESSION_EM_ENTRY_SIZE;
 
-	for (i = 0; i < num_entries; i++) {
+	for (i = 0; i < (num_entries / TF_SESSION_EM_ENTRY_SIZE); i++) {
 		rc = stack_push(pool, j);
 		if (rc) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, EM pool stack push failure %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup;
 		}
-		j--;
+
+		j -= TF_SESSION_EM_ENTRY_SIZE;
 	}
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
@@ -105,18 +125,15 @@ tf_create_em_pool(struct tf_session *session,
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  *
  * Return:
  */
 static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
+tf_free_em_pool(enum tf_dir dir)
 {
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 	uint32_t *ptr;
 
 	ptr = stack_items(pool);
@@ -140,22 +157,19 @@ tf_em_insert_int_entry(struct tf *tfp,
 	uint16_t rptr_index = 0;
 	uint8_t rptr_entry = 0;
 	uint8_t num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 	uint32_t index;
 
 	rc = stack_pop(pool, &index);
 
 	if (rc) {
-		PMD_DRV_LOG
-		  (ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
+		PMD_DRV_LOG(ERR,
+			    "%s, EM entry index allocation failed\n",
+			    tf_dir_2_str(parms->dir));
 		return rc;
 	}
 
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rptr_index = index;
 	rc = tf_msg_insert_em_internal_entry(tfp,
 					     parms,
 					     &rptr_index,
@@ -166,8 +180,9 @@ tf_em_insert_int_entry(struct tf *tfp,
 
 	PMD_DRV_LOG
 		  (ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   "%s, Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   index,
 		   rptr_index,
 		   rptr_entry,
 		   num_of_entries);
@@ -204,15 +219,13 @@ tf_em_delete_int_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	int rc = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 
 	rc = tf_msg_delete_em_entry(tfp, parms);
 
 	/* Return resource to pool */
 	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+		stack_push(pool, parms->index);
 
 	return rc;
 }
@@ -224,8 +237,9 @@ tf_em_int_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
-	struct tf_session *session;
 	uint8_t db_exists = 0;
+	struct tf_rm_get_alloc_info_parms iparms;
+	struct tf_rm_alloc_info info;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -235,14 +249,6 @@ tf_em_int_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		tf_create_em_pool(session,
-				  i,
-				  TF_SESSION_EM_POOL_SIZE);
-	}
-
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -257,6 +263,18 @@ tf_em_int_bind(struct tf *tfp,
 		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
 			continue;
 
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] %
+		    TF_SESSION_EM_ENTRY_SIZE != 0) {
+			rc = -ENOMEM;
+			TFP_DRV_LOG(ERR,
+				    "%s, EM Allocation must be in blocks of %d, failure %s\n",
+				    tf_dir_2_str(i),
+				    TF_SESSION_EM_ENTRY_SIZE,
+				    strerror(-rc));
+
+			return rc;
+		}
+
 		db_cfg.rm_db = &em_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
@@ -272,6 +290,28 @@ tf_em_int_bind(struct tf *tfp,
 	if (db_exists)
 		init = 1;
 
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		iparms.rm_db = em_db[i];
+		iparms.db_index = TF_EM_DB_EM_REC;
+		iparms.info = &info;
+
+		rc = tf_rm_get_info(&iparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB get info failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+
+		rc = tf_create_em_pool(i,
+				       iparms.info->entry.stride,
+				       iparms.info->entry.start);
+		/* Logging handled in tf_create_em_pool */
+		if (rc)
+			return rc;
+	}
+
+
 	return 0;
 }
 
@@ -281,7 +321,6 @@ tf_em_int_unbind(struct tf *tfp)
 	int rc;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
-	struct tf_session *session;
 
 	TF_CHECK_PARMS1(tfp);
 
@@ -292,10 +331,8 @@ tf_em_int_unbind(struct tf *tfp)
 		return 0;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	for (i = 0; i < TF_DIR_MAX; i++)
-		tf_free_em_pool(session, i);
+		tf_free_em_pool(i);
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 02d8a49..7fffb6b 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -857,12 +857,12 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < size)
+	if (tfp_le_to_cpu_32(resp.size) != size)
 		return -EINVAL;
 
 	tfp_memcpy(data,
 		   &resp.data,
-		   resp.size);
+		   size);
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
@@ -919,7 +919,7 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < data_size)
+	if (tfp_le_to_cpu_32(resp.size) != data_size)
 		return -EINVAL;
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0469b6..e7af9eb 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -106,7 +106,8 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 	uint16_t cnt = 0;
 
 	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		if ((cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		     cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) &&
 		    reservations[i] > 0)
 			cnt++;
 
@@ -467,7 +468,8 @@ tf_rm_create_db(struct tf *tfp,
 	/* Build the request */
 	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		    parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 			/* Only perform reservation for entries that
 			 * has been requested
 			 */
@@ -529,7 +531,8 @@ tf_rm_create_db(struct tf *tfp,
 		/* Skip any non HCAPI types as we didn't include them
 		 * in the reservation request.
 		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+		    parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 			continue;
 
 		/* If the element didn't request an allocation no need
@@ -551,29 +554,32 @@ tf_rm_create_db(struct tf *tfp,
 			       resv[j].start,
 			       resv[j].stride);
 
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
+			/* Only allocate BA pool if so requested */
+			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
+				/* Create pool */
+				pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+					     sizeof(struct bitalloc));
+				/* Alloc request, alignment already set */
+				cparms.nitems = pool_size;
+				cparms.size = sizeof(struct bitalloc);
+				rc = tfp_calloc(&cparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool alloc failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
+				db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+				rc = ba_init(db[i].pool, resv[j].stride);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool init failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
 			}
 			j++;
 		} else {
@@ -682,6 +688,9 @@ tf_rm_free_db(struct tf *tfp,
 				    tf_device_module_type_2_str(rm_db->type));
 	}
 
+	/* No need to check for configuration type, even if we do not
+	 * have a BA pool we just delete on a null ptr, no harm
+	 */
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -705,8 +714,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -770,8 +778,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -816,8 +823,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -857,9 +863,9 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	memcpy(parms->info,
@@ -880,9 +886,9 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
@@ -903,8 +909,7 @@ tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail silently (no logging), if the pool is not valid there
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5cb6889..f44fcca 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -56,12 +56,18 @@ struct tf_rm_new_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	/** No configuration */
+	/**
+	 * No configuration
+	 */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
+	/** HCAPI 'controlled', no RM storage thus the Device Module
+	 *  using the RM can chose to handle storage locally.
+	 */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
+	/** HCAPI 'controlled', uses a Bit Allocator Pool for internal
+	 *  storage in the RM.
+	 */
+	TF_RM_ELEM_CFG_HCAPI_BA,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
 	 * is not controlled by the Host.
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index e4472ed..ebee4db 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -103,11 +103,6 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
-
-	/**
-	 * EM Pools
-	 */
-	struct stack em_pool[TF_DIR_MAX];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index fc047f8..d5bb4ee 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -43,6 +43,7 @@ tf_tcam_bind(struct tf *tfp,
 {
 	int rc;
 	int i;
+	struct tf_tcam_resources *tcam_cnt;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -53,6 +54,14 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	tcam_cnt = parms->resources->tcam_cnt;
+	if ((tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2) ||
+	    (tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2)) {
+		TFP_DRV_LOG(ERR,
+			    "Number of WC TCAM entries cannot be odd num\n");
+		return -EINVAL;
+	}
+
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -168,6 +177,18 @@ tf_tcam_alloc(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM &&
+	    (parms->idx % 2) != 0) {
+		rc = tf_rm_allocate(&aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Failed tcam, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type);
+			return rc;
+		}
+	}
+
 	parms->idx *= num_slice_per_row;
 
 	return 0;
-- 
2.7.4


* [dpdk-dev] [PATCH 25/50] net/bnxt: remove table scope from session
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (23 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 24/50] net/bnxt: update RM to support HCAPI only Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 26/50] net/bnxt: add external action alloc and free Somnath Kotur
                   ` (25 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Remove table scope data from the session and move it into EEM
  (see the sketch below).
- Complete the move of the table scope base and range to RM.
- Fix some error message strings.
- Fix the TCAM logging message.
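
Not part of this patch, for illustration only: a minimal sketch of the
resulting lookup. With the control blocks held in a module-level array
rather than in the session, finding a table scope is a scan of that array
by ID. The array size and struct layout below are hypothetical stand-ins
for TF_NUM_TBL_SCOPE and struct tf_tbl_scope_cb.

#include <stddef.h>
#include <stdint.h>

#define NUM_TBL_SCOPE 16	/* stands in for TF_NUM_TBL_SCOPE */

struct scope_cb {
	uint32_t tbl_scope_id;	/* simplified control block */
};

static struct scope_cb scopes[NUM_TBL_SCOPE];

/* Mirrors the reworked tbl_scope_cb_find(): no session argument,
 * only the table scope ID is needed.
 */
static struct scope_cb *scope_find(uint32_t id)
{
	int i;

	for (i = 0; i < NUM_TBL_SCOPE; i++)
		if (scopes[i].tbl_scope_id == id)
			return &scopes[i];
	return NULL;
}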

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c      |  2 +-
 drivers/net/bnxt/tf_core/tf_em.h        |  1 -
 drivers/net/bnxt/tf_core/tf_em_common.c | 16 ++++++++------
 drivers/net/bnxt/tf_core/tf_em_common.h |  5 +----
 drivers/net/bnxt/tf_core/tf_em_host.c   | 38 +++++++++++++--------------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 12 ++++-------
 drivers/net/bnxt/tf_core/tf_session.h   |  3 ---
 drivers/net/bnxt/tf_core/tf_tcam.c      |  6 ++++--
 8 files changed, 35 insertions(+), 48 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8727900..6410843 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -573,7 +573,7 @@ tf_free_tcam_entry(struct tf *tfp,
 	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: TCAM allocation failed, rc:%s\n",
+			    "%s: TCAM free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 7042f44..c3c712f 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,7 +9,6 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
-#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index d0d80da..e31a63b 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,6 +29,8 @@
  */
 void *eem_db[TF_DIR_MAX];
 
+#define TF_EEM_DB_TBL_SCOPE 1
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -39,10 +41,12 @@ static uint8_t init;
  */
 static enum tf_mem_type mem_type;
 
+/** Table scope array */
+struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
+tbl_scope_cb_find(uint32_t tbl_scope_id)
 {
 	int i;
 	struct tf_rm_is_allocated_parms parms;
@@ -50,8 +54,8 @@ tbl_scope_cb_find(struct tf_session *session,
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
@@ -60,8 +64,8 @@ tbl_scope_cb_find(struct tf_session *session,
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
+		if (tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &tbl_scopes[i];
 	}
 
 	return NULL;
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index 45699a7..bf01df9 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -14,8 +14,6 @@
  * Function to search for table scope control block structure
  * with specified table scope ID.
  *
- * [in] session
- *   Session to use for the search of the table scope control block
  * [in] tbl_scope_id
  *   Table scope ID to search for
  *
@@ -23,8 +21,7 @@
  *  Pointer to the found table scope control block struct or NULL if
  *   table scope control block struct not found
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+struct tf_tbl_scope_cb *tbl_scope_cb_find(uint32_t tbl_scope_id);
 
 /**
  * Create and initialize a stack to use for action entries
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 11899c6..f36c9dc 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,6 +48,9 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
+#define TF_EEM_DB_TBL_SCOPE 1
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
 /**
  * Function to free a page table
@@ -935,14 +938,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_entry(struct tf *tfp,
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find(
-		(struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -960,14 +961,12 @@ tf_em_insert_ext_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_entry(struct tf *tfp,
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find(
-		(struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -984,16 +983,13 @@ tf_em_ext_host_alloc(struct tf *tfp,
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct hcapi_cfa_em_table *em_tables;
-	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 	struct tf_rm_allocate_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -1002,8 +998,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 		return rc;
 	}
 
-	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
-	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
 	tbl_scope_cb->index = parms->tbl_scope_id;
 	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
 
@@ -1095,8 +1090,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
 }
@@ -1108,13 +1103,9 @@ tf_em_ext_host_free(struct tf *tfp,
 	int rc = 0;
 	enum tf_dir  dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
 	struct tf_rm_free_parms aparms = { 0 };
 
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Table scope error\n");
@@ -1123,8 +1114,8 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1145,5 +1136,6 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index ee18a0c..6dd1154 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -63,14 +63,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_sys_entry(struct tf *tfp,
+tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -87,14 +85,12 @@ tf_em_insert_ext_sys_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_sys_entry(struct tf *tfp,
+tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index ebee4db..a303fde 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -100,9 +100,6 @@ struct tf_session {
 
 	/** Device handle */
 	struct tf_dev_info dev;
-
-	/** Table scope array */
-	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index d5bb4ee..b67159a 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -287,7 +287,8 @@ tf_tcam_free(struct tf *tfp,
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
@@ -382,7 +383,8 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d set failed, rc:%s",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
-- 
2.7.4


* [dpdk-dev] [PATCH 26/50] net/bnxt: add external action alloc and free
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (24 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 25/50] net/bnxt: remove table scope from session Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 27/50] net/bnxt: align CFA resources with RM Somnath Kotur
                   ` (24 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Jay Ding <jay.ding@broadcom.com>

- Link external action alloc and free to the new HCAPI interface
  (see the sketch below).
- Add parameter range checking.
- Fix issues with the index allocation check.
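
Not part of this patch, for illustration only: a minimal sketch of the
allocation model behind the external action alloc/free paths. Each table
scope keeps a per-direction stack of free indexes, so allocation is a pop
and free is a push. The tiny array-backed stack below is a hypothetical
stand-in for the driver's stack_pop()/stack_push() helpers.

#include <stdint.h>

#define POOL_DEPTH 8	/* hypothetical pool size */

struct idx_stack {
	uint32_t items[POOL_DEPTH];
	int top;		/* number of free indexes currently held */
};

static int idx_pop(struct idx_stack *s, uint32_t *idx)
{
	if (s->top == 0)
		return -1;	/* out of external entries */
	*idx = s->items[--s->top];
	return 0;
}

static int idx_push(struct idx_stack *s, uint32_t idx)
{
	if (s->top == POOL_DEPTH)
		return -1;	/* stack full, i.e. a double free */
	s->items[s->top++] = idx;
	return 0;
}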

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c       | 163 ++++++++++++++++++++++---------
 drivers/net/bnxt/tf_core/tf_core.h       |   4 -
 drivers/net/bnxt/tf_core/tf_device.h     |  58 +++++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   6 ++
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   2 -
 drivers/net/bnxt/tf_core/tf_em.h         |  95 ++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_em_common.c  | 120 ++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  80 ++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_system.c  |   6 ++
 drivers/net/bnxt/tf_core/tf_identifier.c |   4 +-
 drivers/net/bnxt/tf_core/tf_rm.h         |   5 +
 drivers/net/bnxt/tf_core/tf_tbl.c        |  10 +-
 drivers/net/bnxt/tf_core/tf_tbl.h        |  12 +++
 drivers/net/bnxt/tf_core/tf_tcam.c       |   8 +-
 drivers/net/bnxt/tf_core/tf_util.c       |   4 -
 15 files changed, 499 insertions(+), 78 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6410843..45accb0 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -617,25 +617,48 @@ tf_alloc_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_alloc_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	aparms.dir = parms->dir;
 	aparms.type = parms->type;
 	aparms.idx = &idx;
-	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table allocation failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	aparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_alloc_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_ext_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: External table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+
+	} else {
+		if (dev->ops->tf_dev_alloc_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	parms->idx = idx;
@@ -677,25 +700,47 @@ tf_free_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_free_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	fparms.dir = parms->dir;
 	fparms.type = parms->type;
 	fparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table free failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	fparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_free_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_ext_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_free_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return 0;
@@ -735,27 +780,49 @@ tf_set_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_set_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	sparms.dir = parms->dir;
 	sparms.type = parms->type;
 	sparms.data = parms->data;
 	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
 	sparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table set failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	sparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_set_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_ext_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_set_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index a7a7bd3..e898f19 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -211,10 +211,6 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
 	/** Wh+/SR Action _Modify L4 Dest Port */
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
 	/** Meter Profiles */
 	TF_TBL_TYPE_METER_PROF,
 	/** Meter Instance */
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 93f3627..58b7a4a 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -217,6 +217,26 @@ struct tf_dev_ops {
 				struct tf_tbl_alloc_parms *parms);
 
 	/**
+	 * Allocation of a external table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ext_tbl)(struct tf *tfp,
+				    struct tf_tbl_alloc_parms *parms);
+
+	/**
 	 * Free of a table type element.
 	 *
 	 * This API free's a previous allocated table type element from a
@@ -236,6 +256,25 @@ struct tf_dev_ops {
 			       struct tf_tbl_free_parms *parms);
 
 	/**
+	 * Free of a external table type element.
+	 *
+	 * This API free's a previous allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ext_tbl)(struct tf *tfp,
+				   struct tf_tbl_free_parms *parms);
+
+	/**
 	 * Searches for the specified table type element in a shadow DB.
 	 *
 	 * This API searches for the specified table type element in a
@@ -277,6 +316,25 @@ struct tf_dev_ops {
 			      struct tf_tbl_set_parms *parms);
 
 	/**
+	 * Sets the specified external table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_ext_tbl)(struct tf *tfp,
+				  struct tf_tbl_set_parms *parms);
+
+	/**
 	 * Retrieves the specified table type element.
 	 *
 	 * This API retrieves the specified element data by invoking the
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 1eaf182..9a32307 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -85,10 +85,13 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = NULL,
 	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_ext_tbl = NULL,
 	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_ext_tbl = NULL,
 	.tf_dev_free_tbl = NULL,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
+	.tf_dev_set_ext_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
 	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
@@ -113,9 +116,12 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_alloc_ext_tbl = tf_tbl_ext_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 8fae180..298e100 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -47,8 +47,6 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index c3c712f..1c2369c 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -456,4 +456,99 @@ int tf_em_ext_common_free(struct tf *tfp,
  */
 int tf_em_ext_common_alloc(struct tf *tfp,
 			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_system_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
+
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index e31a63b..39a8412 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,8 +29,6 @@
  */
 void *eem_db[TF_DIR_MAX];
 
-#define TF_EEM_DB_TBL_SCOPE 1
-
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -54,13 +52,13 @@ tbl_scope_cb_find(uint32_t tbl_scope_id)
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
 
-	if (i < 0 || !allocated)
+	if (i < 0 || allocated != TF_RM_ALLOCATED_ENTRY_IN_USE)
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
@@ -158,6 +156,111 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type);
+		return rc;
+	}
+
+	*parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms)
+{
+	int rc = 0;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
 uint32_t
 tf_em_get_key_mask(int num_entries)
 {
@@ -273,6 +376,15 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_tbl_ext_host_set(tfp, parms);
+	else
+		return tf_tbl_ext_system_set(tfp, parms);
+}
+
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index f36c9dc..d734f1a 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,7 +48,6 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
-#define TF_EEM_DB_TBL_SCOPE 1
 
 extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
@@ -989,7 +988,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -1090,7 +1089,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
@@ -1114,7 +1113,7 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
@@ -1136,6 +1135,77 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
-	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
+	return rc;
+}
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 6dd1154..10768df 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -112,3 +112,9 @@ tf_em_ext_system_free(struct tf *tfp __rte_unused,
 {
 	return 0;
 }
+
+int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
+			  struct tf_tbl_set_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 2113710..2198392 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -159,13 +159,13 @@ tf_ident_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->id);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index f44fcca..fd04480 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -12,6 +12,11 @@
 
 struct tf;
 
+/** RM return codes */
+#define TF_RM_ALLOCATED_ENTRY_FREE        0
+#define TF_RM_ALLOCATED_ENTRY_IN_USE      1
+#define TF_RM_ALLOCATED_NO_ENTRY_FOUND   -1
+
 /**
  * The Resource Manager (RM) module provides basic DB handling for
  * internal resources. These resources exists within the actual device
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 6303033..9b1b20f 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -169,13 +169,13 @@ tf_tbl_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -230,7 +230,7 @@ tf_tbl_set(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -298,7 +298,7 @@ tf_tbl_get(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -371,7 +371,7 @@ tf_tbl_bulk_get(struct tf *tfp,
 		if (rc)
 			return rc;
 
-		if (!allocated) {
+		if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 			TFP_DRV_LOG(ERR,
 				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index eb560ff..2a10b47 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -84,6 +84,10 @@ struct tf_tbl_alloc_parms {
 	 */
 	enum tf_tbl_type type;
 	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
 	uint32_t *idx;
@@ -102,6 +106,10 @@ struct tf_tbl_free_parms {
 	 */
 	enum tf_tbl_type type;
 	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
 	 * [in] Index to free
 	 */
 	uint32_t idx;
@@ -169,6 +177,10 @@ struct tf_tbl_set_parms {
 	 */
 	enum tf_tbl_type type;
 	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
 	 * [in] Entry data
 	 */
 	uint8_t *data;
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b67159a..b1092cd 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -252,13 +252,13 @@ tf_tcam_free(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -362,13 +362,13 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry is not allocated, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Convert TF type to HCAPI RM type */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 5472a9a..85f6e25 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -92,10 +92,6 @@ tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 		return "NAT IPv4 Source";
 	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
 		return "NAT IPv4 Destination";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-		return "NAT IPv6 Source";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-		return "NAT IPv6 Destination";
 	case TF_TBL_TYPE_METER_PROF:
 		return "Meter Profile";
 	case TF_TBL_TYPE_METER_INST:
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 27/50] net/bnxt: align CFA resources with RM
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (25 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 26/50] net/bnxt: add external action alloc and free Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 28/50] net/bnxt: implement IF tables set and get Somnath Kotur
                   ` (23 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Randy Schacher <stuart.schacher@broadcom.com>

- Align the HCAPI CFA resource type definitions with the Resource Manager
- Clean up unnecessary debug messages: verbose dumps move behind a
  compile-time flag and the remaining messages use the driver logger
  (a minimal sketch of the pattern follows this list)
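
A minimal sketch of the debug clean-up pattern applied in tf_msg.c and
tf_rm.c below; the helper name is hypothetical and TFP_DRV_LOG() is the
existing driver logging macro from tfp.h, so treat this as an
illustration rather than code taken from the patch:

#include <stdio.h>
#include "tfp.h"            /* provides TFP_DRV_LOG() */

#define TF_RM_MSG_DEBUG  0  /* set to 1 for verbose RM message dumps */

static void
example_log_resc_entry(int type, int min, int max)
{
#if (TF_RM_MSG_DEBUG == 1)
        /* Bring-up only: raw dump of the queried capability entry */
        printf("type: %d(0x%x) %d %d\n", type, type, min, max);
#endif /* (TF_RM_MSG_DEBUG == 1) */
        /* Operational messages go through the driver logger, not printf() */
        TFP_DRV_LOG(INFO,
                    "RM resc entry type:%d min:%d max:%d\n",
                    type, min, max);
}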

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 250 ++++++++++++++------------
 drivers/net/bnxt/tf_core/tf_identifier.c      |   3 +-
 drivers/net/bnxt/tf_core/tf_msg.c             |  37 ++--
 drivers/net/bnxt/tf_core/tf_rm.c              |  21 +--
 drivers/net/bnxt/tf_core/tf_tbl.c             |   3 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |  28 ++-
 6 files changed, 197 insertions(+), 145 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 6e79fac..6d6651f 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -48,232 +48,246 @@
 #define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
 #define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
-/* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
+/* EPOCH 0 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH0          0xfUL
+/* EPOCH 1 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH1          0x10UL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x11UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x12UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x13UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x14UL
 /* Link Aggregation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x15UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x16UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P58_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6         0x6UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT           0x7UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT           0x8UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4          0x9UL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
-/* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
-/* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4          0xaUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER               0xbUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE          0xcUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION         0xdUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION     0xeUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P58_EXT_FORMAT_0_ACTION 0xfUL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_1_ACTION     0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION     0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION     0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION     0x13UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_5_ACTION     0x14UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_6_ACTION     0x15UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM        0x16UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP       0x17UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC           0x18UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM           0x19UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID     0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P58_EM_REC              0x1cUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM             0x1dUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF          0x1eUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR              0x1fUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM             0x20UL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB              0x21UL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB              0x22UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
-#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM            0x23UL
+#define CFA_RESOURCE_TYPE_P58_LAST               CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P45_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P45_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P45_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM             0x21UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM            0x22UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE           0x23UL
+#define CFA_RESOURCE_TYPE_P45_LAST               CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P4_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P4_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P4_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM             0x21UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE           0x22UL
+#define CFA_RESOURCE_TYPE_P4_LAST               CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 2198392..2cc43b4 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -59,7 +59,8 @@ tf_ident_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Identifier - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Identifier - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 7fffb6b..659065d 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,6 +18,9 @@
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
+/* Logging defines */
+#define TF_RM_MSG_DEBUG  0
+
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -215,7 +218,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -225,31 +228,39 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nQCAPS\n");
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("type: %d(0x%x) %d %d\n",
 		       query[i].type,
 		       query[i].type,
 		       query[i].min,
 		       query[i].max);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	}
 
 	*resv_strategy = resp.flags &
 	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
 
+cleanup:
 	tf_msg_free_dma_buf(&qcaps_buf);
 
 	return rc;
@@ -293,8 +304,10 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	dma_size = size * sizeof(struct tf_rm_resc_entry);
 	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
-	if (rc)
+	if (rc) {
+		tf_msg_free_dma_buf(&req_buf);
 		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
@@ -320,7 +333,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -330,11 +343,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nRESV\n");
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
@@ -343,14 +359,17 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
 		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
 		       resv[i].type,
 		       resv[i].type,
 		       resv[i].start,
 		       resv[i].stride);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	}
 
+cleanup:
 	tf_msg_free_dma_buf(&req_buf);
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -412,8 +431,6 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	parms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp, &parms);
-	if (rc)
-		return rc;
 
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -434,7 +451,7 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
-	uint32_t flags;
+	uint16_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
@@ -480,7 +497,7 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
+	uint16_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
@@ -726,8 +743,6 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
-	if (rc)
-		goto cleanup;
 
 cleanup:
 	tf_msg_free_dma_buf(&buf);
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e7af9eb..30313e2 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -17,6 +17,9 @@
 #include "tfp.h"
 #include "tf_msg.h"
 
+/* Logging defines */
+#define TF_RM_DEBUG  0
+
 /**
  * Generic RM Element data type that an RM DB is build upon.
  */
@@ -120,16 +123,11 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
 		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
+				"%s, %s, %s allocation of %d not supported\n",
 				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-
+				tf_device_module_type_subtype_2_str(type, i),
+				reservations[i]);
 		}
 	}
 
@@ -549,11 +547,6 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
 			/* Only allocate BA pool if so requested */
 			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 				/* Create pool */
@@ -603,10 +596,12 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+#if (TF_RM_DEBUG == 1)
 	printf("%s: type:%d num_entries:%d\n",
 	       tf_dir_2_str(parms->dir),
 	       parms->type,
 	       i);
+#endif /* (TF_RM_DEBUG == 1) */
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 9b1b20f..a864b16 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -71,7 +71,8 @@ tf_tbl_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Table Type - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Table Type - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b1092cd..1c48b53 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -81,7 +81,8 @@ tf_tcam_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("TCAM - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "TCAM - initialized\n");
 
 	return 0;
 }
@@ -275,6 +276,31 @@ tf_tcam_free(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		int i;
+
+		for (i = -1; i < 3; i += 3) {
+			aparms.index += i;
+			rc = tf_rm_is_allocated(&aparms);
+			if (rc)
+				return rc;
+
+			if (allocated == TF_RM_ALLOCATED_ENTRY_IN_USE) {
+				/* Free requested element */
+				fparms.index = aparms.index;
+				rc = tf_rm_free(&fparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+						    "%s: Free failed, type:%d, index:%d\n",
+						    tf_dir_2_str(parms->dir),
+						    parms->type,
+						    fparms.index);
+					return rc;
+				}
+			}
+		}
+	}
+
 	/* Convert TF type to HCAPI RM type */
 	hparms.rm_db = tcam_db[parms->dir];
 	hparms.db_index = parms->type;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 28/50] net/bnxt: implement IF tables set and get
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (26 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 27/50] net/bnxt: align CFA resources with RM Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 29/50] net/bnxt: add TF register and unregister Somnath Kotur
                   ` (22 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Jay Ding <jay.ding@broadcom.com>

- Implement set/get for the PROF_SPIF_CTXT, LKUP_PF_DFLT_ARP and
  PROF_PF_ERR_ARP interface tables using tunneled HWRM messages
  (a hypothetical usage sketch for the new IF table API follows this list)
- Add an IF table for PROF_PARIF_DFLT_ARP
- Fix the page size offset in the HCAPI code
- Fix the entry offset calculation
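
The hunks below add tf_set_if_tbl_entry() and tf_get_if_tbl_entry() to
tf_core. The following caller sketch is hypothetical (not part of the
patch): it fills the new parameter structures as declared in tf_core.h
below, and assumes TF_DIR_RX is the existing direction enum value:

#include "tf_core.h"

static int
example_set_parif_dflt_act_rec(struct tf *tfp, uint32_t parif,
                               uint32_t act_rec_ptr)
{
        struct tf_set_if_tbl_entry_parms sparms = { 0 };
        struct tf_get_if_tbl_entry_parms gparms = { 0 };
        uint32_t data[2] = { act_rec_ptr, 0 };
        uint32_t readback[2] = { 0, 0 };
        int rc;

        /* Program the default action record pointer for this PARIF */
        sparms.dir = TF_DIR_RX;
        sparms.type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR;
        sparms.idx = parif;
        sparms.data = data;
        sparms.data_sz_in_bytes = sizeof(data);
        rc = tf_set_if_tbl_entry(tfp, &sparms);
        if (rc)
                return rc;

        /* Read the entry back to confirm what the firmware stored */
        gparms.dir = TF_DIR_RX;
        gparms.type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR;
        gparms.idx = parif;
        gparms.data = readback;
        gparms.data_sz_in_bytes = sizeof(readback);
        return tf_get_if_tbl_entry(tfp, &gparms);
}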

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/hcapi/cfa_p40_hw.h      |  93 ++++++++++++
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h     |  53 +++++++
 drivers/net/bnxt/hcapi/hcapi_cfa.h       |  38 +++--
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h  |  12 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c    |   8 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h    |  18 ++-
 drivers/net/bnxt/meson.build             |   2 +-
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  63 ++++++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 116 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       | 104 ++++++++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  21 +++
 drivers/net/bnxt/tf_core/tf_device.h     |  39 +++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   5 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |  10 ++
 drivers/net/bnxt/tf_core/tf_em_common.c  |   5 +-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  12 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |   3 +-
 drivers/net/bnxt/tf_core/tf_if_tbl.c     | 178 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_if_tbl.h     | 236 +++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 186 ++++++++++++++++++++----
 drivers/net/bnxt/tf_core/tf_msg.h        |  30 ++++
 drivers/net/bnxt/tf_core/tf_session.c    |  14 +-
 23 files changed, 1170 insertions(+), 78 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_hw.h b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
index efaf607..172706f 100644
--- a/drivers/net/bnxt/hcapi/cfa_p40_hw.h
+++ b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
@@ -334,6 +334,12 @@ enum cfa_p40_prof_ctxt_remap_mem_flds {
 #define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS 0
 #define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS 4
 
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS 2
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS 16
+
 enum cfa_p40_prof_profile_tcam_remap_mem_flds {
 	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_FLD = 0,
 	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_FLD = 1,
@@ -343,6 +349,8 @@ enum cfa_p40_prof_profile_tcam_remap_mem_flds {
 	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_FLD = 5,
 	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_FLD = 6,
 	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_FLD = 9,
 	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_MAX_FLD
 };
 
@@ -685,4 +693,89 @@ enum cfa_p40_eem_key_tbl_flds {
 	CFA_P40_EEM_KEY_TBL_AR_PTR_FLD = 7,
 	CFA_P40_EEM_KEY_TBL_MAX_FLD
 };
+
+/**
+ * Mirror Destination 0 Source Property Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_SP_PTR_BITPOS 0
+#define CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS 11
+
+/**
+ * ignore or honor drop
+ */
+#define CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS 13
+#define CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS 1
+
+/**
+ * ingress or egress copy
+ */
+#define CFA_P40_MIRROR_TBL_COPY_BITPOS 14
+#define CFA_P40_MIRROR_TBL_COPY_NUM_BITS 1
+
+/**
+ * Mirror Destination enable.
+ */
+#define CFA_P40_MIRROR_TBL_EN_BITPOS 15
+#define CFA_P40_MIRROR_TBL_EN_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_AR_PTR_BITPOS 16
+#define CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS 16
+
+#define CFA_P40_MIRROR_TBL_TOTAL_NUM_BITS 32
+
+enum cfa_p40_mirror_tbl_flds {
+	CFA_P40_MIRROR_TBL_SP_PTR_FLD = 0,
+	CFA_P40_MIRROR_TBL_IGN_DROP_FLD = 1,
+	CFA_P40_MIRROR_TBL_COPY_FLD = 2,
+	CFA_P40_MIRROR_TBL_EN_FLD = 3,
+	CFA_P40_MIRROR_TBL_AR_PTR_FLD = 4,
+	CFA_P40_MIRROR_TBL_MAX_FLD
+};
+
+/**
+ * P45 Specific Updates (SR) - Non-autogenerated
+ */
+/**
+ * Valid TCAM entry.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Source Partition.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  166
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+
+/**
+ * Source Virtual I/F.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  12
+
+
+/* The SR layout of the l2 ctxt key is different from the Wh+.  Switch to
+ * cfa_p45_hw.h definition when available.
+ */
+enum cfa_p45_prof_l2_ctxt_tcam_flds {
+	CFA_P45_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_FLD = 1,
+	CFA_P45_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 2,
+	CFA_P45_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 3,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 4,
+	CFA_P45_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 5,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC1_FLD = 6,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_OVID_FLD = 7,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_IVID_FLD = 8,
+	CFA_P45_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P45_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P45_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P45_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 171
+
 #endif /* _CFA_P40_HW_H_ */
diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
index c30e4f4..3243b3f 100644
--- a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -127,6 +127,11 @@ const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
 	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS},
 };
 
 const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
@@ -247,4 +252,52 @@ const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
 	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
 
 };
+
+const struct hcapi_cfa_field cfa_p40_mirror_tbl_layout[] = {
+	{CFA_P40_MIRROR_TBL_SP_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS,
+	 CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_COPY_BITPOS,
+	 CFA_P40_MIRROR_TBL_COPY_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_EN_BITPOS,
+	 CFA_P40_MIRROR_TBL_EN_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_AR_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS},
+};
+
+/* P45 Defines */
+
+const struct hcapi_cfa_field cfa_p45_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
 #endif /* _CFA_P40_TBL_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index d2a494e..e95633b 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -170,7 +170,7 @@ struct hcapi_cfa_resc_db {
  * management information.
  */
 typedef struct hcapi_cfa_rm_data {
-    uint32_t dummy_data;
+	uint32_t dummy_data;
 } hcapi_cfa_rm_data_t;
 
 /* End RM support */
@@ -178,32 +178,29 @@ typedef struct hcapi_cfa_rm_data {
 struct hcapi_cfa_devops;
 
 struct hcapi_cfa_devinfo {
-			  uint8_t global_cfg_data[CFA_GLOBAL_CFG_DATA_SZ];
-			  struct hcapi_cfa_layout_tbl layouts;
-			  struct hcapi_cfa_devops *devops;
+	uint8_t global_cfg_data[CFA_GLOBAL_CFG_DATA_SZ];
+	struct hcapi_cfa_layout_tbl layouts;
+	struct hcapi_cfa_devops *devops;
 };
 
 int hcapi_cfa_dev_bind(enum hcapi_cfa_ver hw_ver,
-			struct hcapi_cfa_devinfo *dev_info);
+		       struct hcapi_cfa_devinfo *dev_info);
 
-int hcapi_cfa_key_compile_layout(
-				 struct hcapi_cfa_key_template *key_template,
+int hcapi_cfa_key_compile_layout(struct hcapi_cfa_key_template *key_template,
 				 struct hcapi_cfa_key_layout *key_layout);
 uint64_t hcapi_cfa_key_hash(uint64_t *key_data, uint16_t bitlen);
-int hcapi_cfa_action_compile_layout(
-				    struct hcapi_cfa_action_template *act_template,
+int hcapi_cfa_action_compile_layout(struct hcapi_cfa_action_template *act_template,
 				    struct hcapi_cfa_action_layout *act_layout);
-int hcapi_cfa_action_init_obj(
-			      uint64_t *act_obj,
+int hcapi_cfa_action_init_obj(uint64_t *act_obj,
 			      struct hcapi_cfa_action_layout *act_layout);
-int hcapi_cfa_action_compute_ptr(
-				 uint64_t *act_obj,
+int hcapi_cfa_action_compute_ptr(uint64_t *act_obj,
 				 struct hcapi_cfa_action_layout *act_layout,
 				 uint32_t base_ptr);
 
 int hcapi_cfa_action_hw_op(struct hcapi_cfa_hwop *op,
 			   uint8_t *act_tbl,
 			   struct hcapi_cfa_data *act_obj);
+
 int hcapi_cfa_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
 			struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_rm_register_client(hcapi_cfa_rm_data_t *data,
@@ -229,21 +226,20 @@ int hcapi_cfa_rm_release_resources(hcapi_cfa_rm_data_t *data,
 int hcapi_cfa_rm_initialize(hcapi_cfa_rm_data_t *data);
 
 #if SUPPORT_CFA_HW_P4
-
-int hcapi_cfa_p4_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
-			    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_dev_hw_tbl_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			       struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_prof_l2ctxt_hwop(struct hcapi_cfa_hwop *op,
-				   struct hcapi_cfa_data *obj_data);
+				  struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_prof_l2ctxtrmp_hwop(struct hcapi_cfa_hwop *op,
-				      struct hcapi_cfa_data *obj_data);
+				     struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_prof_tcam_hwop(struct hcapi_cfa_hwop *op,
 				 struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_prof_tcamrmp_hwop(struct hcapi_cfa_hwop *op,
-				    struct hcapi_cfa_data *obj_data);
+				   struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
-			       struct hcapi_cfa_data *obj_data);
+			      struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
-				   struct hcapi_cfa_data *obj_data);
+				  struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
 			     struct hcapi_cfa_data *mirror);
 #endif /* SUPPORT_CFA_HW_P4 */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
index 0b7f98f..2390130 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -34,10 +34,6 @@
 
 #define CFA_GLOBAL_CFG_DATA_SZ (100)
 
-#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
-#define SUPPORT_CFA_HW_ALL (1)
-#endif
-
 #include "hcapi_cfa_p4.h"
 #define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
 #define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
@@ -120,6 +116,8 @@ struct hcapi_cfa_layout {
 	const struct hcapi_cfa_field *field_array;
 	/** [out] number of HW field entries in the HW layout field array */
 	uint32_t array_sz;
+	/** [out] layout_id - layout id associated with the layout */
+	uint16_t layout_id;
 };
 
 /**
@@ -246,6 +244,8 @@ struct hcapi_cfa_key_tbl {
 	 *  applicable for newer chip
 	 */
 	uint8_t *base1;
+	/** [in] Page size for EEM tables */
+	uint32_t page_size;
 };
 
 /**
@@ -266,7 +266,7 @@ struct hcapi_cfa_key_obj {
 struct hcapi_cfa_key_data {
 	/** [in] For on-chip key table, it is the offset in unit of smallest
 	 *  key. For off-chip key table, it is the byte offset relative
-	 *  to the key record memory base.
+	 *  to the key record memory base and adjusted for page and entry size.
 	 */
 	uint32_t offset;
 	/** [in] HW key data buffer pointer */
@@ -665,5 +665,5 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 			struct hcapi_cfa_key_loc *key_loc);
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset);
+			      uint32_t page);
 #endif /* HCAPI_CFA_DEFS_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index 89c91ea..be72d5f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -22,7 +22,6 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 #endif
 
 #define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
-#define TF_EM_PAGE_SIZE (1 << 21)
 uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
 uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
 bool hcapi_cfa_lkup_init;
@@ -208,10 +207,9 @@ static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
 
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset)
+			      uint32_t page)
 {
 	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
 	uint64_t addr;
 
 	if (mem == NULL)
@@ -375,7 +373,9 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 
 	op->hw.base_addr =
 		hcapi_get_table_page((struct hcapi_cfa_em_table *)key_tbl->base0,
-				     key_obj->offset);
+				     key_obj->offset / key_tbl->page_size);
+	/* Offset is adjusted to be the offset into the page */
+	key_obj->offset = key_obj->offset % key_tbl->page_size;
 
 	if (op->hw.base_addr == 0)
 		return -1;
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
index 2c1bcad..f9fa108 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -21,6 +21,10 @@ enum cfa_p4_tbl_id {
 	CFA_P4_TBL_WC_TCAM_REMAP,
 	CFA_P4_TBL_VEB_TCAM,
 	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT,
+	CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR,
+	CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR,
+	CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR,
 	CFA_P4_TBL_MAX
 };
 
@@ -333,17 +337,29 @@ enum cfa_p4_action_sram_entry_type {
 	 */
 
 	/** SRAM Action Record */
-	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FULL_ACTION,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_0_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_1_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_2_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_3_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_4_ACTION,
+
 	/** SRAM Action Encap 8 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
 	/** SRAM Action Encap 16 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
 	/** SRAM Action Encap 64 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_SRC,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_DEST,
+
 	/** SRAM Action Modify IPv4 Source */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
 	/** SRAM Action Modify IPv4 Destination */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+
 	/** SRAM Action Source Properties SMAC */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
 	/** SRAM Action Source Properties SMAC IPv4 */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 7f3ec62..f25a944 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,7 +43,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_shadow_tcam.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm.c',
+	'tf_core/tf_if_tbl.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index b31ed60..b923237 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -25,6 +25,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
 
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
@@ -34,3 +35,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 26836e4..32f1523 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -16,7 +16,9 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
+	HWRM_TFT_IF_TBL_SET = 827,
+	HWRM_TFT_IF_TBL_GET = 828,
+	TF_SUBTYPE_LAST = HWRM_TFT_IF_TBL_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -46,7 +48,17 @@ typedef enum tf_subtype {
 /* WC DMA Address Type */
 #define TF_DEV_DATA_TYPE_TF_WC_DMA_ADDR			0x30d0UL
 /* WC Entry */
-#define TF_DEV_DATA_TYPE_TF_WC_ENTRY			0x30d1UL
+#define TF_DEV_DATA_TYPE_TF_WC_ENTRY				0x30d1UL
+/* SPIF DFLT L2 CTXT Entry */
+#define TF_DEV_DATA_TYPE_SPIF_DFLT_L2_CTXT		  0x3131UL
+/* PARIF DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_DFLT_ACT_REC		0x3132UL
+/* PARIF ERR DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_ERR_DFLT_ACT_REC	 0x3133UL
+/* ILT Entry */
+#define TF_DEV_DATA_TYPE_ILT				0x3134UL
+/* VNIC SVIF entry */
+#define TF_DEV_DATA_TYPE_VNIC_SVIF			0x3135UL
 /* Action Data */
 #define TF_DEV_DATA_TYPE_TF_ACTION_DATA			0x3170UL
 #define TF_DEV_DATA_TYPE_LAST   TF_DEV_DATA_TYPE_TF_ACTION_DATA
@@ -56,6 +68,9 @@ typedef enum tf_subtype {
 
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
+struct tf_if_tbl_set_input;
+struct tf_if_tbl_get_input;
+struct tf_if_tbl_get_output;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
@@ -85,4 +100,48 @@ typedef struct tf_tbl_type_bulk_get_output {
 	uint16_t			 size;
 } tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
+/* Input params for if tbl set */
+typedef struct tf_if_tbl_set_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* index of table entry */
+	uint16_t			 idx;
+	/* size of the data write to table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* data to write into table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_set_input_t, *ptf_if_tbl_set_input_t;
+
+/* Input params for if tbl get */
+typedef struct tf_if_tbl_get_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* size of the data get from table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* index of table entry */
+	uint16_t			 idx;
+} tf_if_tbl_get_input_t, *ptf_if_tbl_get_input_t;
+
+/* output params for if tbl get */
+typedef struct tf_if_tbl_get_output {
+	/* Value read from table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_get_output_t, *ptf_if_tbl_get_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 45accb0..a980a20 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -1039,3 +1039,119 @@ tf_free_tbl_scope(struct tf *tfp,
 
 	return rc;
 }
+
+int
+tf_set_if_tbl_entry(struct tf *tfp,
+		    struct tf_set_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_set_parms sparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.idx = parms->idx;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_set_if_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_get_if_tbl_entry(struct tf *tfp,
+		    struct tf_get_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_get_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.idx = parms->idx;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_get_if_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e898f19..e3d46bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1556,4 +1556,108 @@ int tf_delete_em_entry(struct tf *tfp,
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
+/**
+ * @page if_tbl Interface Table Access
+ *
+ * @ref tf_set_if_tbl_entry
+ *
+ * @ref tf_get_if_tbl_entry
+ *
+ * @ref tf_restore_if_tbl_entry
+ */
+/**
+ * Enumeration of TruFlow interface table types.
+ */
+enum tf_if_tbl_type {
+	/** Default Profile L2 Context Entry */
+	TF_IF_TBL_TYPE_PROF_SPIF_DFLT_L2_CTXT,
+	/** Default Profile TCAM/Lookup Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	/** Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	/** Default Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_LKUP_PARIF_DFLT_ACT_REC_PTR,
+	/** SR2 Ingress lookup table */
+	TF_IF_TBL_TYPE_ILT,
+	/** SR2 VNIC/SVIF Table */
+	TF_IF_TBL_TYPE_VNIC_SVIF,
+	TF_IF_TBL_TYPE_MAX
+};
+
+/**
+ * tf_set_if_tbl_entry parameter definition
+ */
+struct tf_set_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Interface to write
+	 */
+	uint32_t idx;
+};
+
+/**
+ * set interface table entry
+ *
+ * Used to set an interface table. This API is used for managing tables indexed
+ * by SVIF/SPIF/PARIF interfaces. In current implementation only the value is
+ * set.
+ * Returns success or failure code.
+ */
+int tf_set_if_tbl_entry(struct tf *tfp,
+			struct tf_set_if_tbl_entry_parms *parms);
+
+/**
+ * tf_get_if_tbl_entry parameter definition
+ */
+struct tf_get_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of table to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * get interface table entry
+ *
+ * Used to retrieve an interface table entry.
+ *
+ * Reads the interface table entry value
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_if_tbl_entry(struct tf *tfp,
+			struct tf_get_if_tbl_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 20b0c59..a3073c8 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -44,6 +44,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
+	struct tf_if_tbl_cfg_parms if_tbl_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -114,6 +115,19 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * IF_TBL
+	 */
+	if_tbl_cfg.num_elements = TF_IF_TBL_TYPE_MAX;
+	if_tbl_cfg.cfg = tf_if_tbl_p4;
+	if_tbl_cfg.shadow_copy = shadow_copy;
+	rc = tf_if_tbl_bind(tfp, &if_tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "IF Table initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -186,6 +200,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_if_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, IF Table Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 58b7a4a..5a0943a 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl.h"
 #include "tf_tcam.h"
+#include "tf_if_tbl.h"
 
 struct tf;
 struct tf_session;
@@ -567,6 +568,44 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
 				     struct tf_free_tbl_scope_parms *parms);
+
+	/**
+	 * Sets the specified interface table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to interface table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_set_parms *parms);
+
+	/**
+	 * Retrieves the specified interface table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_get_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9a32307..2dc34b8 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
+#include "tf_if_tbl.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -105,6 +106,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_delete_ext_em_entry = NULL,
 	.tf_dev_alloc_tbl_scope = NULL,
 	.tf_dev_free_tbl_scope = NULL,
+	.tf_dev_set_if_tbl = NULL,
+	.tf_dev_get_if_tbl = NULL,
 };
 
 /**
@@ -135,4 +138,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
 	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
+	.tf_dev_set_if_tbl = tf_if_tbl_set,
+	.tf_dev_get_if_tbl = tf_if_tbl_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 298e100..3b03a7c 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -10,6 +10,7 @@
 
 #include "tf_core.h"
 #include "tf_rm.h"
+#include "tf_if_tbl.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -86,4 +87,13 @@ struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 };
 
+struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
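
The new ops are reached through the same dev-ops indirection used by the
other modules: the core layer looks up the ops bound at tf_dev_bind_p4() time
and forwards the call, so for Whitney+ an IF table set lands in
tf_if_tbl_set(). A rough sketch of that dispatch (the device argument is
assumed to have been resolved from the session; this is not the exact core
code):

static int example_dev_set_if_tbl(struct tf *tfp,
				  struct tf_dev_info *dev,
				  struct tf_if_tbl_set_parms *parms)
{
	/* A NULL op means the bound device has no IF table support */
	if (dev->ops->tf_dev_set_if_tbl == NULL)
		return -EOPNOTSUPP;

	/* Resolves to tf_if_tbl_set() for the p4 ops table above */
	return dev->ops->tf_dev_set_if_tbl(tfp, parms);
}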
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 39a8412..23a7fc9 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -337,11 +337,10 @@ tf_em_ext_common_bind(struct tf *tfp,
 		db_exists = 1;
 	}
 
-	if (db_exists) {
-		mem_type = parms->mem_type;
+	if (db_exists)
 		init = 1;
-	}
 
+	mem_type = parms->mem_type;
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index d734f1a..9abb9b1 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -833,7 +833,8 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	op.opcode = HCAPI_CFA_HWOPS_ADD;
 	key_tbl.base0 =
 		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = (uint8_t *)&key_entry;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -849,8 +850,7 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 		key_tbl.base0 =
 			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 
 		rc = hcapi_cfa_key_hw_op(&op,
 					 &key_tbl,
@@ -915,7 +915,8 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key_tbl.base0 =
 		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[
 			(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = NULL;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -1198,7 +1199,8 @@ int tf_tbl_ext_host_set(struct tf *tfp,
 	op.opcode = HCAPI_CFA_HWOPS_PUT;
 	key_tbl.base0 =
 		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
 	key_obj.data = parms->data;
 	key_obj.size = parms->data_sz_in_bytes;
 
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 2cc43b4..90aeaa4 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -68,7 +68,7 @@ tf_ident_bind(struct tf *tfp,
 int
 tf_ident_unbind(struct tf *tfp)
 {
-	int rc;
+	int rc = 0;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
 
@@ -89,7 +89,6 @@ tf_ident_unbind(struct tf *tfp)
 			TFP_DRV_LOG(ERR,
 				    "rm free failed on unbind\n");
 		}
-
 		ident_db[i] = NULL;
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.c b/drivers/net/bnxt/tf_core/tf_if_tbl.c
new file mode 100644
index 0000000..dc73ba2
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.c
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_if_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+
+/**
+ * IF Table DBs.
+ */
+static void *if_tbl_db[TF_DIR_MAX];
+
+/**
+ * IF Table Shadow DBs
+ */
+/* static void *shadow_if_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+/**
+ * Convert if_tbl_type to hwrm type.
+ *
+ * [in] if_tbl_type
+ *   Interface table type
+ *
+ * [out] hwrm_type
+ *   HWRM device data type
+ *
+ * Returns:
+ *    0          - Success
+ *   -EOPNOTSUPP - Type not supported
+ */
+static int
+tf_if_tbl_get_hcapi_type(struct tf_if_tbl_get_hcapi_parms *parms)
+{
+	struct tf_if_tbl_cfg *tbl_cfg;
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	tbl_cfg = (struct tf_if_tbl_cfg *)parms->tbl_db;
+	cfg_type = tbl_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_IF_TBL_CFG)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = tbl_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_if_tbl_bind(struct tf *tfp __rte_unused,
+	       struct tf_if_tbl_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "IF TBL DB already initialized\n");
+		return -EINVAL;
+	}
+
+	if_tbl_db[TF_DIR_RX] = parms->cfg;
+	if_tbl_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "Table Type - initialized\n");
+
+	return 0;
+}
+
+int
+tf_if_tbl_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	if_tbl_db[TF_DIR_RX] = NULL;
+	if_tbl_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_if_tbl_set(struct tf *tfp,
+	      struct tf_if_tbl_set_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	rc = tf_msg_set_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
+int
+tf_if_tbl_get(struct tf *tfp,
+	      struct tf_if_tbl_get_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	/* Get the entry */
+	rc = tf_msg_get_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
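
The expected lifecycle mirrors what tf_dev_bind_p4() does with the p4 config
array: bind installs the per-direction config, set/get convert the TF type to
the HCAPI type before messaging the firmware, and unbind drops the
references. A sketch under that assumption (the table type value is
illustrative):

static int example_if_tbl_lifecycle(struct tf *tfp,
				    struct tf_if_tbl_cfg *dev_cfg,
				    uint16_t num_elements)
{
	int rc;
	uint32_t data = 0;
	struct tf_if_tbl_cfg_parms cfg = { 0 };
	struct tf_if_tbl_set_parms sparms = { 0 };

	/* Install the device's config array for both directions */
	cfg.num_elements = num_elements;
	cfg.cfg = dev_cfg;
	cfg.shadow_copy = false;
	rc = tf_if_tbl_bind(tfp, &cfg);
	if (rc)
		return rc;

	/* Write entry 0 of the first configured type */
	sparms.dir = TF_DIR_TX;
	sparms.type = 0;	/* illustrative tf_if_tbl_type value */
	sparms.idx = 0;
	sparms.data = &data;
	sparms.data_sz_in_bytes = sizeof(data);
	rc = tf_if_tbl_set(tfp, &sparms);
	if (rc)
		TFP_DRV_LOG(ERR, "example IF table set failed\n");

	return tf_if_tbl_unbind(tfp);
}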
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
new file mode 100644
index 0000000..54d4c37
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_IF_TBL_TYPE_H_
+#define TF_IF_TBL_TYPE_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/*
+ * This is the constant used to define invalid CFA
+ * types across all devices.
+ */
+#define CFA_IF_TBL_TYPE_INVALID 65535
+
+struct tf;
+
+/**
+ * The IF Table module provides processing of Internal TF interface table types.
+ */
+
+/**
+ * IF table configuration enumeration.
+ */
+enum tf_if_tbl_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_IF_TBL_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_IF_TBL_CFG,
+};
+
+/**
+ * IF table configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI type.
+ */
+struct tf_if_tbl_cfg {
+	/**
+	 * IF table config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_if_tbl_get_hcapi_parms {
+	/**
+	 * [in] IF Tbl DB Handle
+	 */
+	void *tbl_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_if_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_if_tbl_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_if_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+};
+
+/**
+ * IF Table set parameters
+ */
+struct tf_if_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * IF Table get parameters
+ */
+struct tf_if_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page if_tbl Interface Table
+ *
+ * @ref tf_if_tbl_bind
+ *
+ * @ref tf_if_tbl_unbind
+ *
+ * @ref tf_if_tbl_set
+ *
+ * @ref tf_if_tbl_get
+ */
+/**
+ * Initializes the IF Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_bind(struct tf *tfp,
+		   struct tf_if_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_unbind(struct tf *tfp);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Interface Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_set(struct tf *tfp,
+		  struct tf_if_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_get(struct tf *tfp,
+		  struct tf_if_tbl_get_parms *parms);
+
+#endif /* TF_IF_TBL_TYPE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 659065d..6600a14 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -125,12 +125,19 @@ tf_msg_session_close(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_close_input req = { 0 };
 	struct hwrm_tf_session_close_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_CLOSE;
 	parms.req_data = (uint32_t *)&req;
@@ -150,12 +157,19 @@ tf_msg_session_qcfg(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_QCFG,
 	parms.req_data = (uint32_t *)&req;
@@ -448,13 +462,22 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_insert_input req = { 0 };
 	struct hwrm_tf_em_insert_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
 	uint16_t flags;
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	tfp_memcpy(req.em_key,
 		   em_parms->key,
 		   ((em_parms->key_sz_in_bits + 7) / 8));
@@ -498,11 +521,19 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
 	uint16_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
@@ -789,21 +820,19 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_set_input req = { 0 };
 	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
@@ -840,21 +869,19 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_get_input req = { 0 };
 	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
@@ -897,22 +924,20 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.start_index = tfp_cpu_to_le_32(starting_idx);
@@ -939,3 +964,102 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+int
+tf_msg_get_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_get_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_get_input_t req = { 0 };
+	tf_if_tbl_get_output_t resp;
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_16(params->data_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_IF_TBL_GET,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	if (parms.tf_resp_code != 0)
+		return tfp_le_to_cpu_32(parms.tf_resp_code);
+
+	tfp_memcpy(&params->data[0], resp.data, req.data_sz_in_bytes);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_set_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_set_input_t req = { 0 };
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_32(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_32(params->data_sz_in_bytes);
+	tfp_memcpy(&req.data[0], params->data, params->data_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_IF_TBL_SET,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 7432873..37f2910 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -428,4 +428,34 @@ int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			      uint16_t entry_sz_in_bytes,
 			      uint64_t physical_mem_addr);
 
+/**
+ * Sends a set message of an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table set parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_set_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_set_parms *params);
+
+/**
+ * Sends a get message of an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table get parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_get_parms *params);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index ab9e05f..6c07e47 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -70,14 +70,24 @@ tf_session_open_session(struct tf *tfp,
 		goto cleanup;
 	}
 	tfp->session->core_data = cparms.mem_va;
+	session_id = &parms->open_cfg->session_id;
+
+	/* Update Session Info, which is what is visible to the caller */
+	tfp->session->ver.major = 0;
+	tfp->session->ver.minor = 0;
+	tfp->session->ver.update = 0;
 
-	/* Initialize Session and Device */
+	tfp->session->session_id.internal.domain = session_id->internal.domain;
+	tfp->session->session_id.internal.bus = session_id->internal.bus;
+	tfp->session->session_id.internal.device = session_id->internal.device;
+	tfp->session->session_id.internal.fw_session_id = fw_session_id;
+
+	/* Initialize Session and Device, which is private */
 	session = (struct tf_session *)tfp->session->core_data;
 	session->ver.major = 0;
 	session->ver.minor = 0;
 	session->ver.update = 0;
 
-	session_id = &parms->open_cfg->session_id;
 	session->session_id.internal.domain = session_id->internal.domain;
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 29/50] net/bnxt: add TF register and unregister
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (27 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 28/50] net/bnxt: implement IF tables set and get Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 30/50] net/bnxt: add global config set and get APIs Somnath Kotur
                   ` (21 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TF register/unregister support. The session now keeps a list of
  session clients to track the ctrl channels/functions that use it.
- Add support code to the tfp layer

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_core/Makefile     |   1 +
 drivers/net/bnxt/tf_core/ll.c         |  52 ++++
 drivers/net/bnxt/tf_core/ll.h         |  46 +++
 drivers/net/bnxt/tf_core/tf_core.c    |  26 +-
 drivers/net/bnxt/tf_core/tf_core.h    | 105 +++++--
 drivers/net/bnxt/tf_core/tf_msg.c     |  84 ++++-
 drivers/net/bnxt/tf_core/tf_msg.h     |  42 ++-
 drivers/net/bnxt/tf_core/tf_rm.c      |   2 +-
 drivers/net/bnxt/tf_core/tf_session.c | 569 ++++++++++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.h | 201 +++++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.c     |   2 +
 drivers/net/bnxt/tf_core/tf_tcam.c    |   8 +-
 drivers/net/bnxt/tf_core/tfp.c        |  17 +
 drivers/net/bnxt/tf_core/tfp.h        |  15 +
 15 files changed, 1075 insertions(+), 96 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h
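
In short, the first tf_open_session() on a handle creates the session and its
first client, a later open on a handle that already sees the shared session
data only registers another client, and each tf_close_session() unregisters
the caller's client until the last close tears the session down. A hedged
sketch of that flow (parameter setup is abbreviated, and the sharing assumes
both handles reference the same ULP-allocated session info described in
tf_core.h):

static int example_two_clients(struct tf *tfp_a, struct tf *tfp_b,
			       struct tf_open_session_parms *parms_a,
			       struct tf_open_session_parms *parms_b)
{
	int rc;

	/* First open on a bare handle: session + client A are created */
	rc = tf_open_session(tfp_a, parms_a);
	if (rc)
		return rc;

	/* Handle already carries the shared session data: only a new
	 * client B is registered and the session ref count is bumped.
	 */
	rc = tf_open_session(tfp_b, parms_b);
	if (rc)
		return rc;

	/* Each close unregisters the caller's client; the last close
	 * also unbinds the device and closes the firmware session.
	 */
	tf_close_session(tfp_b);
	return tf_close_session(tfp_a);
}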

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index f25a944..54564e0 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -44,6 +44,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
+	'tf_core/ll.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index b923237..e1c65ca 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -8,6 +8,7 @@
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/ll.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/ll.c b/drivers/net/bnxt/tf_core/ll.c
new file mode 100644
index 0000000..6f58662
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.c
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Functions */
+
+#include <stdio.h>
+#include "ll.h"
+
+/* init linked list */
+void ll_init(struct ll *ll)
+{
+	ll->head = NULL;
+	ll->tail = NULL;
+}
+
+/* insert entry in linked list */
+void ll_insert(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == NULL) {
+		ll->head = entry;
+		ll->tail = entry;
+		entry->next = NULL;
+		entry->prev = NULL;
+	} else {
+		entry->next = ll->head;
+		entry->prev = NULL;
+		entry->next->prev = entry;
+		ll->head = entry->next->prev;
+	}
+}
+
+/* delete entry from linked list */
+void ll_delete(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == entry && ll->tail == entry) {
+		ll->head = NULL;
+		ll->tail = NULL;
+	} else if (ll->head == entry) {
+		ll->head = entry->next;
+		ll->head->prev = NULL;
+	} else if (ll->tail == entry) {
+		ll->tail = entry->prev;
+		ll->tail->next = NULL;
+	} else {
+		entry->prev->next = entry->next;
+		entry->next->prev = entry->prev;
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/ll.h b/drivers/net/bnxt/tf_core/ll.h
new file mode 100644
index 0000000..d709178
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Header File */
+
+#ifndef _LL_H_
+#define _LL_H_
+
+/* linked list entry */
+struct ll_entry {
+	struct ll_entry *prev;
+	struct ll_entry *next;
+};
+
+/* linked list */
+struct ll {
+	struct ll_entry *head;
+	struct ll_entry *tail;
+};
+
+/**
+ * Linked list initialization.
+ *
+ * [in] ll, linked list to be initialized
+ */
+void ll_init(struct ll *ll);
+
+/**
+ * Linked list insert
+ *
+ * [in] ll, linked list where element is inserted
+ * [in] entry, entry to be added
+ */
+void ll_insert(struct ll *ll, struct ll_entry *entry);
+
+/**
+ * Linked list delete
+ *
+ * [in] ll, linked list where element is removed
+ * [in] entry, entry to be deleted
+ */
+void ll_delete(struct ll *ll, struct ll_entry *entry);
+
+#endif /* _LL_H_ */
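
The list is intrusive: the cast used later in tf_session.c relies on struct
ll_entry being the first member of struct tf_session_client, so a list entry
pointer can be converted straight back to its containing object while walking
the list. A small sketch of that pattern (the example struct and its fields
are illustrative):

struct example_client {
	struct ll_entry ll_entry;	/* must remain the first member */
	uint16_t fw_fid;
};

static struct example_client *
example_find_by_fid(struct ll *list, uint16_t fid)
{
	struct ll_entry *e;

	for (e = list->head; e != NULL; e = e->next) {
		/* Safe only because ll_entry sits at offset zero */
		struct example_client *c = (struct example_client *)e;

		if (c->fw_fid == fid)
			return c;
	}

	return NULL;
}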
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index a980a20..489c461 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -58,21 +58,20 @@ tf_open_session(struct tf *tfp,
 	parms->session_id.internal.device = device;
 	oparms.open_cfg = parms;
 
+	/* Session vs session client is decided in
+	 * tf_session_open_session()
+	 */
+	printf("TF_OPEN, %s\n", parms->ctrl_chan_name);
 	rc = tf_session_open_session(tfp, &oparms);
 	/* Logging handled by tf_session_open_session */
 	if (rc)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
+		    parms->session_id.internal.device);
 
 	return 0;
 }
@@ -152,6 +151,9 @@ tf_close_session(struct tf *tfp)
 
 	cparms.ref_count = &ref_count;
 	cparms.session_id = &session_id;
+	/* Session vs session client is decided in
+	 * tf_session_close_session()
+	 */
 	rc = tf_session_close_session(tfp,
 				      &cparms);
 	/* Logging handled by tf_session_close_session */
@@ -159,16 +161,10 @@ tf_close_session(struct tf *tfp)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Closed session, session_id:%d, ref_count:%d\n",
-		    cparms.session_id->id,
-		    *cparms.ref_count);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    cparms.session_id->internal.domain,
 		    cparms.session_id->internal.bus,
-		    cparms.session_id->internal.device,
-		    cparms.session_id->internal.fw_session_id);
+		    cparms.session_id->internal.device);
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e3d46bd..fea222b 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -72,7 +72,6 @@ enum tf_mem {
  * @ref tf_close_session
  */
 
-
 /**
  * Session Version defines
  *
@@ -114,6 +113,21 @@ union tf_session_id {
 };
 
 /**
+ * Session Client Identifier
+ *
+ * Unique identifier for a client within a session. Session Client ID
+ * is constructed from the session's fw_session_id and a firmware
+ * allocated fw_session_client_id. Done by TruFlow on tf_open_session().
+ */
+union tf_session_client_id {
+	uint16_t id;
+	struct {
+		uint8_t fw_session_id;
+		uint8_t fw_session_client_id;
+	} internal;
+};
+
+/**
  * Session Version
  *
  * The version controls the format of the tf_session and
@@ -368,8 +382,8 @@ struct tf_session_info {
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
- * session info at that time. Additional 'opens' can be done using
- * same session_info by using tf_attach_session().
+ * session info at that time. A TruFlow Session can be shared by more
+ * than one PF/VF, each of which registers through tf_open_session().
  *
  * It is expected that ULP allocates this memory as shared memory.
  *
@@ -507,35 +521,61 @@ struct tf_open_session_parms {
 	 */
 	union tf_session_id session_id;
 	/**
+	 * [in/out] session_client_id
+	 *
+	 * Session_client_id is unique per client.
+	 *
+	 * Session_client_id is composed of the session's fw_session_id
+	 * and a firmware allocated fw_session_client_id. The
+	 * construction is done by parsing the ctrl_chan_name together
+	 * with the allocation of a fw_session_client_id during
+	 * tf_open_session().
+	 *
+	 * A reference count will be incremented in the session on
+	 * which a client is created.
+	 *
+	 * A session can only be closed once a single Session Client
+	 * remains. Session Clients should be closed using
+	 * tf_close_session().
+	 */
+	union tf_session_client_id session_client_id;
+	/**
 	 * [in] device type
 	 *
-	 * Device type is passed, one of Wh+, SR, Thor, SR2
+	 * Device type for the session.
 	 */
 	enum tf_device_type device_type;
-	/** [in] resources
+	/**
+	 * [in] resources
 	 *
-	 * Resource allocation
+	 * Resource allocation for the session.
 	 */
 	struct tf_session_resources resources;
 };
 
 /**
- * Opens a new TruFlow management session.
+ * Opens a new TruFlow Session or session client.
+ *
+ * What gets created depends on the passed in tfp content. If the tfp
+ * does not have prior session data, a new session with an associated
+ * session client is created. If the tfp already has a session, only a
+ * new session client is created. In both cases the session client is
+ * created using the
+ * provided ctrl_chan_name.
  *
- * TruFlow will allocate session specific memory, shared memory, to
- * hold its session data. This data is private to TruFlow.
+ * In case of session creation TruFlow will allocate session specific
+ * memory, shared memory, to hold its session data. This data is
+ * private to TruFlow.
  *
- * Multiple PFs can share the same session. An association, refcount,
- * between session and PFs is maintained within TruFlow. Thus, a PF
- * can attach to an existing session, see tf_attach_session().
+ * No other TruFlow APIs will succeed unless this API is first called
+ * and succeeds.
  *
- * No other TruFlow APIs will succeed unless this API is first called and
- * succeeds.
+ * tf_open_session() returns a session id and session client id that
+ * is used on all other TF APIs.
  *
- * tf_open_session() returns a session id that can be used on attach.
+ * A Session or session client can be closed using tf_close_session().
  *
  * [in] tfp
  *   Pointer to TF handle
+ *
  * [in] parms
  *   Pointer to open parameters
  *
@@ -546,6 +586,11 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+/**
+ * Experimental
+ *
+ * tf_attach_session parameters definition.
+ */
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -595,15 +640,18 @@ struct tf_attach_session_parms {
 };
 
 /**
- * Attaches to an existing session. Used when more than one PF wants
- * to share a single session. In that case all TruFlow management
- * traffic will be sent to the TruFlow firmware using the 'PF' that
- * did the attach not the session ctrl channel.
+ * Experimental
+ *
+ * Allows a 2nd application instance to attach to an existing
+ * session. Used when a session is to be shared between two processes.
  *
  * Attach will increment a ref count as to manage the shared session data.
  *
- * [in] tfp, pointer to TF handle
- * [in] parms, pointer to attach parameters
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to attach parameters
  *
  * Returns
  *   - (0) if successful.
@@ -613,9 +661,15 @@ int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
 
 /**
- * Closes an existing session. Cleans up all hardware and firmware
- * state associated with the TruFlow application session when the last
- * PF associated with the session results in refcount to be zero.
+ * Closes an existing session client or the session itself. The
+ * session client is closed by default, and if the session reference
+ * count reaches zero the session is closed as well.
+ *
+ * On session close all hardware and firmware state associated with
+ * the TruFlow application is cleaned up.
+ *
+ * The session client is extracted from the tfp. Thus tf_close_session()
+ * cannot close a session client on behalf of another function.
  *
  * Returns success or failure code.
  */
@@ -1056,9 +1110,10 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_set_tbl_entry
  *
  * @ref tf_get_tbl_entry
+ *
+ * @ref tf_bulk_get_tbl_entry
  */
 
-
 /**
  * tf_alloc_tbl_entry parameter definition
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 6600a14..8c2dff8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -84,7 +84,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
-		    uint8_t *fw_session_id)
+		    uint8_t *fw_session_id,
+		    uint8_t *fw_session_client_id)
 {
 	int rc;
 	struct hwrm_tf_session_open_input req = { 0 };
@@ -106,7 +107,8 @@ tf_msg_session_open(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	*fw_session_id = resp.fw_session_id;
+	*fw_session_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
+	*fw_session_client_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
 
 	return rc;
 }
@@ -120,6 +122,84 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 }
 
 int
+tf_msg_session_client_register(struct tf *tfp,
+			       char *ctrl_channel_name,
+			       uint8_t *fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_register_input req = { 0 };
+	struct hwrm_tf_session_register_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	tfp_memcpy(&req.session_client_name,
+		   ctrl_channel_name,
+		   TF_SESSION_NAME_MAX);
+
+	parms.tf_type = HWRM_TF_SESSION_REGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*fw_session_client_id =
+		(uint8_t)tfp_le_to_cpu_32(resp.fw_session_client_id);
+
+	return rc;
+}
+
+int
+tf_msg_session_client_unregister(struct tf *tfp,
+				 uint8_t fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_unregister_input req = { 0 };
+	struct hwrm_tf_session_unregister_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.fw_session_client_id = tfp_cpu_to_le_32(fw_session_client_id);
+
+	parms.tf_type = HWRM_TF_SESSION_UNREGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+
+	return rc;
+}
+
+int
 tf_msg_session_close(struct tf *tfp)
 {
 	int rc;
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 37f2910..c02a520 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -34,7 +34,8 @@ struct tf;
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
-			uint8_t *fw_session_id);
+			uint8_t *fw_session_id,
+			uint8_t *fw_session_client_id);
 
 /**
  * Sends session close request to Firmware
@@ -42,6 +43,9 @@ int tf_msg_session_open(struct tf *tfp,
  * [in] session
  *   Pointer to session handle
  *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
  * [in] fw_session_id
  *   Pointer to the fw_session_id that is assigned to the session at
  *   time of session open
@@ -54,6 +58,42 @@ int tf_msg_session_attach(struct tf *tfp,
 			  uint8_t tf_fw_session_id);
 
 /**
+ * Sends session client register request to Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
+ * [in/out] fw_session_client_id
+ *   Pointer to the fw_session_client_id that is allocated on firmware
+ *   side
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_register(struct tf *tfp,
+				   char *ctrl_channel_name,
+				   uint8_t *fw_session_client_id);
+
+/**
+ * Sends session client unregister request to Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [in] fw_session_client_id
+ *   Firmware session client id, previously allocated by the firmware,
+ *   that is to be unregistered
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_unregister(struct tf *tfp,
+				     uint8_t fw_session_client_id);
+
+/**
  * Sends session close request to Firmware
  *
  * [in] session
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 30313e2..fdb87ec 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -389,7 +389,7 @@ tf_rm_create_db(struct tf *tfp,
 	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 6c07e47..7ebffc3 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -12,14 +12,49 @@
 #include "tf_msg.h"
 #include "tfp.h"
 
-int
-tf_session_open_session(struct tf *tfp,
-			struct tf_session_open_session_parms *parms)
+struct tf_session_client_create_parms {
+	/**
+	 * [in] Pointer to the control channel name string
+	 */
+	char *ctrl_chan_name;
+
+	/**
+	 * [out] Firmware Session Client ID
+	 */
+	union tf_session_client_id *session_client_id;
+};
+
+struct tf_session_client_destroy_parms {
+	/**
+	 * FW Session Client Identifier
+	 */
+	union tf_session_client_id session_client_id;
+};
+
+/**
+ * Creates a Session and the associated client.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if session or client memory allocation fails.
+ */
+static int
+tf_session_create(struct tf *tfp,
+		  struct tf_session_open_session_parms *parms)
 {
 	int rc;
 	struct tf_session *session;
+	struct tf_session_client *client;
 	struct tfp_calloc_parms cparms;
 	uint8_t fw_session_id;
+	uint8_t fw_session_client_id;
 	union tf_session_id *session_id;
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -27,7 +62,8 @@ tf_session_open_session(struct tf *tfp,
 	/* Open FW session and get a new session_id */
 	rc = tf_msg_session_open(tfp,
 				 parms->open_cfg->ctrl_chan_name,
-				 &fw_session_id);
+				 &fw_session_id,
+				 &fw_session_client_id);
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
@@ -92,15 +128,46 @@ tf_session_open_session(struct tf *tfp,
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
 	session->session_id.internal.fw_session_id = fw_session_id;
-	/* Return the allocated fw session id */
-	session_id->internal.fw_session_id = fw_session_id;
+	/* Return the allocated session id */
+	session_id->id = session->session_id.id;
 
 	session->shadow_copy = parms->open_cfg->shadow_copy;
 
-	tfp_memcpy(session->ctrl_chan_name,
+	/* Init session client list */
+	ll_init(&session->client_ll);
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	client->session_client_id.internal.fw_session_id = fw_session_id;
+	client->session_client_id.internal.fw_session_client_id =
+		fw_session_client_id;
+
+	tfp_memcpy(client->ctrl_chan_name,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
+	ll_insert(&session->client_ll, &client->ll_entry);
+	session->ref_count++;
+
 	rc = tf_dev_bind(tfp,
 			 parms->open_cfg->device_type,
 			 session->shadow_copy,
@@ -110,7 +177,7 @@ tf_session_open_session(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	session->ref_count++;
+	session->dev_init = true;
 
 	return 0;
 
@@ -121,6 +188,235 @@ tf_session_open_session(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Creates a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session client create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if the max number of session clients has been reached.
+ */
+static int
+tf_session_client_create(struct tf *tfp,
+			 struct tf_session_client_create_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tf_session_client *client;
+	struct tfp_calloc_parms cparms;
+	union tf_session_client_id session_client_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &session);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	client = tf_session_find_session_client_by_name(session,
+							parms->ctrl_chan_name);
+	if (client) {
+		TFP_DRV_LOG(ERR,
+			    "Client %s, already registered with this session\n",
+			    parms->ctrl_chan_name);
+		return -EOPNOTSUPP;
+	}
+
+	rc = tf_msg_session_client_register
+		    (tfp,
+		    parms->ctrl_chan_name,
+		    &session_client_id.internal.fw_session_client_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to create client on session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	/* Build the Session Client ID by adding the fw_session_id */
+	rc = tf_session_get_fw_session_id
+			(tfp,
+			&session_client_id.internal.fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session Firmware id lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tfp_memcpy(client->ctrl_chan_name,
+		   parms->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	client->session_client_id.id = session_client_id.id;
+
+	ll_insert(&session->client_ll, &client->ll_entry);
+
+	session->ref_count++;
+
+	/* Build the return value */
+	parms->session_client_id->id = session_client_id.id;
+
+ cleanup:
+	/* TBD - Add code to unregister the newly created client from FW */
+
+	return rc;
+}
+
+
+/**
+ * Destroys a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session client destroy parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-EINVAL) error, client not owned by the session.
+ *   - (-EOPNOTSUPP) error, unable to destroy the client as it is the
+ *                   last client; use tf_session_close_session() instead.
+ */
+static int
+tf_session_client_destroy(struct tf *tfp,
+			  struct tf_session_client_destroy_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_session_client *client;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Check session owns this client and that we're not the last client */
+	client = tf_session_get_session_client(tfs,
+					       parms->session_client_id);
+	if (client == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "Client %d, not found within this session\n",
+			    parms->session_client_id.id);
+		return -EINVAL;
+	}
+
+	/* If last client the request is rejected and cleanup should
+	 * be done by session close.
+	 */
+	if (tfs->ref_count == 1)
+		return -EOPNOTSUPP;
+
+	rc = tf_msg_session_client_unregister
+			(tfp,
+			parms->session_client_id.internal.fw_session_client_id);
+
+	/* Log error, but continue. If FW fails we do not really have
+	 * a way to fix this but the client would no longer be valid
+	 * thus we remove from the session.
+	 */
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Client destroy on FW Failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+
+	/* Decrement the session ref_count */
+	tfs->ref_count--;
+
+	tfp_free(client);
+
+	return rc;
+}
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session_client_create_parms scparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Decide if we're creating a new session or session client */
+	if (tfp->session == NULL) {
+		rc = tf_session_create(tfp, parms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to create session, ctrl_chan_name:%s, rc:%s\n",
+				    parms->open_cfg->ctrl_chan_name,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+		       "Session created, session_client_id:%d, session_id:%d\n",
+		       parms->open_cfg->session_client_id.id,
+		       parms->open_cfg->session_id.id);
+	} else {
+		scparms.ctrl_chan_name = parms->open_cfg->ctrl_chan_name;
+		scparms.session_client_id = &parms->open_cfg->session_client_id;
+
+		/* Create the new client and get it associated with
+		 * the session.
+		 */
+		rc = tf_session_client_create(tfp, &scparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+			      "Failed to create client on session %d, rc:%s\n",
+			      parms->open_cfg->session_id.id,
+			      strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Session Client:%d created on session:%d\n",
+			    parms->open_cfg->session_client_id.id,
+			    parms->open_cfg->session_id.id);
+	}
+
+	return 0;
+}
+
 int
 tf_session_attach_session(struct tf *tfp __rte_unused,
 			  struct tf_session_attach_session_parms *parms __rte_unused)
@@ -141,7 +437,10 @@ tf_session_close_session(struct tf *tfp,
 {
 	int rc;
 	struct tf_session *tfs;
+	struct tf_session_client *client;
 	struct tf_dev_info *tfd;
+	struct tf_session_client_destroy_parms scdparms;
+	uint16_t fid;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -161,7 +460,49 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	tfs->ref_count--;
+	/* Get the client, we need it independently of the closure
+	 * type (client or session closure).
+	 *
+	 * We find the client by way of the fid. Thus one cannot close
+	 * a client on behalf of someone else.
+	 */
+	rc = tfp_get_fid(tfp, &fid);
+	if (rc)
+		return rc;
+
+	client = tf_session_find_session_client_by_fid(tfs,
+						       fid);
+	/* In case of multiple clients we choose to close those first */
+	if (tfs->ref_count > 1) {
+		/* Linaro gcc can't static init this structure */
+		memset(&scdparms,
+		       0,
+		       sizeof(struct tf_session_client_destroy_parms));
+
+		scdparms.session_client_id = client->session_client_id;
+		/* Destroy the requested client so it is no longer
+		 * registered with this session.
+		 */
+		rc = tf_session_client_destroy(tfp, &scdparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to unregister Client %d, rc:%s\n",
+				    client->session_client_id.id,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Closed session client, session_client_id:%d\n",
+			    client->session_client_id.id);
+
+		TFP_DRV_LOG(INFO,
+			    "session_id:%d, ref_count:%d\n",
+			    tfs->session_id.id,
+			    tfs->ref_count);
+
+		return 0;
+	}
 
 	/* Record the session we're closing so the caller knows the
 	 * details.
@@ -176,23 +517,6 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	if (tfs->ref_count > 0) {
-		/* In case we're attached only the session client gets
-		 * closed.
-		 */
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "FW Session close failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		return 0;
-	}
-
-	/* Final cleanup as we're last user of the session */
-
 	/* Unbind the device */
 	rc = tf_dev_unbind(tfp, tfd);
 	if (rc) {
@@ -202,7 +526,6 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
 		/* Log error */
@@ -211,6 +534,21 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
+	/* Final cleanup as we're last user of the session thus we
+	 * also delete the last client.
+	 */
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+	tfp_free(client);
+
+	tfs->ref_count--;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    tfs->session_id.id,
+		    tfs->ref_count);
+
+	tfs->dev_init = false;
+
 	tfp_free(tfp->session->core_data);
 	tfp_free(tfp->session);
 	tfp->session = NULL;
@@ -218,12 +556,31 @@ tf_session_close_session(struct tf *tfp,
 	return 0;
 }
 
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return true;
+	}
+
+	return false;
+}
+
 int
-tf_session_get_session(struct tf *tfp,
-		       struct tf_session **tfs)
+tf_session_get_session_internal(struct tf *tfp,
+				struct tf_session **tfs)
 {
-	int rc;
+	int rc = 0;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -234,7 +591,113 @@ tf_session_get_session(struct tf *tfp,
 
 	*tfs = (struct tf_session *)(tfp->session->core_data);
 
-	return 0;
+	return rc;
+}
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session **tfs)
+{
+	int rc;
+	uint16_t fw_fid;
+	bool supported = false;
+
+	rc = tf_session_get_session_internal(tfp,
+					     tfs);
+	/* Logging done by tf_session_get_session_internal */
+	if (rc)
+		return rc;
+
+	/* As session sharing among functions aka 'individual clients'
+	 * is supported we have to ensure that the client is indeed
+	 * registered before we get deep in the TruFlow api stack.
+	 */
+	rc = tfp_get_fid(tfp, &fw_fid);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Internal FID lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	supported = tf_session_is_fid_supported(*tfs, fw_fid);
+	if (supported == false) {
+		TFP_DRV_LOG
+			(ERR,
+			"Ctrl channel not registered with session, rc:%s\n",
+			strerror(-rc));
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->session_client_id.id == session_client_id.id)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL || ctrl_chan_name == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (strncmp(client->ctrl_chan_name,
+			    ctrl_chan_name,
+			    TF_SESSION_NAME_MAX) == 0)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return client;
+	}
+
+	return NULL;
 }
 
 int
@@ -253,6 +716,7 @@ tf_session_get_fw_session_id(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -261,7 +725,15 @@ tf_session_get_fw_session_id(struct tf *tfp,
 		return rc;
 	}
 
-	rc = tf_session_get_session(tfp, &tfs);
+	if (fw_session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -269,3 +741,36 @@ tf_session_get_fw_session_id(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_session_get_session_id(struct tf *tfp,
+			  union tf_session_id *session_id)
+{
+	int rc;
+	struct tf_session *tfs;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*session_id = tfs->session_id;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index a303fde..aa7a278 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -16,6 +16,7 @@
 #include "tf_tbl.h"
 #include "tf_resources.h"
 #include "stack.h"
+#include "ll.h"
 
 /**
  * The Session module provides session control support. A session is
@@ -29,7 +30,6 @@
 
 /** Session defines
  */
-#define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
 /**
@@ -50,7 +50,7 @@
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
  * allocations and (if so configured) any shadow copy of flow
- * information.
+ * information. It also holds info about Session Clients.
  *
  * Memory is assigned to the Truflow instance by way of
  * tf_open_session. Memory is allocated and owned by i.e. ULP.
@@ -65,17 +65,10 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Session ID, allocated by FW on tf_open_session() */
-	union tf_session_id session_id;
-
 	/**
-	 * String containing name of control channel interface to be
-	 * used for this session to communicate with firmware.
-	 *
-	 * ctrl_chan_name will be used as part of a name for any
-	 * shared memory allocation.
+	 * Session ID, allocated by FW on tf_open_session()
 	 */
-	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+	union tf_session_id session_id;
 
 	/**
 	 * Boolean controlling the use and availability of shadow
@@ -92,14 +85,67 @@ struct tf_session {
 
 	/**
 	 * Session Reference Count. To keep track of functions per
-	 * session the ref_count is incremented. There is also a
+	 * session the ref_count is updated. There is also a
 	 * parallel TruFlow Firmware ref_count in case the TruFlow
 	 * Core goes away without informing the Firmware.
 	 */
 	uint8_t ref_count;
 
-	/** Device handle */
+	/**
+	 * Session Reference Count for attached sessions. To keep
+	 * track of application sharing of a session the
+	 * ref_count_attach is updated.
+	 */
+	uint8_t ref_count_attach;
+
+	/**
+	 * Device handle
+	 */
 	struct tf_dev_info dev;
+	/**
+	 * Device init flag. False if Device is not fully initialized,
+	 * else true.
+	 */
+	bool dev_init;
+
+	/**
+	 * Linked list of clients registered for this session
+	 */
+	struct ll client_ll;
+};
+
+/**
+ * Session Client
+ *
+ * Shared memory for each of the Session Clients. A session can have
+ * one or more clients.
+ */
+struct tf_session_client {
+	/**
+	 * Linked list of clients
+	 */
+	struct ll_entry ll_entry; /* For inserting in linked list, must be
+				   * first field of struct.
+				   */
+
+	/**
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/**
+	 * Firmware FID, learned at time of Session Client create.
+	 */
+	uint16_t fw_fid;
+
+	/**
+	 * Session Client ID, allocated by FW on tf_register_session()
+	 */
+	union tf_session_client_id session_client_id;
 };
 
 /**
@@ -126,7 +172,13 @@ struct tf_session_attach_session_parms {
  * Session close parameter definition
  */
 struct tf_session_close_session_parms {
+	/**
+	 * [out] Pointer to the session reference count
+	 */
 	uint8_t *ref_count;
+	/**
+	 * [out] Pointer to the session id
+	 */
 	union tf_session_id *session_id;
 };
 
@@ -139,11 +191,23 @@ struct tf_session_close_session_parms {
  *
  * @ref tf_session_close_session
  *
+ * @ref tf_session_is_fid_supported
+ *
+ * @ref tf_session_get_session_internal
+ *
  * @ref tf_session_get_session
  *
+ * @ref tf_session_get_session_client
+ *
+ * @ref tf_session_find_session_client_by_name
+ *
+ * @ref tf_session_find_session_client_by_fid
+ *
  * @ref tf_session_get_device
  *
  * @ref tf_session_get_fw_session_id
+ *
+ * @ref tf_session_get_session_id
  */
 
 /**
@@ -179,7 +243,8 @@ int tf_session_attach_session(struct tf *tfp,
 			      struct tf_session_attach_session_parms *parms);
 
 /**
- * Closes a previous created session.
+ * Closes a previously created session. Only possible once all
+ * previously registered Clients have been unregistered.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -189,13 +254,53 @@ int tf_session_attach_session(struct tf *tfp,
  *
  * Returns
  *   - (0) if successful.
+ *   - (-EUSERS) if clients are still registered with the session.
  *   - (-EINVAL) on failure.
  */
 int tf_session_close_session(struct tf *tfp,
 			     struct tf_session_close_session_parms *parms);
 
 /**
- * Looks up the private session information from the TF session info.
+ * Verifies that the fid is supported by the session. Used to ensure
+ * that a function, i.e. a client/control channel, is registered with
+ * the session.
+ *
+ * [in] tfs
+ *   Pointer to TF Session handle
+ *
+ * [in] fid
+ *   FID value to check
+ *
+ * Returns
+ *   - (true) if the fid is registered with the session.
+ *   - (false) otherwise.
+ */
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Does not perform a fid check against the registered
+ * clients. Should be used if tf_session_get_session() was used
+ * previously i.e. at the TF API boundary.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer to a pointer to the session
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_internal(struct tf *tfp,
+				    struct tf_session **tfs);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Performs a fid check against the clients on the session.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -211,6 +316,53 @@ int tf_session_get_session(struct tf *tfp,
 			   struct tf_session **tfs);
 
 /**
+ * Looks up client within the session.
+ *
+ * [in] tfs
+ *   Pointer to the session
+ *
+ * [in] session_client_id
+ *   Client id to look for within the session
+ *
+ * Returns
+ *   client if successful.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id);
+
+/**
+ * Looks up client using name within the session.
+ *
+ * [in] tfs, pointer to the session
+ *
+ * [in] ctrl_chan_name, name of the client to look up in the session
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name);
+
+/**
+ * Looks up client using the fid.
+ *
+ * [in] tfs, pointer to the session
+ *
+ * [in] fid, fid of the client to find
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid);
+
+/**
  * Looks up the device information from the TF Session.
  *
  * [in] tfp
@@ -227,8 +379,7 @@ int tf_session_get_device(struct tf_session *tfs,
 			  struct tf_dev_info **tfd);
 
 /**
- * Looks up the FW session id of the firmware connection for the
- * requested TF handle.
+ * Looks up the FW Session id of the requested TF handle.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -243,4 +394,20 @@ int tf_session_get_device(struct tf_session *tfs,
 int tf_session_get_fw_session_id(struct tf *tfp,
 				 uint8_t *fw_session_id);
 
+/**
+ * Looks up the Session id of the requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] session_id
+ *   Pointer to the session_id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_id(struct tf *tfp,
+			      union tf_session_id *session_id);
+
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index a864b16..684b544 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -266,6 +266,7 @@ tf_tbl_set(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -335,6 +336,7 @@ tf_tbl_get(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 1c48b53..cbfaa94 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -138,7 +138,7 @@ tf_tcam_alloc(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -218,7 +218,7 @@ tf_tcam_free(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -319,6 +319,7 @@ tf_tcam_free(struct tf *tfp,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -353,7 +354,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -415,6 +416,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 69d1c9a..426a182 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -161,3 +161,20 @@ tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
 {
 	rte_spinlock_unlock(&parms->slock);
 }
+
+int
+tfp_get_fid(struct tf *tfp, uint16_t *fw_fid)
+{
+	struct bnxt *bp = NULL;
+
+	if (tfp == NULL || fw_fid == NULL)
+		return -EINVAL;
+
+	bp = container_of(tfp, struct bnxt, tfp);
+	if (bp == NULL)
+		return -EINVAL;
+
+	*fw_fid = bp->fw_fid;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index fe49b63..8789eba 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -238,4 +238,19 @@ int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
 #define tfp_bswap_32(val) rte_bswap32(val)
 #define tfp_bswap_64(val) rte_bswap64(val)
 
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
 #endif /* _TFP_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 30/50] net/bnxt: add global config set and get APIs
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (28 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 29/50] net/bnxt: add TF register and unregister Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 31/50] net/bnxt: add support for EEM System memory Somnath Kotur
                   ` (20 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Jay Ding <jay.ding@broadcom.com>

- Add support to update global configuration for ACT_TECT
  and ACT_ABCR.
- Add support to allow Tunnel and Action global configuration.
- Remove the register read and write support.
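
For illustration, a minimal caller-side sketch of the new set/get APIs
(not part of this change; the 32-bit value, RX direction, zero offset
and the bit being toggled are assumptions for the example):

#include "tf_core.h"

/* Read-modify-write one word of the Tunnel Encap (TECT) global config */
static int example_update_tunnel_encap(struct tf *tfp)
{
	uint32_t val = 0;
	struct tf_global_cfg_parms parms = {
		.dir = TF_DIR_RX,
		.type = TF_TUNNEL_ENCAP,
		.offset = 0,
		.config = (uint8_t *)&val,
		.config_sz_in_bytes = sizeof(val),
	};
	int rc;

	rc = tf_get_global_cfg(tfp, &parms);	/* read current value */
	if (rc)
		return rc;

	val |= 0x1;	/* example bit only; field layout is device specific */
	return tf_set_global_cfg(tfp, &parms);	/* write it back */
}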

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/hcapi/hcapi_cfa.h       |   3 +
 drivers/net/bnxt/meson.build             |   1 +
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  54 ++++++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 137 +++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       |  77 ++++++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  20 ++++
 drivers/net/bnxt/tf_core/tf_device.h     |  33 +++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   4 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   5 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c | 199 +++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_global_cfg.h | 170 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 109 ++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |  31 +++++
 14 files changed, 840 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h

diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index e95633b..95f6370 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -242,6 +242,9 @@ int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				  struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
 			     struct hcapi_cfa_data *mirror);
+int hcapi_cfa_p4_global_cfg_hwop(struct hcapi_cfa_hwop *op,
+				 uint32_t type,
+				 struct hcapi_cfa_data *config);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 54564e0..ace7353 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -45,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
+	'tf_core/tf_global_cfg.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index e1c65ca..d9110ca 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -27,6 +27,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_global_cfg.c
 
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
@@ -37,3 +38,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_global_cfg.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 32f1523..7ade992 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -13,8 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_REG_GET = 821,
-	HWRM_TFT_REG_SET = 822,
+	HWRM_TFT_GET_GLOBAL_CFG = 821,
+	HWRM_TFT_SET_GLOBAL_CFG = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	HWRM_TFT_IF_TBL_SET = 827,
 	HWRM_TFT_IF_TBL_GET = 828,
@@ -66,18 +66,66 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
+struct tf_set_global_cfg_input;
+struct tf_get_global_cfg_input;
+struct tf_get_global_cfg_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
 struct tf_if_tbl_set_input;
 struct tf_if_tbl_get_input;
 struct tf_if_tbl_get_output;
+/* Input params for global config set */
+typedef struct tf_set_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config type */
+	uint32_t			 type;
+	/* Offset of the type */
+	uint32_t			 offset;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+	/* Data to set */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_set_global_cfg_input_t, *ptf_set_global_cfg_input_t;
+
+/* Input params for global config to get */
+typedef struct tf_get_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config to retrieve */
+	uint32_t			 type;
+	/* Offset to retrieve */
+	uint32_t			 offset;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+} tf_get_global_cfg_input_t, *ptf_get_global_cfg_input_t;
+
+/* Output params for global config */
+typedef struct tf_get_global_cfg_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+	/* Data to get */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_get_global_cfg_output_t, *ptf_get_global_cfg_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
-	uint16_t			 flags;
+	uint32_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
 #define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 489c461..0f119b4 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_em.h"
 #include "tf_rm.h"
+#include "tf_global_cfg.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "bitalloc.h"
@@ -277,6 +278,142 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
+/** Get global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_get_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_get_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+/** Set global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_set_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_set_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_identifier(struct tf *tfp,
 		    struct tf_alloc_identifier_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index fea222b..3f54ab1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1612,6 +1612,83 @@ int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
 /**
+ * @page global Global Configuration
+ *
+ * @ref tf_set_global_cfg
+ *
+ * @ref tf_get_global_cfg
+ */
+/**
+ * Tunnel Encapsulation Offsets
+ */
+enum tf_tunnel_encap_offsets {
+	TF_TUNNEL_ENCAP_L2,
+	TF_TUNNEL_ENCAP_NAT,
+	TF_TUNNEL_ENCAP_MPLS,
+	TF_TUNNEL_ENCAP_VXLAN,
+	TF_TUNNEL_ENCAP_GENEVE,
+	TF_TUNNEL_ENCAP_NVGRE,
+	TF_TUNNEL_ENCAP_GRE,
+	TF_TUNNEL_ENCAP_FULL_GENERIC
+};
+/**
+ * Global Configuration Table Types
+ */
+enum tf_global_config_type {
+	TF_TUNNEL_ENCAP,  /**< Tunnel Encap Config(TECT) */
+	TF_ACTION_BLOCK,  /**< Action Block Config(ABCR) */
+	TF_GLOBAL_CFG_TYPE_MAX
+};
+
+/**
+ * tf_global_cfg parameter definition
+ */
+struct tf_global_cfg_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Offset within the config type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Value of the configuration
+	 * set - Read, Modify and Write
+	 * get - Read the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] Size of the configuration data in bytes
+	 */
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * Get global configuration
+ *
+ * Retrieve the configuration
+ *
+ * Returns success or failure code.
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
+/**
+ * Update the global configuration table
+ *
+ * Read, modify, and write the value.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
+/**
  * @page if_tbl Interface Table Access
  *
  * @ref tf_set_if_tbl_entry
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index a3073c8..ead9584 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -45,6 +45,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
 	struct tf_if_tbl_cfg_parms if_tbl_cfg;
+	struct tf_global_cfg_cfg_parms global_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -128,6 +129,18 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * GLOBAL_CFG
+	 */
+	global_cfg.num_elements = TF_GLOBAL_CFG_TYPE_MAX;
+	global_cfg.cfg = tf_global_cfg_p4;
+	rc = tf_global_cfg_bind(tfp, &global_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -207,6 +220,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_global_cfg_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Global Cfg Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 5a0943a..1740a27 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 struct tf_session;
@@ -606,6 +607,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_if_tbl)(struct tf *tfp,
 				 struct tf_if_tbl_get_parms *parms);
+
+	/**
+	 * Update global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_set_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
+
+	/**
+	 * Get global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_get_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 2dc34b8..6526082 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -108,6 +108,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_free_tbl_scope = NULL,
 	.tf_dev_set_if_tbl = NULL,
 	.tf_dev_get_if_tbl = NULL,
+	.tf_dev_set_global_cfg = NULL,
+	.tf_dev_get_global_cfg = NULL,
 };
 
 /**
@@ -140,4 +142,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 	.tf_dev_set_if_tbl = tf_if_tbl_set,
 	.tf_dev_get_if_tbl = tf_if_tbl_get,
+	.tf_dev_set_global_cfg = tf_global_cfg_set,
+	.tf_dev_get_global_cfg = tf_global_cfg_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 3b03a7c..7fabb4b 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -11,6 +11,7 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -96,4 +97,8 @@ struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
 	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
 };
 
+struct tf_global_cfg_cfg tf_global_cfg_p4[TF_GLOBAL_CFG_TYPE_MAX] = {
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_TUNNEL_ENCAP },
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_ACTION_BLOCK },
+};
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.c b/drivers/net/bnxt/tf_core/tf_global_cfg.c
new file mode 100644
index 0000000..4ed4039
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.c
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_global_cfg.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+/**
+ * Global Cfg DBs.
+ */
+static void *global_cfg_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_global_cfg_get_hcapi_parms {
+	/**
+	 * [in] Global Cfg DB Handle
+	 */
+	void *global_cfg_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Check global_cfg_type and return hwrm type.
+ *
+ * [in] global_cfg_type
+ *   Global Cfg type
+ *
+ * [out] hwrm_type
+ *   HWRM device data type
+ *
+ * Returns:
+ *    0          - Success
+ *   -ENOTSUP    - Type not supported
+ */
+static int
+tf_global_cfg_get_hcapi_type(struct tf_global_cfg_get_hcapi_parms *parms)
+{
+	struct tf_global_cfg_cfg *global_cfg;
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	global_cfg = (struct tf_global_cfg_cfg *)parms->global_cfg_db;
+	cfg_type = global_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_GLOBAL_CFG_CFG_HCAPI)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = global_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_global_cfg_bind(struct tf *tfp __rte_unused,
+		   struct tf_global_cfg_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg DB already initialized\n");
+		return -EINVAL;
+	}
+
+	global_cfg_db[TF_DIR_RX] = parms->cfg;
+	global_cfg_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "Global Cfg - initialized\n");
+
+	return 0;
+}
+
+int
+tf_global_cfg_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Global Cfg DBs created\n");
+		return 0;
+	}
+
+	global_cfg_db[TF_DIR_RX] = NULL;
+	global_cfg_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_global_cfg_set(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_msg_set_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
+int
+tf_global_cfg_get(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.h b/drivers/net/bnxt/tf_core/tf_global_cfg.h
new file mode 100644
index 0000000..5c73bb1
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_GLOBAL_CFG_H_
+#define TF_GLOBAL_CFG_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/**
+ * The global cfg module provides processing of global cfg types.
+ */
+
+struct tf;
+
+/**
+ * Global cfg configuration enumeration.
+ */
+enum tf_global_cfg_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_GLOBAL_CFG_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_GLOBAL_CFG_CFG_HCAPI,
+};
+
+/**
+ * Global cfg configuration structure, used by the Device to configure
+ * how an individual global cfg type is configured in regard to the HCAPI type.
+ */
+struct tf_global_cfg_cfg {
+	/**
+	 * Global cfg config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Global Cfg configuration parameters
+ */
+struct tf_global_cfg_cfg_parms {
+	/**
+	 * Number of table types in the configuration array
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_global_cfg_cfg *cfg;
+};
+
+/**
+ * global cfg parameters
+ */
+struct tf_dev_global_cfg_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Offset within the config type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Value of the configuration
+	 * set - Read, Modify and Write
+	 * get - Read the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] Size of the configuration data in bytes
+	 */
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * @page global cfg
+ *
+ * @ref tf_global_cfg_bind
+ *
+ * @ref tf_global_cfg_unbind
+ *
+ * @ref tf_global_cfg_set
+ *
+ * @ref tf_global_cfg_get
+ *
+ */
+/**
+ * Initializes the Global Cfg module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to Global Cfg configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_bind(struct tf *tfp,
+		   struct tf_global_cfg_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to Global Cfg configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_unbind(struct tf *tfp);
+
+/**
+ * Updates the global configuration table
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_set(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+/**
+ * Get global configuration
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_get(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+#endif /* TF_GLOBAL_CFG_H */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 8c2dff8..035c094 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -992,6 +992,111 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 /* HWRM Tunneled messages */
 
 int
+tf_msg_get_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_get_global_cfg_input_t req = { 0 };
+	tf_get_global_cfg_output_t resp = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+	uint16_t resp_size = 0;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_GET_GLOBAL_CFG,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	resp_size = tfp_le_to_cpu_16(resp.size);
+	if (resp_size < params->config_sz_in_bytes)
+		return -EINVAL;
+
+	if (params->config)
+		tfp_memcpy(params->config,
+			   resp.data,
+			   resp_size);
+	else
+		return -EFAULT;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_set_global_cfg_input_t req = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	tfp_memcpy(req.data, params->config,
+		   params->config_sz_in_bytes);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SET_GLOBAL_CFG,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  enum tf_dir dir,
 			  uint16_t hcapi_type,
@@ -1066,8 +1171,8 @@ tf_msg_get_if_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
-		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX);
 
 	/* Populate the request */
 	req.fw_session_id =
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index c02a520..195710e 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_tcam.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 
@@ -449,6 +450,36 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 /* HWRM Tunneled messages */
 
 /**
+ * Sends global cfg read request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to read parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_get_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
+/**
+ * Sends global cfg update request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to write parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_set_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
+/**
  * Sends bulk get message of a Table Type element to the firmware.
  *
  * [in] tfp
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 31/50] net/bnxt: add support for EEM System memory
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (29 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 30/50] net/bnxt: add global config set and get APIs Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 32/50] net/bnxt: integrate with the latest tf_core library Somnath Kotur
                   ` (19 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Select EEM Host or System memory via config parameter
- Add EEM system memory support for kernel memory
- Dependent on DPDK changes that add support for the HWRM_OEM_CMD.
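
As context for the EEM page-table sizing helpers this patch moves into
tf_em_common.c, a standalone arithmetic sketch of how a 3-level table
is sized (the 1M entries, 64 B records and 64-bit host are assumptions,
not part of this change):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define EX_PAGE_SIZE 4096u				/* 4 KiB EM pages */
#define EX_PAGE_PTRS (EX_PAGE_SIZE / sizeof(void *))	/* 512 on 64-bit */

int main(void)
{
	uint64_t entries = 1ULL << 20;		/* 1M EM key records */
	uint64_t data = entries * 64;		/* 64 B/record -> 64 MiB */

	/* Leaf data pages, then one pointer per page at each level above */
	uint64_t l2 = (data + EX_PAGE_SIZE - 1) / EX_PAGE_SIZE;	/* 16384 */
	uint64_t l1 = (l2 + EX_PAGE_PTRS - 1) / EX_PAGE_PTRS;	/* 32 */
	uint64_t l0 = (l1 + EX_PAGE_PTRS - 1) / EX_PAGE_PTRS;	/* 1 */

	printf("l2=%" PRIu64 " l1=%" PRIu64 " l0=%" PRIu64 "\n", l2, l1, l0);
	return 0;
}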

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 config/common_base                      |   1 +
 drivers/net/bnxt/Makefile               |   3 +
 drivers/net/bnxt/bnxt.h                 |   8 +
 drivers/net/bnxt/bnxt_hwrm.c            |  27 ++
 drivers/net/bnxt/bnxt_hwrm.h            |   1 +
 drivers/net/bnxt/meson.build            |   2 +-
 drivers/net/bnxt/tf_core/Makefile       |   5 +-
 drivers/net/bnxt/tf_core/tf_core.c      |  13 +-
 drivers/net/bnxt/tf_core/tf_core.h      |   4 +-
 drivers/net/bnxt/tf_core/tf_device.c    |   5 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |   2 +-
 drivers/net/bnxt/tf_core/tf_em.h        | 113 +-----
 drivers/net/bnxt/tf_core/tf_em_common.c | 685 ++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_common.h |  30 ++
 drivers/net/bnxt/tf_core/tf_em_host.c   | 693 +-------------------------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 546 ++++++++++++++++++++++---
 drivers/net/bnxt/tf_core/tf_if_tbl.h    |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c       |  24 ++
 drivers/net/bnxt/tf_core/tf_tbl.h       |   7 +
 drivers/net/bnxt/tf_core/tfp.c          |  12 +
 drivers/net/bnxt/tf_core/tfp.h          |  15 +
 21 files changed, 1327 insertions(+), 873 deletions(-)

diff --git a/config/common_base b/config/common_base
index c7d5c73..ccd03aa 100644
--- a/config/common_base
+++ b/config/common_base
@@ -220,6 +220,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
 # Compile burst-oriented Broadcom BNXT PMD driver
 #
 CONFIG_RTE_LIBRTE_BNXT_PMD=y
+CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM=n
 
 #
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 349b09c..6b9544b 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -50,6 +50,9 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
 include $(SRCDIR)/hcapi/Makefile
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), y)
+CFLAGS += -DTF_USE_SYSTEM_MEM
+endif
 endif
 
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 65862ab..43e5e71 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -563,6 +563,13 @@ struct bnxt_rep_info {
 				     DEV_RX_OFFLOAD_SCATTER | \
 				     DEV_RX_OFFLOAD_RSS_HASH)
 
+#define  MAX_TABLE_SUPPORT 4
+#define  MAX_DIR_SUPPORT   2
+struct bnxt_dmabuf_info {
+	uint32_t entry_num;
+	int      fd[MAX_DIR_SUPPORT][MAX_TABLE_SUPPORT];
+};
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -780,6 +787,7 @@ struct bnxt {
 	uint16_t		port_svif;
 
 	struct tf		tfp;
+	struct bnxt_dmabuf_info dmabuf;
 	struct bnxt_ulp_context	*ulp_ctx;
 	struct bnxt_flow_stat_info *flow_stat;
 	uint8_t			flow_xstat;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index e6a28d0..2605ef0 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -5506,3 +5506,30 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 
 	return 0;
 }
+
+#ifdef RTE_LIBRTE_BNXT_PMD_SYSTEM
+int
+bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num)
+{
+	struct hwrm_oem_cmd_input req = {0};
+	struct hwrm_oem_cmd_output *resp = bp->hwrm_cmd_resp_addr;
+	struct bnxt_dmabuf_info oem_data;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_OEM_CMD, BNXT_USE_CHIMP_MB);
+	req.IANA = 0x14e4;
+
+	memset(&oem_data, 0, sizeof(struct bnxt_dmabuf_info));
+	oem_data.entry_num = (entry_num);
+	memcpy(&req.oem_data[0], &oem_data, sizeof(struct bnxt_dmabuf_info));
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+	HWRM_CHECK_RESULT();
+
+	bp->dmabuf.entry_num = entry_num;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+#endif /* RTE_LIBRTE_BNXT_PMD_SYSTEM */
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 87cd407..9e0b799 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -276,4 +276,5 @@ int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
+int bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num);
 #endif
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index ace7353..8f6ed41 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -31,7 +31,6 @@ sources = files('bnxt_cpr.c',
         'tf_core/tf_em_common.c',
         'tf_core/tf_em_host.c',
         'tf_core/tf_em_internal.c',
-        'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
@@ -46,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
+	'tf_core/tf_em_host.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index d9110ca..0493b55 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -16,8 +16,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), n)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
+else
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM) += tf_core/tf_em_system.c
+endif
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 0f119b4..00b2775 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -540,10 +540,12 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_alloc_parms aparms = { 0 };
+	struct tf_tcam_alloc_parms aparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&aparms, 0, sizeof(struct tf_tcam_alloc_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -598,10 +600,13 @@ tf_set_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_set_parms sparms = { 0 };
+	struct tf_tcam_set_parms sparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&sparms, 0, sizeof(struct tf_tcam_set_parms));
+
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -667,10 +672,12 @@ tf_free_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_free_parms fparms = { 0 };
+	struct tf_tcam_free_parms fparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&fparms, 0, sizeof(struct tf_tcam_free_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 3f54ab1..9e80426 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1731,7 +1731,7 @@ struct tf_set_if_tbl_entry_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -1768,7 +1768,7 @@ struct tf_get_if_tbl_entry_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index ead9584..f08f7eb 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -92,8 +92,11 @@ tf_dev_bind_p4(struct tf *tfp,
 	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
 	em_cfg.cfg = tf_em_ext_p4;
 	em_cfg.resources = resources;
+#ifdef TF_USE_SYSTEM_MEM
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_SYSTEM;
+#else
 	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
-
+#endif
 	rc = tf_em_ext_common_bind(tfp, &em_cfg);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 6526082..dfe626c 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -126,7 +126,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
-	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_common_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 1c2369c..45c0e11 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -16,6 +16,9 @@
 
 #include "hcapi/hcapi_cfa_defs.h"
 
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -69,8 +72,16 @@
 #error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
+/*
+ * System memory always uses 4K pages
+ */
+#ifdef TF_USE_SYSTEM_MEM
+#define TF_EM_PAGE_SIZE (1 << TF_EM_PAGE_SIZE_4K)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SIZE_4K)
+#else
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
 #define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+#endif
 
 /*
  * Used to build GFID:
@@ -169,39 +180,6 @@ struct tf_em_cfg_parms {
  */
 
 /**
- * Allocates EEM Table scope
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- *   -ENOMEM - Out of memory
- */
-int tf_alloc_eem_tbl_scope(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free's EEM Table scope control block
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			     struct tf_free_tbl_scope_parms *parms);
-
-/**
  * Insert record in to internal EM table
  *
  * [in] tfp
@@ -374,8 +352,8 @@ int tf_em_ext_common_unbind(struct tf *tfp);
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+int tf_em_ext_alloc(struct tf *tfp,
+		    struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using host memory
@@ -390,40 +368,8 @@ int tf_em_ext_host_alloc(struct tf *tfp,
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
-
-/**
- * Alloc for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_alloc(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_free(struct tf *tfp,
-			  struct tf_free_tbl_scope_parms *parms);
+int tf_em_ext_free(struct tf *tfp,
+		   struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
@@ -510,8 +456,8 @@ tf_tbl_ext_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
 
 /**
  * Sets the specified external table type element.
@@ -529,26 +475,11 @@ int tf_tbl_ext_set(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
 
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data by invoking the
- * firmware.
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_system_set(struct tf *tfp,
-			  struct tf_tbl_set_parms *parms);
+int
+tf_em_ext_system_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms);
 
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 23a7fc9..3d903ff 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -23,6 +23,8 @@
 
 #include "bnxt.h"
 
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
 /**
  * EM DBs.
@@ -281,19 +283,604 @@ tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
 		       struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
+	key_entry->hdr.pointer = result->pointer;
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
 
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
 
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		 uint32_t page_size)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(page_size,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    page_size);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, page_size,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 \
+		    " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * page_size,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Rx requested: "
+				    "%u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					(TF_KILOBYTE * TF_KILOBYTE)) /
+			(key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->rx_num_flows_in_k != 0 &&
+	    parms->rx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if (parms->tx_num_flows_in_k != 0 &&
+	    parms->tx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries
+		= parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size
+		= parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries
+		= 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries
+		= parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size
+		= parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries
+		= 0;
+
+	return 0;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
 
 #ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
 #endif
+	/*
+	 * Use the "result" arg to populate all of the key entry, then
+	 * store the byte-swapped "raw" entry in a local copy ready
+	 * for insertion into the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 =
+			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables
+			[(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	return rc;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry
+		(tbl_scope_cb,
+		parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
 }
 
+
 int
 tf_em_ext_common_bind(struct tf *tfp,
 		      struct tf_em_cfg_parms *parms)
@@ -341,6 +928,7 @@ tf_em_ext_common_bind(struct tf *tfp,
 		init = 1;
 
 	mem_type = parms->mem_type;
+
 	return 0;
 }
 
@@ -375,31 +963,88 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms)
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_tbl_ext_host_set(tfp, parms);
-	else
-		return tf_tbl_ext_system_set(tfp, parms);
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	return rc;
 }
 
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_alloc(tfp, parms);
-	else
-		return tf_em_ext_system_alloc(tfp, parms);
+	return tf_em_ext_alloc(tfp, parms);
 }
 
 int
 tf_em_ext_common_free(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_free(tfp, parms);
-	else
-		return tf_em_ext_system_free(tfp, parms);
+	return tf_em_ext_free(tfp, parms);
 }
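
A note on the insert path moved into tf_em_common.c above: one 64-bit hash of
the key yields two candidate buckets, and the entry falls back to the KEY1
table only when the KEY0 slot is already taken. A minimal standalone sketch of
the index derivation (the helper name is illustrative, not part of the
driver):

	#include <stdint.h>

	/* Split one 64-bit key hash into the two candidate bucket indices.
	 * 'mask' is derived from the KEY0 table size, which is a power of two.
	 */
	static void
	eem_candidate_indices(uint64_t big_hash, uint32_t mask,
			      uint32_t *key0_index, uint32_t *key1_index)
	{
		*key0_index = ((uint32_t)(big_hash >> 32)) & mask;        /* upper half */
		*key1_index = ((uint32_t)(big_hash & 0xFFFFFFFF)) & mask; /* lower half */
	}
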
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index bf01df9..fa313c4 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -101,4 +101,34 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   uint32_t offset,
 			   enum hcapi_cfa_em_table_type table_type);
 
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+int tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		     uint32_t page_size);
+
 #endif /* _TF_EM_COMMON_H_ */
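
tf_em_validate_num_entries(), prototyped above, enforces that the requested
flow counts resolve to a supported power of two between 32K and 128M entries
per table. The rounding rule in isolation (the constants mirror
TF_EM_MIN_ENTRIES and TF_EM_MAX_ENTRIES; the function name is illustrative):

	#include <stdint.h>

	#define EM_MIN_ENTRIES (1u << 15)	/* 32K  */
	#define EM_MAX_ENTRIES (1u << 27)	/* 128M */

	/* Round a requested entry count up to the next supported power of
	 * two, returning 0 when the request exceeds the maximum.
	 */
	static uint32_t
	em_round_entries(uint32_t requested)
	{
		uint32_t cnt = EM_MIN_ENTRIES;

		while (requested > cnt && cnt <= EM_MAX_ENTRIES)
			cnt *= 2;

		return cnt > EM_MAX_ENTRIES ? 0 : cnt;
	}
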
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 9abb9b1..4d769cc 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -22,7 +22,6 @@
 
 #include "bnxt.h"
 
-
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
 #define PTU_PTE_NEXT_TO_LAST   0x4UL
@@ -30,20 +29,6 @@
 /* Number of pointers per page_size */
 #define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
 /**
  * EM DBs.
  */
@@ -295,203 +280,6 @@ tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 }
 
 /**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
-/**
  * Unregisters EM Ctx in Firmware
  *
  * [in] tfp
@@ -552,7 +340,7 @@ tf_em_ctx_reg(struct tf *tfp,
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
+			rc = tf_em_size_table(tbl, TF_EM_PAGE_SIZE);
 
 			if (rc)
 				goto cleanup;
@@ -578,406 +366,9 @@ tf_em_ctx_reg(struct tf *tfp,
 	return rc;
 }
 
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if ((parms->rx_num_flows_in_k != 0) &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if ((parms->tx_num_flows_in_k != 0) &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries
-		= parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size
-		= parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries
-		= 0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries
-		= parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size
-		= parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries
-		= 0;
-
-	return 0;
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t mask;
-	uint32_t key0_hash;
-	uint32_t key1_hash;
-	uint32_t key0_index;
-	uint32_t key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 =
-		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 =
-			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 =
-		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[
-			(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
 int
-tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_insert_eem_entry(
-		tbl_scope_cb,
-		parms);
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
-}
-
-int
-tf_em_ext_host_alloc(struct tf *tfp,
-		     struct tf_alloc_tbl_scope_parms *parms)
+tf_em_ext_alloc(struct tf *tfp,
+		struct tf_alloc_tbl_scope_parms *parms)
 {
 	int rc;
 	enum tf_dir dir;
@@ -1084,7 +475,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 cleanup_full:
 	free_parms.tbl_scope_id = parms->tbl_scope_id;
-	tf_em_ext_host_free(tfp, &free_parms);
+	tf_em_ext_free(tfp, &free_parms);
 	return -EINVAL;
 
 cleanup:
@@ -1097,8 +488,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 }
 
 int
-tf_em_ext_host_free(struct tf *tfp,
-		    struct tf_free_tbl_scope_parms *parms)
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
 {
 	int rc = 0;
 	enum tf_dir  dir;
@@ -1139,75 +530,3 @@ tf_em_ext_host_free(struct tf *tfp,
 	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
 	return rc;
 }
-
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	uint32_t tbl_scope_id;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	tbl_scope_id = parms->tbl_scope_id;
-
-	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Get the table scope control block associated with the
-	 * external pool
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	op.opcode = HCAPI_CFA_HWOPS_PUT;
-	key_tbl.base0 =
-		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = parms->idx;
-	key_obj.data = parms->data;
-	key_obj.size = parms->data_sz_in_bytes;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	return rc;
-}
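
The sizing math removed from this file now lives in tf_em_common.c and takes
the page size as a parameter. The level calculation reduces to: level 0 covers
one page of data, level 1 covers MAX_PAGE_PTRS(page_size) pages, and level 2
covers MAX_PAGE_PTRS(page_size) squared pages. A compact restatement of that
rule (names are illustrative; the behaviour mirrors tf_em_size_page_tbl_lvl()):

	#include <stdint.h>

	#define PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))

	/* Number of page-table levels (0, 1 or 2) needed to cover
	 * num_entries records of entry_size bytes, or -1 if even two
	 * levels are not enough.
	 */
	static int
	em_levels_needed(uint32_t page_size, uint32_t entry_size,
			 uint32_t num_entries)
	{
		uint64_t data_size = (uint64_t)num_entries * entry_size;
		uint64_t covered = page_size;
		int lvl = 0;

		while (covered < data_size) {
			if (++lvl == 1)
				covered = (uint64_t)PAGE_PTRS(page_size) *
					  page_size;
			else if (lvl == 2)
				covered = (uint64_t)PAGE_PTRS(page_size) *
					  PAGE_PTRS(page_size) * page_size;
			else
				return -1;	/* would need a third level */
		}

		return lvl;
	}
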
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 10768df..e383f1f 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -4,11 +4,24 @@
  */
 
 #include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <stdbool.h>
+#include <math.h>
+#include <sys/param.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+#include <unistd.h>
+#include <string.h>
+
 #include <rte_common.h>
 #include <rte_errno.h>
 #include <rte_log.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
 #include "tf_em.h"
 #include "tf_em_common.h"
 #include "tf_msg.h"
@@ -18,103 +31,508 @@
 
 #include "bnxt.h"
 
+enum tf_em_req_type {
+	TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ = 5,
+};
 
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
+struct tf_em_bnxt_lfc_req_hdr {
+	uint32_t ver;
+	uint32_t bus;
+	uint32_t devfn;
+	enum tf_em_req_type req_type;
+};
+
+struct tf_em_bnxt_lfc_cfa_eem_std_hdr {
+	uint16_t version;
+	uint16_t size;
+	uint32_t flags;
+	#define TF_EM_BNXT_LFC_EEM_CFG_PRIMARY_FUNC     (1 << 0)
+};
+
+struct tf_em_bnxt_lfc_dmabuf_fd {
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+};
+
+#ifndef __user
+#define __user
+#endif
+
+struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req {
+	struct tf_em_bnxt_lfc_cfa_eem_std_hdr std;
+	uint8_t dir;
+	uint32_t flags;
+	void __user *dma_fd;
+};
+
+struct tf_em_bnxt_lfc_req {
+	struct tf_em_bnxt_lfc_req_hdr hdr;
+	union {
+		struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req
+		       eem_dmabuf_export_req;
+		uint64_t hreq;
+	} req;
+};
+
+#define TF_EEM_BNXT_LFC_IOCTL_MAGIC     0x98
+#define BNXT_LFC_REQ    \
+	_IOW(TF_EEM_BNXT_LFC_IOCTL_MAGIC, 1, struct tf_em_bnxt_lfc_req)
+
+/**
+ * EM DBs.
  */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_insert_em_entry_parms *parms __rte_unused)
+extern void *eem_db[TF_DIR_MAX];
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+static void
+tf_em_dmabuf_mem_unmap(struct hcapi_cfa_em_table *tbl)
 {
-	return 0;
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no, pg_count;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			if (tp->pg_va_tbl != NULL &&
+			    tp->pg_va_tbl[page_no] != NULL &&
+			    tp->pg_size != 0) {
+				(void)munmap(tp->pg_va_tbl[page_no],
+					     tp->pg_size);
+			}
+		}
+
+		tfp_free((void *)tp->pg_va_tbl);
+		tfp_free((void *)tp->pg_pa_tbl);
+	}
 }
 
-/** delete EEM hash entry API
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
  *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
  *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
+ * [in] dir
+ *   Receive or transmit direction
  */
+static void
+tf_em_ctx_unreg(struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+		tf_em_dmabuf_mem_unmap(tbl);
+	}
+}
+
+static int tf_export_tbl_scope(int lfc_fd,
+			       int *fd,
+			       int bus,
+			       int devfn)
+{
+	struct tf_em_bnxt_lfc_req tf_lfc_req;
+	struct tf_em_bnxt_lfc_dmabuf_fd *dma_fd;
+	struct tfp_calloc_parms  mparms;
+	int rc;
+
+	memset(&tf_lfc_req, 0, sizeof(struct tf_em_bnxt_lfc_req));
+	tf_lfc_req.hdr.ver = 1;
+	tf_lfc_req.hdr.bus = bus;
+	tf_lfc_req.hdr.devfn = devfn;
+	tf_lfc_req.hdr.req_type = TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ;
+	tf_lfc_req.req.eem_dmabuf_export_req.flags = O_ACCMODE;
+	tf_lfc_req.req.eem_dmabuf_export_req.std.version = 1;
+
+	mparms.nitems = 1;
+	mparms.size = sizeof(struct tf_em_bnxt_lfc_dmabuf_fd);
+	mparms.alignment = 0;
+	tfp_calloc(&mparms);
+	dma_fd = (struct tf_em_bnxt_lfc_dmabuf_fd *)mparms.mem_va;
+	tf_lfc_req.req.eem_dmabuf_export_req.dma_fd = dma_fd;
+
+	rc = ioctl(lfc_fd, BNXT_LFC_REQ, &tf_lfc_req);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EXT EEM export channel_fd %d, rc=%d\n",
+			    lfc_fd,
+			    rc);
+		tfp_free(dma_fd);
+		return rc;
+	}
+
+	memcpy(fd, dma_fd->fd, sizeof(dma_fd->fd));
+	tfp_free(dma_fd);
+
+	return rc;
+}
+
 static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_delete_em_entry_parms *parms __rte_unused)
+tf_em_dmabuf_mem_map(struct hcapi_cfa_em_table *tbl,
+		     int dmabuf_fd)
 {
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no;
+	uint32_t pg_count;
+	uint32_t offset;
+	struct tfp_calloc_parms parms;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		offset = 0;
+
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0)
+			return -ENOMEM;
+
+		tp->pg_va_tbl = parms.mem_va;
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0) {
+			tfp_free((void *)tp->pg_va_tbl);
+			return -ENOMEM;
+		}
+
+		tp->pg_pa_tbl = parms.mem_va;
+		tp->pg_count = 0;
+		tp->pg_size =  TF_EM_PAGE_SIZE;
+
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			tp->pg_va_tbl[page_no] = mmap(NULL,
+						      TF_EM_PAGE_SIZE,
+						      PROT_READ | PROT_WRITE,
+						      MAP_SHARED,
+						      dmabuf_fd,
+						      offset);
+			if (tp->pg_va_tbl[page_no] == MAP_FAILED) {
+				TFP_DRV_LOG(ERR,
+		"MMap memory error. level:%d page:%d pg_count:%d - %s\n",
+					    level,
+				     page_no,
+					    pg_count,
+					    strerror(errno));
+				return -ENOMEM;
+			}
+			offset += tp->pg_size;
+			tp->pg_count++;
+		}
+	}
+
 	return 0;
 }
 
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_insert_em_entry_parms *parms)
+static int tf_mmap_tbl_scope(struct tf_tbl_scope_cb *tbl_scope_cb,
+			     enum tf_dir dir,
+			     int tbl_type,
+			     int dmabuf_fd)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *tbl;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	if (tbl_type == TF_EFC_TABLE)
+		return 0;
+
+	tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[tbl_type];
+	return tf_em_dmabuf_mem_map(tbl, dmabuf_fd);
+}
+
+#define TF_LFC_DEVICE "/dev/bnxt_lfc"
+
+static int
+tf_prepare_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	int lfc_fd;
+
+	lfc_fd = open(TF_LFC_DEVICE, O_RDWR);
+	if (lfc_fd < 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: open %s device error\n",
+			    TF_LFC_DEVICE);
+		return -ENOENT;
 	}
 
-	return tf_insert_eem_entry
-		(tbl_scope_cb, parms);
+	tbl_scope_cb->lfc_fd = lfc_fd;
+
+	return 0;
 }
 
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_delete_em_entry_parms *parms)
+static int
+offload_system_mmap(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int rc;
+	int dmabuf_fd;
+	enum tf_dir dir;
+	enum hcapi_cfa_em_table_type tbl_type;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	rc = tf_prepare_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EEM: Prepare bnxt_lfc channel failed\n");
+		return rc;
 	}
 
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
+	rc = tf_export_tbl_scope(tbl_scope_cb->lfc_fd,
+				 (int *)tbl_scope_cb->fd,
+				 tbl_scope_cb->bus,
+				 tbl_scope_cb->devfn);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "export dmabuf fd failed\n");
+		return rc;
+	}
+
+	tbl_scope_cb->valid = true;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (tbl_type = TF_KEY0_TABLE; tbl_type <
+			     TF_MAX_TABLE; tbl_type++) {
+			if (tbl_type == TF_EFC_TABLE)
+				continue;
+
+			dmabuf_fd = tbl_scope_cb->fd[(dir ? 0 : 1)][tbl_type];
+			rc = tf_mmap_tbl_scope(tbl_scope_cb,
+					       dir,
+					       tbl_type,
+					       dmabuf_fd);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "dir:%d tbl:%d mmap failed rc %d\n",
+					    dir,
+					    tbl_type,
+					    rc);
+				break;
+			}
+		}
+	}
+	return 0;
 }
 
-int
-tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
-		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+static int
+tf_destroy_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	close(tbl_scope_cb->lfc_fd);
+
 	return 0;
 }
 
-int
-tf_em_ext_system_free(struct tf *tfp __rte_unused,
-		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+static int
+tf_dmabuf_alloc(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp,
+				  tbl_scope_cb->em_ctx_info[TF_DIR_RX].\
+				  em_tables[TF_KEY0_TABLE].num_entries);
+	if (rc)
+		PMD_DRV_LOG(ERR, "EEM: Failed to prepare system memory rc:%d\n",
+			    rc);
+
 	return 0;
 }
 
-int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
-			  struct tf_tbl_set_parms *parms __rte_unused)
+static int
+tf_dmabuf_free(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp, 0);
+	if (rc)
+		TFP_DRV_LOG(ERR, "EEM: Failed to cleanup system memory\n");
+
+	tf_destroy_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+
 	return 0;
 }
+
+int
+tf_em_ext_alloc(struct tf *tfp,
+		struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_free_parms fparms = { 0 };
+	int dir;
+	int i;
+	struct hcapi_cfa_em_table *em_tables;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+	tbl_scope_cb->bus = tfs->session_id.internal.bus;
+	tbl_scope_cb->devfn = tfs->session_id.internal.device;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	rc = tf_dmabuf_alloc(tfp, tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System DMA buff alloc failed\n");
+		return -EIO;
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+
+			if (i == TF_EFC_TABLE) {
+				/* EFC table is not supported on WH+ */
+				continue;
+			}
+
+			em_tables =
+				&tbl_scope_cb->em_ctx_info[dir].em_tables[i];
+
+			rc = tf_em_size_table(em_tables, TF_EM_PAGE_SIZE);
+			if (rc) {
+				TFP_DRV_LOG(ERR, "Size table failed\n");
+				goto cleanup;
+			}
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_create_tbl_pool_external(dir,
+					tbl_scope_cb,
+					em_tables[TF_RECORD_TABLE].num_entries,
+					em_tables[TF_RECORD_TABLE].entry_size);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	rc = offload_system_mmap(tbl_scope_cb);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System alloc mmap failed\n");
+		goto cleanup_full;
+	}
+
+	return rc;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int dir;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+
+		/* Free Table control block */
+	/* Free Table control block */
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+
+		/* Unmap memory */
+		tf_em_ctx_unreg(tbl_scope_cb, dir);
+
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+	}
+
+	tf_dmabuf_free(tfp, tbl_scope_cb);
+
+	return rc;
+}
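
The system-memory backend above gets its EM backing store from the bnxt_lfc
character device: tf_prepare_dmabuf_bnxt_lfc_device() opens /dev/bnxt_lfc,
tf_export_tbl_scope() issues the BNXT_LFC_REQ ioctl to export one dmabuf file
descriptor per direction/table, and tf_em_dmabuf_mem_map() then mmap()s each
page. The mapping step in isolation (function name is illustrative; dmabuf_fd
is one entry of the fd[dir][table] array filled by the ioctl):

	#include <sys/mman.h>

	/* Map one EM page backed by a dmabuf exported through bnxt_lfc.
	 * page_size corresponds to TF_EM_PAGE_SIZE in the driver.
	 */
	static void *
	map_em_page(int dmabuf_fd, size_t page_size, off_t page_offset)
	{
		void *va = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
				MAP_SHARED, dmabuf_fd, page_offset);

		return va == MAP_FAILED ? NULL : va;
	}
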
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
index 54d4c37..7eb72bd 100644
--- a/drivers/net/bnxt/tf_core/tf_if_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -113,7 +113,7 @@ struct tf_if_tbl_set_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -143,7 +143,7 @@ struct tf_if_tbl_get_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [out] Entry size
 	 */
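
With the IF-table data pointers changed to uint8_t *, callers can hand in raw
byte buffers without casting. A minimal caller-side sketch (the buffer
contents are placeholders; the remaining fields of the parameter block are
filled exactly as before):

	uint8_t entry_data[4] = { 0 };		/* illustrative payload only */
	struct tf_if_tbl_set_parms sparms = { 0 };

	sparms.data = entry_data;	/* byte buffer, no uint32_t * cast needed */
	/* type, idx and the entry-size field are filled exactly as before */
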
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 035c094..ed506de 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -813,7 +813,19 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = parms->hcapi_type;
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
@@ -869,7 +881,19 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct hwrm_tf_tcam_free_input req =  { 0 };
 	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(in_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = in_parms->hcapi_type;
 	req.count = 1;
 	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 2a10b47..f20e8d7 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -38,6 +38,13 @@ struct tf_em_caps {
  */
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
+#ifdef TF_USE_SYSTEM_MEM
+	int lfc_fd;
+	uint32_t bus;
+	uint32_t devfn;
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+	bool valid;
+#endif
 	int index;
 	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps em_caps[TF_DIR_MAX];
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 426a182..3eade31 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -87,6 +87,18 @@ tfp_send_msg_tunneled(struct tf *tfp,
 	return rc;
 }
 
+#ifdef TF_USE_SYSTEM_MEM
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows)
+{
+	return bnxt_hwrm_oem_cmd(container_of(tfp,
+					      struct bnxt,
+					      tfp),
+				 max_flows);
+}
+#endif /* TF_USE_SYSTEM_MEM */
+
 /**
  * Allocates zero'ed memory from the heap.
  *
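
tfp_msg_hwrm_oem_cmd() above recovers the owning bnxt device from the TruFlow
handle with container_of(), which works because struct bnxt embeds the handle
by value. A reduced model of that relationship (struct names and members here
are stand-ins, not the real bnxt definitions):

	#include <stddef.h>

	struct tf_handle { int state; };	/* stand-in for struct tf   */

	struct bnxt_like {			/* stand-in for struct bnxt */
		int other_state;
		struct tf_handle tfp;		/* handle embedded by value */
	};

	/* container_of(ptr, type, member) is essentially: */
	#define CONTAINER_OF(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	static struct bnxt_like *
	bnxt_from_handle(struct tf_handle *h)
	{
		return CONTAINER_OF(h, struct bnxt_like, tfp);
	}
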
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8789eba..421a7d9 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -171,6 +171,21 @@ tfp_msg_hwrm_oem_cmd(struct tf *tfp,
 		     uint32_t max_flows);
 
 /**
+ * Sends OEM command message to Chimp
+ *
+ * [in] session, pointer to session handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
+/**
  * Allocates zero'ed memory from the heap.
  *
  * NOTE: Also performs virt2phy address conversion by default thus is
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 32/50] net/bnxt: integrate with the latest tf_core library
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (30 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 31/50] net/bnxt: add support for EEM System memory Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 33/50] net/bnxt: add support for internal encap records Somnath Kotur
                   ` (18 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

ULP changes to integrate with the latest session open
interface in tf_core
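
For reference, the new open interface takes explicit per-direction resource
counts. A minimal sketch of the calling pattern, assuming the driver's
tf_core headers; the wrapper name and the counts used here are illustrative
only and not part of the patch:

static int32_t example_ulp_session_open(struct bnxt *bp)
{
	struct tf_open_session_parms params;
	struct tf_session_resources *res;

	memset(&params, 0, sizeof(params));
	params.shadow_copy = false;
	params.device_type = TF_DEVICE_TYPE_WH;
	res = &params.resources;

	/* Request per-direction, per-type resource counts up front */
	res->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
	res->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 720;
	res->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 416;
	res->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;

	return tf_open_session(&bp->tfp, &params);
}

The actual counts chosen for both directions are in the diff below.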

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c | 46 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index c7281ab..a9ed5d9 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -68,6 +68,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 	struct rte_eth_dev		*ethdev = bp->eth_dev;
 	int32_t				rc = 0;
 	struct tf_open_session_parms	params;
+	struct tf_session_resources	*resources;
 
 	memset(&params, 0, sizeof(params));
 
@@ -79,6 +80,51 @@ ulp_ctx_session_open(struct bnxt *bp,
 		return rc;
 	}
 
+	params.shadow_copy = false;
+	params.device_type = TF_DEVICE_TYPE_WH;
+	resources = &params.resources;
+	/** RX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 720;
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 720;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 16;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 416;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;
+
+	/** TX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_L2_CTXT] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 8;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 8;
+
+	/* EEM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+
 	rc = tf_open_session(&bp->tfp, &params);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 33/50] net/bnxt: add support for internal encap records
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (31 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 32/50] net/bnxt: integrate with the latest tf_core library Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 34/50] net/bnxt: add support for if table processing Somnath Kotur
                   ` (17 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Mike Baucom <michael.baucom@broadcom.com>

Modifications to allow internal encap records to be supported:
- Modified the mapper index table processing to handle encap without an
  action record
- Modified the session open code to reserve some 64-byte internal encap
  records on tx
- Modified the blob encap swap to support encap without an action record
  (a condensed model of the reworked loop follows below)
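
A self-contained model of the reworked result-field loop; the helpers here
are printf stand-ins for ulp_mapper_result_field_process(),
ulp_blob_encap_swap_idx_set() and ulp_blob_perform_encap_swap(), and the
real changes are in the diff below:

#include <stdio.h>

static void swap_idx_set(void)       { printf("latch encap swap index\n"); }
static void perform_encap_swap(void) { printf("byte-swap encap fields once\n"); }
static int  build_field(int i)       { printf("build field %d\n", i); return 0; }

int main(void)
{
	int num_flds = 0, encap_flds = 12, byte_swap = 1, i;

	for (i = 0; i < num_flds + encap_flds; i++) {
		/* The swap index is latched when the first encap field
		 * starts (i == num_flds), which now also works when
		 * num_flds == 0, i.e. encap without an action record.
		 */
		if (byte_swap && encap_flds && i == num_flds)
			swap_idx_set();
		if (build_field(i))
			return 1;
	}
	/* The byte swap itself now runs exactly once, after the loop */
	if (byte_swap && encap_flds)
		perform_encap_swap();
	return 0;
}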

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c   |  3 +++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c | 29 +++++++++++++----------------
 drivers/net/bnxt/tf_ulp/ulp_utils.c  |  2 +-
 3 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index a9ed5d9..4c1a1c4 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -113,6 +113,9 @@ ulp_ctx_session_open(struct bnxt *bp,
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
 
+	/* ENCAP */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 394d4a1..c63f78a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1474,7 +1474,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
 
-	if (!flds || !num_flds) {
+	if (!flds || (!num_flds && !encap_flds)) {
 		BNXT_TF_DBG(ERR, "template undefined for the index table\n");
 		return -EINVAL;
 	}
@@ -1483,7 +1483,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	for (i = 0; i < (num_flds + encap_flds); i++) {
 		/* set the swap index if encap swap bit is enabled */
 		if (parms->device_params->encap_byte_swap && encap_flds &&
-		    ((i + 1) == num_flds))
+		    (i == num_flds))
 			ulp_blob_encap_swap_idx_set(&data);
 
 		/* Process the result fields */
@@ -1496,18 +1496,15 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			BNXT_TF_DBG(ERR, "data field failed\n");
 			return rc;
 		}
+	}
 
-		/* if encap bit swap is enabled perform the bit swap */
-		if (parms->device_params->encap_byte_swap && encap_flds) {
-			if ((i + 1) == (num_flds + encap_flds))
-				ulp_blob_perform_encap_swap(&data);
+	/* if encap bit swap is enabled perform the bit swap */
+	if (parms->device_params->encap_byte_swap && encap_flds) {
+		ulp_blob_perform_encap_swap(&data);
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-			if ((i + 1) == (num_flds + encap_flds)) {
-				BNXT_TF_DBG(INFO, "Dump fter encap swap\n");
-				ulp_mapper_blob_dump(&data);
-			}
+		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
+		ulp_mapper_blob_dump(&data);
 #endif
-		}
 	}
 
 	/* Perform the tf table allocation by filling the alloc params */
@@ -1818,6 +1815,11 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
+			if (rc) {
+				BNXT_TF_DBG(ERR, "Resource type %d failed\n",
+					    tbl->resource_func);
+				return rc;
+			}
 			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected action resource %d\n",
@@ -1825,11 +1827,6 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 			return -EINVAL;
 		}
 	}
-	if (rc) {
-		BNXT_TF_DBG(ERR, "Resource type %d failed\n",
-			    tbl->resource_func);
-		return rc;
-	}
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
index c6e4dc7..79958b2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_utils.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -478,7 +478,7 @@ ulp_blob_perform_encap_swap(struct ulp_blob *blob)
 		BNXT_TF_DBG(ERR, "invalid argument\n");
 		return; /* failure */
 	}
-	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx + 1);
+	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx);
 	end_idx = ULP_BITS_2_BYTE(blob->write_idx);
 
 	while (idx <= end_idx) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 34/50] net/bnxt: add support for if table processing
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (32 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 33/50] net/bnxt: add support for internal encap records Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 35/50] net/bnxt: disable vector mode in tx direction when truflow is enabled Somnath Kotur
                   ` (16 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for if table processing in the ulp mapper
layer. This enables support for the default partition action
record pointer interface table.
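
The core of the new handler reduces to building the result blob and then
issuing a single set call. A condensed sketch, assuming the driver's ulp
and tf_core headers, with the field-building loop and error handling
omitted and variable names abridged (the full version,
ulp_mapper_if_tbl_process(), is in the diff below):

	struct tf_set_if_tbl_entry_parms sparms = { 0 };
	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
	uint16_t tmplen;

	sparms.dir = tbl->direction;
	sparms.type = tbl->resource_type;
	sparms.data = ulp_blob_data_get(&data, &tmplen);
	sparms.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
	/* The table index comes from a computed field of the flow */
	sparms.idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);

	rc = tf_set_if_tbl_entry(tfp, &sparms);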

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c             |   1 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c          |   2 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c           | 142 +++++++++++++++++++++----
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c         |   1 +
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h | 117 ++++++++++----------
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h  |   8 +-
 6 files changed, 188 insertions(+), 83 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4c1a1c4..4835b95 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -115,6 +115,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 
 	/* ENCAP */
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_16B] = 16;
 
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 22996e5..384dc5b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -933,7 +933,7 @@ ulp_default_flow_db_cfa_action_get(struct bnxt_ulp_context *ulp_ctx,
 				   uint32_t flow_id,
 				   uint32_t *cfa_action)
 {
-	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX;
+	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION;
 	uint64_t hndl;
 	int32_t rc;
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index c63f78a..053ff97 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -184,7 +184,8 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
 	return &ulp_act_tbl_list[idx];
 }
 
-/** Get a list of classifier tables that implement the flow
+/*
+ * Get a list of classifier tables that implement the flow
  * Gets a device dependent list of tables that implement the class template id
  *
  * dev_id [in] The device id of the forwarding element
@@ -193,13 +194,16 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
  *
  * num_tbls [out] The number of classifier tables in the returned array
  *
+ * fdb_tbl_idx [out] The flow database index Regular or default
+ *
  * returns An array of classifier tables to implement the flow, or NULL on
  * error
  */
 static struct bnxt_ulp_mapper_tbl_info *
 ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 			      uint32_t tid,
-			      uint32_t *num_tbls)
+			      uint32_t *num_tbls,
+			      uint32_t *fdb_tbl_idx)
 {
 	uint32_t idx;
 	uint32_t tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
@@ -212,7 +216,7 @@ ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 	 */
 	idx		= ulp_class_tmpl_list[tidx].start_tbl_idx;
 	*num_tbls	= ulp_class_tmpl_list[tidx].num_tbls;
-
+	*fdb_tbl_idx = ulp_class_tmpl_list[tidx].flow_db_table_type;
 	return &ulp_class_tbl_list[idx];
 }
 
@@ -256,7 +260,8 @@ ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
  */
 static struct bnxt_ulp_mapper_result_field_info *
 ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
-			     uint32_t *num_flds)
+			     uint32_t *num_flds,
+			     uint32_t *num_encap_flds)
 {
 	uint32_t idx;
 
@@ -265,6 +270,7 @@ ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
 
 	idx		= tbl->result_start_idx;
 	*num_flds	= tbl->result_num_fields;
+	*num_encap_flds = tbl->encap_num_fields;
 
 	/* NOTE: Need template to provide range checking define */
 	return &ulp_class_result_field_list[idx];
@@ -1147,6 +1153,7 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		struct bnxt_ulp_mapper_result_field_info *dflds;
 		struct bnxt_ulp_mapper_ident_info *idents;
 		uint32_t num_dflds, num_idents;
+		uint32_t encap_flds = 0;
 
 		/*
 		 * Since the cache entry is responsible for allocating
@@ -1167,8 +1174,9 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 
 		/* Create the result data blob */
-		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-		if (!dflds || !num_dflds) {
+		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds,
+						     &encap_flds);
+		if (!dflds || !num_dflds || encap_flds) {
 			BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 			rc = -EINVAL;
 			goto error;
@@ -1294,6 +1302,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	int32_t	trc;
 	enum bnxt_ulp_flow_mem_type mtype = parms->device_params->flow_mem_type;
 	int32_t rc = 0;
+	uint32_t encap_flds = 0;
 
 	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
 	if (!kflds || !num_kflds) {
@@ -1328,8 +1337,8 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	 */
 
 	/* Create the result data blob */
-	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-	if (!dflds || !num_dflds) {
+	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds, &encap_flds);
+	if (!dflds || !num_dflds || encap_flds) {
 		BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 		return -EINVAL;
 	}
@@ -1469,7 +1478,8 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Get the result fields list */
 	if (is_class_tbl)
-		flds = ulp_mapper_result_fields_get(tbl, &num_flds);
+		flds = ulp_mapper_result_fields_get(tbl, &num_flds,
+						    &encap_flds);
 	else
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
@@ -1763,6 +1773,76 @@ ulp_mapper_cache_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 }
 
 static int32_t
+ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			  struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_result_field_info *flds;
+	struct ulp_blob	data;
+	uint64_t idx;
+	uint16_t tmplen;
+	uint32_t i, num_flds;
+	int32_t rc = 0;
+	struct tf_set_if_tbl_entry_parms iftbl_params = { 0 };
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	uint32_t encap_flds;
+
+	/* Initialize the blob data */
+	if (!ulp_blob_init(&data, tbl->result_bit_size,
+			   parms->device_params->byte_order)) {
+		BNXT_TF_DBG(ERR, "Failed initial index table blob\n");
+		return -EINVAL;
+	}
+
+	/* Get the result fields list */
+	flds = ulp_mapper_result_fields_get(tbl, &num_flds, &encap_flds);
+
+	if (!flds || !num_flds || encap_flds) {
+		BNXT_TF_DBG(ERR, "template undefined for the IF table\n");
+		return -EINVAL;
+	}
+
+	/* process the result fields, loop through them */
+	for (i = 0; i < num_flds; i++) {
+		/* Process the result fields */
+		rc = ulp_mapper_result_field_process(parms,
+						     tbl->direction,
+						     &flds[i],
+						     &data,
+						     "IFtable Result");
+		if (rc) {
+			BNXT_TF_DBG(ERR, "data field failed\n");
+			return rc;
+		}
+	}
+
+	/* Get the index details from computed field */
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+
+	/* Perform the tf table set by filling the set params */
+	iftbl_params.dir = tbl->direction;
+	iftbl_params.type = tbl->resource_type;
+	iftbl_params.data = ulp_blob_data_get(&data, &tmplen);
+	iftbl_params.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+	iftbl_params.idx = idx;
+
+	rc = tf_set_if_tbl_entry(tfp, &iftbl_params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Set table[%d][%s][%d] failed rc=%d\n",
+			    iftbl_params.type,
+			    (iftbl_params.dir == TF_DIR_RX) ? "RX" : "TX",
+			    iftbl_params.idx,
+			    rc);
+		return rc;
+	}
+
+	/*
+	 * TBD: Need to look at the need to store idx in flow db for restore
+	 * the table to its original state on deletion of this entry.
+	 */
+	return rc;
+}
+
+static int32_t
 ulp_mapper_glb_resource_info_init(struct tf *tfp,
 				  struct bnxt_ulp_mapper_data *mapper_data)
 {
@@ -1863,6 +1943,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE:
 			rc = ulp_mapper_cache_tbl_process(parms, tbl);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_IF_TABLE:
+			rc = ulp_mapper_if_tbl_process(parms, tbl);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected class resource %d\n",
 				    tbl->resource_func);
@@ -2065,20 +2148,29 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Get the action table entry from device id and act context id */
 	parms.act_tid = cparms->act_tid;
-	parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
-						     parms.act_tid,
-						     &parms.num_atbls);
-	if (!parms.atbls || !parms.num_atbls) {
-		BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		return -EINVAL;
+
+	/*
+	 * Perform the action table get only if act template is not zero
+	 * for act template zero like for default rules ignore the action
+	 * table processing.
+	 */
+	if (parms.act_tid) {
+		parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
+							     parms.act_tid,
+							     &parms.num_atbls);
+		if (!parms.atbls || !parms.num_atbls) {
+			BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			return -EINVAL;
+		}
 	}
 
 	/* Get the class table entry from device id and act context id */
 	parms.class_tid = cparms->class_tid;
 	parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
 						    parms.class_tid,
-						    &parms.num_ctbls);
+						    &parms.num_ctbls,
+						    &parms.tbl_idx);
 	if (!parms.ctbls || !parms.num_ctbls) {
 		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
 			    parms.dev_id, parms.class_tid);
@@ -2112,7 +2204,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	 * free each of them.
 	 */
 	rc = ulp_flow_db_fid_alloc(ulp_ctx,
-				   BNXT_ULP_REGULAR_FLOW_TABLE,
+				   parms.tbl_idx,
 				   cparms->func_id,
 				   &parms.fid);
 	if (rc) {
@@ -2121,11 +2213,14 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	}
 
 	/* Process the action template list from the selected action table*/
-	rc = ulp_mapper_action_tbls_process(&parms);
-	if (rc) {
-		BNXT_TF_DBG(ERR, "action tables failed creation for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		goto flow_error;
+	if (parms.act_tid) {
+		rc = ulp_mapper_action_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "action tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			goto flow_error;
+		}
 	}
 
 	/* All good. Now process the class template */
@@ -2258,3 +2353,4 @@ ulp_mapper_deinit(struct bnxt_ulp_context *ulp_ctx)
 	/* Reset the data pointer within the ulp_ctx. */
 	bnxt_ulp_cntxt_ptr2_mapper_data_set(ulp_ctx, NULL);
 }
+
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 89c08ab..5174223 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -256,6 +256,7 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 			BNXT_TF_DBG(ERR, "Mark index greater than allocated\n");
 			return -EINVAL;
 		}
+		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
 	}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index ac84f88..66343b9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -88,35 +88,36 @@ enum bnxt_ulp_byte_order {
 };
 
 enum bnxt_ulp_cf_idx {
-	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 0,
-	BNXT_ULP_CF_IDX_O_VTAG_NUM = 1,
-	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 2,
-	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 3,
-	BNXT_ULP_CF_IDX_I_VTAG_NUM = 4,
-	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 5,
-	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 6,
-	BNXT_ULP_CF_IDX_INCOMING_IF = 7,
-	BNXT_ULP_CF_IDX_DIRECTION = 8,
-	BNXT_ULP_CF_IDX_SVIF_FLAG = 9,
-	BNXT_ULP_CF_IDX_O_L3 = 10,
-	BNXT_ULP_CF_IDX_I_L3 = 11,
-	BNXT_ULP_CF_IDX_O_L4 = 12,
-	BNXT_ULP_CF_IDX_I_L4 = 13,
-	BNXT_ULP_CF_IDX_DEV_PORT_ID = 14,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 15,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 16,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 17,
-	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 18,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 19,
-	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 20,
-	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 21,
-	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 22,
-	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 23,
-	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 24,
-	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 25,
-	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 26,
-	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 27,
-	BNXT_ULP_CF_IDX_LAST = 28
+	BNXT_ULP_CF_IDX_NOT_USED = 0,
+	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 1,
+	BNXT_ULP_CF_IDX_O_VTAG_NUM = 2,
+	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 3,
+	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 4,
+	BNXT_ULP_CF_IDX_I_VTAG_NUM = 5,
+	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 6,
+	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 7,
+	BNXT_ULP_CF_IDX_INCOMING_IF = 8,
+	BNXT_ULP_CF_IDX_DIRECTION = 9,
+	BNXT_ULP_CF_IDX_SVIF_FLAG = 10,
+	BNXT_ULP_CF_IDX_O_L3 = 11,
+	BNXT_ULP_CF_IDX_I_L3 = 12,
+	BNXT_ULP_CF_IDX_O_L4 = 13,
+	BNXT_ULP_CF_IDX_I_L4 = 14,
+	BNXT_ULP_CF_IDX_DEV_PORT_ID = 15,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 16,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 17,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 18,
+	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 19,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 20,
+	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 21,
+	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 22,
+	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 23,
+	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 24,
+	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 25,
+	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
+	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
+	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
+	BNXT_ULP_CF_IDX_LAST = 29
 };
 
 enum bnxt_ulp_critical_resource {
@@ -133,11 +134,6 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
-enum bnxt_ulp_df_param_type {
-	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
-	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
-};
-
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
@@ -154,7 +150,8 @@ enum bnxt_ulp_flow_mem_type {
 enum bnxt_ulp_glb_regfile_index {
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 2
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
 };
 
 enum bnxt_ulp_hdr_type {
@@ -204,22 +201,22 @@ enum bnxt_ulp_priority {
 };
 
 enum bnxt_ulp_regfile_index {
-	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 0,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 1,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 2,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 3,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 4,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 5,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 6,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 7,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 8,
-	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 9,
-	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 10,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 11,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 12,
-	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 13,
-	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 14,
-	BNXT_ULP_REGFILE_INDEX_NOT_USED = 15,
+	BNXT_ULP_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 1,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 2,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 3,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 4,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 5,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 6,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 7,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 8,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 9,
+	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 10,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 11,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 12,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
+	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
+	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
 	BNXT_ULP_REGFILE_INDEX_LAST = 16
 };
 
@@ -265,10 +262,10 @@ enum bnxt_ulp_resource_func {
 enum bnxt_ulp_resource_sub_type {
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT_INDEX = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT_INDEX = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX = 1,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
 	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
 };
 
@@ -282,7 +279,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
 	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_BIG_ENDIAN = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
@@ -398,7 +394,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
 	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
 	BNXT_ULP_SYM_NO = 0,
@@ -489,6 +484,11 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_YES = 1
 };
 
+enum bnxt_ulp_wh_plus {
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+};
+
 enum bnxt_ulp_act_prop_sz {
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ = 4,
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ = 4,
@@ -588,4 +588,9 @@ enum bnxt_ulp_act_hid {
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
 	BNXT_ULP_ACT_HID_0040 = 0x0040
 };
+
+enum bnxt_ulp_df_tpl {
+	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5c43358..1188223 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -150,9 +150,10 @@ struct bnxt_ulp_device_params {
 
 /* Flow Mapper */
 struct bnxt_ulp_mapper_tbl_list_info {
-	uint32_t	device_name;
-	uint32_t	start_tbl_idx;
-	uint32_t	num_tbls;
+	uint32_t		device_name;
+	uint32_t		start_tbl_idx;
+	uint32_t		num_tbls;
+	enum bnxt_ulp_fdb_type	flow_db_table_type;
 };
 
 struct bnxt_ulp_mapper_tbl_info {
@@ -183,6 +184,7 @@ struct bnxt_ulp_mapper_tbl_info {
 
 	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
+	uint32_t			comp_field_idx;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 35/50] net/bnxt: disable vector mode in tx direction when truflow is enabled
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (33 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 34/50] net/bnxt: add support for if table processing Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 36/50] net/bnxt: add index opcode and index operand mapper table Somnath Kotur
                   ` (15 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Vector mode in the tx handler is disabled when truflow is enabled,
since truflow now requires buffer descriptor (bd) action record
support.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 697cd66..3550257 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1116,12 +1116,15 @@ bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev)
 {
 #ifdef RTE_ARCH_X86
 #ifndef RTE_LIBRTE_IEEE1588
+	struct bnxt *bp = eth_dev->data->dev_private;
+
 	/*
 	 * Vector mode transmit can be enabled only if not using scatter rx
 	 * or tx offloads.
 	 */
 	if (!eth_dev->data->scattered_rx &&
-	    !eth_dev->data->dev_conf.txmode.offloads) {
+	    !eth_dev->data->dev_conf.txmode.offloads &&
+	    !BNXT_TRUFLOW_EN(bp)) {
 		PMD_DRV_LOG(INFO, "Using vector mode transmit for port %d\n",
 			    eth_dev->data->port_id);
 		return bnxt_xmit_pkts_vec;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 36/50] net/bnxt: add index opcode and index operand mapper table
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (34 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 35/50] net/bnxt: disable vector mode in tx direction when truflow is enabled Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 37/50] net/bnxt: add support for global resource templates Somnath Kotur
                   ` (14 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Extended the regfile and computed field operations into a common
index opcode operation; global resource operations are also handled
as part of the index opcode operation.
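
A hypothetical template-table entry illustrating the new fields: the former
.regfile_idx member is replaced by an opcode/operand pair, where the opcode
selects whether the index is allocated (and stored in the regfile slot named
by the operand), read from the global regfile, or taken from a computed
field. The entry below is illustrative only; the real entries are in the
template db diffs below:

	{
	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
	},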

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 56 +++++++++++++++++++++----
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c   |  9 ++--
 drivers/net/bnxt/tf_ulp/ulp_template_db_class.c | 45 +++++++-------------
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h  |  8 ++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  4 +-
 5 files changed, 80 insertions(+), 42 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 053ff97..7ad3292 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1444,7 +1444,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	struct bnxt_ulp_mapper_result_field_info *flds;
 	struct ulp_flow_db_res_params	fid_parms;
 	struct ulp_blob	data;
-	uint64_t idx;
+	uint64_t idx = 0;
 	uint16_t tmplen;
 	uint32_t i, num_flds;
 	int32_t rc = 0, trc = 0;
@@ -1517,6 +1517,42 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 	}
 
+	/*
+	 * Check for index opcode, if it is Global then
+	 * no need to allocate the table, just set the table
+	 * and exit since it is not maintained in the flow db.
+	 */
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_GLOBAL) {
+		/* get the index from index operand */
+		if (tbl->index_operand < BNXT_ULP_GLB_REGFILE_INDEX_LAST &&
+		    ulp_mapper_glb_resource_read(parms->mapper_data,
+						 tbl->direction,
+						 tbl->index_operand,
+						 &idx)) {
+			BNXT_TF_DBG(ERR, "Glbl regfile[%d] read failed.\n",
+				    tbl->index_operand);
+			return -EINVAL;
+		}
+		/* set the Tf index table */
+		sparms.dir		= tbl->direction;
+		sparms.type		= tbl->resource_type;
+		sparms.data		= ulp_blob_data_get(&data, &tmplen);
+		sparms.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+		sparms.idx		= tfp_be_to_cpu_64(idx);
+		sparms.tbl_scope_id	= tbl_scope_id;
+
+		rc = tf_set_tbl_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "Glbl Set table[%d][%s][%d] failed rc=%d\n",
+				    sparms.type,
+				    (sparms.dir == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx,
+				    rc);
+			return rc;
+		}
+		return 0; /* success */
+	}
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1547,11 +1583,13 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Always storing values in Regfile in BE */
 	idx = tfp_cpu_to_be_64(idx);
-	rc = ulp_regfile_write(parms->regfile, tbl->regfile_idx, idx);
-	if (!rc) {
-		BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
-			    tbl->regfile_idx);
-		goto error;
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_ALLOCATE) {
+		rc = ulp_regfile_write(parms->regfile, tbl->index_operand, idx);
+		if (!rc) {
+			BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
+				    tbl->index_operand);
+			goto error;
+		}
 	}
 
 	/* Perform the tf table set by filling the set params */
@@ -1816,7 +1854,11 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	}
 
 	/* Get the index details from computed field */
-	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+	if (tbl->index_opcode != BNXT_ULP_INDEX_OPCODE_COMP_FIELD) {
+		BNXT_TF_DBG(ERR, "Invalid tbl index opcode\n");
+		return -EINVAL;
+	}
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
 
 	/* Perform the tf table set by filling the set params */
 	iftbl_params.dir = tbl->direction;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 8af23ef..9b14fa0 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -76,7 +76,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -90,7 +91,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -104,7 +106,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	}
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index 1945893..9b6e736 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -113,8 +113,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 0,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -135,8 +134,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -157,8 +155,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -179,8 +176,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -201,8 +197,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -223,8 +218,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -245,8 +239,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -267,8 +260,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -289,8 +281,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -311,8 +302,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -333,8 +323,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -355,8 +344,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -377,8 +365,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -399,8 +386,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -421,8 +407,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 66343b9..0215a5d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -161,6 +161,14 @@ enum bnxt_ulp_hdr_type {
 	BNXT_ULP_HDR_TYPE_LAST = 3
 };
 
+enum bnxt_ulp_index_opcode {
+	BNXT_ULP_INDEX_OPCODE_NOT_USED = 0,
+	BNXT_ULP_INDEX_OPCODE_ALLOCATE = 1,
+	BNXT_ULP_INDEX_OPCODE_GLOBAL = 2,
+	BNXT_ULP_INDEX_OPCODE_COMP_FIELD = 3,
+	BNXT_ULP_INDEX_OPCODE_LAST = 4
+};
+
 enum bnxt_ulp_mapper_opc {
 	BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 1188223..a3ddd33 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -182,9 +182,9 @@ struct bnxt_ulp_mapper_tbl_info {
 	uint32_t	ident_start_idx;
 	uint16_t	ident_nums;
 
-	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
-	uint32_t			comp_field_idx;
+	enum bnxt_ulp_index_opcode	index_opcode;
+	uint32_t			index_operand;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 37/50] net/bnxt: add support for global resource templates
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (35 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 36/50] net/bnxt: add index opcode and index operand mapper table Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 38/50] net/bnxt: add support for internal exact match entries Somnath Kotur
                   ` (13 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for global resource templates, so that resources they
allocate at init time can be reused by the regular templates.
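
To illustrate how these templates are meant to be consumed, building on the
index opcodes from the previous patch: a regular template table entry can
reference an index that a global template allocated at init time by using
the GLOBAL opcode and naming the global regfile slot as its operand. The
entry below is purely illustrative and not part of this patch:

	{
	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
	.index_opcode = BNXT_ULP_INDEX_OPCODE_GLOBAL,
	.index_operand = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX,
	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
	},

Since the index is read from the global regfile, the mapper only sets the
table entry and does not record it in the flow db, so the resource survives
deletion of individual flows.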

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c           | 178 ++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h |   1 +
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c  |   3 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h  |   6 +
 4 files changed, 181 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 7ad3292..c6fba51 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -80,15 +80,20 @@ ulp_mapper_glb_resource_write(struct bnxt_ulp_mapper_data *data,
  * returns 0 on success
  */
 static int32_t
-ulp_mapper_resource_ident_allocate(struct tf *tfp,
+ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 				   struct bnxt_ulp_mapper_data *mapper_data,
 				   struct bnxt_ulp_glb_resource_info *glb_res)
 {
 	struct tf_alloc_identifier_parms iparms = { 0 };
 	struct tf_free_identifier_parms fparms;
 	uint64_t regval;
+	struct tf *tfp;
 	int32_t rc = 0;
 
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
 	iparms.ident_type = glb_res->resource_type;
 	iparms.dir = glb_res->direction;
 
@@ -115,13 +120,76 @@ ulp_mapper_resource_ident_allocate(struct tf *tfp,
 		return rc;
 	}
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res[%s][%d][%d] = 0x%04x\n",
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
 		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
 		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
 #endif
 	return rc;
 }
 
+/*
+ * Internal function to allocate index tbl resource and store it in mapper data.
+ *
+ * returns 0 on success
+ */
+static int32_t
+ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
+				    struct bnxt_ulp_mapper_data *mapper_data,
+				    struct bnxt_ulp_glb_resource_info *glb_res)
+{
+	struct tf_alloc_tbl_entry_parms	aparms = { 0 };
+	struct tf_free_tbl_entry_parms	free_parms = { 0 };
+	uint64_t regval;
+	struct tf *tfp;
+	uint32_t tbl_scope_id;
+	int32_t rc = 0;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
+	/* Get the scope id */
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope rc=%d\n", rc);
+		return rc;
+	}
+
+	aparms.type = glb_res->resource_type;
+	aparms.dir = glb_res->direction;
+	aparms.search_enable = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO;
+	aparms.tbl_scope_id = tbl_scope_id;
+
+	/* Allocate the index tbl using tf api */
+	rc = tf_alloc_tbl_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to alloc identifier [%s][%d]\n",
+			    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+			    aparms.type);
+		return rc;
+	}
+
+	/* entries are stored as big-endian format */
+	regval = tfp_cpu_to_be_64((uint64_t)aparms.idx);
+	/* write to the mapper global resource */
+	rc = ulp_mapper_glb_resource_write(mapper_data, glb_res, regval);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to write to global resource id\n");
+		/* Free the identifier when update failed */
+		free_parms.dir = aparms.dir;
+		free_parms.type = aparms.type;
+		free_parms.idx = aparms.idx;
+		tf_free_tbl_entry(tfp, &free_parms);
+		return rc;
+	}
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
+		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
+#endif
+	return rc;
+}
+
 /* Retrieve the cache initialization parameters for the tbl_idx */
 static struct bnxt_ulp_cache_tbl_params *
 ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
@@ -132,6 +200,16 @@ ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
 	return &ulp_cache_tbl_params[tbl_idx];
 }
 
+/* Retrieve the global template table */
+static uint32_t *
+ulp_mapper_glb_template_table_get(uint32_t *num_entries)
+{
+	if (!num_entries)
+		return NULL;
+	*num_entries = BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ;
+	return ulp_glb_template_tbl;
+}
+
 /*
  * Get the size of the action property for a given index.
  *
@@ -659,7 +737,10 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 			return -EINVAL;
 		}
 		act_bit = tfp_be_to_cpu_64(act_bit);
-		act_val = ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit);
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit))
+			act_val = 1;
+		else
+			act_val = 0;
 		if (fld->field_bit_size > ULP_BYTE_2_BITS(sizeof(act_val))) {
 			BNXT_TF_DBG(ERR, "%s field size is incorrect\n", name);
 			return -EINVAL;
@@ -1553,6 +1634,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 		return 0; /* success */
 	}
+
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1617,6 +1699,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	fid_parms.direction	= tbl->direction;
 	fid_parms.resource_func	= tbl->resource_func;
 	fid_parms.resource_type	= tbl->resource_type;
+	fid_parms.resource_sub_type = tbl->resource_sub_type;
 	fid_parms.resource_hndl	= aparms.idx;
 	fid_parms.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO;
 
@@ -1885,7 +1968,7 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 }
 
 static int32_t
-ulp_mapper_glb_resource_info_init(struct tf *tfp,
+ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 				  struct bnxt_ulp_mapper_data *mapper_data)
 {
 	struct bnxt_ulp_glb_resource_info *glb_res;
@@ -1902,15 +1985,23 @@ ulp_mapper_glb_resource_info_init(struct tf *tfp,
 	for (idx = 0; idx < num_glb_res_ids; idx++) {
 		switch (glb_res[idx].resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_IDENTIFIER:
-			rc = ulp_mapper_resource_ident_allocate(tfp,
+			rc = ulp_mapper_resource_ident_allocate(ulp_ctx,
 								mapper_data,
 								&glb_res[idx]);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+			rc = ulp_mapper_resource_index_tbl_alloc(ulp_ctx,
+								 mapper_data,
+								 &glb_res[idx]);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Global resource %x not supported\n",
 				    glb_res[idx].resource_func);
+			rc = -EINVAL;
 			break;
 		}
+		if (rc)
+			return rc;
 	}
 	return rc;
 }
@@ -2126,7 +2217,9 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 			res.resource_func = ent->resource_func;
 			res.direction = dir;
 			res.resource_type = ent->resource_type;
-			res.resource_hndl = ent->resource_hndl;
+			/*convert it from BE to cpu */
+			res.resource_hndl =
+				tfp_be_to_cpu_64(ent->resource_hndl);
 			ulp_mapper_resource_free(ulp_ctx, &res);
 		}
 	}
@@ -2145,6 +2238,71 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
 					 BNXT_ULP_REGULAR_FLOW_TABLE);
 }
 
+/* Function to handle the default global templates that are allocated during
+ * the startup and reused later.
+ */
+static int32_t
+ulp_mapper_glb_template_table_init(struct bnxt_ulp_context *ulp_ctx)
+{
+	uint32_t *glbl_tmpl_list;
+	uint32_t num_glb_tmpls, idx, dev_id;
+	struct bnxt_ulp_mapper_parms parms;
+	struct bnxt_ulp_mapper_data *mapper_data;
+	int32_t rc = 0;
+
+	glbl_tmpl_list = ulp_mapper_glb_template_table_get(&num_glb_tmpls);
+	if (!glbl_tmpl_list || !num_glb_tmpls)
+		return rc; /* No global templates to process */
+
+	/* Get the device id from the ulp context */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	mapper_data = bnxt_ulp_cntxt_ptr2_mapper_data_get(ulp_ctx);
+	if (!mapper_data) {
+		BNXT_TF_DBG(ERR, "Failed to get the ulp mapper data\n");
+		return -EINVAL;
+	}
+
+	/* Iterate the global resources and process each one */
+	for (idx = 0; idx < num_glb_tmpls; idx++) {
+		/* Initialize the parms structure */
+		memset(&parms, 0, sizeof(parms));
+		parms.tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+		parms.ulp_ctx = ulp_ctx;
+		parms.dev_id = dev_id;
+		parms.mapper_data = mapper_data;
+
+		/* Get the class table entry from dev id and class id */
+		parms.class_tid = glbl_tmpl_list[idx];
+		parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
+							    parms.class_tid,
+							    &parms.num_ctbls,
+							    &parms.tbl_idx);
+		if (!parms.ctbls || !parms.num_ctbls) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		parms.device_params = bnxt_ulp_device_params_get(parms.dev_id);
+		if (!parms.device_params) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		rc = ulp_mapper_class_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "class tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return rc;
+		}
+	}
+	return rc;
+}
+
 /* Function to handle the mapping of the Flow to be compatible
  * with the underlying hardware.
  */
@@ -2317,7 +2475,7 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 	}
 
 	/* Allocate the global resource ids */
-	rc = ulp_mapper_glb_resource_info_init(tfp, data);
+	rc = ulp_mapper_glb_resource_info_init(ulp_ctx, data);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to initialize global resource ids\n");
 		goto error;
@@ -2345,6 +2503,12 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 		}
 	}
 
+	rc = ulp_mapper_glb_template_table_init(ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize global templates\n");
+		goto error;
+	}
+
 	return 0;
 error:
 	/* Ignore the return code in favor of returning the original error. */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 0215a5d..7c0dc5e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -26,6 +26,7 @@
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
 #define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index e4b564a..d792ec9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -547,3 +547,6 @@ uint32_t bnxt_ulp_encap_vtag_map[] = {
 	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
 
+uint32_t ulp_glb_template_tbl[] = {
+};
+
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index a3ddd33..4bcd02b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -299,4 +299,10 @@ extern struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[];
  */
 extern struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[];
 
+/*
+ * The ulp_global template table is used to initialize default entries
+ * that could be reused by other templates.
+ */
+extern uint32_t ulp_glb_template_tbl[];
+
 #endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 38/50] net/bnxt: add support for internal exact match entries
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (36 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 37/50] net/bnxt: add support for global resource templates Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 39/50] net/bnxt: add support for conditional execution of mapper tables Somnath Kotur
                   ` (12 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for internal exact match (EM) entries, in addition to the
existing external EM entries.
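
For reference, here is a minimal standalone sketch of how the mapper now
derives the TruFlow memory type from the resource function instead of
hard-coding external memory (the two resource-function values mirror the
enum added by this patch; everything else is a simplified stand-in, not
driver code):

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the driver enums touched by this patch. */
enum bnxt_ulp_resource_func {
	BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE = 0x20,
	BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE = 0x40,
};

enum tf_mem {
	TF_MEM_INTERNAL = 0,	/* on-chip exact match memory */
	TF_MEM_EXTERNAL = 1,	/* host-backed (EEM) memory   */
};

/* Pick the memory type the way ulp_mapper_em_entry_free() now does. */
static enum tf_mem ulp_em_mem_type(uint32_t resource_func)
{
	return (resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE) ?
		TF_MEM_EXTERNAL : TF_MEM_INTERNAL;
}

int main(void)
{
	printf("ext em -> %d, int em -> %d\n",
	       ulp_em_mem_type(BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE),
	       ulp_em_mem_type(BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE));
	return 0;
}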

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c              | 38 +++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c           | 13 ++++++---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 21 ++++++++------
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  4 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db_class.c |  6 ++--
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h  | 13 +++++----
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c   |  7 ++++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  5 ++++
 8 files changed, 85 insertions(+), 22 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4835b95..1b52861 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -213,8 +213,27 @@ static int32_t
 ulp_eem_tbl_scope_init(struct bnxt *bp)
 {
 	struct tf_alloc_tbl_scope_parms params = {0};
+	uint32_t dev_id;
+	struct bnxt_ulp_device_params *dparms;
 	int rc;
 
+	/* Get the dev specific number of flows that need to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope alloc is not required\n");
+		return 0;
+	}
+
 	bnxt_init_tbl_scope_parms(bp, &params);
 
 	rc = tf_alloc_tbl_scope(&bp->tfp, &params);
@@ -240,6 +259,8 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 	struct tf_free_tbl_scope_parms	params = {0};
 	struct tf			*tfp;
 	int32_t				rc = 0;
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id;
 
 	if (!ulp_ctx || !ulp_ctx->cfg_data)
 		return -EINVAL;
@@ -254,6 +275,23 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 		return -EINVAL;
 	}
 
+	/* Get the dev specific number of flows that need to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope free is not required\n");
+		return 0;
+	}
+
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 384dc5b..7696de2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -114,7 +114,8 @@ ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info *resource_info,
 	}
 
 	/* Store the handle as 64bit only for EM table entries */
-	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE &&
+	    params->resource_func != BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		resource_info->resource_hndl = (uint32_t)params->resource_hndl;
 		resource_info->resource_type = params->resource_type;
 		resource_info->resource_sub_type = params->resource_sub_type;
@@ -145,7 +146,8 @@ ulp_flow_db_res_info_to_params(struct ulp_fdb_resource_info *resource_info,
 	/* use the helper function to get the resource func */
 	params->resource_func = ulp_flow_db_resource_func_get(resource_info);
 
-	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    params->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		params->resource_hndl = resource_info->resource_em_handle;
 	} else if (params->resource_func & ULP_FLOW_DB_RES_FUNC_NEED_LOWER) {
 		params->resource_hndl = resource_info->resource_hndl;
@@ -908,7 +910,9 @@ ulp_flow_db_resource_hndl_get(struct bnxt_ulp_context *ulp_ctx,
 				}
 
 			} else if (resource_func ==
-				   BNXT_ULP_RESOURCE_FUNC_EM_TABLE){
+				   BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+				   resource_func ==
+				   BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 				*res_hndl = fid_res->resource_em_handle;
 				return 0;
 			}
@@ -966,7 +970,8 @@ static void ulp_flow_db_res_dump(struct ulp_fdb_resource_info	*r,
 
 	BNXT_TF_DBG(DEBUG, "Resource func = %x, nxt_resource_idx = %x\n",
 		    res_func, (ULP_FLOW_DB_RES_NXT_MASK & r->nxt_resource_idx));
-	if (res_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE)
+	if (res_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    res_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		BNXT_TF_DBG(DEBUG, "EM Handle = 0x%016" PRIX64 "\n",
 			    r->resource_em_handle);
 	else
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index c6fba51..6562166 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -556,15 +556,18 @@ ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp,
 }
 
 static inline int32_t
-ulp_mapper_eem_entry_free(struct bnxt_ulp_context *ulp,
-			  struct tf *tfp,
-			  struct ulp_flow_db_res_params *res)
+ulp_mapper_em_entry_free(struct bnxt_ulp_context *ulp,
+			 struct tf *tfp,
+			 struct ulp_flow_db_res_params *res)
 {
 	struct tf_delete_em_entry_parms fparms = { 0 };
 	int32_t rc;
 
 	fparms.dir		= res->direction;
-	fparms.mem		= TF_MEM_EXTERNAL;
+	if (res->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE)
+		fparms.mem = TF_MEM_EXTERNAL;
+	else
+		fparms.mem = TF_MEM_INTERNAL;
 	fparms.flow_handle	= res->resource_hndl;
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
@@ -1444,7 +1447,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 
 	/* do the transpose for the internal EM keys */
-	if (tbl->resource_type == TF_MEM_INTERNAL)
+	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		ulp_blob_perform_byte_reverse(&key);
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
@@ -2067,7 +2070,8 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
 			break;
-		case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
 			rc = ulp_mapper_em_tbl_process(parms, tbl);
 			break;
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
@@ -2120,8 +2124,9 @@ ulp_mapper_resource_free(struct bnxt_ulp_context *ulp,
 	case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 		rc = ulp_mapper_tcam_entry_free(ulp, tfp, res);
 		break;
-	case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
-		rc = ulp_mapper_eem_entry_free(ulp, tfp, res);
+	case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+	case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
+		rc = ulp_mapper_em_entry_free(ulp, tfp, res);
 		break;
 	case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 		rc = ulp_mapper_index_entry_free(ulp, tfp, res);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 5174223..b3527ec 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -87,6 +87,9 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	/* Need to allocate 2 * Num flows to account for hash type bit */
 	mark_tbl->gfid_num_entries = dparms->mark_db_gfid_entries;
+	if (!mark_tbl->gfid_num_entries)
+		goto gfid_not_required;
+
 	mark_tbl->gfid_tbl = rte_zmalloc("ulp_rx_eem_flow_mark_table",
 					 mark_tbl->gfid_num_entries *
 					 sizeof(struct bnxt_gfid_mark_info),
@@ -109,6 +112,7 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 		    mark_tbl->gfid_num_entries - 1,
 		    mark_tbl->gfid_mask);
 
+gfid_not_required:
 	/* Add the mark tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
 	return 0;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index 9b6e736..e51338d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -179,7 +179,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -284,7 +284,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -389,7 +389,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 7c0dc5e..3168d29 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -149,10 +149,11 @@ enum bnxt_ulp_flow_mem_type {
 };
 
 enum bnxt_ulp_glb_regfile_index {
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
+	BNXT_ULP_GLB_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 1,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR = 3,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 4
 };
 
 enum bnxt_ulp_hdr_type {
@@ -257,8 +258,8 @@ enum bnxt_ulp_match_type_bitmask {
 
 enum bnxt_ulp_resource_func {
 	BNXT_ULP_RESOURCE_FUNC_INVALID = 0x00,
-	BNXT_ULP_RESOURCE_FUNC_EM_TABLE = 0x20,
-	BNXT_ULP_RESOURCE_FUNC_RSVD1 = 0x40,
+	BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE = 0x20,
+	BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE = 0x40,
 	BNXT_ULP_RESOURCE_FUNC_RSVD2 = 0x60,
 	BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE = 0x80,
 	BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE = 0x81,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index d792ec9..f65aeae 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -321,7 +321,12 @@ struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	.mark_db_gfid_entries   = 65536,
 	.flow_count_db_entries  = 16384,
 	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2
+	.num_phy_ports          = 2,
+	.ext_cntr_table_type    = 0,
+	.byte_count_mask        = 0x00000003ffffffff,
+	.packet_count_mask      = 0xfffffffc00000000,
+	.byte_count_shift       = 0,
+	.packet_count_shift     = 36
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 4bcd02b..5a7a7b9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -146,6 +146,11 @@ struct bnxt_ulp_device_params {
 	uint64_t			flow_db_num_entries;
 	uint32_t			flow_count_db_entries;
 	uint32_t			num_resources_per_flow;
+	uint32_t			ext_cntr_table_type;
+	uint64_t			byte_count_mask;
+	uint64_t			packet_count_mask;
+	uint32_t			byte_count_shift;
+	uint32_t			packet_count_shift;
 };
 
 /* Flow Mapper */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 39/50] net/bnxt: add support for conditional execution of mapper tables
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (37 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 38/50] net/bnxt: add support for internal exact match entries Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 40/50] net/bnxt: enable HWRM_PORT_MAC_QCFG for trusted vf Somnath Kotur
                   ` (11 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for conditional execution of mapper tables: a table can
now specify a condition opcode and operand, so that a table such as the
action counter table is processed only when the corresponding action
(for example, count) is configured for the flow.
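
As a rough standalone illustration of the new skip logic (simplified
from ulp_mapper_tbl_cond_opcode_process(); the bitmap layout and the
count-action bit value below are invented purely for illustration):

#include <stdint.h>
#include <stdio.h>

enum cond_opcode { COND_NOP, COND_ACTION_BIT, COND_HDR_BIT };

struct tbl_info {
	enum cond_opcode cond_opcode;
	uint64_t	 cond_operand;	/* bit to test for the *_BIT opcodes */
};

/* Return 1 to skip the table, 0 to process it (same convention as the patch). */
static int tbl_cond_skip(const struct tbl_info *tbl,
			 uint64_t act_bits, uint64_t hdr_bits)
{
	switch (tbl->cond_opcode) {
	case COND_NOP:
		return 0;
	case COND_ACTION_BIT:
		return (act_bits & tbl->cond_operand) ? 0 : 1;
	case COND_HDR_BIT:
		return (hdr_bits & tbl->cond_operand) ? 0 : 1;
	default:
		return 1;	/* unknown condition: skip the table */
	}
}

int main(void)
{
	const uint64_t act_bit_count = 0x400;	/* illustrative value only */
	struct tbl_info counter_tbl = { COND_ACTION_BIT, act_bit_count };

	/* The counter table runs only when the count action bit is set. */
	printf("without count action: skip=%d\n",
	       tbl_cond_skip(&counter_tbl, 0, 0));
	printf("with count action:    skip=%d\n",
	       tbl_cond_skip(&counter_tbl, act_bit_count, 0));
	return 0;
}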

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c           | 45 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h           |  1 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c       |  8 +++++
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h | 12 ++++++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h  |  2 ++
 5 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 6562166..f293a90 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2010,6 +2010,44 @@ ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 }
 
 /*
+ * Function to process the conditional opcode of the mapper table.
+ * Returns 1 to skip the table.
+ * Returns 0 to continue processing the table.
+ */
+static int32_t
+ulp_mapper_tbl_cond_opcode_process(struct bnxt_ulp_mapper_parms *parms,
+				   struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	int32_t rc = 1;
+
+	switch (tbl->cond_opcode) {
+	case BNXT_ULP_COND_OPCODE_NOP:
+		rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_COMP_FIELD:
+		if (tbl->cond_operand < BNXT_ULP_CF_IDX_LAST &&
+		    ULP_COMP_FLD_IDX_RD(parms, tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_ACTION_BIT:
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_HDR_BIT:
+		if (ULP_BITMAP_ISSET(parms->hdr_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	default:
+		BNXT_TF_DBG(ERR,
+			    "Invalid arg in mapper tbl for cond opcode\n");
+		break;
+	}
+	return rc;
+}
+
+/*
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
  */
@@ -2028,6 +2066,9 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	for (i = 0; i < parms->num_atbls; i++) {
 		tbl = &parms->atbls[i];
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
@@ -2066,6 +2107,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 	for (i = 0; i < parms->num_ctbls; i++) {
 		struct bnxt_ulp_mapper_tbl_info *tbl = &parms->ctbls[i];
 
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
@@ -2327,6 +2371,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	memset(&parms, 0, sizeof(parms));
 	parms.act_prop = cparms->act_prop;
 	parms.act_bitmap = cparms->act;
+	parms.hdr_bitmap = cparms->hdr_bitmap;
 	parms.regfile = &regfile;
 	parms.hdr_field = cparms->hdr_field;
 	parms.comp_fld = cparms->comp_fld;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 1df23db..5f89a0e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -62,6 +62,7 @@ struct bnxt_ulp_mapper_parms {
 	uint32_t				num_ctbls;
 	struct ulp_rte_act_prop			*act_prop;
 	struct ulp_rte_act_bitmap		*act_bitmap;
+	struct ulp_rte_hdr_bitmap		*hdr_bitmap;
 	struct ulp_rte_hdr_field		*hdr_field;
 	uint32_t				*comp_fld;
 	struct ulp_regfile			*regfile;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 41ac77c..8fffaec 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1128,6 +1128,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* Update the computed field to indicate an IPv4 encap header. */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
@@ -1148,6 +1152,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* Update the computed field to indicate an IPv6 encap header. */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 3168d29..27628a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -118,7 +118,17 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
 	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
 	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
-	BNXT_ULP_CF_IDX_LAST = 29
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG = 29,
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 30,
+	BNXT_ULP_CF_IDX_LAST = 31
+};
+
+enum bnxt_ulp_cond_opcode {
+	BNXT_ULP_COND_OPCODE_NOP = 0,
+	BNXT_ULP_COND_OPCODE_COMP_FIELD = 1,
+	BNXT_ULP_COND_OPCODE_ACTION_BIT = 2,
+	BNXT_ULP_COND_OPCODE_HDR_BIT = 3,
+	BNXT_ULP_COND_OPCODE_LAST = 4
 };
 
 enum bnxt_ulp_critical_resource {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5a7a7b9..df999b1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -165,6 +165,8 @@ struct bnxt_ulp_mapper_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	uint32_t			resource_type; /* TF_ enum type */
 	enum bnxt_ulp_resource_sub_type	resource_sub_type;
+	enum bnxt_ulp_cond_opcode	cond_opcode;
+	uint32_t			cond_operand;
 	uint8_t		direction;
 	uint32_t	priority;
 	uint8_t		srch_b4_alloc;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 40/50] net/bnxt: enable HWRM_PORT_MAC_QCFG for trusted vf
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (38 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 39/50] net/bnxt: add support for conditional execution of mapper tables Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 41/50] net/bnxt: enhancements for port db Somnath Kotur
                   ` (10 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue the HWRM_PORT_MAC_QCFG command on a trusted VF as well (previously
it was issued only on the PF) so that the port SVIF can be fetched.

Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 2605ef0..6ade32d 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3194,14 +3194,14 @@ int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 
 	bp->port_svif = BNXT_SVIF_INVALID;
 
-	if (!BNXT_PF(bp))
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
 		return 0;
 
 	HWRM_PREP(&req, HWRM_PORT_MAC_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
-	HWRM_CHECK_RESULT();
+	HWRM_CHECK_RESULT_SILENT();
 
 	port_svif_info = rte_le_to_cpu_16(resp->port_svif_info);
 	if (port_svif_info &
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 41/50] net/bnxt: enhancements for port db
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (39 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 40/50] net/bnxt: enable HWRM_PORT_MAC_QCFG for trusted vf Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 42/50] net/bnxt: fix for VF to VFR conduit Somnath Kotur
                   ` (9 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

1. Add "enum bnxt_ulp_intf_type” as the second parameter for the
   port & func helper functions
2. Return vfrep related port & func information in the helper functions
3. Allocate phy_port_list dynamically based on port count
4. Introduce ulp_func_id_tbl array for book keeping func related
   information indexed by func_id
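
As a simplified standalone model of the new indirection (the tiny arrays
below stand in for the real port database; names, sizes and values are
illustrative), an SVIF lookup is now keyed by the requested SVIF type
and resolved through the per-function table rather than by flow
direction:

#include <stdint.h>
#include <stdio.h>

enum svif_type { DRV_FUNC_SVIF, VF_FUNC_SVIF, PHY_PORT_SVIF };

struct func_if_info { uint16_t func_svif; uint16_t phy_port_id; };
struct phy_port_info { uint16_t port_svif; };
struct intf_info { uint16_t drv_func_id; uint16_t vf_func_id; };

/* Toy database: one interface backed by two functions and one phy port. */
static struct func_if_info func_tbl[] = { { 0x10, 0 }, { 0x20, 0 } };
static struct phy_port_info phy_tbl[] = { { 0x01 } };
static struct intf_info intf_tbl[] = { { 0, 1 } };

static uint16_t svif_get(uint32_t ifindex, enum svif_type type)
{
	const struct intf_info *intf = &intf_tbl[ifindex];
	uint16_t func_id;

	if (type == DRV_FUNC_SVIF)
		func_id = intf->drv_func_id;
	else if (type == VF_FUNC_SVIF)
		func_id = intf->vf_func_id;
	else	/* PHY_PORT_SVIF: resolve through the driver function */
		return phy_tbl[func_tbl[intf->drv_func_id].phy_port_id].port_svif;

	return func_tbl[func_id].func_svif;
}

int main(void)
{
	printf("drv 0x%x, vf 0x%x, phy 0x%x\n",
	       svif_get(0, DRV_FUNC_SVIF), svif_get(0, VF_FUNC_SVIF),
	       svif_get(0, PHY_PORT_SVIF));
	return 0;
}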

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  10 ++-
 drivers/net/bnxt/bnxt_ethdev.c           |  64 ++++++++++++--
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h |   6 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c       |   2 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c  |   9 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.c    | 142 ++++++++++++++++++++++++-------
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    |  56 ++++++++++--
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  22 ++++-
 8 files changed, 249 insertions(+), 62 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 43e5e71..32acced 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -23,6 +23,7 @@
 
 #include "tf_core.h"
 #include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
 
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
@@ -879,10 +880,11 @@ extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops;
 int32_t bnxt_ulp_init(struct bnxt *bp);
 void bnxt_ulp_deinit(struct bnxt *bp);
 
-uint16_t bnxt_get_vnic_id(uint16_t port);
-uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
-uint16_t bnxt_get_fw_func_id(uint16_t port);
-uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif,
+		       enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_parif(uint16_t port, enum bnxt_ulp_intf_type type);
 uint16_t bnxt_get_phy_port_id(uint16_t port);
 uint16_t bnxt_get_vport(uint16_t port);
 enum bnxt_ulp_intf_type
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 3550257..332644d 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5067,25 +5067,48 @@ static void bnxt_config_vf_req_fwd(struct bnxt *bp)
 }
 
 uint16_t
-bnxt_get_svif(uint16_t port_id, bool func_svif)
+bnxt_get_svif(uint16_t port_id, bool func_svif,
+	      enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->svif;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return func_svif ? bp->func_svif : bp->port_svif;
 }
 
 uint16_t
-bnxt_get_vnic_id(uint16_t port)
+bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt_vnic_info *vnic;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->dflt_vnic_id;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
@@ -5094,12 +5117,23 @@ bnxt_get_vnic_id(uint16_t port)
 }
 
 uint16_t
-bnxt_get_fw_func_id(uint16_t port)
+bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return bp->fw_fid;
@@ -5116,8 +5150,14 @@ bnxt_get_interface_type(uint16_t port)
 		return BNXT_ULP_INTF_TYPE_VF_REP;
 
 	bp = eth_dev->data->dev_private;
-	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
-			   : BNXT_ULP_INTF_TYPE_VF;
+	if (BNXT_PF(bp))
+		return BNXT_ULP_INTF_TYPE_PF;
+	else if (BNXT_VF_IS_TRUSTED(bp))
+		return BNXT_ULP_INTF_TYPE_TRUSTED_VF;
+	else if (BNXT_VF(bp))
+		return BNXT_ULP_INTF_TYPE_VF;
+
+	return BNXT_ULP_INTF_TYPE_INVALID;
 }
 
 uint16_t
@@ -5130,6 +5170,9 @@ bnxt_get_phy_port_id(uint16_t port_id)
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
 		vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
 		eth_dev = vfr->parent_dev;
 	}
 
@@ -5139,15 +5182,20 @@ bnxt_get_phy_port_id(uint16_t port_id)
 }
 
 uint16_t
-bnxt_get_parif(uint16_t port_id)
+bnxt_get_parif(uint16_t port_id, enum bnxt_ulp_intf_type type)
 {
-	struct bnxt_vf_representor *vfr;
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
-		vfr = eth_dev->data->dev_private;
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid - 1;
+
 		eth_dev = vfr->parent_dev;
 	}
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f772d49..ebb7140 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -6,6 +6,11 @@
 #ifndef _BNXT_TF_COMMON_H_
 #define _BNXT_TF_COMMON_H_
 
+#include <inttypes.h>
+
+#include "bnxt_ulp.h"
+#include "ulp_template_db_enum.h"
+
 #define BNXT_TF_DBG(lvl, fmt, args...)	PMD_DRV_LOG(lvl, fmt, ## args)
 
 #define BNXT_ULP_EM_FLOWS			8192
@@ -48,6 +53,7 @@ enum ulp_direction_type {
 enum bnxt_ulp_intf_type {
 	BNXT_ULP_INTF_TYPE_INVALID = 0,
 	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_TRUSTED_VF,
 	BNXT_ULP_INTF_TYPE_VF,
 	BNXT_ULP_INTF_TYPE_PF_REP,
 	BNXT_ULP_INTF_TYPE_VF_REP,
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 1b52861..e5e7e5f 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -658,7 +658,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	rc = ulp_dparms_init(bp, bp->ulp_ctx);
 
 	/* create the port database */
-	rc = ulp_port_db_init(bp->ulp_ctx);
+	rc = ulp_port_db_init(bp->ulp_ctx, bp->port_cnt);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to create the port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 6eb2d61..138b0b7 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -128,7 +128,8 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	mapper_cparms.act_prop = &params.act_prop;
 	mapper_cparms.class_tid = class_id;
 	mapper_cparms.act_tid = act_tmpl;
-	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id,
+						    BNXT_ULP_INTF_TYPE_INVALID);
 	mapper_cparms.dir = params.dir;
 
 	/* Call the ulp mapper to create the flow in the hardware. */
@@ -226,7 +227,8 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	}
 
 	flow_id = (uint32_t)(uintptr_t)flow;
-	func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	func_id = bnxt_get_fw_func_id(dev->data->port_id,
+				      BNXT_ULP_INTF_TYPE_INVALID);
 
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
@@ -270,7 +272,8 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	if (ulp_ctx_deinit_allowed(bp)) {
 		ret = ulp_flow_db_session_flow_flush(ulp_ctx);
 	} else if (bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx)) {
-		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id);
+		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id,
+					      BNXT_ULP_INTF_TYPE_INVALID);
 		ret = ulp_flow_db_function_flow_flush(ulp_ctx, func_id);
 	}
 	if (ret)
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index ea27ef4..de276f7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -33,7 +33,7 @@ ulp_port_db_allocate_ifindex(struct bnxt_ulp_port_db *port_db)
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt)
 {
 	struct bnxt_ulp_port_db *port_db;
 
@@ -60,6 +60,18 @@ int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
 			    "Failed to allocate mem for port interface list\n");
 		goto error_free;
 	}
+
+	/* Allocate the phy port list */
+	port_db->phy_port_list = rte_zmalloc("bnxt_ulp_phy_port_list",
+					     port_cnt *
+					     sizeof(struct ulp_phy_port_info),
+					     0);
+	if (!port_db->phy_port_list) {
+		BNXT_TF_DBG(ERR,
+			    "Failed to allocate mem for phy port list\n");
+		goto error_free;
+	}
+
 	return 0;
 
 error_free:
@@ -89,6 +101,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 	bnxt_ulp_cntxt_ptr2_port_db_set(ulp_ctxt, NULL);
 
 	/* Free up all the memory. */
+	rte_free(port_db->phy_port_list);
 	rte_free(port_db->ulp_intf_list);
 	rte_free(port_db);
 	return 0;
@@ -110,6 +123,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	struct ulp_phy_port_info *port_data;
 	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	struct ulp_func_if_info *func;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -134,20 +148,47 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	intf = &port_db->ulp_intf_list[ifindex];
 
 	intf->type = bnxt_get_interface_type(port_id);
+	intf->drv_func_id = bnxt_get_fw_func_id(port_id,
+						BNXT_ULP_INTF_TYPE_INVALID);
+
+	func = &port_db->ulp_func_id_tbl[intf->drv_func_id];
+	if (!func->func_valid) {
+		func->func_svif = bnxt_get_svif(port_id, true,
+						BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_spif = bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+		func->func_valid = true;
+	}
 
-	intf->func_id = bnxt_get_fw_func_id(port_id);
-	intf->func_svif = bnxt_get_svif(port_id, 1);
-	intf->func_spif = bnxt_get_phy_port_id(port_id);
-	intf->func_parif = bnxt_get_parif(port_id);
-	intf->default_vnic = bnxt_get_vnic_id(port_id);
-	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+	if (intf->type == BNXT_ULP_INTF_TYPE_VF_REP) {
+		intf->vf_func_id =
+			bnxt_get_fw_func_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+
+		func = &port_db->ulp_func_id_tbl[intf->vf_func_id];
+		func->func_svif =
+			bnxt_get_svif(port_id, true, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->func_spif =
+			bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+	}
 
-	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
-		port_data = &port_db->phy_port_list[intf->phy_port_id];
-		port_data->port_svif = bnxt_get_svif(port_id, 0);
+	port_data = &port_db->phy_port_list[func->phy_port_id];
+	if (!port_data->port_valid) {
+		port_data->port_svif =
+			bnxt_get_svif(port_id, false, BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_spif = bnxt_get_phy_port_id(port_id);
-		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_vport = bnxt_get_vport(port_id);
+		port_data->port_valid = true;
 	}
 
 	return 0;
@@ -194,6 +235,7 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 			    uint32_t ifindex,
+			    uint32_t fid_type,
 			    uint16_t *func_id)
 {
 	struct bnxt_ulp_port_db *port_db;
@@ -203,7 +245,12 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*func_id =  port_db->ulp_intf_list[ifindex].func_id;
+
+	if (fid_type == BNXT_ULP_DRV_FUNC_FID)
+		*func_id =  port_db->ulp_intf_list[ifindex].drv_func_id;
+	else
+		*func_id =  port_db->ulp_intf_list[ifindex].vf_func_id;
+
 	return 0;
 }
 
@@ -212,7 +259,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * svif_type [in] the svif type of the given ifindex.
  * svif [out] the svif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -220,21 +267,27 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t svif_type,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*svif = port_db->ulp_intf_list[ifindex].func_svif;
+
+	if (svif_type == BNXT_ULP_DRV_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
+	} else if (svif_type == BNXT_ULP_VF_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*svif = port_db->phy_port_list[phy_port_id].port_svif;
 	}
 
@@ -246,7 +299,7 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * spif_type [in] the spif type of the given ifindex.
  * spif [out] the spif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -254,21 +307,27 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t spif_type,
 		     uint16_t *spif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+
+	if (spif_type == BNXT_ULP_DRV_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
+	} else if (spif_type == BNXT_ULP_VF_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*spif = port_db->phy_port_list[phy_port_id].port_spif;
 	}
 
@@ -280,7 +339,7 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * parif_type [in] the parif type of the given ifindex.
  * parif [out] the parif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -288,21 +347,26 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t parif_type,
 		     uint16_t *parif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	if (parif_type == BNXT_ULP_DRV_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
+	} else if (parif_type == BNXT_ULP_VF_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*parif = port_db->phy_port_list[phy_port_id].port_parif;
 	}
 
@@ -321,16 +385,26 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 			     uint32_t ifindex,
+			     uint32_t vnic_type,
 			     uint16_t *vnic)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	} else {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	}
+
 	return 0;
 }
 
@@ -348,14 +422,16 @@ ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 		      uint32_t ifindex, uint16_t *vport)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+
+	func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+	phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 	*vport = port_db->phy_port_list[phy_port_id].port_vport;
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 87de3bc..b1419a3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -9,19 +9,54 @@
 #include "bnxt_ulp.h"
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
+#define BNXT_PORT_DB_MAX_FUNC			2048
 
-/* Structure for the Port database resource information. */
-struct ulp_interface_info {
-	enum bnxt_ulp_intf_type	type;
-	uint16_t		func_id;
+enum bnxt_ulp_svif_type {
+	BNXT_ULP_DRV_FUNC_SVIF = 0,
+	BNXT_ULP_VF_FUNC_SVIF,
+	BNXT_ULP_PHY_PORT_SVIF
+};
+
+enum bnxt_ulp_spif_type {
+	BNXT_ULP_DRV_FUNC_SPIF = 0,
+	BNXT_ULP_VF_FUNC_SPIF,
+	BNXT_ULP_PHY_PORT_SPIF
+};
+
+enum bnxt_ulp_parif_type {
+	BNXT_ULP_DRV_FUNC_PARIF = 0,
+	BNXT_ULP_VF_FUNC_PARIF,
+	BNXT_ULP_PHY_PORT_PARIF
+};
+
+enum bnxt_ulp_vnic_type {
+	BNXT_ULP_DRV_FUNC_VNIC = 0,
+	BNXT_ULP_VF_FUNC_VNIC
+};
+
+enum bnxt_ulp_fid_type {
+	BNXT_ULP_DRV_FUNC_FID,
+	BNXT_ULP_VF_FUNC_FID
+};
+
+struct ulp_func_if_info {
+	uint16_t		func_valid;
 	uint16_t		func_svif;
 	uint16_t		func_spif;
 	uint16_t		func_parif;
-	uint16_t		default_vnic;
+	uint16_t		func_vnic;
 	uint16_t		phy_port_id;
 };
 
+/* Structure for the Port database resource information. */
+struct ulp_interface_info {
+	enum bnxt_ulp_intf_type	type;
+	uint16_t		drv_func_id;
+	uint16_t		vf_func_id;
+};
+
 struct ulp_phy_port_info {
+	uint16_t	port_valid;
 	uint16_t	port_svif;
 	uint16_t	port_spif;
 	uint16_t	port_parif;
@@ -35,7 +70,8 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
-	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	*phy_port_list;
+	struct ulp_func_if_info		ulp_func_id_tbl[BNXT_PORT_DB_MAX_FUNC];
 };
 
 /*
@@ -46,7 +82,7 @@ struct bnxt_ulp_port_db {
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt);
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt);
 
 /*
  * Deinitialize the port database. Memory is deallocated in
@@ -94,7 +130,8 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex, uint16_t *func_id);
+			    uint32_t ifindex, uint32_t fid_type,
+			    uint16_t *func_id);
 
 /*
  * Api to get the svif for a given ulp ifindex.
@@ -150,7 +187,8 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex, uint16_t *vnic);
+			     uint32_t ifindex, uint32_t vnic_type,
+			     uint16_t *vnic);
 
 /*
  * Api to get the vport id for a given ulp ifindex.
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 8fffaec..073b353 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -166,6 +166,8 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 	uint16_t port_id = svif;
 	uint32_t dir = 0;
 	struct ulp_rte_hdr_field *hdr_field;
+	enum bnxt_ulp_svif_type svif_type;
+	enum bnxt_ulp_intf_type if_type;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -187,7 +189,18 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 				    "Invalid port id\n");
 			return BNXT_TF_RC_ERROR;
 		}
-		ulp_port_db_svif_get(params->ulp_ctx, ifindex, dir, &svif);
+
+		if (dir == ULP_DIR_INGRESS) {
+			svif_type = BNXT_ULP_PHY_PORT_SVIF;
+		} else {
+			if_type = bnxt_get_interface_type(port_id);
+			if (if_type == BNXT_ULP_INTF_TYPE_VF_REP)
+				svif_type = BNXT_ULP_VF_FUNC_SVIF;
+			else
+				svif_type = BNXT_ULP_DRV_FUNC_SVIF;
+		}
+		ulp_port_db_svif_get(params->ulp_ctx, ifindex, svif_type,
+				     &svif);
 		svif = rte_cpu_to_be_16(svif);
 	}
 	hdr_field = &params->hdr_field[BNXT_ULP_PROTO_HDR_FIELD_SVIF_IDX];
@@ -1256,7 +1269,7 @@ ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
 
 	/* copy the PF of the current device into VNIC Property */
 	svif = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
-	svif = bnxt_get_vnic_id(svif);
+	svif = bnxt_get_vnic_id(svif, BNXT_ULP_INTF_TYPE_INVALID);
 	svif = rte_cpu_to_be_32(svif);
 	memcpy(&params->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 	       &svif, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1280,7 +1293,8 @@ ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using VF conversion */
-		pid = bnxt_get_vnic_id(vf_action->id);
+		pid = bnxt_get_vnic_id(vf_action->id,
+				       BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1307,7 +1321,7 @@ ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using port conversion */
-		pid = bnxt_get_vnic_id(port_id->id);
+		pid = bnxt_get_vnic_id(port_id->id, BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 42/50] net/bnxt: fix for VF to VFR conduit
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (40 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 41/50] net/bnxt: enhancements for port db Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 43/50] net/bnxt: fix to parse representor along with other dev-args Somnath Kotur
                   ` (8 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

When VF-VFR conduits are created, a mark is added to the mark database.
mark_flag indicates whether the mark is valid and carries VFR
information (the VFR_ID bit in mark_flag). The Rx path checks for this
VFR_ID bit, but while adding the mark to the mark database the VFR_ID
bit was never set on the mark table entry, so the check could not match.
Set the VFR_ID bit on the entry when mark_flag indicates VFR
information.
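
A minimal standalone sketch of the flag propagation being fixed (the
flag names and values below are illustrative stand-ins for the
BNXT_ULP_MARK_* flags used by the mark manager):

#include <stdint.h>
#include <stdio.h>

#define MARK_VALID	0x1	/* entry holds a usable mark        */
#define MARK_VFR_ID	0x2	/* mark identifies a VF representor */

struct mark_entry { uint32_t mark_id; uint16_t flags; };

/* Before the fix only MARK_VALID was set, so the Rx-path VFR check failed. */
static void mark_add(struct mark_entry *e, uint32_t mark, uint16_t mark_flag)
{
	e->mark_id = mark;
	e->flags |= MARK_VALID;
	if (mark_flag & MARK_VFR_ID)
		e->flags |= MARK_VFR_ID;	/* propagate VFR info to the entry */
}

int main(void)
{
	struct mark_entry e = { 0, 0 };

	mark_add(&e, 42, MARK_VFR_ID);
	printf("rx path sees a VFR mark: %s\n",
	       (e.flags & MARK_VFR_ID) ? "yes" : "no");
	return 0;
}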

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index b3527ec..b2c8c34 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -18,6 +18,8 @@
 						BNXT_ULP_MARK_VALID)
 #define ULP_MARK_DB_ENTRY_IS_INVALID(mark_info) (!((mark_info)->flags &\
 						   BNXT_ULP_MARK_VALID))
+#define ULP_MARK_DB_ENTRY_SET_VFR_ID(mark_info) ((mark_info)->flags |=\
+						 BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_VFR_ID(mark_info) ((mark_info)->flags &\
 						BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_GLOBAL_HW_FID(mark_info) ((mark_info)->flags &\
@@ -263,6 +265,9 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
+
+		if (mark_flag & BNXT_ULP_MARK_VFR_ID)
+			ULP_MARK_DB_ENTRY_SET_VFR_ID(&mtbl->lfid_tbl[fid]);
 	}
 
 	return 0;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 43/50] net/bnxt: fix to parse representor along with other dev-args
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (41 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 42/50] net/bnxt: fix for VF to VFR conduit Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 44/50] net/bnxt: fill mapper parameters with default rules info Somnath Kotur
                   ` (7 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Representor dev-args need to be parsed during PCI probe because they
determine whether VF representor ports are subsequently probed.
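
For example, assuming a testpmd build from this tree, the representor
dev-arg can now be supplied together with the bnxt-specific dev-args at
probe time (the PCI address and representor range are placeholders):

  testpmd -w 0000:03:00.0,host-based-truflow=1,representor=[0-3] -- -i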

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 332644d..0b38c84 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -98,8 +98,10 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+#define BNXT_DEVARG_REPRESENTOR	"representor"
 
 static const char *const bnxt_dev_args[] = {
+	BNXT_DEVARG_REPRESENTOR,
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
 	BNXT_DEVARG_MAX_NUM_KFLOWS,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 44/50] net/bnxt: fill mapper parameters with default rules info
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (42 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 43/50] net/bnxt: fix to parse representor along with other dev-args Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 45/50] net/bnxt: add support for vf rep and stat templates Somnath Kotur
                   ` (6 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Default rules are needed for packets to be punted between the
following entities in the non-offloaded path:
1. Device port to DPDK application
2. DPDK application to device port
3. VF representor to VF
4. VF to VF representor

This patch fills in the relevant computed fields and act_prop fields
so that the flow mapper can create the necessary table entries in the
hardware to enable these default rules.
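
A self-contained sketch of the parameter hand-off (struct ulp_tlv_param
mirrors the layout added by this patch; the parameter type,
computed-field index and helper below are simplified stand-ins, not the
actual ulp_default_flow_create()/mapper code):

#include <stdint.h>
#include <stdio.h>

/* Mirrors the layout of struct ulp_tlv_param added by this patch. */
struct ulp_tlv_param {
	uint32_t type;
	uint32_t length;
	uint8_t  value[16];
};

#define PARAM_TYPE_DEV_PORT_ID	1	/* illustrative parameter type      */
#define PARAM_TYPE_LAST		0	/* assumed end-of-list marker       */
#define CF_IDX_DEV_PORT_ID	0	/* illustrative computed-field slot */

struct mapper_create_parms {
	uint32_t comp_fld[8];	/* computed fields handed to the flow mapper */
};

/* Walk the param list and stash each value into the computed fields. */
static int fill_mapper_params(const struct ulp_tlv_param *params,
			      struct mapper_create_parms *cparms)
{
	for (; params->type != PARAM_TYPE_LAST; params++) {
		if (params->type != PARAM_TYPE_DEV_PORT_ID)
			return -1;
		/* Only the first value byte is used in this toy example. */
		cparms->comp_fld[CF_IDX_DEV_PORT_ID] = params->value[0];
	}
	return 0;
}

int main(void)
{
	struct ulp_tlv_param list[2] = {
		{ PARAM_TYPE_DEV_PORT_ID, 1, { 3 } },
		{ PARAM_TYPE_LAST, 0, { 0 } },
	};
	struct mapper_create_parms cparms = { { 0 } };

	fill_mapper_params(list, &cparms);
	printf("dev port id computed field = %u\n",
	       cparms.comp_fld[CF_IDX_DEV_PORT_ID]);
	return 0;
}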

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 config/common_base                             |   4 +-
 drivers/net/bnxt/bnxt_ethdev.c                 |   6 +-
 drivers/net/bnxt/meson.build                   |   1 +
 drivers/net/bnxt/tf_ulp/Makefile               |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h             |  24 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c        |  30 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c        | 385 +++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c           |  10 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h           |   3 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h |   5 +
 10 files changed, 447 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c

diff --git a/config/common_base b/config/common_base
index ccd03aa..30fb3a4 100644
--- a/config/common_base
+++ b/config/common_base
@@ -104,7 +104,7 @@ CONFIG_RTE_LOG_HISTORY=256
 CONFIG_RTE_BACKTRACE=y
 CONFIG_RTE_LIBEAL_USE_HPET=n
 CONFIG_RTE_EAL_ALWAYS_PANIC_ON_ERROR=n
-CONFIG_RTE_EAL_IGB_UIO=n
+CONFIG_RTE_EAL_IGB_UIO=y
 CONFIG_RTE_EAL_VFIO=n
 CONFIG_RTE_MAX_VFIO_GROUPS=64
 CONFIG_RTE_MAX_VFIO_CONTAINERS=64
@@ -1137,3 +1137,5 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
 # Compile the eventdev application
 #
 CONFIG_RTE_APP_EVENTDEV=y
+
+CONFIG_RTE_LIBRTE_BNXT_TRUFLOW_DEBUG=y
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 0b38c84..de8e11a 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1275,9 +1275,6 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
-	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_deinit(bp);
-
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
@@ -1333,6 +1330,9 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
+	if (BNXT_TRUFLOW_EN(bp))
+		bnxt_ulp_deinit(bp);
+
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 8f6ed41..2939857 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -61,6 +61,7 @@ sources = files('bnxt_cpr.c',
 	'tf_ulp/ulp_rte_parser.c',
 	'tf_ulp/bnxt_ulp_flow.c',
 	'tf_ulp/ulp_port_db.c',
+	'tf_ulp/ulp_def_rules.c',
 
 	'rte_pmd_bnxt.c')
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 57341f8..3f1b43b 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -16,3 +16,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index eecc09c..3563f63 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -12,6 +12,8 @@
 
 #include "rte_ethdev.h"
 
+#include "ulp_template_db_enum.h"
+
 struct bnxt_ulp_data {
 	uint32_t			tbl_scope_id;
 	struct bnxt_ulp_mark_tbl	*mark_tbl;
@@ -49,6 +51,12 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+struct ulp_tlv_param {
+	enum bnxt_ulp_df_param_type type;
+	uint32_t length;
+	uint8_t value[16];
+};
+
 /*
  * Allow the deletion of context only for the bnxt device that
  * created the session
@@ -127,4 +135,20 @@ bnxt_ulp_cntxt_ptr2_port_db_set(struct bnxt_ulp_context	*ulp_ctx,
 struct bnxt_ulp_port_db *
 bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx);
 
+/* Function to create default flows. */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id);
+
+/* Function to destroy default flows. */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev,
+			 uint32_t flow_id);
+
+int
+bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		      struct rte_flow_error *error);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 138b0b7..7ef306e 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -207,7 +207,7 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev,
 }
 
 /* Function to destroy the rte flow. */
-static int
+int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 		      struct rte_flow *flow,
 		      struct rte_flow_error *error)
@@ -220,9 +220,10 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
@@ -233,17 +234,22 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
 		BNXT_TF_DBG(ERR, "Incorrect device params\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
-	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id);
-	if (ret)
-		rte_flow_error_set(error, -ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret) {
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+		if (error)
+			rte_flow_error_set(error, -ret,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
+	}
 
 	return ret;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
new file mode 100644
index 0000000..46b558f
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt_tf_common.h"
+#include "ulp_template_struct.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_db_field.h"
+#include "ulp_utils.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+
+struct bnxt_ulp_def_param_handler {
+	int32_t (*vfr_func)(struct bnxt_ulp_context *ulp_ctx,
+			    struct ulp_tlv_param *param,
+			    struct bnxt_ulp_mapper_create_parms *mapper_params);
+};
+
+static int32_t
+ulp_set_svif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t svif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t svif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_svif_get(ulp_ctx, ifindex, svif_type, &svif);
+	if (rc)
+		return rc;
+
+	if (svif_type == BNXT_ULP_PHY_PORT_SVIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SVIF;
+	else if (svif_type == BNXT_ULP_DRV_FUNC_SVIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SVIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SVIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, svif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_spif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t spif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t spif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_spif_get(ulp_ctx, ifindex, spif_type, &spif);
+	if (rc)
+		return rc;
+
+	if (spif_type == BNXT_ULP_PHY_PORT_SPIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SPIF;
+	else if (spif_type == BNXT_ULP_DRV_FUNC_SPIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SPIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SPIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, spif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_parif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t  ifindex, uint8_t parif_type,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t parif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_parif_get(ulp_ctx, ifindex, parif_type, &parif);
+	if (rc)
+		return rc;
+
+	if (parif_type == BNXT_ULP_PHY_PORT_PARIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_PARIF;
+	else if (parif_type == BNXT_ULP_DRV_FUNC_PARIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_PARIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_PARIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, parif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vport_in_comp_fld(struct bnxt_ulp_context *ulp_ctx, uint32_t ifindex,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vport;
+	int rc;
+
+	rc = ulp_port_db_vport_get(ulp_ctx, ifindex, &vport);
+	if (rc)
+		return rc;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_PHY_PORT_VPORT,
+			    vport);
+	return 0;
+}
+
+static int32_t
+ulp_set_vnic_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t vnic_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vnic;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_default_vnic_get(ulp_ctx, ifindex, vnic_type, &vnic);
+	if (rc)
+		return rc;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_VNIC;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_VNIC;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, vnic);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vlan_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	struct ulp_rte_act_prop *act_prop = mapper_params->act_prop;
+
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_SET_VLAN_VID)) {
+		BNXT_TF_DBG(ERR,
+			    "VLAN already set, multiple VLANs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	port_id = rte_cpu_to_be_16(port_id);
+
+	ULP_BITMAP_SET(mapper_params->act->bits,
+		       BNXT_ULP_ACTION_BIT_SET_VLAN_VID);
+
+	memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG],
+	       &port_id, sizeof(port_id));
+
+	return 0;
+}
+
+static int32_t
+ulp_set_mark_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_MARK)) {
+		BNXT_TF_DBG(ERR,
+			    "MARK already set, multiple MARKs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_DEV_PORT_ID,
+			    port_id);
+
+	return 0;
+}
+
+static int32_t
+ulp_df_dev_port_handler(struct bnxt_ulp_context *ulp_ctx,
+			struct ulp_tlv_param *param,
+			struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t port_id;
+	uint32_t ifindex;
+	int rc;
+
+	port_id = param->value[0] | param->value[1];
+
+	rc = ulp_port_db_dev_port_to_ulp_index(ulp_ctx, port_id, &ifindex);
+	if (rc) {
+		BNXT_TF_DBG(ERR,
+			    "Invalid port id\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* Set port SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_PHY_PORT_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_DRV_FUNC_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_PARIF,
+				       mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set uplink VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, true, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, false, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VPORT */
+	rc = ulp_set_vport_in_comp_fld(ulp_ctx, ifindex, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VLAN */
+	rc = ulp_set_vlan_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set MARK */
+	rc = ulp_set_mark_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+struct bnxt_ulp_def_param_handler ulp_def_handler_tbl[] = {
+	[BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID] = {
+			.vfr_func = ulp_df_dev_port_handler }
+};
+
+/*
+ * Function to create default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * param_list [in] Ptr to a list of parameters (Currently, only DPDK port_id).
+ * ulp_class_tid [in] Class template ID number.
+ * flow_id [out] Ptr to flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id)
+{
+	struct ulp_rte_hdr_field	hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	uint32_t			comp_fld[BNXT_ULP_CF_IDX_LAST];
+	struct bnxt_ulp_mapper_create_parms mapper_params = { 0 };
+	struct ulp_rte_act_prop		act_prop;
+	struct ulp_rte_act_bitmap	act = { 0 };
+	struct bnxt_ulp_context		*ulp_ctx;
+	uint32_t type;
+	int rc;
+
+	memset(&mapper_params, 0, sizeof(mapper_params));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(comp_fld, 0, sizeof(comp_fld));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	mapper_params.hdr_field = hdr_field;
+	mapper_params.act = &act;
+	mapper_params.act_prop = &act_prop;
+	mapper_params.comp_fld = comp_fld;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized. "
+				 "Failed to create default flow.\n");
+		return -EINVAL;
+	}
+
+	type = param_list->type;
+	while (type != BNXT_ULP_DF_PARAM_TYPE_LAST) {
+		if (ulp_def_handler_tbl[type].vfr_func) {
+			rc = ulp_def_handler_tbl[type].vfr_func(ulp_ctx,
+								param_list,
+								&mapper_params);
+			if (rc) {
+				BNXT_TF_DBG(ERR,
+					    "Failed to create default flow.\n");
+				return rc;
+			}
+		}
+
+		param_list++;
+		type = param_list->type;
+	}
+
+	mapper_params.class_tid = ulp_class_tid;
+
+	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params, flow_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create default flow.\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/*
+ * Function to destroy default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * flow_id [in] Flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev, uint32_t flow_id)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int rc;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		return -EINVAL;
+	}
+
+	rc = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				     BNXT_ULP_DEFAULT_FLOW_TABLE);
+	if (rc)
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+
+	return rc;
+}
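
For reference, a minimal sketch of how a caller inside the bnxt PMD might drive the default-flow API added above; it is not part of the patch. The helper name and the big-endian packing of the DPDK port id into ulp_tlv_param.value are assumptions for illustration; the real callers are introduced later in this series.

/* Illustrative sketch only -- not part of this patch. Assumes it is built
 * inside the bnxt PMD so that bnxt_ulp.h (and, through it, rte_ethdev.h and
 * ulp_template_db_enum.h) is available. Helper name and port-id byte
 * packing are assumptions.
 */
#include <stdint.h>

#include "bnxt_ulp.h"

static int32_t
example_create_port_default_flow(struct rte_eth_dev *eth_dev,
				 uint16_t dpdk_port_id,
				 uint32_t ulp_class_tid,
				 uint32_t *flow_id)
{
	struct ulp_tlv_param param_list[] = {
		{
			/* Single DEV_PORT_ID parameter, value packed as two
			 * big-endian bytes.
			 */
			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
			.length = 2,
			.value = { (dpdk_port_id >> 8) & 0xff,
				   dpdk_port_id & 0xff },
		},
		{
			/* Terminator entry stops the handler walk inside
			 * ulp_default_flow_create().
			 */
			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
		},
	};

	return ulp_default_flow_create(eth_dev, param_list, ulp_class_tid,
				       flow_id);
}

Teardown is symmetric: the caller keeps the returned flow_id and later passes it to ulp_default_flow_destroy(), which releases the flow's resources against the default flow table rather than the regular one.
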
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index f293a90..b7b528b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2275,16 +2275,15 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 }
 
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type)
 {
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
 		return -EINVAL;
 	}
 
-	return ulp_mapper_resources_free(ulp_ctx,
-					 fid,
-					 BNXT_ULP_REGULAR_FLOW_TABLE);
+	return ulp_mapper_resources_free(ulp_ctx, fid, flow_tbl_type);
 }
 
 /* Function to handle the default global templates that are allocated during
@@ -2487,7 +2486,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 flow_error:
 	/* Free all resources that were allocated during flow creation */
-	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
+	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
 	if (trc)
 		BNXT_TF_DBG(ERR, "Failed to free all resources rc=%d\n", trc);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 5f89a0e..f1399dc 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -108,7 +108,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context	*ulp_ctx,
 
 /* Function that frees all resources associated with the flow. */
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type);
 
 /*
  * Function that frees all resources and can be called on default or regular
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 27628a5..2346797 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -145,6 +145,11 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
+enum bnxt_ulp_df_param_type {
+	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
+	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
+};
+
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 45/50] net/bnxt: add support for vf rep and stat templates
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (43 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 44/50] net/bnxt: fill mapper parameters with default rules info Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 46/50] net/bnxt: create default flow rules for the VF-rep conduit Somnath Kotur
                   ` (5 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Support for VF representors and flow counters is added to the
ULP templates.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            |   21 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h            |    2 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c   |  424 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_class.c | 5198 ++++++++++++++++++-----
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h  |  409 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_field.h |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c   |   88 +-
 7 files changed, 4948 insertions(+), 1657 deletions(-)
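
The regenerated template database below is easier to follow with its indexing scheme in mind: the computed action hash (act_hid) selects an entry of ulp_act_sig_tbl, which yields a 1-based index into ulp_act_match_list; the matched entry's act_tid, combined with the device id, indexes ulp_act_tmpl_list, whose start_tbl_idx/num_tbls bound a slice of ulp_act_tbl_list. The sketch below is not part of the patch; the helper name is hypothetical and the real resolution is performed by the ULP matcher, so only the indexing relationships visible in the tables themselves are relied on.

/* Illustrative sketch only -- not part of this patch. The helper name is
 * hypothetical; it only mirrors the indexing scheme visible in the
 * generated tables. Header names are assumed from the rest of the series.
 */
#include <stdint.h>

#include "ulp_template_db_enum.h"
#include "ulp_template_struct.h"

static struct bnxt_ulp_mapper_tbl_list_info *
example_resolve_act_template(uint32_t act_hid, uint64_t act_bits,
			     uint32_t device_id)
{
	struct bnxt_ulp_act_match_info *match;
	uint16_t match_idx;

	/* A zero entry in the signature table means "no template for
	 * this action hash".
	 */
	match_idx = ulp_act_sig_tbl[act_hid];
	if (!match_idx)
		return NULL;

	/* Guard against hash collisions by comparing the full action
	 * signature bitmap, direction bit included.
	 */
	match = &ulp_act_match_list[match_idx];
	if (match->act_sig.bits != act_bits)
		return NULL;

	/* Template lists are indexed by (template id, device id). */
	return &ulp_act_tmpl_list[(match->act_tid <<
				   BNXT_ULP_LOG2_MAX_NUM_DEV) | device_id];
}
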

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index b7b528b..9bf855f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -22,7 +22,7 @@ ulp_mapper_glb_resource_info_list_get(uint32_t *num_entries)
 {
 	if (!num_entries)
 		return NULL;
-	*num_entries = BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+	*num_entries = BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 	return ulp_glb_resource_tbl;
 }
 
@@ -119,11 +119,6 @@ ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_identifier(tfp, &fparms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
-		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
-#endif
 	return rc;
 }
 
@@ -182,11 +177,6 @@ ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_tbl_entry(tfp, &free_parms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
-		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
-#endif
 	return rc;
 }
 
@@ -1442,9 +1432,6 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			return rc;
 		}
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	ulp_mapper_result_dump("EEM Result", tbl, &data);
-#endif
 
 	/* do the transpose for the internal EM keys */
 	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
@@ -1595,10 +1582,6 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	/* if encap bit swap is enabled perform the bit swap */
 	if (parms->device_params->encap_byte_swap && encap_flds) {
 		ulp_blob_perform_encap_swap(&data);
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
-		ulp_mapper_blob_dump(&data);
-#endif
 	}
 
 	/*
@@ -2256,7 +2239,7 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Iterate the global resources and process each one */
 	for (dir = TF_DIR_RX; dir < TF_DIR_MAX; dir++) {
-		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 		      idx++) {
 			ent = &mapper_data->glb_res_tbl[dir][idx];
 			if (ent->resource_func ==
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index f1399dc..54b9507 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -46,7 +46,7 @@ struct bnxt_ulp_mapper_glb_resource_entry {
 
 struct bnxt_ulp_mapper_data {
 	struct bnxt_ulp_mapper_glb_resource_entry
-		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ];
+		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ];
 	struct bnxt_ulp_mapper_cache_entry
 		*cache_tbl[BNXT_ULP_CACHE_TBL_MAX_SZ];
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 9b14fa0..3d65073 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -9,70 +9,301 @@
 #include "ulp_rte_parser.h"
 
 uint16_t ulp_act_sig_tbl[BNXT_ULP_ACT_SIG_TBL_MAX_SZ] = {
-	[BNXT_ULP_ACT_HID_00a1] = 1,
-	[BNXT_ULP_ACT_HID_0029] = 2,
-	[BNXT_ULP_ACT_HID_0040] = 3
+	[BNXT_ULP_ACT_HID_0002] = 1,
+	[BNXT_ULP_ACT_HID_0022] = 2,
+	[BNXT_ULP_ACT_HID_0026] = 3,
+	[BNXT_ULP_ACT_HID_0006] = 4,
+	[BNXT_ULP_ACT_HID_0009] = 5,
+	[BNXT_ULP_ACT_HID_0029] = 6,
+	[BNXT_ULP_ACT_HID_002d] = 7,
+	[BNXT_ULP_ACT_HID_004b] = 8,
+	[BNXT_ULP_ACT_HID_004a] = 9,
+	[BNXT_ULP_ACT_HID_004f] = 10,
+	[BNXT_ULP_ACT_HID_004e] = 11,
+	[BNXT_ULP_ACT_HID_006c] = 12,
+	[BNXT_ULP_ACT_HID_0070] = 13,
+	[BNXT_ULP_ACT_HID_0021] = 14,
+	[BNXT_ULP_ACT_HID_0025] = 15,
+	[BNXT_ULP_ACT_HID_0043] = 16,
+	[BNXT_ULP_ACT_HID_0042] = 17,
+	[BNXT_ULP_ACT_HID_0047] = 18,
+	[BNXT_ULP_ACT_HID_0046] = 19,
+	[BNXT_ULP_ACT_HID_0064] = 20,
+	[BNXT_ULP_ACT_HID_0068] = 21,
+	[BNXT_ULP_ACT_HID_00a1] = 22,
+	[BNXT_ULP_ACT_HID_00df] = 23
 };
 
 struct bnxt_ulp_act_match_info ulp_act_match_list[] = {
 	[1] = {
-	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_hid = BNXT_ULP_ACT_HID_0002,
 	.act_sig = { .bits =
-		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
-		BNXT_ULP_ACTION_BIT_MARK |
-		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DROP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
-	.act_tid = 0
+	.act_tid = 1
 	},
 	[2] = {
+	.act_hid = BNXT_ULP_ACT_HID_0022,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[3] = {
+	.act_hid = BNXT_ULP_ACT_HID_0026,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[4] = {
+	.act_hid = BNXT_ULP_ACT_HID_0006,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[5] = {
+	.act_hid = BNXT_ULP_ACT_HID_0009,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[6] = {
 	.act_hid = BNXT_ULP_ACT_HID_0029,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
 		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[7] = {
+	.act_hid = BNXT_ULP_ACT_HID_002d,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
 		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.act_tid = 1
 	},
-	[3] = {
-	.act_hid = BNXT_ULP_ACT_HID_0040,
+	[8] = {
+	.act_hid = BNXT_ULP_ACT_HID_004b,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[9] = {
+	.act_hid = BNXT_ULP_ACT_HID_004a,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[10] = {
+	.act_hid = BNXT_ULP_ACT_HID_004f,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[11] = {
+	.act_hid = BNXT_ULP_ACT_HID_004e,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[12] = {
+	.act_hid = BNXT_ULP_ACT_HID_006c,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[13] = {
+	.act_hid = BNXT_ULP_ACT_HID_0070,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[14] = {
+	.act_hid = BNXT_ULP_ACT_HID_0021,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[15] = {
+	.act_hid = BNXT_ULP_ACT_HID_0025,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[16] = {
+	.act_hid = BNXT_ULP_ACT_HID_0043,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[17] = {
+	.act_hid = BNXT_ULP_ACT_HID_0042,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[18] = {
+	.act_hid = BNXT_ULP_ACT_HID_0047,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[19] = {
+	.act_hid = BNXT_ULP_ACT_HID_0046,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[20] = {
+	.act_hid = BNXT_ULP_ACT_HID_0064,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[21] = {
+	.act_hid = BNXT_ULP_ACT_HID_0068,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[22] = {
+	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 2
+	},
+	[23] = {
+	.act_hid = BNXT_ULP_ACT_HID_00df,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_VXLAN_ENCAP |
 		BNXT_ULP_ACTION_BIT_VPORT |
 		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
-	.act_tid = 2
+	.act_tid = 3
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 0
+	.num_tbls = 2,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 1,
-	.start_tbl_idx = 1
+	.start_tbl_idx = 2,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 2
+	.num_tbls = 3,
+	.start_tbl_idx = 3,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_STATS_64,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_ACTION_BIT,
+	.cond_operand = BNXT_ULP_ACTION_BIT_COUNT,
+	.direction = TF_DIR_RX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 0,
+	.result_bit_size = 64,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 0,
+	.result_start_idx = 1,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -87,7 +318,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 26,
+	.result_start_idx = 27,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -97,12 +328,46 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 53,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 56,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_TX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 52,
+	.result_start_idx = 59,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
@@ -114,10 +379,19 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
-	.field_bit_size = 14,
+	.field_bit_size = 64,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
 	.field_bit_size = 1,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
@@ -131,7 +405,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_COUNT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -187,7 +471,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -195,11 +489,7 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.result_operand = {
-		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
@@ -212,7 +502,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -224,7 +524,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DROP & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 14,
@@ -308,7 +618,11 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 12,
@@ -336,6 +650,50 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 128,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 14,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index e51338d..fa7f793 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -10,8 +10,8 @@
 
 uint16_t ulp_class_sig_tbl[BNXT_ULP_CLASS_SIG_TBL_MAX_SZ] = {
 	[BNXT_ULP_CLASS_HID_0080] = 1,
-	[BNXT_ULP_CLASS_HID_0000] = 2,
-	[BNXT_ULP_CLASS_HID_0087] = 3
+	[BNXT_ULP_CLASS_HID_0087] = 2,
+	[BNXT_ULP_CLASS_HID_0000] = 3
 };
 
 struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
@@ -23,1871 +23,4722 @@ struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
 		BNXT_ULP_HDR_BIT_O_UDP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 0,
+	.class_tid = 8,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[2] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0000,
+	.class_hid = BNXT_ULP_CLASS_HID_0087,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
+		BNXT_ULP_HDR_BIT_T_VXLAN |
+		BNXT_ULP_HDR_BIT_I_ETH |
+		BNXT_ULP_HDR_BIT_I_IPV4 |
+		BNXT_ULP_HDR_BIT_I_UDP |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT |
+		BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 1,
+	.class_tid = 9,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[3] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0087,
+	.class_hid = BNXT_ULP_CLASS_HID_0000,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_HDR_BIT_T_VXLAN |
-		BNXT_ULP_HDR_BIT_I_ETH |
-		BNXT_ULP_HDR_BIT_I_IPV4 |
-		BNXT_ULP_HDR_BIT_I_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
 	.field_sig = { .bits =
-		BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT |
-		BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 2,
+	.class_tid = 10,
 	.act_vnic = 0,
 	.wc_pri = 0
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 4,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 2,
+	.start_tbl_idx = 4,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 6,
+	.start_tbl_idx = 6,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((4 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 0
+	.start_tbl_idx = 12,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((5 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 17,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((6 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 20,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((7 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 1,
+	.start_tbl_idx = 23,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((8 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 5
+	.start_tbl_idx = 24,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((9 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 5,
+	.start_tbl_idx = 29,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
+	},
+	[((10 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 10
+	.start_tbl_idx = 34,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 0,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
 	.result_start_idx = 0,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 0,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 2,
+	.key_start_idx = 0,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 1,
+	.result_start_idx = 26,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 15,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 14,
-	.result_bit_size = 10,
+	.result_start_idx = 39,
+	.result_bit_size = 32,
 	.result_num_fields = 1,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
-	.ident_nums = 1,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 40,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 41,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 18,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 15,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 13,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 67,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 60,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 23,
-	.result_bit_size = 64,
-	.result_num_fields = 9,
-	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 80,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 71,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 32,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 92,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 73,
+	.key_start_idx = 26,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 33,
+	.result_start_idx = 118,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 131,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 86,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 46,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.key_start_idx = 39,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 157,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
-	.ident_nums = 1,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_TX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 89,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 47,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 52,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 170,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 131,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 55,
+	.key_start_idx = 65,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 183,
 	.result_bit_size = 64,
-	.result_num_fields = 9,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 196,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 197,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 142,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 64,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 198,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
-	.ident_nums = 1,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_VFR_FLAG,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 144,
+	.key_start_idx = 78,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 65,
+	.result_start_idx = 224,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 157,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 78,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 237,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 249,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 160,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 79,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 91,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 275,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 202,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 87,
-	.result_bit_size = 64,
+	.result_start_idx = 288,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 104,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 314,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 117,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 327,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 340,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_GLOBAL,
+	.index_operand = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 130,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 366,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 132,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 367,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 145,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 380,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 148,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 381,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 190,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 389,
+	.result_bit_size = 64,
 	.result_num_fields = 9,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
-	}
-};
-
-struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 201,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 398,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 203,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 399,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 216,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 412,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 219,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 413,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 261,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 421,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 272,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 430,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 274,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 431,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 287,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 444,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 290,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 445,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 332,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 453,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	}
+};
+
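+/*
+ * Key field descriptors referenced by the class table entries above via
+ * key_start_idx/key_num_fields. Each entry gives the field width in bits
+ * and the opcode/operand pair used to build the mask and spec portions of
+ * that key field (set to zero, to a constant, to a computed field, to a
+ * header field, or to a regfile index; index operands are encoded
+ * big-endian in the first two bytes of the 16-byte operand array).
+ */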
+struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
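+/*
+ * ulp_class_result_field_list[]: per-field build instructions for class
+ * table results. Each entry gives the field width in bits and the opcode
+ * used to populate it (set to zero, a constant, a computed field, or a
+ * regfile/global regfile reference via the operand bytes).
+ */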
+struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_PARIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_PARIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_DST_PORT & 0xff,
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT & 0xff,
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x02}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_DST_PORT & 0xff,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_DST_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
-	}
-};
-
-struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
 	.field_bit_size = 10,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
@@ -2309,7 +5160,12 @@ struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 16,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 2346797..6955464 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -6,7 +6,7 @@
 #ifndef ULP_TEMPLATE_DB_H_
 #define ULP_TEMPLATE_DB_H_
 
-#define BNXT_ULP_REGFILE_MAX_SZ 16
+#define BNXT_ULP_REGFILE_MAX_SZ 17
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_CACHE_TBL_MAX_SZ 4
@@ -18,15 +18,15 @@
 #define BNXT_ULP_CLASS_HID_SHFTL 23
 #define BNXT_ULP_CLASS_HID_MASK 255
 #define BNXT_ULP_ACT_SIG_TBL_MAX_SZ 256
-#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 4
+#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 24
 #define BNXT_ULP_ACT_HID_LOW_PRIME 7919
 #define BNXT_ULP_ACT_HID_HIGH_PRIME 7919
-#define BNXT_ULP_ACT_HID_SHFTR 0
+#define BNXT_ULP_ACT_HID_SHFTR 23
 #define BNXT_ULP_ACT_HID_SHFTL 23
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
-#define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
-#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
+#define BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ 5
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -242,7 +242,8 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
 	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
 	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
-	BNXT_ULP_REGFILE_INDEX_LAST = 16
+	BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR = 16,
+	BNXT_ULP_REGFILE_INDEX_LAST = 17
 };
 
 enum bnxt_ulp_search_before_alloc {
@@ -252,18 +253,18 @@ enum bnxt_ulp_search_before_alloc {
 };
 
 enum bnxt_ulp_fdb_resource_flags {
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01,
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00,
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01
 };
 
 enum bnxt_ulp_fdb_type {
-	BNXT_ULP_FDB_TYPE_DEFAULT = 1,
-	BNXT_ULP_FDB_TYPE_REGULAR = 0
+	BNXT_ULP_FDB_TYPE_REGULAR = 0,
+	BNXT_ULP_FDB_TYPE_DEFAULT = 1
 };
 
 enum bnxt_ulp_flow_dir_bitmask {
-	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000,
-	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000
+	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000,
+	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000
 };
 
 enum bnxt_ulp_match_type_bitmask {
@@ -285,190 +286,66 @@ enum bnxt_ulp_resource_func {
 };
 
 enum bnxt_ulp_resource_sub_type {
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1
 };
 
 enum bnxt_ulp_sym {
-	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
-	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
-	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
-	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
-	BNXT_ULP_SYM_ECV_VALID_NO = 0,
-	BNXT_ULP_SYM_ECV_VALID_YES = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
-	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
-	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
-	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
-	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
-	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
-	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
-	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
-	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
-	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
-	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
-	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
-	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
-	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
-	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
-	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
-	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
-	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
-	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
-	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
-	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_PKT_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_PKT_TYPE_L2 = 0,
-	BNXT_ULP_SYM_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_POP_VLAN_YES = 1,
 	BNXT_ULP_SYM_RECYCLE_CNT_IGNORE = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
 	BNXT_ULP_SYM_RECYCLE_CNT_ONE = 1,
-	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
 	BNXT_ULP_SYM_RECYCLE_CNT_TWO = 2,
-	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
+	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
+	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
 	BNXT_ULP_SYM_RESERVED_IGNORE = 0,
-	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
-	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
+	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
 	BNXT_ULP_SYM_TL2_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_NO = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV4 = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_NO = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_YES = 1,
@@ -478,40 +355,164 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_TL4_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_TCP = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GENEVE = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GRE = 3,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV4 = 4,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_PPPOE = 6,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR1 = 8,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR2 = 9,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
-	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
+	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
+	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
+	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
+	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
+	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
+	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
+	BNXT_ULP_SYM_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
+	BNXT_ULP_SYM_ECV_VALID_NO = 0,
+	BNXT_ULP_SYM_ECV_VALID_YES = 1,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
+	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
+	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
 	BNXT_ULP_SYM_WH_PLUS_INT_ACT_REC = 1,
-	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
-	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
 	BNXT_ULP_SYM_WH_PLUS_UC_ACT_REC = 0,
+	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
+	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
+	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
+	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
+	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
+	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
+	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
+	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
+	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_YES = 1
 };
 
 enum bnxt_ulp_wh_plus {
-	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448
 };
 
 enum bnxt_ulp_act_prop_sz {
@@ -604,18 +605,44 @@ enum bnxt_ulp_act_prop_idx {
 
 enum bnxt_ulp_class_hid {
 	BNXT_ULP_CLASS_HID_0080 = 0x0080,
-	BNXT_ULP_CLASS_HID_0000 = 0x0000,
-	BNXT_ULP_CLASS_HID_0087 = 0x0087
+	BNXT_ULP_CLASS_HID_0087 = 0x0087,
+	BNXT_ULP_CLASS_HID_0000 = 0x0000
 };
 
 enum bnxt_ulp_act_hid {
-	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_0002 = 0x0002,
+	BNXT_ULP_ACT_HID_0022 = 0x0022,
+	BNXT_ULP_ACT_HID_0026 = 0x0026,
+	BNXT_ULP_ACT_HID_0006 = 0x0006,
+	BNXT_ULP_ACT_HID_0009 = 0x0009,
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
-	BNXT_ULP_ACT_HID_0040 = 0x0040
+	BNXT_ULP_ACT_HID_002d = 0x002d,
+	BNXT_ULP_ACT_HID_004b = 0x004b,
+	BNXT_ULP_ACT_HID_004a = 0x004a,
+	BNXT_ULP_ACT_HID_004f = 0x004f,
+	BNXT_ULP_ACT_HID_004e = 0x004e,
+	BNXT_ULP_ACT_HID_006c = 0x006c,
+	BNXT_ULP_ACT_HID_0070 = 0x0070,
+	BNXT_ULP_ACT_HID_0021 = 0x0021,
+	BNXT_ULP_ACT_HID_0025 = 0x0025,
+	BNXT_ULP_ACT_HID_0043 = 0x0043,
+	BNXT_ULP_ACT_HID_0042 = 0x0042,
+	BNXT_ULP_ACT_HID_0047 = 0x0047,
+	BNXT_ULP_ACT_HID_0046 = 0x0046,
+	BNXT_ULP_ACT_HID_0064 = 0x0064,
+	BNXT_ULP_ACT_HID_0068 = 0x0068,
+	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_00df = 0x00df
 };
 
 enum bnxt_ulp_df_tpl {
 	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
-	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2,
+	BNXT_ULP_DF_TPL_VFREP_TO_VF = 3,
+	BNXT_ULP_DF_TPL_VF_TO_VFREP = 4,
+	BNXT_ULP_DF_TPL_DRV_FUNC_SVIF_PUSH_VLAN = 5,
+	BNXT_ULP_DF_TPL_PORT_SVIF_VID_VNIC_POP_VLAN = 6,
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC = 7
 };
+
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
index 84b9523..7695420 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
@@ -6,220 +6,275 @@
 #ifndef ULP_HDR_FIELD_ENUMS_H_
 #define ULP_HDR_FIELD_ENUMS_H_
 
-enum bnxt_ulp_hf0 {
-	BNXT_ULP_HF0_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF0_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF0_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF0_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF0_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF0_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF0_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF0_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF0_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF0_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF0_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF0_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF0_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF0_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF0_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF0_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF0_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF0_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF0_IDX_O_UDP_CSUM              = 23
-};
-
 enum bnxt_ulp_hf1 {
-	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF1_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF1_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF1_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF1_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF1_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF1_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF1_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF1_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF1_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF1_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF1_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF1_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF1_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF1_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF1_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF1_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF1_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF1_IDX_O_UDP_CSUM              = 23
+	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0
 };
 
 enum bnxt_ulp_hf2 {
-	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF2_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF2_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF2_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF2_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF2_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF2_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF2_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF2_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF2_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF2_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF2_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF2_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF2_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF2_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF2_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF2_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF2_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF2_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF2_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF2_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF2_IDX_O_UDP_CSUM              = 23,
-	BNXT_ULP_HF2_IDX_T_VXLAN_FLAGS           = 24,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD0           = 25,
-	BNXT_ULP_HF2_IDX_T_VXLAN_VNI             = 26,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD1           = 27,
-	BNXT_ULP_HF2_IDX_I_ETH_DMAC              = 28,
-	BNXT_ULP_HF2_IDX_I_ETH_SMAC              = 29,
-	BNXT_ULP_HF2_IDX_I_ETH_TYPE              = 30,
-	BNXT_ULP_HF2_IDX_IO_VLAN_CFI_PRI         = 31,
-	BNXT_ULP_HF2_IDX_IO_VLAN_VID             = 32,
-	BNXT_ULP_HF2_IDX_IO_VLAN_TYPE            = 33,
-	BNXT_ULP_HF2_IDX_II_VLAN_CFI_PRI         = 34,
-	BNXT_ULP_HF2_IDX_II_VLAN_VID             = 35,
-	BNXT_ULP_HF2_IDX_II_VLAN_TYPE            = 36,
-	BNXT_ULP_HF2_IDX_I_IPV4_VER              = 37,
-	BNXT_ULP_HF2_IDX_I_IPV4_TOS              = 38,
-	BNXT_ULP_HF2_IDX_I_IPV4_LEN              = 39,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_ID          = 40,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_OFF         = 41,
-	BNXT_ULP_HF2_IDX_I_IPV4_TTL              = 42,
-	BNXT_ULP_HF2_IDX_I_IPV4_NEXT_PID         = 43,
-	BNXT_ULP_HF2_IDX_I_IPV4_CSUM             = 44,
-	BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR         = 45,
-	BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR         = 46,
-	BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT          = 47,
-	BNXT_ULP_HF2_IDX_I_UDP_DST_PORT          = 48,
-	BNXT_ULP_HF2_IDX_I_UDP_LENGTH            = 49,
-	BNXT_ULP_HF2_IDX_I_UDP_CSUM              = 50
-};
-
-enum bnxt_ulp_hf_bitmask0 {
-	BNXT_ULP_HF0_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf3 {
+	BNXT_ULP_HF3_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf4 {
+	BNXT_ULP_HF4_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf5 {
+	BNXT_ULP_HF5_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf6 {
+	BNXT_ULP_HF6_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf7 {
+	BNXT_ULP_HF7_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf8 {
+	BNXT_ULP_HF8_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF8_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF8_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF8_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF8_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF8_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF8_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF8_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF8_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF8_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF8_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF8_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF8_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF8_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF8_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF8_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF8_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF8_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF8_IDX_O_UDP_CSUM              = 23
+};
+
+enum bnxt_ulp_hf9 {
+	BNXT_ULP_HF9_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF9_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF9_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF9_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF9_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF9_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF9_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF9_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF9_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF9_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF9_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF9_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF9_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF9_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF9_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF9_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF9_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF9_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF9_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF9_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF9_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF9_IDX_O_UDP_CSUM              = 23,
+	BNXT_ULP_HF9_IDX_T_VXLAN_FLAGS           = 24,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD0           = 25,
+	BNXT_ULP_HF9_IDX_T_VXLAN_VNI             = 26,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD1           = 27,
+	BNXT_ULP_HF9_IDX_I_ETH_DMAC              = 28,
+	BNXT_ULP_HF9_IDX_I_ETH_SMAC              = 29,
+	BNXT_ULP_HF9_IDX_I_ETH_TYPE              = 30,
+	BNXT_ULP_HF9_IDX_IO_VLAN_CFI_PRI         = 31,
+	BNXT_ULP_HF9_IDX_IO_VLAN_VID             = 32,
+	BNXT_ULP_HF9_IDX_IO_VLAN_TYPE            = 33,
+	BNXT_ULP_HF9_IDX_II_VLAN_CFI_PRI         = 34,
+	BNXT_ULP_HF9_IDX_II_VLAN_VID             = 35,
+	BNXT_ULP_HF9_IDX_II_VLAN_TYPE            = 36,
+	BNXT_ULP_HF9_IDX_I_IPV4_VER              = 37,
+	BNXT_ULP_HF9_IDX_I_IPV4_TOS              = 38,
+	BNXT_ULP_HF9_IDX_I_IPV4_LEN              = 39,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_ID          = 40,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_OFF         = 41,
+	BNXT_ULP_HF9_IDX_I_IPV4_TTL              = 42,
+	BNXT_ULP_HF9_IDX_I_IPV4_PROTO_ID         = 43,
+	BNXT_ULP_HF9_IDX_I_IPV4_CSUM             = 44,
+	BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR         = 45,
+	BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR         = 46,
+	BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT          = 47,
+	BNXT_ULP_HF9_IDX_I_UDP_DST_PORT          = 48,
+	BNXT_ULP_HF9_IDX_I_UDP_LENGTH            = 49,
+	BNXT_ULP_HF9_IDX_I_UDP_CSUM              = 50
+};
+
+enum bnxt_ulp_hf10 {
+	BNXT_ULP_HF10_IDX_SVIF_INDEX             = 0,
+	BNXT_ULP_HF10_IDX_O_ETH_DMAC             = 1,
+	BNXT_ULP_HF10_IDX_O_ETH_SMAC             = 2,
+	BNXT_ULP_HF10_IDX_O_ETH_TYPE             = 3,
+	BNXT_ULP_HF10_IDX_OO_VLAN_CFI_PRI        = 4,
+	BNXT_ULP_HF10_IDX_OO_VLAN_VID            = 5,
+	BNXT_ULP_HF10_IDX_OO_VLAN_TYPE           = 6,
+	BNXT_ULP_HF10_IDX_OI_VLAN_CFI_PRI        = 7,
+	BNXT_ULP_HF10_IDX_OI_VLAN_VID            = 8,
+	BNXT_ULP_HF10_IDX_OI_VLAN_TYPE           = 9,
+	BNXT_ULP_HF10_IDX_O_IPV4_VER             = 10,
+	BNXT_ULP_HF10_IDX_O_IPV4_TOS             = 11,
+	BNXT_ULP_HF10_IDX_O_IPV4_LEN             = 12,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_ID         = 13,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_OFF        = 14,
+	BNXT_ULP_HF10_IDX_O_IPV4_TTL             = 15,
+	BNXT_ULP_HF10_IDX_O_IPV4_PROTO_ID        = 16,
+	BNXT_ULP_HF10_IDX_O_IPV4_CSUM            = 17,
+	BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR        = 18,
+	BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR        = 19,
+	BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT         = 20,
+	BNXT_ULP_HF10_IDX_O_UDP_DST_PORT         = 21,
+	BNXT_ULP_HF10_IDX_O_UDP_LENGTH           = 22,
+	BNXT_ULP_HF10_IDX_O_UDP_CSUM             = 23
 };
 
 enum bnxt_ulp_hf_bitmask1 {
-	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000
 };
 
 enum bnxt_ulp_hf_bitmask2 {
-	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_VID         = 0x0000000010000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_VER          = 0x0000000004000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_NEXT_PID     = 0x0000000000100000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_CSUM          = 0x0000000000002000
+	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask3 {
+	BNXT_ULP_HF3_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask4 {
+	BNXT_ULP_HF4_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask5 {
+	BNXT_ULP_HF5_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask6 {
+	BNXT_ULP_HF6_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask7 {
+	BNXT_ULP_HF7_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask8 {
+	BNXT_ULP_HF8_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+};
+
+enum bnxt_ulp_hf_bitmask9 {
+	BNXT_ULP_HF9_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_VID         = 0x0000000010000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_VER          = 0x0000000004000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_PROTO_ID     = 0x0000000000100000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_CSUM          = 0x0000000000002000
 };
 
+enum bnxt_ulp_hf_bitmask10 {
+	BNXT_ULP_HF10_BITMASK_SVIF_INDEX         = 0x8000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_DMAC         = 0x4000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_SMAC         = 0x2000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_TYPE         = 0x1000000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_CFI_PRI    = 0x0800000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_VID        = 0x0400000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_TYPE       = 0x0200000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_CFI_PRI    = 0x0100000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_VID        = 0x0080000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_TYPE       = 0x0040000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_VER         = 0x0020000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TOS         = 0x0010000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_LEN         = 0x0008000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_ID     = 0x0004000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_OFF    = 0x0002000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TTL         = 0x0001000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_PROTO_ID    = 0x0000800000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_CSUM        = 0x0000400000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR    = 0x0000200000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR    = 0x0000100000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT     = 0x0000080000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT     = 0x0000040000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_LENGTH       = 0x0000020000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_CSUM         = 0x0000010000000000
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index f65aeae..f0a57cf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -294,60 +294,72 @@ struct bnxt_ulp_rte_act_info ulp_act_info[] = {
 
 struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[] = {
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	}
 };
 
 struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
-	.flow_mem_type          = BNXT_ULP_FLOW_MEM_TYPE_EXT,
-	.byte_order             = BNXT_ULP_BYTE_ORDER_LE,
-	.encap_byte_swap        = 1,
-	.flow_db_num_entries    = 32768,
-	.mark_db_lfid_entries   = 65536,
-	.mark_db_gfid_entries   = 65536,
-	.flow_count_db_entries  = 16384,
-	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2,
-	.ext_cntr_table_type    = 0,
-	.byte_count_mask        = 0x00000003ffffffff,
-	.packet_count_mask      = 0xfffffffc00000000,
-	.byte_count_shift       = 0,
-	.packet_count_shift     = 36
+		.flow_mem_type           = BNXT_ULP_FLOW_MEM_TYPE_EXT,
+		.byte_order              = BNXT_ULP_BYTE_ORDER_LE,
+		.encap_byte_swap         = 1,
+		.flow_db_num_entries     = 32768,
+		.mark_db_lfid_entries    = 65536,
+		.mark_db_gfid_entries    = 65536,
+		.flow_count_db_entries   = 16384,
+		.num_resources_per_flow  = 8,
+		.num_phy_ports           = 2,
+		.ext_cntr_table_type     = 0,
+		.byte_count_mask         = 0x0000000fffffffff,
+		.packet_count_mask       = 0xffffffff00000000,
+		.byte_count_shift        = 0,
+		.packet_count_shift      = 36
 	}
 };
 
 struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[] = {
 	[0] = {
-	.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index       = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction               = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_RX
 	},
 	[1] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction          = TF_DIR_TX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_TX
 	},
 	[2] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_L2_CTXT,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
-	.direction          = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_RX
+	},
+	[3] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_TX
+	},
+	[4] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+		.resource_type           = TF_TBL_TYPE_FULL_ACT_RECORD,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR,
+		.direction               = TF_DIR_TX
 	}
 };
 
@@ -547,11 +559,11 @@ struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
 };
 
 uint32_t bnxt_ulp_encap_vtag_map[] = {
-	[0] = BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
-	[1] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
-	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
 
 uint32_t ulp_glb_template_tbl[] = {
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC
 };
-
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 46/50] net/bnxt: create default flow rules for the VF-rep conduit
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (44 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 45/50] net/bnxt: add support for vf rep and stat templates Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 47/50] net/bnxt: add ingress & egress port default rules Somnath Kotur
                   ` (4 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Invoke three new APIs to create/destroy the default flows and to get
the action pointer for a default flow.
Change ulp_intf_update() to accept an rte_eth_dev as input and invoke
it from the VF representor start function.
The ULP Mark Manager indicates whether the cfa_code returned in the
Rx completion descriptor belongs to one of the default flow rules
created for the VF representor conduit. In that case the mark_id
returned is the VF representor's DPDK port id, which is used to look
up the corresponding rte_eth_dev in bnxt_vfr_recv().
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |   4 +-
 drivers/net/bnxt/bnxt_reps.c | 134 ++++++++++++++++++++++++++++++-------------
 drivers/net/bnxt/bnxt_reps.h |   3 +-
 drivers/net/bnxt/bnxt_rxr.c  |  24 ++++----
 drivers/net/bnxt/bnxt_txq.h  |   1 +
 5 files changed, 111 insertions(+), 55 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 32acced..f16bf33 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -806,8 +806,10 @@ struct bnxt_vf_representor {
 	uint16_t		fw_fid;
 	uint16_t		dflt_vnic_id;
 	uint16_t		svif;
-	uint16_t		tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	uint16_t		rx_cfa_code;
+	uint32_t		rep2vf_flow_id;
+	uint32_t		vf2rep_flow_id;
 	/* Private data store of associated PF/Trusted VF */
 	struct rte_eth_dev	*parent_dev;
 	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b6964ab..584846a 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -12,6 +12,9 @@
 #include "bnxt_txr.h"
 #include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
+#include "bnxt_tf_common.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
@@ -29,30 +32,20 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 };
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf)
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf)
 {
 	struct bnxt_sw_rx_bd *prod_rx_buf;
 	struct bnxt_rx_ring_info *rep_rxr;
 	struct bnxt_rx_queue *rep_rxq;
 	struct rte_eth_dev *vfr_eth_dev;
 	struct bnxt_vf_representor *vfr_bp;
-	uint16_t vf_id;
 	uint16_t mask;
 	uint8_t que;
 
-	vf_id = bp->cfa_code_map[cfa_code];
-	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
-	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
-		return 1;
-	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	vfr_eth_dev = &rte_eth_devices[port_id];
 	if (!vfr_eth_dev)
 		return 1;
 	vfr_bp = vfr_eth_dev->data->dev_private;
-	if (vfr_bp->rx_cfa_code != cfa_code) {
-		/* cfa_code not meant for this VF rep!!?? */
-		return 1;
-	}
 	/* If rxq_id happens to be > max rep_queue, use rxq0 */
 	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
 	rep_rxq = vfr_bp->rx_queues[que];
@@ -127,7 +120,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	pthread_mutex_lock(&parent->rep_info->vfr_lock);
 	ptxq = parent->tx_queues[qid];
 
-	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+	ptxq->vfr_tx_cfa_action = vf_rep_bp->vfr_tx_cfa_action;
 
 	for (i = 0; i < nb_pkts; i++) {
 		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
@@ -135,7 +128,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	}
 
 	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
-	ptxq->tx_cfa_action = 0;
+	ptxq->vfr_tx_cfa_action = 0;
 	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
 
 	return rc;
@@ -252,10 +245,67 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
-static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+static int bnxt_tf_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
+{
+	int rc;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
+	struct rte_eth_dev *parent_dev = vfr->parent_dev;
+	struct bnxt *parent_bp = parent_dev->data->dev_private;
+	uint16_t vfr_port_id = vfr_ethdev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(vfr_port_id >> 8) & 0xff, vfr_port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	ulp_port_db_dev_port_intf_update(parent_bp->ulp_ctx, vfr_ethdev);
+
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VFREP_TO_VF,
+				     &vfr->rep2vf_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VFR->VF failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VFR->VF! ***\n");
+	BNXT_TF_DBG(DEBUG, "rep2vf_flow_id = %d\n", vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_db_cfa_action_get(parent_bp->ulp_ctx,
+						vfr->rep2vf_flow_id,
+						&vfr->vfr_tx_cfa_action);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Failed to get action_ptr for VFR->VF dflt rule\n");
+		return -EIO;
+	}
+	BNXT_TF_DBG(DEBUG, "tx_cfa_action = %d\n", vfr->vfr_tx_cfa_action);
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VF_TO_VFREP,
+				     &vfr->vf2rep_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VF->VFR failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VF->VFR! ***\n");
+	BNXT_TF_DBG(DEBUG, "vfr2rep_flow_id = %d\n", vfr->vf2rep_flow_id);
+
+	return 0;
+}
+
+static int bnxt_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
 {
 	int rc = 0;
-	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
 
 	if (!vfr || !vfr->parent_dev) {
 		PMD_DRV_LOG(ERR,
@@ -263,10 +313,8 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 		return -ENOMEM;
 	}
 
-	parent_bp = vfr->parent_dev->data->dev_private;
-
 	/* Check if representor has been already allocated in FW */
-	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+	if (vfr->vfr_tx_cfa_action && vfr->rx_cfa_code)
 		return 0;
 
 	/*
@@ -274,24 +322,14 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 	 * Otherwise the FW will create the VF-rep rules with
 	 * default drop action.
 	 */
-
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
-				     vfr->vf_id,
-				     &vfr->tx_cfa_action,
-				     &vfr->rx_cfa_code);
-	 */
-	if (!rc) {
-		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+	rc = bnxt_tf_vfr_alloc(vfr_ethdev);
+	if (!rc)
 		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
 			    vfr->vf_id);
-	} else {
+	else
 		PMD_DRV_LOG(ERR,
 			    "Failed to alloc representor %d in FW\n",
 			    vfr->vf_id);
-	}
 
 	return rc;
 }
@@ -312,7 +350,7 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
 	int rc;
 
-	rc = bnxt_vfr_alloc(rep_bp);
+	rc = bnxt_vfr_alloc(eth_dev);
 
 	if (!rc) {
 		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
@@ -327,6 +365,25 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	return rc;
 }
 
+static int bnxt_tf_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->rep2vf_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed rep2vf flowid: %d\n",
+			    vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->vf2rep_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed vf2rep flowid: %d\n",
+			    vfr->vf2rep_flow_id);
+	return 0;
+}
+
 static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 {
 	int rc = 0;
@@ -341,15 +398,10 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp = vfr->parent_dev->data->dev_private;
 
 	/* Check if representor has been already freed in FW */
-	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+	if (!vfr->vfr_tx_cfa_action && !vfr->rx_cfa_code)
 		return 0;
 
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
-				    vfr->vf_id);
-	 */
+	rc = bnxt_tf_vfr_free(vfr);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
 			    "Failed to free representor %d in FW\n",
@@ -360,7 +412,7 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
 	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
 		    vfr->vf_id);
-	vfr->tx_cfa_action = 0;
+	vfr->vfr_tx_cfa_action = 0;
 	vfr->rx_cfa_code = 0;
 
 	return rc;
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index c8a3c7d..dd8eeb9 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -13,8 +13,7 @@
 #define BNXT_VF_IDX_INVALID             0xffff
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf);
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 0ecf199..6405887 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -403,9 +403,9 @@ bnxt_get_rx_ts_thor(struct bnxt *bp, uint32_t rx_ts_cmpl)
 }
 #endif
 
-static void
+static uint32_t
 bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
-			  struct rte_mbuf *mbuf)
+			  struct rte_mbuf *mbuf, uint32_t *vfr_flag)
 {
 	uint32_t cfa_code;
 	uint32_t meta_fmt;
@@ -414,7 +414,6 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	uint32_t mark_id;
 	uint32_t flags2;
 	uint32_t gfid_support = 0;
-	uint32_t vfr_flag;
 	int rc;
 
 	if (BNXT_GFID_ENABLED(bp))
@@ -484,19 +483,21 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	}
 
 	rc = ulp_mark_db_mark_get(bp->ulp_ctx, gfid,
-				  cfa_code, &vfr_flag, &mark_id);
+				  cfa_code, vfr_flag, &mark_id);
 	if (!rc) {
 		/* Got the mark, write it to the mbuf and return */
 		mbuf->hash.fdir.hi = mark_id;
 		mbuf->udata64 = (cfa_code & 0xffffffffull) << 32;
 		mbuf->hash.fdir.id = rxcmp1->cfa_code;
 		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-		return;
+		return mark_id;
 	}
 
 skip_mark:
 	mbuf->hash.fdir.hi = 0;
 	mbuf->hash.fdir.id = 0;
+
+	return 0;
 }
 
 void bnxt_set_mark_in_mbuf(struct bnxt *bp,
@@ -552,7 +553,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	int rc = 0;
 	uint8_t agg_buf = 0;
 	uint16_t cmp_type;
-	uint32_t flags2_f = 0;
+	uint32_t flags2_f = 0, vfr_flag = 0, mark_id = 0;
 	uint16_t flags_type;
 	struct bnxt *bp = rxq->bp;
 
@@ -631,7 +632,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	}
 
 	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+		mark_id = bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf,
+						    &vfr_flag);
 	else
 		bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
 
@@ -735,10 +737,10 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
-	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
-	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
-		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
-				   mbuf)) {
+	if (BNXT_TRUFLOW_EN(bp) &&
+	    (BNXT_VF_IS_TRUSTED(bp) || BNXT_PF(bp)) &&
+	    vfr_flag) {
+		if (!bnxt_vfr_recv(mark_id, rxq->queue_id, mbuf)) {
 			/* Now return an error so that nb_rx_pkts is not
 			 * incremented.
 			 * This packet was meant to be given to the representor.
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 69ff89a..a1ab3f3 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -30,6 +30,7 @@ struct bnxt_tx_queue {
 	int			index;
 	int			tx_wake_thresh;
 	uint32_t                tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 47/50] net/bnxt: add ingress & egress port default rules
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (45 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 46/50] net/bnxt: create default flow rules for the VF-rep conduit Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 48/50] net/bnxt: fill cfa_action in the tx buffer descriptor properly Somnath Kotur
                   ` (3 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Ingress and egress port default rules are needed to steer packets
from the port to DPDK and from DPDK to the port, respectively.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c     | 76 +++++++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h |  3 ++
 2 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de8e11a..2a19c50 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -29,6 +29,7 @@
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
 #include "bnxt_tf_common.h"
+#include "ulp_flow_db.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -1162,6 +1163,73 @@ static int bnxt_handle_if_change_status(struct bnxt *bp)
 	return rc;
 }
 
+static int32_t
+bnxt_create_port_app_df_rule(struct bnxt *bp, uint8_t flow_type,
+			     uint32_t *flow_id)
+{
+	uint16_t port_id = bp->eth_dev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(port_id >> 8) & 0xff, port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	return ulp_default_flow_create(bp->eth_dev, param_list, flow_type,
+				       flow_id);
+}
+
+static int32_t
+bnxt_create_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+	int rc;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_PORT_TO_VS,
+					  &cfg_data->port_to_app_flow_id);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to create port to app default rule\n");
+		return rc;
+	}
+
+	BNXT_TF_DBG(DEBUG, "***** created port to app default rule ******\n");
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_VS_TO_PORT,
+					  &cfg_data->app_to_port_flow_id);
+	if (!rc) {
+		rc = ulp_default_flow_db_cfa_action_get(bp->ulp_ctx,
+							cfg_data->app_to_port_flow_id,
+							&cfg_data->tx_cfa_action);
+		if (rc)
+			goto err;
+
+		BNXT_TF_DBG(DEBUG,
+			    "***** created app to port default rule *****\n");
+		return 0;
+	}
+
+err:
+	BNXT_TF_DBG(DEBUG, "Failed to create app to port default rule\n");
+	return rc;
+}
+
+static void
+bnxt_destroy_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->port_to_app_flow_id);
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->app_to_port_flow_id);
+}
+
 static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = eth_dev->data->dev_private;
@@ -1330,8 +1398,11 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
-	if (BNXT_TRUFLOW_EN(bp))
+	if (BNXT_TRUFLOW_EN(bp)) {
+		if (bp->rep_info != NULL)
+			bnxt_destroy_df_rules(bp);
 		bnxt_ulp_deinit(bp);
+	}
 
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
@@ -1581,6 +1652,9 @@ static int bnxt_promiscuous_disable_op(struct rte_eth_dev *eth_dev)
 	if (rc != 0)
 		vnic->flags = old_flags;
 
+	if (BNXT_TRUFLOW_EN(bp) && bp->rep_info != NULL)
+		bnxt_create_df_rules(bp);
+
 	return rc;
 }
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 3563f63..4843da5 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,9 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	uint32_t			port_to_app_flow_id;
+	uint32_t			app_to_port_flow_id;
+	uint32_t			tx_cfa_action;
 };
 
 struct bnxt_ulp_context {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 48/50] net/bnxt: fill cfa_action in the tx buffer descriptor properly
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (46 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 47/50] net/bnxt: add ingress & egress port default rules Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 49/50] net/bnxt: support for ULP Flow counter Manager Somnath Kotur
                   ` (2 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Currently, only VF-rep transmit requires cfa_action to be filled
in the tx buffer descriptor. However, with truflow, DPDK (non VF-rep)
to port traffic also requires cfa_action to be filled in the tx
buffer descriptor.

This patch uses the correct cfa_action while transmitting the packet.
Based on whether the packet is transmitted on a non-VF-rep or a VF-rep
port, tx_cfa_action or vfr_tx_cfa_action from the txq, respectively,
is filled in the tx buffer descriptor.
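
As an illustration only (not part of the patch), the selection described
above reduces to the sketch below; the helper name is made up, and it assumes
the driver-internal headers plus the vfr_tx_cfa_action and ULP tx_cfa_action
fields introduced by the preceding patches:

/* Sketch only: condensed view of the cfa_action selection in the Tx path */
#include "bnxt.h"
#include "bnxt_txq.h"
#include "bnxt_ulp.h"

static inline uint32_t
bnxt_tx_cfa_action_get(struct bnxt_tx_queue *txq)
{
	if (!BNXT_TRUFLOW_EN(txq->bp))
		return 0;

	/* A VF-rep queue carries its own action; everything else uses the
	 * default-rule action cached in the ULP context.
	 */
	if (txq->vfr_tx_cfa_action)
		return txq->vfr_tx_cfa_action;

	return txq->bp->ulp_ctx->cfg_data->tx_cfa_action;
}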

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index d7e193d..f588426 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
+				PKT_TX_QINQ_PKT) ||
+	     txq->bp->ulp_ctx->cfg_data->tx_cfa_action ||
+	     txq->vfr_tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +186,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = txq->tx_cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp)) {
+			if (txq->vfr_tx_cfa_action)
+				cfa_action = txq->vfr_tx_cfa_action;
+			else
+				cfa_action =
+				      txq->bp->ulp_ctx->cfg_data->tx_cfa_action;
+		}
+
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
@@ -212,7 +222,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 					&txr->tx_desc_ring[txr->tx_prod];
 		txbd1->lflags = 0;
 		txbd1->cfa_meta = vlan_tag_flags;
-		txbd1->cfa_action = cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp))
+			txbd1->cfa_action = cfa_action;
 
 		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
 			uint16_t hdr_size;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 49/50] net/bnxt: support for ULP Flow counter Manager
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (47 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 48/50] net/bnxt: fill cfa_action in the tx buffer descriptor properly Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 50/50] net/bnxt: Add support for flow query with action_type COUNT Somnath Kotur
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

The flow counter manager allocates memory to hold the software view
of the counters, where the on-chip counter data is accumulated, along
with another memory block that shadows the on-chip counter data, i.e.
where the raw counter data is DMAed into from the chip.
It also keeps track of the first HW counter ID, as that is needed to
retrieve the counter data in bulk using a TF-Core API. This command is
issued from an rte_alarm callback that runs every second.
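
A minimal standalone sketch (not part of the patch) of that one-second
polling pattern, using the public rte_alarm API: rte_eal_alarm_set() arms a
one-shot callback, so the handler re-arms itself each time it runs, and
rte_eal_alarm_cancel() stops it. The helper names, the poll_counters() body
and the ctx argument are placeholders for the TF-Core bulk get and software
accumulation done by ulp_fc_mgr_alarm_cb():

#include <rte_alarm.h>
#include <rte_cycles.h>		/* US_PER_S */

#define FC_POLL_PERIOD_S 1	/* poll the HW counters once per second */

static void fc_poll_cb(void *arg);

static void
poll_counters(void *ctx)
{
	/* Placeholder: read the raw HW counters and accumulate them into
	 * the software counter table.
	 */
	(void)ctx;
}

static void
fc_poll_cb(void *arg)
{
	poll_counters(arg);
	/* rte_eal alarms are one-shot, so re-arm for the next period */
	rte_eal_alarm_set(US_PER_S * FC_POLL_PERIOD_S, fc_poll_cb, arg);
}

static int
fc_poll_start(void *ctx)
{
	return rte_eal_alarm_set(US_PER_S * FC_POLL_PERIOD_S, fc_poll_cb, ctx);
}

static void
fc_poll_stop(void *ctx)
{
	rte_eal_alarm_cancel(fc_poll_cb, ctx);
}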

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_ulp/Makefile      |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    |  35 +++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h    |   8 +
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c  | 465 ++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h  | 148 +++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c |  27 ++
 7 files changed, 685 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 2939857..5fb0ed3 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -46,6 +46,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
 	'tf_core/tf_em_host.c',
+	'tf_ulp/ulp_fc_mgr.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 3f1b43b..abb6815 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -17,3 +17,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_fc_mgr.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index e5e7e5f..c058611 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -18,6 +18,7 @@
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "ulp_mark_mgr.h"
+#include "ulp_fc_mgr.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 #include "ulp_port_db.h"
@@ -705,6 +706,12 @@ bnxt_ulp_init(struct bnxt *bp)
 		goto jump_to_error;
 	}
 
+	rc = ulp_fc_mgr_init(bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize ulp flow counter mgr\n");
+		goto jump_to_error;
+	}
+
 	return rc;
 
 jump_to_error:
@@ -752,6 +759,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	/* cleanup the ulp mapper */
 	ulp_mapper_deinit(bp->ulp_ctx);
 
+	/* Delete the Flow Counter Manager */
+	ulp_fc_mgr_deinit(bp->ulp_ctx);
+
 	/* Delete the Port database */
 	ulp_port_db_deinit(bp->ulp_ctx);
 
@@ -963,3 +973,28 @@ bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx)
 
 	return ulp_ctx->cfg_data->port_db;
 }
+
+/* Function to set the flow counter info into the context */
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->fc_info = ulp_fc_info;
+
+	return 0;
+}
+
+/* Function to retrieve the flow counter info from the context. */
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->fc_info;
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 4843da5..a133284 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,7 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	struct bnxt_ulp_fc_info		*fc_info;
 	uint32_t			port_to_app_flow_id;
 	uint32_t			app_to_port_flow_id;
 	uint32_t			tx_cfa_action;
@@ -154,4 +155,11 @@ int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *error);
 
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info);
+
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
new file mode 100644
index 0000000..f70d4a2
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -0,0 +1,465 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_alarm.h>
+#include "bnxt.h"
+#include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
+#include "ulp_fc_mgr.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_struct.h"
+#include "tf_tbl.h"
+
+static int
+ulp_fc_mgr_shadow_mem_alloc(struct hw_fc_mem_info *parms, int size)
+{
+	/* Allocate memory*/
+	if (parms == NULL)
+		return -EINVAL;
+
+	parms->mem_va = rte_zmalloc("ulp_fc_info",
+				    RTE_CACHE_LINE_ROUNDUP(size),
+				    4096);
+	if (parms->mem_va == NULL) {
+		BNXT_TF_DBG(ERR, "Allocate failed mem_va\n");
+		return -ENOMEM;
+	}
+
+	rte_mem_lock_page(parms->mem_va);
+
+	parms->mem_pa = (void *)(uintptr_t)rte_mem_virt2phy(parms->mem_va);
+	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
+		BNXT_TF_DBG(ERR, "Allocate failed mem_pa\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void
+ulp_fc_mgr_shadow_mem_free(struct hw_fc_mem_info *parms)
+{
+	rte_free(parms->mem_va);
+}
+
+/*
+ * Allocate and Initialize all Flow Counter Manager resources for this ulp
+ * context.
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager.
+ *
+ */
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id, sw_acc_cntr_tbl_sz, hw_fc_mem_info_sz;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i, rc;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(DEBUG, "Invalid ULP CTXT\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to device parms\n");
+		return -EINVAL;
+	}
+
+	ulp_fc_info = rte_zmalloc("ulp_fc_info", sizeof(*ulp_fc_info), 0);
+	if (!ulp_fc_info)
+		goto error;
+
+	rc = pthread_mutex_init(&ulp_fc_info->fc_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Failed to initialize fc mutex\n");
+		goto error;
+	}
+
+	/* Add the FC info tbl to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, ulp_fc_info);
+
+	sw_acc_cntr_tbl_sz = sizeof(struct sw_acc_counter) *
+				dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		ulp_fc_info->sw_acc_tbl[i] = rte_zmalloc("ulp_sw_acc_cntr_tbl",
+							 sw_acc_cntr_tbl_sz, 0);
+		if (!ulp_fc_info->sw_acc_tbl[i])
+			goto error;
+	}
+
+	hw_fc_mem_info_sz = sizeof(uint64_t) * dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_fc_mgr_shadow_mem_alloc(&ulp_fc_info->shadow_hw_tbl[i],
+						 hw_fc_mem_info_sz);
+		if (rc)
+			goto error;
+	}
+
+	return 0;
+
+error:
+	ulp_fc_mgr_deinit(ctxt);
+	BNXT_TF_DBG(DEBUG,
+		    "Failed to allocate memory for fc mgr\n");
+
+	return -ENOMEM;
+}
+
+/*
+ * Release all resources in the Flow Counter Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EINVAL;
+
+	ulp_fc_mgr_thread_cancel(ctxt);
+
+	pthread_mutex_destroy(&ulp_fc_info->fc_lock);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		rte_free(ulp_fc_info->sw_acc_tbl[i]);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		ulp_fc_mgr_shadow_mem_free(&ulp_fc_info->shadow_hw_tbl[i]);
+
+
+	rte_free(ulp_fc_info);
+
+	/* Safe to ignore on deinit */
+	(void)bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, NULL);
+
+	return 0;
+}
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	return !!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD);
+}
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD)) {
+		rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+				  ulp_fc_mgr_alarm_cb,
+				  (void *)ctxt);
+		ulp_fc_info->flags |= ULP_FLAG_FC_THREAD;
+	}
+
+	return 0;
+}
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	ulp_fc_info->flags &= ~ULP_FLAG_FC_THREAD;
+	rte_eal_alarm_cancel(ulp_fc_mgr_alarm_cb, (void *)ctxt);
+}
+
+/*
+ * DMA-in the raw counter data from the HW and accumulate in the
+ * local accumulator table using the TF-Core API
+ *
+ * tfp [in] The TF-Core context
+ *
+ * fc_info [in] The ULP Flow counter info ptr
+ *
+ * dir [in] The direction of the flow
+ *
+ * num_counters [in] The number of counters
+ *
+ */
+static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+				       struct bnxt_ulp_fc_info *fc_info,
+				       enum tf_dir dir, uint32_t num_counters)
+{
+	int rc = 0;
+	struct tf_tbl_get_bulk_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD: Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t *stats = NULL;
+	uint16_t i = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.starting_idx = fc_info->shadow_hw_tbl[dir].start_idx;
+	parms.num_entries = num_counters;
+	/*
+	 * TODO:
+	 * Size of an entry needs to obtained from template
+	 */
+	parms.entry_sz_in_bytes = sizeof(uint64_t);
+	stats = (uint64_t *)fc_info->shadow_hw_tbl[dir].mem_va;
+	parms.physical_mem_addr = (uintptr_t)fc_info->shadow_hw_tbl[dir].mem_pa;
+
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Memory not initialized id:0x%x dir:%d\n",
+			    parms.starting_idx, dir);
+		return -EINVAL;
+	}
+
+	rc = tf_tbl_bulk_get(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Get failed for id:0x%x rc:%d\n",
+			    parms.starting_idx, rc);
+		return rc;
+	}
+
+	for (i = 0; i < num_counters; i++) {
+		/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+		sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][i];
+		if (!sw_acc_tbl_entry->valid)
+			continue;
+		sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats[i]);
+		sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats[i]);
+	}
+
+	return rc;
+}
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg)
+{
+	int rc = 0, i;
+	struct bnxt_ulp_context *ctxt = arg;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct bnxt_ulp_device_params *dparms;
+	struct tf *tfp;
+	uint32_t dev_id;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to device parms\n");
+		return;
+	}
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ctxt);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return;
+	}
+
+	/*
+	 * Take the fc_lock to ensure no flow is destroyed
+	 * during the bulk get
+	 */
+	if (pthread_mutex_trylock(&ulp_fc_info->fc_lock))
+		goto out;
+
+	if (!ulp_fc_info->num_entries) {
+		pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
+					     dparms->flow_count_db_entries);
+		if (rc)
+			break;
+	}
+
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	/*
+	 * If cmd fails once, no need of
+	 * invoking again every second
+	 */
+
+	if (rc) {
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+out:
+	rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+			  ulp_fc_mgr_alarm_cb,
+			  (void *)ctxt);
+}
+
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * Returns true if the starting index is set, false otherwise.
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	/* Assuming start_idx of 0 is invalid */
+	return (ulp_fc_info->shadow_hw_tbl[dir].start_idx != 0);
+}
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+				 uint32_t start_idx)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EIO;
+
+	/* Assuming that 0 is an invalid counter ID ? */
+	if (ulp_fc_info->shadow_hw_tbl[dir].start_idx == 0)
+		ulp_fc_info->shadow_hw_tbl[dir].start_idx = start_idx;
+
+	return 0;
+}
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			    uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->num_entries++;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
+
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			      uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
+	ulp_fc_info->num_entries--;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
new file mode 100644
index 0000000..faa77dd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_FC_MGR_H_
+#define _ULP_FC_MGR_H_
+
+#include "bnxt_ulp.h"
+#include "tf_core.h"
+
+#define ULP_FLAG_FC_THREAD			BIT(0)
+#define ULP_FC_TIMER	1/* Timer freq in Sec Flow Counters */
+
+/* Macros to extract packet/byte counters from a 64-bit flow counter. */
+#define FLOW_CNTR_BYTE_WIDTH 36
+#define FLOW_CNTR_BYTE_MASK  (((uint64_t)1 << FLOW_CNTR_BYTE_WIDTH) - 1)
+
+#define FLOW_CNTR_PKTS(v) ((v) >> FLOW_CNTR_BYTE_WIDTH)
+#define FLOW_CNTR_BYTES(v) ((v) & FLOW_CNTR_BYTE_MASK)
+
+struct sw_acc_counter {
+	uint64_t pkt_count;
+	uint64_t byte_count;
+	bool	valid;
+};
+
+struct hw_fc_mem_info {
+	/*
+	 * [out] mem_va, pointer to the allocated memory.
+	 */
+	void *mem_va;
+	/*
+	 * [out] mem_pa, physical address of the allocated memory.
+	 */
+	void *mem_pa;
+	uint32_t start_idx;
+};
+
+struct bnxt_ulp_fc_info {
+	struct sw_acc_counter	*sw_acc_tbl[TF_DIR_MAX];
+	struct hw_fc_mem_info	shadow_hw_tbl[TF_DIR_MAX];
+	uint32_t		flags;
+	uint32_t		num_entries;
+	pthread_mutex_t		fc_lock;
+};
+
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Release all resources in the flow counter manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg);
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			     uint32_t start_idx);
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			uint32_t hw_cntr_id);
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			  uint32_t hw_cntr_id);
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
+
+#endif /* _ULP_FC_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 7696de2..a3cfe54 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -10,6 +10,7 @@
 #include "ulp_utils.h"
 #include "ulp_template_struct.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 
 #define ULP_FLOW_DB_RES_DIR_BIT		31
 #define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
@@ -484,6 +485,21 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 		ulp_flow_db_res_params_to_info(fid_resource, params);
 	}
 
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		/* Store the first HW counter ID for this table */
+		if (!ulp_fc_mgr_start_idx_isset(ulp_ctxt, params->direction))
+			ulp_fc_mgr_start_idx_set(ulp_ctxt, params->direction,
+						 params->resource_hndl);
+
+		ulp_fc_mgr_cntr_set(ulp_ctxt, params->direction,
+				    params->resource_hndl);
+
+		if (!ulp_fc_mgr_thread_isstarted(ulp_ctxt))
+			ulp_fc_mgr_thread_start(ulp_ctxt);
+	}
+
 	/* all good, return success */
 	return 0;
 }
@@ -574,6 +590,17 @@ int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
 					nxt_idx);
 	}
 
+	/* Now that the HW Flow counter resource is deleted, reset it's
+	 * corresponding slot in the SW accumulation table in the Flow Counter
+	 * manager
+	 */
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		ulp_fc_mgr_cntr_reset(ulp_ctxt, params->direction,
+				      params->resource_hndl);
+	}
+
 	/* all good, return success */
 	return 0;
 }
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH 50/50] net/bnxt: Add support for flow query with action_type COUNT
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (48 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 49/50] net/bnxt: support for ULP Flow counter Manager Somnath Kotur
@ 2020-06-12 13:29 ` Somnath Kotur
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
  50 siblings, 0 replies; 271+ messages in thread
From: Somnath Kotur @ 2020-06-12 13:29 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Use the flow counter manager to fetch the accumulated stats for
a flow.
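
For reference, a minimal application-side sketch (not part of the patch) of
exercising this hook through the generic rte_flow API; "port_id" and "flow"
are assumed to come from an earlier rte_flow_create() that attached a COUNT
action on a bnxt port with truflow enabled:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static int
print_flow_counters(uint16_t port_id, struct rte_flow *flow)
{
	struct rte_flow_query_count count = { .reset = 0 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
	};
	struct rte_flow_error error = { 0 };
	int ret;

	/* Dispatched by the PMD to bnxt_ulp_flow_query(), which reads the
	 * locally accumulated counters via ulp_fc_mgr_query_count_get().
	 */
	ret = rte_flow_query(port_id, flow, &action, &count, &error);
	if (ret) {
		printf("flow query failed: %s\n",
		       error.message ? error.message : "(no error message)");
		return ret;
	}

	if (count.hits_set)
		printf("hits:  %" PRIu64 "\n", count.hits);
	if (count.bytes_set)
		printf("bytes: %" PRIu64 "\n", count.bytes);

	return 0;
}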

Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c |  46 ++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c    | 141 ++++++++++++++++++++++++++++++--
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h    |  17 +++-
 3 files changed, 197 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 7ef306e..a7a0769 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -9,6 +9,7 @@
 #include "ulp_matcher.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 #include <rte_malloc.h>
 
 static int32_t
@@ -289,11 +290,54 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	return ret;
 }
 
+/* Function to query the rte flows. */
+static int32_t
+bnxt_ulp_flow_query(struct rte_eth_dev *eth_dev,
+		    struct rte_flow *flow,
+		    const struct rte_flow_action *action,
+		    void *data,
+		    struct rte_flow_error *error)
+{
+	int rc = 0;
+	struct bnxt_ulp_context *ulp_ctx;
+	struct rte_flow_query_count *count;
+	uint32_t flow_id;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to query flow.");
+		return -EINVAL;
+	}
+
+	flow_id = (uint32_t)(uintptr_t)flow;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		count = data;
+		rc = ulp_fc_mgr_query_count_get(ulp_ctx, flow_id, count);
+		if (rc) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to query flow.");
+		}
+		break;
+	default:
+		rte_flow_error_set(error, -rc, RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "Unsupported action item");
+	}
+
+	return rc;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
 	.flush = bnxt_ulp_flow_flush,
-	.query = NULL,
+	.query = bnxt_ulp_flow_query,
 	.isolate = NULL
 };
+
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
index f70d4a2..9944e9e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -11,6 +11,7 @@
 #include "bnxt_ulp.h"
 #include "bnxt_tf_common.h"
 #include "ulp_fc_mgr.h"
+#include "ulp_flow_db.h"
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "tf_tbl.h"
@@ -226,9 +227,10 @@ void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
  * num_counters [in] The number of counters
  *
  */
-static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+__rte_unused static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 				       struct bnxt_ulp_fc_info *fc_info,
 				       enum tf_dir dir, uint32_t num_counters)
+/* MARK AS UNUSED FOR NOW TO AVOID COMPILATION ERRORS TILL API is RESOLVED */
 {
 	int rc = 0;
 	struct tf_tbl_get_bulk_parms parms = { 0 };
@@ -275,6 +277,45 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 
 	return rc;
 }
+
+static int ulp_get_single_flow_stat(struct tf *tfp,
+				    struct bnxt_ulp_fc_info *fc_info,
+				    enum tf_dir dir,
+				    uint32_t hw_cntr_id)
+{
+	int rc = 0;
+	struct tf_get_tbl_entry_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD:Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t stats = 0;
+	uint32_t sw_cntr_indx = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.idx = hw_cntr_id;
+	/*
+	 * TODO:
+	 * Size of an entry needs to obtained from template
+	 */
+	parms.data_sz_in_bytes = sizeof(uint64_t);
+	parms.data = (uint8_t *)&stats;
+	rc = tf_get_tbl_entry(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Get failed for id:0x%x rc:%d\n",
+			    parms.idx, rc);
+		return rc;
+	}
+
+	/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+	sw_cntr_indx = hw_cntr_id - fc_info->shadow_hw_tbl[dir].start_idx;
+	sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][sw_cntr_indx];
+	sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats);
+	sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats);
+
+	return rc;
+}
+
 /*
  * Alarm handler that will issue the TF-Core API to fetch
  * data from the chip's internal flow counters
@@ -282,15 +323,18 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
+
 void
 ulp_fc_mgr_alarm_cb(void *arg)
 {
-	int rc = 0, i;
+	int rc = 0;
+	unsigned int j;
+	enum tf_dir i;
 	struct bnxt_ulp_context *ctxt = arg;
 	struct bnxt_ulp_fc_info *ulp_fc_info;
 	struct bnxt_ulp_device_params *dparms;
 	struct tf *tfp;
-	uint32_t dev_id;
+	uint32_t dev_id, hw_cntr_id = 0;
 
 	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
 	if (!ulp_fc_info)
@@ -325,13 +369,27 @@ ulp_fc_mgr_alarm_cb(void *arg)
 		ulp_fc_mgr_thread_cancel(ctxt);
 		return;
 	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
+	/*
+	 * Commented for now till GET_BULK is resolved, just get the first flow
+	 * stat for now
+	 for (i = 0; i < TF_DIR_MAX; i++) {
 		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
 					     dparms->flow_count_db_entries);
 		if (rc)
 			break;
 	}
+	*/
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		for (j = 0; j < ulp_fc_info->num_entries; j++) {
+			if (!ulp_fc_info->sw_acc_tbl[i][j].valid)
+				continue;
+			hw_cntr_id = ulp_fc_info->sw_acc_tbl[i][j].hw_cntr_id;
+			rc = ulp_get_single_flow_stat(tfp, ulp_fc_info, i,
+						      hw_cntr_id);
+			if (rc)
+				break;
+		}
+	}
 
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -425,6 +483,7 @@ int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = hw_cntr_id;
 	ulp_fc_info->num_entries++;
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -456,6 +515,7 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
 	ulp_fc_info->num_entries--;
@@ -463,3 +523,74 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 
 	return 0;
 }
+
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ctxt,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count)
+{
+	int rc = 0;
+	uint32_t nxt_resource_index = 0;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct ulp_flow_db_res_params params;
+	enum tf_dir dir;
+	uint32_t hw_cntr_id = 0, sw_cntr_idx = 0;
+	struct sw_acc_counter sw_acc_tbl_entry;
+	bool found_cntr_resource = false;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -ENODEV;
+
+	do {
+		rc = ulp_flow_db_resource_get(ctxt,
+					      BNXT_ULP_REGULAR_FLOW_TABLE,
+					      flow_id,
+					      &nxt_resource_index,
+					      &params);
+		if (params.resource_func ==
+		     BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE &&
+		     (params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT ||
+		      params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT)) {
+			found_cntr_resource = true;
+			break;
+		}
+
+	} while (!rc);
+
+	if (rc)
+		return rc;
+
+	if (found_cntr_resource) {
+		dir = params.direction;
+		hw_cntr_id = params.resource_hndl;
+		sw_cntr_idx = hw_cntr_id -
+				ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+		sw_acc_tbl_entry = ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx];
+		if (params.resource_sub_type ==
+			BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+			count->hits_set = 1;
+			count->bytes_set = 1;
+			count->hits = sw_acc_tbl_entry.pkt_count;
+			count->bytes = sw_acc_tbl_entry.byte_count;
+		} else {
+			/* TBD: Handle External counters */
+			rc = -EINVAL;
+		}
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
index faa77dd..2072670 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -23,6 +23,7 @@ struct sw_acc_counter {
 	uint64_t pkt_count;
 	uint64_t byte_count;
 	bool	valid;
+	uint32_t hw_cntr_id;
 };
 
 struct hw_fc_mem_info {
@@ -142,7 +143,21 @@ bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
-
 bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ulp_ctx,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count);
 #endif /* _ULP_FC_MGR_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management
  2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
                   ` (49 preceding siblings ...)
  2020-06-12 13:29 ` [dpdk-dev] [PATCH 50/50] net/bnxt: Add support for flow query with action_type COUNT Somnath Kotur
@ 2020-07-01  6:51 ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 01/51] net/bnxt: add basic infrastructure for VF representors Ajit Khaparde
                     ` (51 more replies)
  50 siblings, 52 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev

This patchset introduces support for VF representors, flow
counters and on-chip exact match flows.
Also implements the driver hook for the rte_flow_query API.

v1->v2:
 - update commit message
 - rebase patches against latest changes in the tree
 - fix signed-off-by tags
 - update release notes

Ajit Khaparde (1):
  doc: update release notes

Jay Ding (5):
  net/bnxt: implement support for TCAM access
  net/bnxt: support two level priority for TCAMs
  net/bnxt: add external action alloc and free
  net/bnxt: implement IF tables set and get
  net/bnxt: add global config set and get APIs

Kishore Padmanabha (8):
  net/bnxt: integrate with the latest tf core changes
  net/bnxt: add support for if table processing
  net/bnxt: disable Tx vector mode if truflow is enabled
  net/bnxt: add index opcode and operand to mapper table
  net/bnxt: add support for global resource templates
  net/bnxt: add support for internal exact match entries
  net/bnxt: add support for conditional execution of mapper tables
  net/bnxt: add VF-rep and stat templates

Lance Richardson (1):
  net/bnxt: initialize parent PF information

Michael Wildt (7):
  net/bnxt: add multi device support
  net/bnxt: update multi device design support
  net/bnxt: multiple device implementation
  net/bnxt: update identifier with remap support
  net/bnxt: update RM with residual checker
  net/bnxt: update table get to use new design
  net/bnxt: add TF register and unregister

Mike Baucom (1):
  net/bnxt: add support for internal encap records

Peter Spreadborough (7):
  net/bnxt: add support for exact match
  net/bnxt: modify EM insert and delete to use HWRM direct
  net/bnxt: support EM and TCAM lookup with table scope
  net/bnxt: remove table scope from session
  net/bnxt: add HCAPI interface support
  net/bnxt: update RM to support HCAPI only
  net/bnxt: add support for EEM System memory

Randy Schacher (2):
  net/bnxt: add core changes for EM and EEM lookups
  net/bnxt: align CFA resources with RM

Shahaji Bhosle (2):
  net/bnxt: support bulk table get and mirror
  net/bnxt: support two-level priority for TCAMs

Somnath Kotur (7):
  net/bnxt: add basic infrastructure for VF representors
  net/bnxt: add support for VF-reps data path
  net/bnxt: get IDs for VF-Rep endpoint
  net/bnxt: parse representor along with other dev-args
  net/bnxt: create default flow rules for the VF-rep conduit
  net/bnxt: add ULP Flow counter Manager
  net/bnxt: add support for count action in flow query

Venkat Duvvuru (10):
  net/bnxt: modify port db dev interface
  net/bnxt: get port and function info
  net/bnxt: add support for hwrm port phy qcaps
  net/bnxt: modify port db to handle more info
  net/bnxt: enable port MAC qcfg command for trusted VF
  net/bnxt: enhancements for port db
  net/bnxt: manage VF to VFR conduit
  net/bnxt: fill mapper parameters with default rules info
  net/bnxt: add port default rules for ingress and egress
  net/bnxt: fill cfa action in the Tx descriptor

 config/common_base                            |    1 +
 doc/guides/rel_notes/release_20_08.rst        |   10 +-
 drivers/net/bnxt/Makefile                     |    8 +-
 drivers/net/bnxt/bnxt.h                       |  121 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  519 +-
 drivers/net/bnxt/bnxt_hwrm.c                  |  122 +-
 drivers/net/bnxt/bnxt_hwrm.h                  |    7 +
 drivers/net/bnxt/bnxt_reps.c                  |  773 +++
 drivers/net/bnxt/bnxt_reps.h                  |   45 +
 drivers/net/bnxt/bnxt_rxr.c                   |   39 +-
 drivers/net/bnxt/bnxt_rxr.h                   |    1 +
 drivers/net/bnxt/bnxt_txq.h                   |    2 +
 drivers/net/bnxt/bnxt_txr.c                   |   18 +-
 drivers/net/bnxt/hcapi/Makefile               |   10 +
 drivers/net/bnxt/hcapi/cfa_p40_hw.h           |  781 +++
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h          |  303 +
 drivers/net/bnxt/hcapi/hcapi_cfa.h            |  276 +
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h       |  672 +++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c         |  399 ++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h         |  467 ++
 drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++--
 drivers/net/bnxt/meson.build                  |   21 +-
 drivers/net/bnxt/tf_core/Makefile             |   29 +-
 drivers/net/bnxt/tf_core/bitalloc.c           |  107 +
 drivers/net/bnxt/tf_core/bitalloc.h           |    5 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h |  293 +
 drivers/net/bnxt/tf_core/hwrm_tf.h            |  995 +---
 drivers/net/bnxt/tf_core/ll.c                 |   52 +
 drivers/net/bnxt/tf_core/ll.h                 |   46 +
 drivers/net/bnxt/tf_core/lookup3.h            |    1 -
 drivers/net/bnxt/tf_core/stack.c              |   10 +-
 drivers/net/bnxt/tf_core/stack.h              |   10 +
 drivers/net/bnxt/tf_core/tf_common.h          |   43 +
 drivers/net/bnxt/tf_core/tf_core.c            | 1495 +++--
 drivers/net/bnxt/tf_core/tf_core.h            |  874 ++-
 drivers/net/bnxt/tf_core/tf_device.c          |  271 +
 drivers/net/bnxt/tf_core/tf_device.h          |  650 ++
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  147 +
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  104 +
 drivers/net/bnxt/tf_core/tf_em.c              |  515 --
 drivers/net/bnxt/tf_core/tf_em.h              |  492 +-
 drivers/net/bnxt/tf_core/tf_em_common.c       | 1048 ++++
 drivers/net/bnxt/tf_core/tf_em_common.h       |  134 +
 drivers/net/bnxt/tf_core/tf_em_host.c         |  531 ++
 drivers/net/bnxt/tf_core/tf_em_internal.c     |  352 ++
 drivers/net/bnxt/tf_core/tf_em_system.c       |  533 ++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c      |  199 +
 drivers/net/bnxt/tf_core/tf_global_cfg.h      |  170 +
 drivers/net/bnxt/tf_core/tf_identifier.c      |  186 +
 drivers/net/bnxt/tf_core/tf_identifier.h      |  147 +
 drivers/net/bnxt/tf_core/tf_if_tbl.c          |  178 +
 drivers/net/bnxt/tf_core/tf_if_tbl.h          |  236 +
 drivers/net/bnxt/tf_core/tf_msg.c             | 1681 +++---
 drivers/net/bnxt/tf_core/tf_msg.h             |  409 +-
 drivers/net/bnxt/tf_core/tf_resources.h       |  531 --
 drivers/net/bnxt/tf_core/tf_rm.c              | 3840 +++---------
 drivers/net/bnxt/tf_core/tf_rm.h              |  554 +-
 drivers/net/bnxt/tf_core/tf_session.c         |  776 +++
 drivers/net/bnxt/tf_core/tf_session.h         |  565 +-
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      |  240 +
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h     |  239 +
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1930 +-----
 drivers/net/bnxt/tf_core/tf_tbl.h             |  469 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |  430 ++
 drivers/net/bnxt/tf_core/tf_tcam.h            |  360 ++
 drivers/net/bnxt/tf_core/tf_util.c            |  176 +
 drivers/net/bnxt/tf_core/tf_util.h            |   98 +
 drivers/net/bnxt/tf_core/tfp.c                |   33 +-
 drivers/net/bnxt/tf_core/tfp.h                |  153 +-
 drivers/net/bnxt/tf_ulp/Makefile              |    2 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   16 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |  129 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |   35 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |   84 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c       |  385 ++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c          |  596 ++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h          |  163 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   42 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  481 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    6 +-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   10 +
 drivers/net/bnxt/tf_ulp/ulp_port_db.c         |  235 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.h         |  122 +-
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |   30 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  433 +-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5217 +++++++++++++----
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  537 +-
 .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   85 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   23 +-
 drivers/net/bnxt/tf_ulp/ulp_utils.c           |    2 +-
 94 files changed, 28009 insertions(+), 11248 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 01/51] net/bnxt: add basic infrastructure for VF representors
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
                     ` (50 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru, Kalesh AP

From: Somnath Kotur <somnath.kotur@broadcom.com>

Defines data structures and code to init/uninit
VF representors during pci_probe and pci_remove
respectively.
Most of the dev_ops for the VF representor are just
stubs for now and will be filled out in the next patch.

To create a representor using testpmd:
testpmd -c 0xff -wB:D.F,representor=1 -- -i
testpmd -c 0xff -w05:02.0,representor=[1] -- -i

To create a representor using ovs-dpdk:
1. First add the trusted VF port to a bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0
2. Add the representor port to the bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0,representor=1

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile      |   2 +
 drivers/net/bnxt/bnxt.h        |  64 +++++++-
 drivers/net/bnxt/bnxt_ethdev.c | 225 ++++++++++++++++++++------
 drivers/net/bnxt/bnxt_reps.c   | 287 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_reps.h   |  35 ++++
 drivers/net/bnxt/meson.build   |   1 +
 6 files changed, 566 insertions(+), 48 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index a375299c3..365627499 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -14,6 +14,7 @@ LIB = librte_pmd_bnxt.a
 EXPORT_MAP := rte_pmd_bnxt_version.map
 
 CFLAGS += -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += $(WERROR_FLAGS)
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
@@ -38,6 +39,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_reps.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += rte_pmd_bnxt.c
 ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index d455f8d84..9b7b87cee 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -220,6 +220,7 @@ struct bnxt_child_vf_info {
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
+#define BNXT_MAX_VF_REPS	64
 #define BNXT_TOTAL_VFS(bp)	((bp)->pf->total_vfs)
 #define BNXT_FIRST_VF_FID	128
 #define BNXT_PF_RINGS_USED(bp)	bnxt_get_num_queues(bp)
@@ -492,6 +493,10 @@ struct bnxt_mark_info {
 	bool		valid;
 };
 
+struct bnxt_rep_info {
+	struct rte_eth_dev	*vfr_eth_dev;
+};
+
 /* address space location of register */
 #define BNXT_FW_STATUS_REG_TYPE_MASK	3
 /* register is located in PCIe config space */
@@ -515,6 +520,40 @@ struct bnxt_mark_info {
 #define BNXT_FW_STATUS_HEALTHY		0x8000
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
+#define BNXT_ETH_RSS_SUPPORT (	\
+	ETH_RSS_IPV4 |		\
+	ETH_RSS_NONFRAG_IPV4_TCP |	\
+	ETH_RSS_NONFRAG_IPV4_UDP |	\
+	ETH_RSS_IPV6 |		\
+	ETH_RSS_NONFRAG_IPV6_TCP |	\
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
+				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_CKSUM | \
+				     DEV_TX_OFFLOAD_UDP_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_TSO | \
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_QINQ_INSERT | \
+				     DEV_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
+				     DEV_RX_OFFLOAD_VLAN_STRIP | \
+				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_UDP_CKSUM | \
+				     DEV_RX_OFFLOAD_TCP_CKSUM | \
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
+				     DEV_RX_OFFLOAD_KEEP_CRC | \
+				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
+				     DEV_RX_OFFLOAD_TCP_LRO | \
+				     DEV_RX_OFFLOAD_SCATTER | \
+				     DEV_RX_OFFLOAD_RSS_HASH)
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -682,6 +721,9 @@ struct bnxt {
 #define BNXT_MAX_RINGS(bp) \
 	(RTE_MIN((((bp)->max_cp_rings - BNXT_NUM_ASYNC_CPR(bp)) / 2U), \
 		 BNXT_MAX_TX_RINGS(bp)))
+
+#define BNXT_MAX_VF_REP_RINGS	8
+
 	uint16_t		max_nq_rings;
 	uint16_t		max_l2_ctx;
 	uint16_t		max_rx_em_flows;
@@ -711,7 +753,9 @@ struct bnxt {
 
 	uint16_t		fw_reset_min_msecs;
 	uint16_t		fw_reset_max_msecs;
-
+	uint16_t		switch_domain_id;
+	uint16_t		num_reps;
+	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -732,6 +776,18 @@ struct bnxt {
 
 #define BNXT_FC_TIMER	1 /* Timer freq in Sec Flow Counters */
 
+/**
+ * Structure to store private data for each VF representor instance
+ */
+struct bnxt_vf_representor {
+	uint16_t switch_domain_id;
+	uint16_t vf_id;
+	/* Private data store of associated PF/Trusted VF */
+	struct bnxt	*parent_priv;
+	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
 int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 		     bool exp_link_status);
@@ -744,7 +800,13 @@ void bnxt_schedule_fw_health_check(struct bnxt *bp);
 
 bool is_bnxt_supported(struct rte_eth_dev *dev);
 bool bnxt_stratus_device(struct bnxt *bp);
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete);
+
 extern const struct rte_flow_ops bnxt_flow_ops;
+
 #define bnxt_acquire_flow_lock(bp) \
 	pthread_mutex_lock(&(bp)->flow_lock)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7022f6d52..4911745af 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -18,6 +18,7 @@
 #include "bnxt_filter.h"
 #include "bnxt_hwrm.h"
 #include "bnxt_irq.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxq.h"
 #include "bnxt_rxr.h"
@@ -93,40 +94,6 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-#define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
-				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_VLAN_STRIP | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
-
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
@@ -163,7 +130,6 @@ static int bnxt_devarg_max_num_kflow_invalid(uint16_t max_num_kflows)
 }
 
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev);
 static int bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev);
@@ -198,7 +164,7 @@ static uint16_t bnxt_rss_ctxts(const struct bnxt *bp)
 				    BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
-static uint16_t  bnxt_rss_hash_tbl_size(const struct bnxt *bp)
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 {
 	if (!BNXT_CHIP_THOR(bp))
 		return HW_HASH_INDEX_SIZE;
@@ -1047,7 +1013,7 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	return -ENOSPC;
 }
 
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 {
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
@@ -1273,6 +1239,12 @@ static int bnxt_dev_set_link_down_op(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static void bnxt_free_switch_domain(struct bnxt *bp)
+{
+	if (bp->switch_domain_id)
+		rte_eth_switch_domain_free(bp->switch_domain_id);
+}
+
 /* Unload the driver, release resources */
 static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
@@ -1341,6 +1313,8 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
+	bnxt_free_switch_domain(bp);
+
 	bnxt_uninit_resources(bp, false);
 
 	bnxt_free_leds_info(bp);
@@ -1522,8 +1496,8 @@ int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 	return rc;
 }
 
-static int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
-			       int wait_to_complete)
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete)
 {
 	return bnxt_link_update(eth_dev, wait_to_complete, ETH_LINK_UP);
 }
@@ -5477,8 +5451,26 @@ bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)
 	rte_kvargs_free(kvlist);
 }
 
+static int bnxt_alloc_switch_domain(struct bnxt *bp)
+{
+	int rc = 0;
+
+	if (BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) {
+		rc = rte_eth_switch_domain_alloc(&bp->switch_domain_id);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "Failed to alloc switch domain: %d\n", rc);
+		else
+			PMD_DRV_LOG(INFO,
+				    "Switch domain allocated %d\n",
+				    bp->switch_domain_id);
+	}
+
+	return rc;
+}
+
 static int
-bnxt_dev_init(struct rte_eth_dev *eth_dev)
+bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	static int version_printed;
@@ -5557,6 +5549,8 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 	if (rc)
 		goto error_free;
 
+	bnxt_alloc_switch_domain(bp);
+
 	/* Pass the information to the rte_eth_dev_close() that it should also
 	 * release the private port resources.
 	 */
@@ -5689,25 +5683,162 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *bp = eth_dev->data->dev_private;
+	struct rte_eth_dev *vf_rep_eth_dev;
+	int ret = 0, i;
+
+	if (!bp)
+		return -EINVAL;
+
+	for (i = 0; i < bp->num_reps; i++) {
+		vf_rep_eth_dev = bp->rep_info[i].vfr_eth_dev;
+		if (!vf_rep_eth_dev)
+			continue;
+		rte_eth_dev_destroy(vf_rep_eth_dev, bnxt_vf_representor_uninit);
+	}
+	ret = rte_eth_dev_destroy(eth_dev, bnxt_dev_uninit);
+
+	return ret;
+}
+
 static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	struct rte_pci_device *pci_dev)
 {
-	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct bnxt),
-		bnxt_dev_init);
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
+	uint16_t num_rep;
+	int i, ret = 0;
+	struct bnxt *backing_bp;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after first level of probe is already invoked
+	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
+	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+	}
+
+	if (num_rep > BNXT_MAX_VF_REPS) {
+		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
+			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
+		ret = -EINVAL;
+		return ret;
+	}
+
+	/* probe representor ports now */
+	if (!backing_eth_dev)
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = -ENODEV;
+		return ret;
+	}
+	backing_bp = backing_eth_dev->data->dev_private;
+
+	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
+		PMD_DRV_LOG(ERR,
+			    "Not a PF or trusted VF. No Representor support\n");
+		/* Returning an error is not an option.
+		 * Applications are not handling this correctly
+		 */
+		return ret;
+	}
+
+	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+		struct bnxt_vf_representor representor = {
+			.vf_id = eth_da.representor_ports[i],
+			.switch_domain_id = backing_bp->switch_domain_id,
+			.parent_priv = backing_bp
+		};
+
+		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
+			PMD_DRV_LOG(ERR, "VF-Rep id %d >= %d MAX VF ID\n",
+				    representor.vf_id, BNXT_MAX_VF_REPS);
+			continue;
+		}
+
+		/* representor port net_bdf_port */
+		snprintf(name, sizeof(name), "net_%s_representor_%d",
+			 pci_dev->device.name, eth_da.representor_ports[i]);
+
+		ret = rte_eth_dev_create(&pci_dev->device, name,
+					 sizeof(struct bnxt_vf_representor),
+					 NULL, NULL,
+					 bnxt_vf_representor_init,
+					 &representor);
+
+		if (!ret) {
+			vf_rep_eth_dev = rte_eth_dev_allocated(name);
+			if (!vf_rep_eth_dev) {
+				PMD_DRV_LOG(ERR, "Failed to find the eth_dev"
+					    " for VF-Rep: %s.", name);
+				bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+				ret = -ENODEV;
+				return ret;
+			}
+			backing_bp->rep_info[representor.vf_id].vfr_eth_dev =
+				vf_rep_eth_dev;
+			backing_bp->num_reps++;
+		} else {
+			PMD_DRV_LOG(ERR, "failed to create bnxt vf "
+				    "representor %s.", name);
+			bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+		}
+	}
+
+	return ret;
 }
 
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
-		return rte_eth_dev_pci_generic_remove(pci_dev,
-				bnxt_dev_uninit);
-	else
+	struct rte_eth_dev *eth_dev;
+
+	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!eth_dev)
+		return ENODEV; /* Invoked typically only by OVS-DPDK, by the
+				* time it comes here the eth_dev is already
+				* deleted by rte_eth_dev_close(), so returning
+				* +ve value will atleast help in proper cleanup
+				*/
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		if (eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_vf_representor_uninit);
+		else
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_dev_uninit);
+	} else {
 		return rte_eth_dev_pci_generic_remove(pci_dev, NULL);
+	}
 }
 
 static struct rte_pci_driver bnxt_rte_pmd = {
 	.id_table = bnxt_pci_id_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+			RTE_PCI_DRV_PROBE_AGAIN, /* Needed in case of VF-REPs
+						  * and OVS-DPDK
+						  */
 	.probe = bnxt_pci_probe,
 	.remove = bnxt_pci_remove,
 };
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
new file mode 100644
index 000000000..21f1b0765
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -0,0 +1,287 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "bnxt_ring.h"
+#include "bnxt_reps.h"
+#include "hsi_struct_def_dpdk.h"
+
+static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
+	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
+	.dev_configure = bnxt_vf_rep_dev_configure_op,
+	.dev_start = bnxt_vf_rep_dev_start_op,
+	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.link_update = bnxt_vf_rep_link_update_op,
+	.dev_close = bnxt_vf_rep_dev_close_op,
+	.dev_stop = bnxt_vf_rep_dev_stop_op
+};
+
+static uint16_t
+bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
+		     __rte_unused struct rte_mbuf **rx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
+		     __rte_unused struct rte_mbuf **tx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
+{
+	struct bnxt_vf_representor *vf_rep_bp = eth_dev->data->dev_private;
+	struct bnxt_vf_representor *rep_params =
+				 (struct bnxt_vf_representor *)params;
+	struct rte_eth_link *link;
+	struct bnxt *parent_bp;
+
+	vf_rep_bp->vf_id = rep_params->vf_id;
+	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
+	vf_rep_bp->parent_priv = rep_params->parent_priv;
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+	eth_dev->data->representor_id = rep_params->vf_id;
+
+	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
+	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
+	       sizeof(vf_rep_bp->mac_addr));
+	eth_dev->data->mac_addrs =
+		(struct rte_ether_addr *)&vf_rep_bp->mac_addr;
+	eth_dev->dev_ops = &bnxt_vf_rep_dev_ops;
+
+	/* No data-path, but need stub Rx/Tx functions to avoid crash
+	 * when testing with ovs-dpdk
+	 */
+	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
+	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
+	/* Link state. Inherited from PF or trusted VF */
+	parent_bp = vf_rep_bp->parent_priv;
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+
+	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
+	bnxt_print_link_info(eth_dev);
+
+	/* Pass the information to the rte_eth_dev_close() that it should also
+	 * release the private port resources.
+	 */
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+	PMD_DRV_LOG(INFO,
+		    "Switch domain id %d: Representor Device %d init done\n",
+		    vf_rep_bp->switch_domain_id, vf_rep_bp->vf_id);
+
+	return 0;
+}
+
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+
+	uint16_t vf_id;
+
+	eth_dev->data->mac_addrs = NULL;
+
+	parent_bp = rep->parent_priv;
+	if (parent_bp) {
+		parent_bp->num_reps--;
+		vf_id = rep->vf_id;
+		if (parent_bp->rep_info) {
+			memset(&parent_bp->rep_info[vf_id], 0,
+			       sizeof(parent_bp->rep_info[vf_id]));
+			/* mark that this representor has been freed */
+		}
+	}
+	eth_dev->dev_ops = NULL;
+	return 0;
+}
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+	struct rte_eth_link *link;
+	int rc;
+
+	parent_bp = rep->parent_priv;
+	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
+
+	/* Link state. Inherited from PF or trusted VF */
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+	bnxt_print_link_info(eth_dev);
+
+	return rc;
+}
+
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_rep_link_update_op(eth_dev, 1);
+
+	return 0;
+}
+
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
+{
+	eth_dev = eth_dev;
+}
+
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_representor_uninit(eth_dev);
+}
+
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp;
+	uint16_t max_vnics, i, j, vpool, vrxq;
+	unsigned int max_rx_rings;
+	int rc = 0;
+
+	/* MAC Specifics */
+	parent_bp = rep_bp->parent_priv;
+	if (!parent_bp) {
+		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
+		return rc;
+	}
+	PMD_DRV_LOG(DEBUG, "Representor dev_info_get_op\n");
+	dev_info->max_mac_addrs = parent_bp->max_l2_ctx;
+	dev_info->max_hash_mac_addrs = 0;
+
+	max_rx_rings = BNXT_MAX_VF_REP_RINGS;
+	/* For the sake of symmetry, max_rx_queues = max_tx_queues */
+	dev_info->max_rx_queues = max_rx_rings;
+	dev_info->max_tx_queues = max_rx_rings;
+	dev_info->reta_size = bnxt_rss_hash_tbl_size(parent_bp);
+	dev_info->hash_key_size = 40;
+	max_vnics = parent_bp->max_vnics;
+
+	/* MTU specifics */
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+	dev_info->max_mtu = BNXT_MAX_MTU;
+
+	/* Fast path specifics */
+	dev_info->min_rx_bufsize = 1;
+	dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+
+	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
+	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
+		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
+	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
+
+	/* *INDENT-OFF* */
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 0,
+		},
+		.rx_free_thresh = 32,
+		/* If no descriptors available, pkts are dropped by default */
+		.rx_drop_en = 1,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = 32,
+			.hthresh = 0,
+			.wthresh = 0,
+		},
+		.tx_free_thresh = 32,
+		.tx_rs_thresh = 32,
+	};
+	eth_dev->data->dev_conf.intr_conf.lsc = 1;
+
+	eth_dev->data->dev_conf.intr_conf.rxq = 1;
+	dev_info->rx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->rx_desc_lim.nb_max = BNXT_MAX_RX_RING_DESC;
+	dev_info->tx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->tx_desc_lim.nb_max = BNXT_MAX_TX_RING_DESC;
+
+	/* *INDENT-ON* */
+
+	/*
+	 * TODO: default_rxconf, default_txconf, rx_desc_lim, and tx_desc_lim
+	 *       need further investigation.
+	 */
+
+	/* VMDq resources */
+	vpool = 64; /* ETH_64_POOLS */
+	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	for (i = 0; i < 4; vpool >>= 1, i++) {
+		if (max_vnics > vpool) {
+			for (j = 0; j < 5; vrxq >>= 1, j++) {
+				if (dev_info->max_rx_queues > vrxq) {
+					if (vpool > vrxq)
+						vpool = vrxq;
+					goto found;
+				}
+			}
+			/* Not enough resources to support VMDq */
+			break;
+		}
+	}
+	/* Not enough resources to support VMDq */
+	vpool = 0;
+	vrxq = 0;
+found:
+	dev_info->max_vmdq_pools = vpool;
+	dev_info->vmdq_queue_num = vrxq;
+
+	dev_info->vmdq_pool_base = 0;
+	dev_info->vmdq_queue_base = 0;
+
+	return 0;
+}
+
+int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
+{
+	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	return 0;
+}
+
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
+
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
new file mode 100644
index 000000000..6048faf08
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_REPS_H_
+#define _BNXT_REPS_H_
+
+#include <rte_malloc.h>
+#include <rte_ethdev.h>
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info);
+int bnxt_vf_rep_dev_configure_op(struct rte_eth_dev *eth_dev);
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl);
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp);
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf);
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+#endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 4306c6039..5c7859cb5 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -21,6 +21,7 @@ sources = files('bnxt_cpr.c',
 	'bnxt_txr.c',
 	'bnxt_util.c',
 	'bnxt_vnic.c',
+	'bnxt_reps.c',
 
 	'tf_core/tf_core.c',
 	'tf_core/bitalloc.c',
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 02/51] net/bnxt: add support for VF-reps data path
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 01/51] net/bnxt: add basic infrastructure for VF representors Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
                     ` (49 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Added code to support Tx/Rx from a VF representor port.
The VF-reps use the Rx/Tx rings of the Trusted VF/PF.
For each VF-rep, the Trusted VF/PF driver issues a VFR_ALLOC FW cmd
that returns "cfa_code" and "cfa_action" values.
The FW sets up the filter tables so that, in the absence of other
rules, VF traffic is by default punted to the parent function,
i.e. either the Trusted VF or the PF.
The cfa_code value in the Rx completion tells the driver which VF the
packet came from. For traffic transmitted from the VF-rep, the Tx BD
is tagged with a cfa_action value that tells the HW to punt it to the
corresponding VF.
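
As a quick illustration of the Rx side only, the sketch below shows
the demultiplexing idea; rep_rx_enqueue() is a placeholder name, and
the real implementation is bnxt_vfr_recv()/bnxt_vf_rep_rx_burst() in
the diff that follows:

	/* Sketch: map the cfa_code from an Rx completion to a VF-rep
	 * and hand the mbuf to that representor instead of the parent
	 * port's Rx burst path.
	 */
	static inline int
	rep_rx_demux(struct bnxt *bp, uint16_t cfa_code,
		     struct rte_mbuf *mbuf)
	{
		uint16_t vf_id = bp->cfa_code_map[cfa_code];

		if (vf_id == BNXT_VF_IDX_INVALID ||
		    !bp->rep_info[vf_id].vfr_eth_dev)
			return 1; /* no representor: keep parent Rx path */

		/* placeholder for queuing the mbuf on the representor's
		 * Rx ring, as bnxt_vfr_recv() does below
		 */
		rep_rx_enqueue(bp->rep_info[vf_id].vfr_eth_dev, mbuf);
		return 0;
	}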

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  30 ++-
 drivers/net/bnxt/bnxt_ethdev.c | 150 ++++++++---
 drivers/net/bnxt/bnxt_reps.c   | 476 +++++++++++++++++++++++++++++++--
 drivers/net/bnxt/bnxt_reps.h   |  11 +
 drivers/net/bnxt/bnxt_rxr.c    |  22 +-
 drivers/net/bnxt/bnxt_rxr.h    |   1 +
 drivers/net/bnxt/bnxt_txq.h    |   1 +
 drivers/net/bnxt/bnxt_txr.c    |   4 +-
 8 files changed, 616 insertions(+), 79 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 9b7b87cee..443d9fee4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -495,6 +495,7 @@ struct bnxt_mark_info {
 
 struct bnxt_rep_info {
 	struct rte_eth_dev	*vfr_eth_dev;
+	pthread_mutex_t		vfr_lock;
 };
 
 /* address space location of register */
@@ -755,7 +756,8 @@ struct bnxt {
 	uint16_t		fw_reset_max_msecs;
 	uint16_t		switch_domain_id;
 	uint16_t		num_reps;
-	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
+	struct bnxt_rep_info	*rep_info;
+	uint16_t                *cfa_code_map;
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -780,12 +782,28 @@ struct bnxt {
  * Structure to store private data for each VF representor instance
  */
 struct bnxt_vf_representor {
-	uint16_t switch_domain_id;
-	uint16_t vf_id;
+	uint16_t		switch_domain_id;
+	uint16_t		vf_id;
+	uint16_t		tx_cfa_action;
+	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
-	struct bnxt	*parent_priv;
-	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
-	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct rte_eth_dev	*parent_dev;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t			dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct bnxt_rx_queue	**rx_queues;
+	unsigned int		rx_nr_rings;
+	unsigned int		tx_nr_rings;
+	uint64_t                tx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                tx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_bytes[BNXT_MAX_VF_REP_RINGS];
+};
+
+struct bnxt_vf_rep_tx_queue {
+	struct bnxt_tx_queue *txq;
+	struct bnxt_vf_representor *bp;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4911745af..4202904c9 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -137,6 +137,7 @@ static void bnxt_cancel_fw_health_check(struct bnxt *bp);
 static int bnxt_restore_vlan_filters(struct bnxt *bp);
 static void bnxt_dev_recover(void *arg);
 static void bnxt_free_error_recovery_info(struct bnxt *bp);
+static void bnxt_free_rep_info(struct bnxt *bp);
 
 int is_bnxt_in_error(struct bnxt *bp)
 {
@@ -5243,7 +5244,7 @@ bnxt_init_locks(struct bnxt *bp)
 
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev)
 {
-	int rc;
+	int rc = 0;
 
 	rc = bnxt_init_fw(bp);
 	if (rc)
@@ -5642,6 +5643,8 @@ bnxt_uninit_locks(struct bnxt *bp)
 {
 	pthread_mutex_destroy(&bp->flow_lock);
 	pthread_mutex_destroy(&bp->def_cp_lock);
+	if (bp->rep_info)
+		pthread_mutex_destroy(&bp->rep_info->vfr_lock);
 }
 
 static int
@@ -5664,6 +5667,7 @@ bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev)
 
 	bnxt_uninit_locks(bp);
 	bnxt_free_flow_stats_info(bp);
+	bnxt_free_rep_info(bp);
 	rte_free(bp->ptp_cfg);
 	bp->ptp_cfg = NULL;
 	return rc;
@@ -5703,56 +5707,73 @@ static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-	struct rte_pci_device *pci_dev)
+static void bnxt_free_rep_info(struct bnxt *bp)
 {
-	char name[RTE_ETH_NAME_MAX_LEN];
-	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
-	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
-	uint16_t num_rep;
-	int i, ret = 0;
-	struct bnxt *backing_bp;
+	rte_free(bp->rep_info);
+	bp->rep_info = NULL;
+	rte_free(bp->cfa_code_map);
+	bp->cfa_code_map = NULL;
+}
 
-	if (pci_dev->device.devargs) {
-		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
-					    &eth_da);
-		if (ret)
-			return ret;
-	}
+static int bnxt_init_rep_info(struct bnxt *bp)
+{
+	int i = 0, rc;
 
-	num_rep = eth_da.nb_representor_ports;
-	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
-		    num_rep);
+	if (bp->rep_info)
+		return 0;
 
-	/* We could come here after first level of probe is already invoked
-	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
-	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
-	 */
-	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
-					 sizeof(struct bnxt),
-					 eth_dev_pci_specific_init, pci_dev,
-					 bnxt_dev_init, NULL);
+	bp->rep_info = rte_zmalloc("bnxt_rep_info",
+				   sizeof(bp->rep_info[0]) * BNXT_MAX_VF_REPS,
+				   0);
+	if (!bp->rep_info) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for rep info\n");
+		return -ENOMEM;
+	}
+	bp->cfa_code_map = rte_zmalloc("bnxt_cfa_code_map",
+				       sizeof(*bp->cfa_code_map) *
+				       BNXT_MAX_CFA_CODE, 0);
+	if (!bp->cfa_code_map) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for cfa_code_map\n");
+		bnxt_free_rep_info(bp);
+		return -ENOMEM;
+	}
 
-		if (ret || !num_rep)
-			return ret;
+	for (i = 0; i < BNXT_MAX_CFA_CODE; i++)
+		bp->cfa_code_map[i] = BNXT_VF_IDX_INVALID;
+
+	rc = pthread_mutex_init(&bp->rep_info->vfr_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Unable to initialize vfr_lock\n");
+		bnxt_free_rep_info(bp);
+		return rc;
 	}
+	return rc;
+}
+
+static int bnxt_rep_port_probe(struct rte_pci_device *pci_dev,
+			       struct rte_eth_devargs eth_da,
+			       struct rte_eth_dev *backing_eth_dev)
+{
+	struct rte_eth_dev *vf_rep_eth_dev;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct bnxt *backing_bp;
+	uint16_t num_rep;
+	int i, ret = 0;
 
+	num_rep = eth_da.nb_representor_ports;
 	if (num_rep > BNXT_MAX_VF_REPS) {
 		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
-			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
-		ret = -EINVAL;
-		return ret;
+			    num_rep, BNXT_MAX_VF_REPS);
+		return -EINVAL;
 	}
 
-	/* probe representor ports now */
-	if (!backing_eth_dev)
-		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = -ENODEV;
-		return ret;
+	if (num_rep > RTE_MAX_ETHPORTS) {
+		PMD_DRV_LOG(ERR,
+			    "nb_representor_ports = %d > %d MAX ETHPORTS\n",
+			    num_rep, RTE_MAX_ETHPORTS);
+		return -EINVAL;
 	}
+
 	backing_bp = backing_eth_dev->data->dev_private;
 
 	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
@@ -5761,14 +5782,17 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		/* Returning an error is not an option.
 		 * Applications are not handling this correctly
 		 */
-		return ret;
+		return 0;
 	}
 
-	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+	if (bnxt_init_rep_info(backing_bp))
+		return 0;
+
+	for (i = 0; i < num_rep; i++) {
 		struct bnxt_vf_representor representor = {
 			.vf_id = eth_da.representor_ports[i],
 			.switch_domain_id = backing_bp->switch_domain_id,
-			.parent_priv = backing_bp
+			.parent_dev = backing_eth_dev
 		};
 
 		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
@@ -5809,6 +5833,48 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return ret;
 }
 
+static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			  struct rte_pci_device *pci_dev)
+{
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev;
+	uint16_t num_rep;
+	int ret = 0;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after first level of probe is already invoked
+	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
+	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	}
+
+	/* probe representor ports now */
+	ret = bnxt_rep_port_probe(pci_dev, eth_da, backing_eth_dev);
+
+	return ret;
+}
+
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct rte_eth_dev *eth_dev;
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 21f1b0765..777179558 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -6,6 +6,11 @@
 #include "bnxt.h"
 #include "bnxt_ring.h"
 #include "bnxt_reps.h"
+#include "bnxt_rxq.h"
+#include "bnxt_rxr.h"
+#include "bnxt_txq.h"
+#include "bnxt_txr.h"
+#include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
@@ -13,25 +18,128 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_configure = bnxt_vf_rep_dev_configure_op,
 	.dev_start = bnxt_vf_rep_dev_start_op,
 	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.rx_queue_release = bnxt_vf_rep_rx_queue_release_op,
 	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.tx_queue_release = bnxt_vf_rep_tx_queue_release_op,
 	.link_update = bnxt_vf_rep_link_update_op,
 	.dev_close = bnxt_vf_rep_dev_close_op,
-	.dev_stop = bnxt_vf_rep_dev_stop_op
+	.dev_stop = bnxt_vf_rep_dev_stop_op,
+	.stats_get = bnxt_vf_rep_stats_get_op,
+	.stats_reset = bnxt_vf_rep_stats_reset_op,
 };
 
-static uint16_t
-bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
-		     __rte_unused struct rte_mbuf **rx_pkts,
-		     __rte_unused uint16_t nb_pkts)
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf)
 {
+	struct bnxt_sw_rx_bd *prod_rx_buf;
+	struct bnxt_rx_ring_info *rep_rxr;
+	struct bnxt_rx_queue *rep_rxq;
+	struct rte_eth_dev *vfr_eth_dev;
+	struct bnxt_vf_representor *vfr_bp;
+	uint16_t vf_id;
+	uint16_t mask;
+	uint8_t que;
+
+	vf_id = bp->cfa_code_map[cfa_code];
+	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
+	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
+		return 1;
+	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	if (!vfr_eth_dev)
+		return 1;
+	vfr_bp = vfr_eth_dev->data->dev_private;
+	if (vfr_bp->rx_cfa_code != cfa_code) {
+		/* cfa_code not meant for this VF rep!!?? */
+		return 1;
+	}
+	/* If rxq_id happens to be > max rep_queue, use rxq0 */
+	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
+	rep_rxq = vfr_bp->rx_queues[que];
+	rep_rxr = rep_rxq->rx_ring;
+	mask = rep_rxr->rx_ring_struct->ring_mask;
+
+	/* Put this mbuf on the RxQ of the Representor */
+	prod_rx_buf =
+		&rep_rxr->rx_buf_ring[rep_rxr->rx_prod++ & mask];
+	if (!prod_rx_buf->mbuf) {
+		prod_rx_buf->mbuf = mbuf;
+		vfr_bp->rx_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_pkts[que]++;
+	} else {
+		vfr_bp->rx_drop_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_drop_pkts[que]++;
+		rte_free(mbuf); /* Representor Rx ring full, drop pkt */
+	}
+
 	return 0;
 }
 
 static uint16_t
-bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
-		     __rte_unused struct rte_mbuf **tx_pkts,
+bnxt_vf_rep_rx_burst(void *rx_queue,
+		     struct rte_mbuf **rx_pkts,
+		     uint16_t nb_pkts)
+{
+	struct bnxt_rx_queue *rxq = rx_queue;
+	struct bnxt_sw_rx_bd *cons_rx_buf;
+	struct bnxt_rx_ring_info *rxr;
+	uint16_t nb_rx_pkts = 0;
+	uint16_t mask, i;
+
+	if (!rxq)
+		return 0;
+
+	rxr = rxq->rx_ring;
+	mask = rxr->rx_ring_struct->ring_mask;
+	for (i = 0; i < nb_pkts; i++) {
+		cons_rx_buf = &rxr->rx_buf_ring[rxr->rx_cons & mask];
+		if (!cons_rx_buf->mbuf)
+			return nb_rx_pkts;
+		rx_pkts[nb_rx_pkts] = cons_rx_buf->mbuf;
+		rx_pkts[nb_rx_pkts]->port = rxq->port_id;
+		cons_rx_buf->mbuf = NULL;
+		nb_rx_pkts++;
+		rxr->rx_cons++;
+	}
+
+	return nb_rx_pkts;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
 		     __rte_unused uint16_t nb_pkts)
 {
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+	struct bnxt_tx_queue *ptxq;
+	struct bnxt *parent;
+	struct  bnxt_vf_representor *vf_rep_bp;
+	int qid;
+	int rc;
+	int i;
+
+	if (!vfr_txq)
+		return 0;
+
+	qid = vfr_txq->txq->queue_id;
+	vf_rep_bp = vfr_txq->bp;
+	parent = vf_rep_bp->parent_dev->data->dev_private;
+	pthread_mutex_lock(&parent->rep_info->vfr_lock);
+	ptxq = parent->tx_queues[qid];
+
+	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+
+	for (i = 0; i < nb_pkts; i++) {
+		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
+		vf_rep_bp->tx_pkts[qid]++;
+	}
+
+	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
+	ptxq->tx_cfa_action = 0;
+	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
+
+	return rc;
+
 	return 0;
 }
 
@@ -45,7 +153,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
-	vf_rep_bp->parent_priv = rep_params->parent_priv;
+	vf_rep_bp->parent_dev = rep_params->parent_dev;
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	eth_dev->data->representor_id = rep_params->vf_id;
@@ -63,7 +171,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
 	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
 	/* Link state. Inherited from PF or trusted VF */
-	parent_bp = vf_rep_bp->parent_priv;
+	parent_bp = vf_rep_bp->parent_dev->data->dev_private;
 	link = &parent_bp->eth_dev->data->dev_link;
 
 	eth_dev->data->dev_link.link_speed = link->link_speed;
@@ -94,18 +202,18 @@ int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
 	uint16_t vf_id;
 
 	eth_dev->data->mac_addrs = NULL;
-
-	parent_bp = rep->parent_priv;
-	if (parent_bp) {
-		parent_bp->num_reps--;
-		vf_id = rep->vf_id;
-		if (parent_bp->rep_info) {
-			memset(&parent_bp->rep_info[vf_id], 0,
-			       sizeof(parent_bp->rep_info[vf_id]));
-			/* mark that this representor has been freed */
-		}
-	}
 	eth_dev->dev_ops = NULL;
+
+	parent_bp = rep->parent_dev->data->dev_private;
+	if (!parent_bp)
+		return 0;
+
+	parent_bp->num_reps--;
+	vf_id = rep->vf_id;
+	if (parent_bp->rep_info)
+		memset(&parent_bp->rep_info[vf_id], 0,
+		       sizeof(parent_bp->rep_info[vf_id]));
+		/* mark that this representor has been freed */
 	return 0;
 }
 
@@ -117,7 +225,7 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	struct rte_eth_link *link;
 	int rc;
 
-	parent_bp = rep->parent_priv;
+	parent_bp = rep->parent_dev->data->dev_private;
 	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
 
 	/* Link state. Inherited from PF or trusted VF */
@@ -132,16 +240,134 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
+static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already allocated in FW */
+	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * Alloc VF rep rules in CFA after default VNIC is created.
+	 * Otherwise the FW will create the VF-rep rules with
+	 * default drop action.
+	 */
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more/less the same job
+	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
+				     vfr->vf_id,
+				     &vfr->tx_cfa_action,
+				     &vfr->rx_cfa_code);
+	 */
+	if (!rc) {
+		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
+			    vfr->vf_id);
+	} else {
+		PMD_DRV_LOG(ERR,
+			    "Failed to alloc representor %d in FW\n",
+			    vfr->vf_id);
+	}
+
+	return rc;
+}
+
+static void bnxt_vf_rep_free_rx_mbufs(struct bnxt_vf_representor *rep_bp)
+{
+	struct bnxt_rx_queue *rxq;
+	unsigned int i;
+
+	for (i = 0; i < rep_bp->rx_nr_rings; i++) {
+		rxq = rep_bp->rx_queues[i];
+		bnxt_rx_queue_release_mbufs(rxq);
+	}
+}
+
 int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 {
-	bnxt_vf_rep_link_update_op(eth_dev, 1);
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int rc;
 
-	return 0;
+	rc = bnxt_vfr_alloc(rep_bp);
+
+	if (!rc) {
+		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
+		eth_dev->tx_pkt_burst = &bnxt_vf_rep_tx_burst;
+
+		bnxt_vf_rep_link_update_op(eth_dev, 1);
+	} else {
+		eth_dev->data->dev_link.link_status = 0;
+		bnxt_vf_rep_free_rx_mbufs(rep_bp);
+	}
+
+	return rc;
+}
+
+static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already freed in FW */
+	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more/less the same job
+	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
+				    vfr->vf_id);
+	 */
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to free representor %d in FW\n",
+			    vfr->vf_id);
+		return rc;
+	}
+
+	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
+	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
+		    vfr->vf_id);
+	vfr->tx_cfa_action = 0;
+	vfr->rx_cfa_code = 0;
+
+	return rc;
 }
 
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *vfr_bp = eth_dev->data->dev_private;
+
+	/* Avoid crashes as we are about to free queues */
+	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
+	eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+
+	bnxt_vfr_free(vfr_bp);
+
+	if (eth_dev->data->dev_started)
+		eth_dev->data->dev_link.link_status = 0;
+
+	bnxt_vf_rep_free_rx_mbufs(vfr_bp);
 }
 
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
@@ -159,7 +385,7 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	int rc = 0;
 
 	/* MAC Specifics */
-	parent_bp = rep_bp->parent_priv;
+	parent_bp = rep_bp->parent_dev->data->dev_private;
 	if (!parent_bp) {
 		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
 		return rc;
@@ -257,7 +483,13 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
 {
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+
 	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	rep_bp->rx_queues = (void *)eth_dev->data->rx_queues;
+	rep_bp->tx_nr_rings = eth_dev->data->nb_tx_queues;
+	rep_bp->rx_nr_rings = eth_dev->data->nb_rx_queues;
+
 	return 0;
 }
 
@@ -269,9 +501,94 @@ int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  rx_conf,
 				  __rte_unused struct rte_mempool *mp)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_rx_queue *parent_rxq;
+	struct bnxt_rx_queue *rxq;
+	struct bnxt_sw_rx_bd *buf_ring;
+	int rc = 0;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Rx ring %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_RX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid\n", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_rxq = parent_bp->rx_queues[queue_idx];
+	if (!parent_rxq) {
+		PMD_DRV_LOG(ERR, "Parent RxQ has not been configured yet\n");
+		return -EINVAL;
+	}
+
+	if (nb_desc != parent_rxq->nb_rx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d do not match parent rxq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->rx_queues) {
+		rxq = eth_dev->data->rx_queues[queue_idx];
+		if (rxq)
+			bnxt_rx_queue_release_op(rxq);
+	}
+
+	rxq = rte_zmalloc_socket("bnxt_vfr_rx_queue",
+				 sizeof(struct bnxt_rx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_rx_queue allocation failed!\n");
+		return -ENOMEM;
+	}
+
+	rxq->nb_rx_desc = nb_desc;
+
+	rc = bnxt_init_rx_ring_struct(rxq, socket_id);
+	if (rc)
+		goto out;
+
+	buf_ring = rte_zmalloc_socket("bnxt_rx_vfr_buf_ring",
+				      sizeof(struct bnxt_sw_rx_bd) *
+				      rxq->rx_ring->rx_ring_struct->ring_size,
+				      RTE_CACHE_LINE_SIZE, socket_id);
+	if (!buf_ring) {
+		PMD_DRV_LOG(ERR, "bnxt_rx_vfr_buf_ring allocation failed!\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	rxq->rx_ring->rx_buf_ring = buf_ring;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = eth_dev->data->port_id;
+	eth_dev->data->rx_queues[queue_idx] = rxq;
 
 	return 0;
+
+out:
+	if (rxq)
+		bnxt_rx_queue_release_op(rxq);
+
+	return rc;
+}
+
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue)
+{
+	struct bnxt_rx_queue *rxq = (struct bnxt_rx_queue *)rx_queue;
+
+	if (!rxq)
+		return;
+
+	bnxt_rx_queue_release_mbufs(rxq);
+
+	bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
+	bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
+	bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
+
+	rte_free(rxq);
 }
 
 int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
@@ -281,7 +598,112 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_tx_queue *parent_txq, *txq;
+	struct bnxt_vf_rep_tx_queue *vfr_txq;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Tx rings %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_TX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_txq = parent_bp->tx_queues[queue_idx];
+	if (!parent_txq) {
+		PMD_DRV_LOG(ERR, "Parent TxQ has not been configured yet\n");
+		return -EINVAL;
+	}
 
+	if (nb_desc != parent_txq->nb_tx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d do not match parent txq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->tx_queues) {
+		vfr_txq = eth_dev->data->tx_queues[queue_idx];
+		bnxt_vf_rep_tx_queue_release_op(vfr_txq);
+		vfr_txq = NULL;
+	}
+
+	vfr_txq = rte_zmalloc_socket("bnxt_vfr_tx_queue",
+				     sizeof(struct bnxt_vf_rep_tx_queue),
+				     RTE_CACHE_LINE_SIZE, socket_id);
+	if (!vfr_txq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_tx_queue allocation failed!");
+		return -ENOMEM;
+	}
+	txq = rte_zmalloc_socket("bnxt_tx_queue",
+				 sizeof(struct bnxt_tx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "bnxt_tx_queue allocation failed!");
+		rte_free(vfr_txq);
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->queue_id = queue_idx;
+	txq->port_id = eth_dev->data->port_id;
+	vfr_txq->txq = txq;
+	vfr_txq->bp = rep_bp;
+	eth_dev->data->tx_queues[queue_idx] = vfr_txq;
+
+	return 0;
+}
+
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue)
+{
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+
+	if (!vfr_txq)
+		return;
+
+	rte_free(vfr_txq->txq);
+	rte_free(vfr_txq);
+}
+
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		stats->obytes += rep_bp->tx_bytes[i];
+		stats->opackets += rep_bp->tx_pkts[i];
+		stats->ibytes += rep_bp->rx_bytes[i];
+		stats->ipackets += rep_bp->rx_pkts[i];
+		stats->imissed += rep_bp->rx_drop_pkts[i];
+
+		stats->q_ipackets[i] = rep_bp->rx_pkts[i];
+		stats->q_ibytes[i] = rep_bp->rx_bytes[i];
+		stats->q_opackets[i] = rep_bp->tx_pkts[i];
+		stats->q_obytes[i] = rep_bp->tx_bytes[i];
+		stats->q_errors[i] = rep_bp->rx_drop_pkts[i];
+	}
+
+	return 0;
+}
+
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		rep_bp->tx_pkts[i] = 0;
+		rep_bp->tx_bytes[i] = 0;
+		rep_bp->rx_pkts[i] = 0;
+		rep_bp->rx_bytes[i] = 0;
+		rep_bp->rx_drop_pkts[i] = 0;
+	}
 	return 0;
 }
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index 6048faf08..5c2e0a0b9 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -9,6 +9,12 @@
 #include <rte_malloc.h>
 #include <rte_ethdev.h>
 
+#define BNXT_MAX_CFA_CODE               65536
+#define BNXT_VF_IDX_INVALID             0xffff
+
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
@@ -30,6 +36,11 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused unsigned int socket_id,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf);
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue);
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue);
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats);
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev);
 #endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 11807f409..37b534fc2 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -12,6 +12,7 @@
 #include <rte_memory.h>
 
 #include "bnxt.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxr.h"
 #include "bnxt_rxq.h"
@@ -539,7 +540,7 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp,
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
-			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
+		       struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
@@ -735,6 +736,20 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
+	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
+	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
+		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
+				   mbuf)) {
+			/* Now return an error so that nb_rx_pkts is not
+			 * incremented.
+			 * This packet was meant to be given to the representor.
+			 * So no need to account the packet and give it to
+			 * parent Rx burst function.
+			 */
+			rc = -ENODEV;
+		}
+	}
+
 next_rx:
 
 	*raw_cons = tmp_raw_cons;
@@ -751,6 +766,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint32_t raw_cons = cpr->cp_raw_cons;
 	uint32_t cons;
 	int nb_rx_pkts = 0;
+	int nb_rep_rx_pkts = 0;
 	struct rx_pkt_cmpl *rxcmp;
 	uint16_t prod = rxr->rx_prod;
 	uint16_t ag_prod = rxr->ag_prod;
@@ -784,6 +800,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				nb_rx_pkts++;
 			if (rc == -EBUSY)	/* partial completion */
 				break;
+			if (rc == -ENODEV)	/* completion for representor */
+				nb_rep_rx_pkts++;
 		} else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
 			evt =
 			bnxt_event_hwrm_resp_handler(rxq->bp,
@@ -802,7 +820,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 
 	cpr->cp_raw_cons = raw_cons;
-	if (!nb_rx_pkts && !evt) {
+	if (!nb_rx_pkts && !nb_rep_rx_pkts && !evt) {
 		/*
 		 * For PMD, there is no need to keep on pushing to REARM
 		 * the doorbell if there are no new completions
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 811dcd86b..e60c97fa1 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -188,6 +188,7 @@ struct bnxt_sw_rx_bd {
 struct bnxt_rx_ring_info {
 	uint16_t		rx_prod;
 	uint16_t		ag_prod;
+	uint16_t                rx_cons; /* Needed for representor */
 	struct bnxt_db_info     rx_db;
 	struct bnxt_db_info     ag_db;
 
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 37a3f9539..69ff89aab 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -29,6 +29,7 @@ struct bnxt_tx_queue {
 	struct bnxt		*bp;
 	int			index;
 	int			tx_wake_thresh;
+	uint32_t                tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 16021407e..d7e193d38 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT))
+				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +184,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = 0;
+		cfa_action = txq->tx_cfa_action;
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 03/51] net/bnxt: get IDs for VF-Rep endpoint
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 01/51] net/bnxt: add basic infrastructure for VF representors Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
                     ` (48 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Use 'first_vf_id' and the 'vf_id' passed in when a representor is
added to obtain the PCI function ID (FID) of the VF (the VFR
endpoint).
Use that FID as input to the FUNC_QCFG HWRM cmd to obtain the default
vNIC ID of the VF.
Along with the default vNIC ID returned by HWRM_FUNC_QCFG for the FW
FID of the VF-rep endpoint, obtain and store its function SVIF.
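
For reference, a minimal sketch of how the new helper is meant to be
used (the actual call site is bnxt_vf_representor_init() in the diff
below; error handling is trimmed and the local names are only
illustrative):

	uint16_t fw_fid = rep_params->vf_id + parent_bp->first_vf_id;
	uint16_t dflt_vnic_id, svif;
	int rc;

	/* FUNC_QCFG on the endpoint FID returns its default vNIC/SVIF */
	rc = bnxt_hwrm_get_dflt_vnic_svif(parent_bp, fw_fid,
					  &dflt_vnic_id, &svif);
	if (rc)
		PMD_DRV_LOG(ERR, "Failed to get default vnic id of VF\n");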

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |  3 +++
 drivers/net/bnxt/bnxt_hwrm.c | 27 +++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h |  4 ++++
 drivers/net/bnxt/bnxt_reps.c | 12 ++++++++++++
 4 files changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 443d9fee4..7afbd5cab 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -784,6 +784,9 @@ struct bnxt {
 struct bnxt_vf_representor {
 	uint16_t		switch_domain_id;
 	uint16_t		vf_id;
+	uint16_t		fw_fid;
+	uint16_t		dflt_vnic_id;
+	uint16_t		svif;
 	uint16_t		tx_cfa_action;
 	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 945bc9018..ed42e58d4 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,33 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t svif_info;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	req.fid = rte_cpu_to_le_16(fid);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	if (vnic_id)
+		*vnic_id = rte_le_to_cpu_16(resp->dflt_vnic_id);
+
+	svif_info = rte_le_to_cpu_16(resp->svif_info);
+	if (svif && (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID))
+		*svif = svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 {
 	struct hwrm_port_mac_qcfg_input req = {0};
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 58b414d4f..8d19998df 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -270,4 +270,8 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 				 enum bnxt_flow_dir dir,
 				 uint16_t cntr,
 				 uint16_t num_entries);
+int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
+			       uint16_t *vnic_id);
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif);
 #endif
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 777179558..ea6f0010f 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -150,6 +150,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 				 (struct bnxt_vf_representor *)params;
 	struct rte_eth_link *link;
 	struct bnxt *parent_bp;
+	int rc = 0;
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
@@ -179,6 +180,17 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_link.link_status = link->link_status;
 	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
 
+	vf_rep_bp->fw_fid = rep_params->vf_id + parent_bp->first_vf_id;
+	PMD_DRV_LOG(INFO, "vf_rep->fw_fid = %d\n", vf_rep_bp->fw_fid);
+	rc = bnxt_hwrm_get_dflt_vnic_svif(parent_bp, vf_rep_bp->fw_fid,
+					  &vf_rep_bp->dflt_vnic_id,
+					  &vf_rep_bp->svif);
+	if (rc)
+		PMD_DRV_LOG(ERR, "Failed to get default vnic id of VF\n");
+	else
+		PMD_DRV_LOG(INFO, "vf_rep->dflt_vnic_id = %d\n",
+			    vf_rep_bp->dflt_vnic_id);
+
 	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
 	bnxt_print_link_info(eth_dev);
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 04/51] net/bnxt: initialize parent PF information
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (2 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
                     ` (47 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev
  Cc: Lance Richardson, Venkat Duvvuru, Somnath Kotur, Kalesh AP,
	Kishore Padmanabha

From: Lance Richardson <lance.richardson@broadcom.com>

Add support to query parent PF information (MAC address,
function ID, port ID and default VNIC) from firmware.

Current firmware returns zero for the parent default VNIC;
a temporary Wh+-specific workaround is included until that
can be fixed.
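
The workaround itself is a pure fallback on the queried values; a
minimal sketch, assuming (as in the hunk below) that FID 2 identifies
the first PF on current Wh+ firmware:

#include <stdint.h>

/* Fallback for firmware that reports 0 as the parent default VNIC:
 * keep the firmware value when it is non-zero, otherwise fall back to
 * the Wh+-specific defaults (VNIC 0x100 for FID 2, VNIC 1 otherwise).
 */
static uint16_t parent_dflt_vnic(uint16_t parent_fid, uint16_t fw_vnic)
{
	if (fw_vnic != 0)
		return fw_vnic;
	return parent_fid == 2 ? 0x100 : 1;
}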

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  9 ++++++++
 drivers/net/bnxt/bnxt_ethdev.c | 23 +++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 42 ++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 75 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 7afbd5cab..2b87899a4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -217,6 +217,14 @@ struct bnxt_child_vf_info {
 	bool			persist_stats;
 };
 
+struct bnxt_parent_info {
+#define	BNXT_PF_FID_INVALID	0xFFFF
+	uint16_t		fid;
+	uint16_t		vnic;
+	uint16_t		port_id;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
@@ -738,6 +746,7 @@ struct bnxt {
 #define BNXT_OUTER_TPID_BD_SHFT	16
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
+	struct bnxt_parent_info	*parent;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4202904c9..bf018be16 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -97,6 +97,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+
 static const char *const bnxt_dev_args[] = {
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
@@ -173,6 +174,11 @@ uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 	return bnxt_rss_ctxts(bp) * BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
+static void bnxt_free_parent_info(struct bnxt *bp)
+{
+	rte_free(bp->parent);
+}
+
 static void bnxt_free_pf_info(struct bnxt *bp)
 {
 	rte_free(bp->pf);
@@ -223,6 +229,16 @@ static void bnxt_free_mem(struct bnxt *bp, bool reconfig)
 	bp->grp_info = NULL;
 }
 
+static int bnxt_alloc_parent_info(struct bnxt *bp)
+{
+	bp->parent = rte_zmalloc("bnxt_parent_info",
+				 sizeof(struct bnxt_parent_info), 0);
+	if (bp->parent == NULL)
+		return -ENOMEM;
+
+	return 0;
+}
+
 static int bnxt_alloc_pf_info(struct bnxt *bp)
 {
 	bp->pf = rte_zmalloc("bnxt_pf_info", sizeof(struct bnxt_pf_info), 0);
@@ -1322,6 +1338,7 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	bnxt_free_cos_queues(bp);
 	bnxt_free_link_info(bp);
 	bnxt_free_pf_info(bp);
+	bnxt_free_parent_info(bp);
 
 	eth_dev->dev_ops = NULL;
 	eth_dev->rx_pkt_burst = NULL;
@@ -5210,6 +5227,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_port_mac_qcfg(bp);
 
+	bnxt_hwrm_parent_pf_qcfg(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
@@ -5528,6 +5547,10 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 	if (rc)
 		goto error_free;
 
+	rc = bnxt_alloc_parent_info(bp);
+	if (rc)
+		goto error_free;
+
 	rc = bnxt_alloc_hwrm_resources(bp);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index ed42e58d4..347e1c71e 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,48 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	int rc;
+
+	if (!BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	if (!bp->parent)
+		return -EINVAL;
+
+	bp->parent->fid = BNXT_PF_FID_INVALID;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+
+	req.fid = rte_cpu_to_le_16(0xfffe); /* Request parent PF information. */
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	memcpy(bp->parent->mac_addr, resp->mac_address, RTE_ETHER_ADDR_LEN);
+	bp->parent->vnic = rte_le_to_cpu_16(resp->dflt_vnic_id);
+	bp->parent->fid = rte_le_to_cpu_16(resp->fid);
+	bp->parent->port_id = rte_le_to_cpu_16(resp->port_id);
+
+	/* FIXME: Temporary workaround - remove when firmware issue is fixed. */
+	if (bp->parent->vnic == 0) {
+		PMD_DRV_LOG(ERR, "Error: parent VNIC unavailable.\n");
+		/* Use hard-coded values appropriate for current Wh+ fw. */
+		if (bp->parent->fid == 2)
+			bp->parent->vnic = 0x100;
+		else
+			bp->parent->vnic = 1;
+	}
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 8d19998df..ef8997500 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -274,4 +274,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 			       uint16_t *vnic_id);
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 #endif
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 05/51] net/bnxt: modify port db dev interface
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (3 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 06/51] net/bnxt: get port and function info Ajit Khaparde
                     ` (46 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Modify ulp_port_db_dev_port_intf_update prototype to take
"struct rte_eth_dev *" as the second parameter.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    | 4 ++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 5 +++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.h | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 0c3c638ce..c7281ab9a 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -548,7 +548,7 @@ bnxt_ulp_init(struct bnxt *bp)
 		}
 
 		/* update the port database */
-		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 		if (rc) {
 			BNXT_TF_DBG(ERR,
 				    "Failed to update port database\n");
@@ -584,7 +584,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	}
 
 	/* update the port database */
-	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to update port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index e3b924289..66b584026 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -104,10 +104,11 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp)
+					 struct rte_eth_dev *eth_dev)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint32_t port_id = bp->eth_dev->data->port_id;
+	struct bnxt *bp = eth_dev->data->dev_private;
+	uint32_t port_id = eth_dev->data->port_id;
 	uint32_t ifindex;
 	struct ulp_interface_info *intf;
 	int32_t rc;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 271c29a47..929a5a510 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -71,7 +71,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp);
+					 struct rte_eth_dev *eth_dev);
 
 /*
  * Api to get the ulp ifindex for a given device port.
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 06/51] net/bnxt: get port and function info
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (4 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
                     ` (45 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kalesh AP, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Add helper functions to get port- and function-related information
such as parif, physical port id and vport id.
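
The helpers resolve a VF representor through its parent device, so the
same calls work for PF, VF and VF-rep ports. A short usage sketch
(the function name is illustrative and the bnxt PMD headers are
assumed to be included):

/* Gather the TruFlow-related identifiers for a DPDK port. */
static void sketch_port_info(uint16_t port_id)
{
	uint16_t phy_port = bnxt_get_phy_port_id(port_id);
	uint16_t parif = bnxt_get_parif(port_id);
	uint16_t vport = bnxt_get_vport(port_id); /* one-hot: 1 << phy_port */

	PMD_DRV_LOG(INFO, "phy_port %d parif %d vport 0x%x\n",
		    phy_port, parif, vport);
}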

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  8 ++++
 drivers/net/bnxt/bnxt_ethdev.c           | 58 ++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h | 10 ++++
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    | 10 ----
 4 files changed, 76 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 2b87899a4..0bdf8f5ba 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -855,6 +855,9 @@ extern const struct rte_flow_ops bnxt_flow_ops;
 	} \
 } while (0)
 
+#define	BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)	\
+		((eth_dev)->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+
 extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG_RAW(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
@@ -870,6 +873,11 @@ void bnxt_ulp_deinit(struct bnxt *bp);
 uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 uint16_t bnxt_get_fw_func_id(uint16_t port);
+uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_phy_port_id(uint16_t port);
+uint16_t bnxt_get_vport(uint16_t port);
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port);
 
 void bnxt_cancel_fc_thread(struct bnxt *bp);
 void bnxt_flow_cnt_alarm_cb(void *arg);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index bf018be16..af88b360f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -28,6 +28,7 @@
 #include "bnxt_vnic.h"
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
+#include "bnxt_tf_common.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -5101,6 +5102,63 @@ bnxt_get_fw_func_id(uint16_t port)
 	return bp->fw_fid;
 }
 
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev))
+		return BNXT_ULP_INTF_TYPE_VF_REP;
+
+	bp = eth_dev->data->dev_private;
+	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
+			   : BNXT_ULP_INTF_TYPE_VF;
+}
+
+uint16_t
+bnxt_get_phy_port_id(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->pf->port_id : bp->parent->port_id;
+}
+
+uint16_t
+bnxt_get_parif(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->fw_fid - 1 : bp->parent->fid - 1;
+}
+
+uint16_t
+bnxt_get_vport(uint16_t port_id)
+{
+	return (1 << bnxt_get_phy_port_id(port_id));
+}
+
 static void bnxt_alloc_error_recovery_info(struct bnxt *bp)
 {
 	struct bnxt_error_recovery_info *info = bp->recovery_info;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f41757908..f772d4919 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -44,6 +44,16 @@ enum ulp_direction_type {
 	ULP_DIR_EGRESS,
 };
 
+/* enumeration of the interface types */
+enum bnxt_ulp_intf_type {
+	BNXT_ULP_INTF_TYPE_INVALID = 0,
+	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_VF,
+	BNXT_ULP_INTF_TYPE_PF_REP,
+	BNXT_ULP_INTF_TYPE_VF_REP,
+	BNXT_ULP_INTF_TYPE_LAST
+};
+
 struct bnxt_ulp_mark_tbl *
 bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 929a5a510..604c4385a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -10,16 +10,6 @@
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
 
-/* enumeration of the interface types */
-enum bnxt_ulp_intf_type {
-	BNXT_ULP_INTF_TYPE_INVALID = 0,
-	BNXT_ULP_INTF_TYPE_PF = 1,
-	BNXT_ULP_INTF_TYPE_VF,
-	BNXT_ULP_INTF_TYPE_PF_REP,
-	BNXT_ULP_INTF_TYPE_VF_REP,
-	BNXT_ULP_INTF_TYPE_LAST
-};
-
 /* Structure for the Port database resource information. */
 struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 07/51] net/bnxt: add support for hwrm port phy qcaps
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (5 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 06/51] net/bnxt: get port and function info Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
                     ` (44 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kalesh AP, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue HWRM_PORT_PHY_QCAPS to the firmware to get the physical
port count of the device.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c |  2 ++
 drivers/net/bnxt/bnxt_hwrm.c   | 22 ++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 26 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0bdf8f5ba..65862abdc 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -747,6 +747,7 @@ struct bnxt {
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
 	struct bnxt_parent_info	*parent;
+	uint8_t			port_cnt;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index af88b360f..697cd6651 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5287,6 +5287,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_parent_pf_qcfg(bp);
 
+	bnxt_hwrm_port_phy_qcaps(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 347e1c71e..e6a28d07c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1330,6 +1330,28 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	return rc;
 }
 
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp)
+{
+	int rc = 0;
+	struct hwrm_port_phy_qcaps_input req = {0};
+	struct hwrm_port_phy_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	HWRM_PREP(&req, HWRM_PORT_PHY_QCAPS, BNXT_USE_CHIMP_MB);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	bp->port_cnt = resp->port_cnt;
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 static bool bnxt_find_lossy_profile(struct bnxt *bp)
 {
 	int i = 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index ef8997500..87cd40779 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -275,4 +275,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
 #endif
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 08/51] net/bnxt: modify port db to handle more info
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (6 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 09/51] net/bnxt: add support for exact match Ajit Khaparde
                     ` (43 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Apart from func_svif, func_id and vnic, port_db now stores and
retrieves func_spif, func_parif, phy_port_id, port_svif, port_spif,
port_parif and port_vport. New helper functions have been added to
retrieve this additional information.
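
In practice the direction selects between the function-level and the
physical-port-level values; a short usage sketch for the svif lookup
(the function name is illustrative, the tf_ulp headers are assumed to
be included, and ULP_DIR_INGRESS is assumed to be the ingress member
of enum ulp_direction_type):

static int32_t sketch_get_svifs(struct bnxt_ulp_context *ulp_ctx,
				uint32_t ifindex)
{
	uint16_t tx_svif = 0, rx_svif = 0;
	int32_t rc;

	/* Egress uses the function's own svif... */
	rc = ulp_port_db_svif_get(ulp_ctx, ifindex, ULP_DIR_EGRESS,
				  &tx_svif);
	if (rc)
		return rc;

	/* ...while ingress uses the svif of the underlying physical port. */
	return ulp_port_db_svif_get(ulp_ctx, ifindex, ULP_DIR_INGRESS,
				    &rx_svif);
}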

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 145 +++++++++++++++++++++-----
 drivers/net/bnxt/tf_ulp/ulp_port_db.h |  72 ++++++++++---
 2 files changed, 179 insertions(+), 38 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 66b584026..ea27ef41f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -106,13 +106,12 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 					 struct rte_eth_dev *eth_dev)
 {
-	struct bnxt_ulp_port_db *port_db;
-	struct bnxt *bp = eth_dev->data->dev_private;
 	uint32_t port_id = eth_dev->data->port_id;
-	uint32_t ifindex;
+	struct ulp_phy_port_info *port_data;
+	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	uint32_t ifindex;
 	int32_t rc;
-	struct bnxt_vnic_info *vnic;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db) {
@@ -133,22 +132,22 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 
 	/* update the interface details */
 	intf = &port_db->ulp_intf_list[ifindex];
-	if (BNXT_PF(bp) || BNXT_VF(bp)) {
-		if (BNXT_PF(bp)) {
-			intf->type = BNXT_ULP_INTF_TYPE_PF;
-			intf->port_svif = bp->port_svif;
-		} else {
-			intf->type = BNXT_ULP_INTF_TYPE_VF;
-		}
-		intf->func_id = bp->fw_fid;
-		intf->func_svif = bp->func_svif;
-		vnic = BNXT_GET_DEFAULT_VNIC(bp);
-		if (vnic)
-			intf->default_vnic = vnic->fw_vnic_id;
-		intf->bp = bp;
-		memcpy(intf->mac_addr, bp->mac_addr, sizeof(intf->mac_addr));
-	} else {
-		BNXT_TF_DBG(ERR, "Invalid interface type\n");
+
+	intf->type = bnxt_get_interface_type(port_id);
+
+	intf->func_id = bnxt_get_fw_func_id(port_id);
+	intf->func_svif = bnxt_get_svif(port_id, 1);
+	intf->func_spif = bnxt_get_phy_port_id(port_id);
+	intf->func_parif = bnxt_get_parif(port_id);
+	intf->default_vnic = bnxt_get_vnic_id(port_id);
+	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+
+	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
+		port_data = &port_db->phy_port_list[intf->phy_port_id];
+		port_data->port_svif = bnxt_get_svif(port_id, 0);
+		port_data->port_spif = bnxt_get_phy_port_id(port_id);
+		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_vport = bnxt_get_vport(port_id);
 	}
 
 	return 0;
@@ -209,7 +208,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 }
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -225,16 +224,88 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS)
+	if (dir == ULP_DIR_EGRESS) {
 		*svif = port_db->ulp_intf_list[ifindex].func_svif;
-	else
-		*svif = port_db->ulp_intf_list[ifindex].port_svif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*svif = port_db->phy_port_list[phy_port_id].port_svif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *spif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*spif = port_db->phy_port_list[phy_port_id].port_spif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *parif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*parif = port_db->phy_port_list[phy_port_id].port_parif;
+	}
+
 	return 0;
 }
 
@@ -262,3 +333,29 @@ ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
 	return 0;
 }
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint16_t *vport)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+	*vport = port_db->phy_port_list[phy_port_id].port_vport;
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 604c4385a..87de3bcbc 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -15,11 +15,17 @@ struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
 	uint16_t		func_id;
 	uint16_t		func_svif;
-	uint16_t		port_svif;
+	uint16_t		func_spif;
+	uint16_t		func_parif;
 	uint16_t		default_vnic;
-	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
-	/* back pointer to the bnxt driver, it is null for rep ports */
-	struct bnxt		*bp;
+	uint16_t		phy_port_id;
+};
+
+struct ulp_phy_port_info {
+	uint16_t	port_svif;
+	uint16_t	port_spif;
+	uint16_t	port_parif;
+	uint16_t	port_vport;
 };
 
 /* Structure for the Port database */
@@ -29,6 +35,7 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
 };
 
 /*
@@ -74,8 +81,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
-				  uint32_t port_id,
-				  uint32_t *ifindex);
+				  uint32_t port_id, uint32_t *ifindex);
 
 /*
  * Api to get the function id for a given ulp ifindex.
@@ -88,11 +94,10 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex,
-			    uint16_t *func_id);
+			    uint32_t ifindex, uint16_t *func_id);
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -103,9 +108,36 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
-		     uint32_t ifindex,
-		     uint32_t dir,
-		     uint16_t *svif);
+		     uint32_t ifindex, uint32_t dir, uint16_t *svif);
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex, uint32_t dir, uint16_t *spif);
+
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint32_t dir, uint16_t *parif);
 
 /*
  * Api to get the vnic id for a given ulp ifindex.
@@ -118,7 +150,19 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex,
-			     uint16_t *vnic);
+			     uint32_t ifindex, uint16_t *vnic);
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex,	uint16_t *vport);
 
 #endif /* _ULP_PORT_DB_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 09/51] net/bnxt: add support for exact match
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (7 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
                     ` (42 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Add Exact Match support
- Create an EM table pool of memory indices (sketched below)
- Add an API to insert exact match internal entries
- Send EM internal insert and delete requests to firmware
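
A minimal, self-contained sketch of such an index pool (names and
layout are illustrative only, not the tf_core API):

#include <stdint.h>
#include <stdlib.h>

/* LIFO pool of EM record indices: allocation pops an index off the
 * stack, freeing pushes it back.
 */
struct idx_pool {
	uint32_t *items;
	int32_t top;		/* -1 when the pool is empty */
	uint32_t size;
};

static int idx_pool_init(struct idx_pool *p, uint32_t num_entries)
{
	uint32_t i;

	p->items = malloc(num_entries * sizeof(*p->items));
	if (p->items == NULL)
		return -1;
	p->size = num_entries;
	/* Seed the pool with every index. */
	for (i = 0; i < num_entries; i++)
		p->items[i] = i;
	p->top = (int32_t)num_entries - 1;
	return 0;
}

static int idx_alloc(struct idx_pool *p, uint32_t *idx)
{
	if (p->top < 0)
		return -1;	/* pool exhausted */
	*idx = p->items[p->top--];
	return 0;
}

static int idx_free(struct idx_pool *p, uint32_t idx)
{
	if (p->top + 1 >= (int32_t)p->size)
		return -1;	/* more frees than allocations */
	p->items[++p->top] = idx;
	return 0;
}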

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

---
 drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++++++++---
 drivers/net/bnxt/tf_core/hwrm_tf.h            |    9 +
 drivers/net/bnxt/tf_core/lookup3.h            |    1 -
 drivers/net/bnxt/tf_core/stack.c              |    8 +
 drivers/net/bnxt/tf_core/stack.h              |   10 +
 drivers/net/bnxt/tf_core/tf_core.c            |  144 +-
 drivers/net/bnxt/tf_core/tf_core.h            |  383 +-
 drivers/net/bnxt/tf_core/tf_em.c              |   98 +-
 drivers/net/bnxt/tf_core/tf_em.h              |   31 +
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
 drivers/net/bnxt/tf_core/tf_msg.c             |   86 +-
 drivers/net/bnxt/tf_core/tf_msg.h             |   13 +
 drivers/net/bnxt/tf_core/tf_session.h         |   18 +
 drivers/net/bnxt/tf_core/tf_tbl.c             |   99 +-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   57 +-
 drivers/net/bnxt/tf_core/tfp.h                |  123 +-
 16 files changed, 3493 insertions(+), 690 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 7e30c9ffc..30516eb75 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -611,6 +611,10 @@ struct cmd_nums {
 	#define HWRM_FUNC_VF_BW_QCFG                      UINT32_C(0x196)
 	/* Queries pf ids belong to specified host(s) */
 	#define HWRM_FUNC_HOST_PF_IDS_QUERY               UINT32_C(0x197)
+	/* Queries extended stats per function */
+	#define HWRM_FUNC_QSTATS_EXT                      UINT32_C(0x198)
+	/* Queries extended statistics context */
+	#define HWRM_STAT_EXT_CTX_QUERY                   UINT32_C(0x199)
 	/* Experimental */
 	#define HWRM_SELFTEST_QLIST                       UINT32_C(0x200)
 	/* Experimental */
@@ -647,41 +651,49 @@ struct cmd_nums {
 	/* Experimental */
 	#define HWRM_TF_SESSION_ATTACH                    UINT32_C(0x2c7)
 	/* Experimental */
-	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2c8)
+	#define HWRM_TF_SESSION_REGISTER                  UINT32_C(0x2c8)
 	/* Experimental */
-	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2c9)
+	#define HWRM_TF_SESSION_UNREGISTER                UINT32_C(0x2c9)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2ca)
+	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2ca)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cb)
+	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2cb)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2cc)
+	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2cc)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cd)
+	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cd)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2d0)
+	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2ce)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2d1)
+	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cf)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2da)
+	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2da)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2db)
+	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2db)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2dc)
+	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2e4)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2dd)
+	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2e5)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2de)
+	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2e6)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2df)
+	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2e7)
 	/* Experimental */
-	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2ee)
+	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2e8)
 	/* Experimental */
-	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2ef)
+	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2e9)
 	/* Experimental */
-	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2f0)
+	#define HWRM_TF_EM_INSERT                         UINT32_C(0x2ea)
 	/* Experimental */
-	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2f1)
+	#define HWRM_TF_EM_DELETE                         UINT32_C(0x2eb)
+	/* Experimental */
+	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2f8)
+	/* Experimental */
+	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2f9)
+	/* Experimental */
+	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2fa)
+	/* Experimental */
+	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2fb)
 	/* Experimental */
 	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
@@ -715,6 +727,13 @@ struct cmd_nums {
 	#define HWRM_DBG_CRASHDUMP_ERASE                  UINT32_C(0xff1e)
 	/* Send driver debug information to firmware */
 	#define HWRM_DBG_DRV_TRACE                        UINT32_C(0xff1f)
+	/* Query debug capabilities of firmware */
+	#define HWRM_DBG_QCAPS                            UINT32_C(0xff20)
+	/* Retrieve debug settings of firmware */
+	#define HWRM_DBG_QCFG                             UINT32_C(0xff21)
+	/* Set destination parameters for crashdump medium */
+	#define HWRM_DBG_CRASHDUMP_MEDIUM_CFG             UINT32_C(0xff22)
+	#define HWRM_NVM_REQ_ARBITRATION                  UINT32_C(0xffed)
 	/* Experimental */
 	#define HWRM_NVM_FACTORY_DEFAULTS                 UINT32_C(0xffee)
 	#define HWRM_NVM_VALIDATE_OPTION                  UINT32_C(0xffef)
@@ -914,8 +933,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 30
-#define HWRM_VERSION_STR "1.10.1.30"
+#define HWRM_VERSION_RSVD 45
+#define HWRM_VERSION_STR "1.10.1.45"
 
 /****************
  * hwrm_ver_get *
@@ -2292,6 +2311,35 @@ struct cmpl_base {
 	 * Completion of TX packet. Length = 16B
 	 */
 	#define CMPL_BASE_TYPE_TX_L2             UINT32_C(0x0)
+	/*
+	 * NO-OP completion:
+	 * Completion of NO-OP. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_NO_OP             UINT32_C(0x1)
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of coalesced TX packet. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_COAL        UINT32_C(0x2)
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of PTP TX packet. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_PTP         UINT32_C(0x3)
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_TPA_START_V2   UINT32_C(0xd)
+	/*
+	 * RX L2 V2 completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_L2_V2          UINT32_C(0xf)
 	/*
 	 * RX L2 completion:
 	 * Completion of and L2 RX packet. Length = 32B
@@ -2321,6 +2369,24 @@ struct cmpl_base {
 	 * Length = 16B
 	 */
 	#define CMPL_BASE_TYPE_STAT_EJECT        UINT32_C(0x1a)
+	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by
+	 * the Primate and processed by the VEE hardware to ensure that
+	 * all completions on a VEE function have been processed by the
+	 * VEE hardware before FLR process is completed.
+	 */
+	#define CMPL_BASE_TYPE_VEE_FLUSH         UINT32_C(0x1c)
+	/*
+	 * Mid Path Short Completion :
+	 * Completion of a Mid Path Command. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_SHORT    UINT32_C(0x1e)
+	/*
+	 * Mid Path Long Completion :
+	 * Completion of a Mid Path Command. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_LONG     UINT32_C(0x1f)
 	/*
 	 * HWRM Command Completion:
 	 * Completion of an HWRM command.
@@ -2398,7 +2464,9 @@ struct tx_cmpl {
 	uint16_t	unused_0;
 	/*
 	 * This is a copy of the opaque field from the first TX BD of this
-	 * transmitted packet.
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
 	 */
 	uint32_t	opaque;
 	uint16_t	errors_v;
@@ -2407,58 +2475,352 @@ struct tx_cmpl {
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	#define TX_CMPL_V                              UINT32_C(0x1)
-	#define TX_CMPL_ERRORS_MASK                    UINT32_C(0xfffe)
-	#define TX_CMPL_ERRORS_SFT                     1
+	#define TX_CMPL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_ERRORS_SFT                         1
 	/*
 	 * This error indicates that there was some sort of problem
 	 * with the BDs for the packet.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK        UINT32_C(0xe)
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT             1
 	/* No error */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR      (UINT32_C(0x0) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
 	/*
 	 * Bad Format:
 	 * BDs were not formatted correctly.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT       (UINT32_C(0x2) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
 	#define TX_CMPL_ERRORS_BUFFER_ERROR_LAST \
 		TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT
 	/*
 	 * When this bit is '1', it indicates that the length of
 	 * the packet was zero. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT          UINT32_C(0x10)
+	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
 	/*
 	 * When this bit is '1', it indicates that the packet
 	 * was longer than the programmed limit in TDI. No
 	 * packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH      UINT32_C(0x20)
+	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
 	/*
 	 * When this bit is '1', it indicates that one or more of the
 	 * BDs associated with this packet generated a PCI error.
 	 * This probably means the address was not valid.
 	 */
-	#define TX_CMPL_ERRORS_DMA_ERROR                UINT32_C(0x40)
+	#define TX_CMPL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
 	/*
 	 * When this bit is '1', it indicates that the packet was longer
 	 * than indicated by the hint. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_HINT_TOO_SHORT           UINT32_C(0x80)
+	#define TX_CMPL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
 	/*
 	 * When this bit is '1', it indicates that the packet was
 	 * dropped due to Poison TLP error on one or more of the
 	 * TLPs in the PXP completion.
 	 */
-	#define TX_CMPL_ERRORS_POISON_TLP_ERROR         UINT32_C(0x100)
+	#define TX_CMPL_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such completions
+	 * are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
 	/* unused2 is 16 b */
 	uint16_t	unused_1;
 	/* unused3 is 32 b */
 	uint32_t	unused_2;
 } __rte_packed;
 
+/* tx_cmpl_coal (size:128b/16B) */
+struct tx_cmpl_coal {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_COAL_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_COAL_TYPE_SFT        0
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of TX packet. Length = 16B
+	 */
+	#define TX_CMPL_COAL_TYPE_TX_L2_COAL   UINT32_C(0x2)
+	#define TX_CMPL_COAL_TYPE_LAST        TX_CMPL_COAL_TYPE_TX_L2_COAL
+	#define TX_CMPL_COAL_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_COAL_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_COAL_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had no push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_COAL_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of the packet
+	 * which corresponds with the reported sq_cons_idx. Note that, with
+	 * coalesced completions, completions are generated for only some of the
+	 * packets. The driver will see the opaque field for only those packets.
+	 * Note that, if the packet was described by a short CSO or short CSO
+	 * inline BD, then the 16-bit opaque field from the short CSO BD will
+	 * appear in the bottom 16 bits of this field. For TX rings with
+	 * completion coalescing enabled (which would use the coalesced
+	 * completion record), it is suggested that the driver populate the
+	 * opaque field to indicate the specific TX ring with which the
+	 * completion is associated, then utilize the opaque and sq_cons_idx
+	 * fields in the coalesced completion record to determine the specific
+	 * packets that are to be completed on that ring.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_COAL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_COAL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define TX_CMPL_COAL_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_COAL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_COAL_ERRORS_POISON_TLP_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_COAL_ERRORS_INTERNAL_ERROR \
+		UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_COAL_ERRORS_TIMESTAMP_INVALID_ERROR \
+		UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	uint32_t	sq_cons_idx;
+	/*
+	 * This value is SQ index for the start of the packet following the
+	 * last completed packet.
+	 */
+	#define TX_CMPL_COAL_SQ_CONS_IDX_MASK UINT32_C(0xffffff)
+	#define TX_CMPL_COAL_SQ_CONS_IDX_SFT 0
+} __rte_packed;
+
+/* tx_cmpl_ptp (size:128b/16B) */
+struct tx_cmpl_ptp {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_PTP_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_PTP_TYPE_SFT        0
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of TX packet. Length = 32B
+	 */
+	#define TX_CMPL_PTP_TYPE_TX_L2_PTP    UINT32_C(0x2)
+	#define TX_CMPL_PTP_TYPE_LAST        TX_CMPL_PTP_TYPE_TX_L2_PTP
+	#define TX_CMPL_PTP_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_PTP_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_PTP_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had no push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_PTP_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of this
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_PTP_V                                  UINT32_C(0x1)
+	#define TX_CMPL_PTP_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_PTP_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_PTP_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_PTP_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped due
+	 * to a transient internal error in TDC. The packet or LSO can be
+	 * retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_PTP_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_PTP_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	/*
+	 * This is timestamp value (lower 32bits) read from PM for the PTP
+	 * timestamp enabled packet.
+	 */
+	uint32_t	timestamp_lo;
+} __rte_packed;
+
+/* tx_cmpl_ptp_hi (size:128b/16B) */
+struct tx_cmpl_ptp_hi {
+	/*
+	 * This is timestamp value (lower 32bits) read from PM for the PTP
+	 * timestamp enabled packet.
+	 */
+	uint16_t	timestamp_hi[3];
+	uint16_t	reserved16;
+	uint64_t	v2;
+	/*
+	 * This value is written by the NIC such that it will be different for
+	 * each pass through the completion queue.The even passes will write 1.
+	 * The odd passes will write 0
+	 */
+	#define TX_CMPL_PTP_HI_V2     UINT32_C(0x1)
+} __rte_packed;
+
 /* rx_pkt_cmpl (size:128b/16B) */
 struct rx_pkt_cmpl {
 	uint16_t	flags_type;
@@ -3003,12 +3365,8 @@ struct rx_pkt_cmpl_hi {
 	#define RX_PKT_CMPL_REORDER_SFT 0
 } __rte_packed;
 
-/*
- * This TPA completion structure is used on devices where the
- * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
- */
-/* rx_tpa_start_cmpl (size:128b/16B) */
-struct rx_tpa_start_cmpl {
+/* rx_pkt_v2_cmpl (size:128b/16B) */
+struct rx_pkt_v2_cmpl {
 	uint16_t	flags_type;
 	/*
 	 * This field indicates the exact type of the completion.
@@ -3017,84 +3375,143 @@ struct rx_tpa_start_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	#define RX_PKT_V2_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_PKT_V2_CMPL_TYPE_SFT                       0
 	/*
-	 * RX L2 TPA Start Completion:
-	 * Completion at the beginning of a TPA operation.
-	 * Length = 32B
+	 * RX L2 V2 completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
-	#define RX_TPA_START_CMPL_TYPE_LAST \
-		RX_TPA_START_CMPL_TYPE_RX_TPA_START
-	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_START_CMPL_FLAGS_SFT                6
-	/* This bit will always be '0' for TPA start completions. */
-	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_PKT_V2_CMPL_TYPE_RX_L2_V2                    UINT32_C(0xf)
+	#define RX_PKT_V2_CMPL_TYPE_LAST \
+		RX_PKT_V2_CMPL_TYPE_RX_L2_V2
+	#define RX_PKT_V2_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_PKT_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Normal:
+	 * Packet was placed using normal algorithm.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_NORMAL \
+		(UINT32_C(0x0) << 7)
 	/*
 	 * Jumbo:
-	 * TPA Packet was placed using jumbo algorithm. This means
-	 * that the first buffer will be filled with data before
-	 * moving to aggregation buffers. Each aggregation buffer
-	 * will be filled before moving to the next aggregation
-	 * buffer.
+	 * Packet was placed using jumbo algorithm.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
 		(UINT32_C(0x1) << 7)
 	/*
 	 * Header/Data Separation:
 	 * Packet was placed using Header/Data separation algorithm.
 	 * The separation location is indicated by the itype field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
 	/*
-	 * GRO/Jumbo:
-	 * Packet will be placed using GRO/Jumbo where the first
-	 * packet is filled with data. Subsequent packets will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * Truncation:
+	 * Packet was placed using truncation algorithm. The
+	 * placed (truncated) length is indicated in the payload_offset
+	 * field. The original length is indicated in the len field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
-		(UINT32_C(0x5) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION \
+		(UINT32_C(0x3) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_PKT_V2_CMPL_FLAGS_RSS_VALID                 UINT32_C(0x400)
 	/*
-	 * GRO/Header-Data Separation:
-	 * Packet will be placed using GRO/HDS where the header
-	 * is in the first packet.
-	 * Payload of each packet will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
-		(UINT32_C(0x6) << 7)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* This bit is '1' if the RSS field in this completion is valid. */
-	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
-	/* unused is 1 b */
-	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	#define RX_PKT_V2_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_MASK                UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * Not Known:
+	 * Indicates that the packet type was not known.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_NOT_KNOWN \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * IP Packet:
+	 * Indicates that the packet was an IP packet, but further
+	 * classification was not possible.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_IP \
+		(UINT32_C(0x1) << 12)
 	/*
 	 * TCP Packet:
 	 * Indicates that the packet was IP and TCP.
+	 * This indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_TCP \
 		(UINT32_C(0x2) << 12)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
-		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
 	/*
-	 * This value indicates the amount of packet data written to the
-	 * buffer the opaque field in this completion corresponds to.
+	 * UDP Packet:
+	 * Indicates that the packet was IP and UDP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_UDP \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * FCoE Packet:
+	 * Indicates that the packet was recognized as a FCoE.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_FCOE \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * RoCE Packet:
+	 * Indicates that the packet was recognized as a RoCE.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ROCE \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * ICMP Packet:
+	 * Indicates that the packet was recognized as ICMP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ICMP \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * PtP packet wo/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_WO_TIMESTAMP \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * PtP packet w/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet and that a timestamp was taken for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP \
+		(UINT32_C(0x9) << 12)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP
+	/*
+	 * This is the length of the data for the packet stored in the
+	 * buffer(s) identified by the opaque value. This includes
+	 * the packet BD and any associated buffer BDs. This does not include
+	 * the length of any data placed in aggregation BDs.
 	 */
 	uint16_t	len;
 	/*
@@ -3102,19 +3519,597 @@ struct rx_tpa_start_cmpl {
 	 * corresponds to.
 	 */
 	uint32_t	opaque;
+	uint8_t	agg_bufs_v1;
 	/*
 	 * This value is written by the NIC such that it will be different
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	uint8_t	v1;
+	#define RX_PKT_V2_CMPL_V1           UINT32_C(0x1)
 	/*
-	 * This value is written by the NIC such that it will be different
-	 * for each pass through the completion queue. The even passes
-	 * will write 1. The odd passes will write 0.
+	 * This value is the number of aggregation buffers that follow this
+	 * entry in the completion ring that are a part of this packet.
+	 * If the value is zero, then the packet is completely contained
+	 * in the buffer space provided for the packet in the RX ring.
 	 */
-	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
-	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
+	#define RX_PKT_V2_CMPL_AGG_BUFS_MASK UINT32_C(0x3e)
+	#define RX_PKT_V2_CMPL_AGG_BUFS_SFT 1
+	/* unused1 is 2 b */
+	#define RX_PKT_V2_CMPL_UNUSED1_MASK UINT32_C(0xc0)
+	#define RX_PKT_V2_CMPL_UNUSED1_SFT  6
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extract_op[1:0],rss_profile_id[4:0],tuple_extract_op[2]}.
+	 *
+	 * The value of tuple_extract_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that 4-tuples values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	uint16_t	metadata1_payload_offset;
+	/*
+	 * This is data from the CFA as indicated by the meta_format field.
+	 * If truncation placement is not used, this value indicates the offset
+	 * in bytes from the beginning of the packet where the inner payload
+	 * starts. This value is valid for TCP, UDP, FCoE, and RoCE packets. If
+	 * truncation placement is used, this value represents the placed
+	 * (truncated) length of the packet.
+	 */
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_MASK    UINT32_C(0x1ff)
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_SFT     0
+	/* This is data from the CFA as indicated by the meta_format field. */
+	#define RX_PKT_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN valid. */
+	#define RX_PKT_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC. When vee_cmpl_mode
+	 * is set in VNIC context, this is the lower 32b of the host address
+	 * from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
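A minimal sketch (not part of the applied diff) showing how the packed
low-half fields of rx_pkt_v2_cmpl decode with the masks and shifts defined
above. The helper itself is hypothetical and assumes the usual DPDK headers
(<rte_byteorder.h> for rte_le_to_cpu_16); only the macros above are real.

static inline void
bnxt_rx_v2_lo_decode(const struct rx_pkt_v2_cmpl *c, unsigned int *agg_bufs,
		     unsigned int *payload_off, int *vlan_valid)
{
	uint16_t md1 = rte_le_to_cpu_16(c->metadata1_payload_offset);

	/* Number of aggregation buffer entries that follow this one. */
	*agg_bufs = (c->agg_bufs_v1 & RX_PKT_V2_CMPL_AGG_BUFS_MASK) >>
		    RX_PKT_V2_CMPL_AGG_BUFS_SFT;
	/* Inner payload offset, or the truncated length (see flags). */
	*payload_off = (md1 & RX_PKT_V2_CMPL_PAYLOAD_OFFSET_MASK) >>
		       RX_PKT_V2_CMPL_PAYLOAD_OFFSET_SFT;
	/* Non-zero when the metadata1 VLAN information is valid. */
	*vlan_valid = !!(md1 & RX_PKT_V2_CMPL_METADATA1_VALID);
}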
+
+/* Last 16 bytes of RX Packet V2 Completion Record */
+/* rx_pkt_v2_cmpl_hi (size:128b/16B) */
+struct rx_pkt_v2_cmpl_hi {
+	uint32_t	flags2;
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:
+	 * - ip_cs_ok[2:0] = The number of header groups with a valid IP
+	 *   checksum in the delivered packet, counted from the outer-most
+	 *   header group to the inner-most header group, stopping at the
+	 *   first error.
+	 * - l4_cs_ok[5:3] = The number of header groups with a valid L4
+	 *   checksum in the delivered packet, counted from the outer-most
+	 *   header group to the inner-most header group, stopping at the
+	 *   first error.
+	 * When this bit is '1', the cs_ok field has the following definition:
+	 * - hdr_cnt[2:0] = The number of header groups that were parsed by
+	 *   the chip and passed in the delivered packet.
+	 * - ip_cs_all_ok[3] = This bit will be '1' if all the parsed header
+	 *   groups with an IP checksum are valid.
+	 * - l4_cs_all_ok[4] = This bit will be '1' if all the parsed header
+	 *   groups with an L4 checksum are valid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information: - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],
+	 * de, vid[11:0]} The metadata2 field contains the table scope
+	 * and action record pointer. - metadata2[25:0] contains the
+	 * action record pointer. - metadata2[31:26] contains the table
+	 * scope.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_LAST \
+		RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_PKT_V2_CMPL_HI_V2 \
+		UINT32_C(0x1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_SFT                               1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet that was found after part of the
+	 * packet was already placed. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_SFT                   1
+	/* No buffer error */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit: Packet did not fit into packet buffer provided.
+	 * For regular placement, this means the packet did not fit in
+	 * the buffer provided. For HDS and jumbo placement, this means
+	 * that the packet could not be placed into 8 physical buffers
+	 * (if fixed-size buffers are used), or that the packet could
+	 * not be placed in the number of physical buffers configured
+	 * for the VNIC (if variable-size buffers are used).
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Not On Chip: All BDs needed for the packet were not on-chip
+	 * when the packet arrived. For regular placement, this error is
+	 * not valid. For HDS and jumbo placement, this means that not
+	 * enough agg BDs were posted to place the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NOT_ON_CHIP \
+		(UINT32_C(0x2) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This indicates that there was an error in the outer tunnel
+	 * portion of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_MASK \
+		UINT32_C(0x70)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_SFT                   4
+	/*
+	 * No additional error occurred on the outer tunnel portion
+	 * of the packet or the packet does not have an outer tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the outer tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * Indicates that header length is out of range in the outer
+	 * tunnel header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the outer tunnel l3 header length. Valid for IPv4 or
+	 * IPv6 outer tunnel packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the outer tunnel UDP header length for a outer
+	 * tunnel UDP packet that is not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 4)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has
+	 * failed (e.g. TTL = 0) in the outer tunnel header. Valid for
+	 * IPv4 and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_TTL \
+		(UINT32_C(0x5) << 4)
+	/*
+	 * Indicates that the IP checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_CS_ERROR \
+		(UINT32_C(0x6) << 4)
+	/*
+	 * Indicates that the L4 checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR \
+		(UINT32_C(0x7) << 4)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR
+	/*
+	 * This indicates that there was a CRC error on either an FCoE
+	 * or RoCE packet. The itype indicates the packet type.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_CRC_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that there was an error in the tunnel portion
+	 * of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_MASK \
+		UINT32_C(0xe00)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_SFT                    9
+	/*
+	 * No additional error occurred on the tunnel portion
+	 * of the packet or the packet does not have a tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 9)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 9)
+	/*
+	 * Indicates that header length is out of range in the tunnel
+	 * header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 9)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel l3 header length. Valid for IPv4 or IPv6 tunnel
+	 * packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 9)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel UDP header length for a tunnel UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 9)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has failed
+	 * (e.g. TTL = 0) in the tunnel header. Valid for IPv4 and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL \
+		(UINT32_C(0x5) << 9)
+	/*
+	 * Indicates that the IP checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_CS_ERROR \
+		(UINT32_C(0x6) << 9)
+	/*
+	 * Indicates that the L4 checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR \
+		(UINT32_C(0x7) << 9)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR
+	/*
+	 * This indicates that there was an error in the inner
+	 * portion of the packet when this
+	 * field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_MASK \
+		UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_SFT                      12
+	/*
+	 * No additional error occurred on the tunnel portion of the
+	 * packet or the packet does not have a tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * Indicates that IP header version does not match
+	 * expectation from L2 Ethertype for IPv4 and IPv6 or that
+	 * an option other than VFT was parsed on
+	 * an FCoE packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 12)
+	/*
+	 * Indicates that the header length is out of range. Valid for
+	 * IPv4 and RoCE.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 12)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check
+	 * has failed (e.g. TTL = 0). Valid for IPv4 and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_TTL \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the l3 header length. Valid for IPv4,
+	 * IPv6, or RoCE packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the UDP header length for a UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_UDP_TOTAL_ERROR \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * Indicates that TCP header length > IP payload. Valid for
+	 * TCP packets only.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN \
+		(UINT32_C(0x6) << 12)
+	/* Indicates that TCP header length < 5. Valid for TCP. */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN_TOO_SMALL \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * Indicates that TCP option headers result in a TCP header
+	 * size that does not match data offset in TCP header. Valid
+	 * for TCP.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * Indicates that the IP checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_CS_ERROR \
+		(UINT32_C(0x9) << 12)
+	/*
+	 * Indicates that the L4 checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR \
+		(UINT32_C(0xa) << 12)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format=1, this value is the VLAN VID. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_SFT 0
+	/* When meta_format=1, this value is the VLAN DE. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format=1, this value is the VLAN PRI. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_SFT 13
+	/*
+	 * The timestamp field contains the 32b timestamp for the packet from
+	 * the MAC.
+	 */
+	uint32_t	timestamp;
+} __rte_packed;
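A minimal sketch (not part of the applied diff) of interpreting the cs_ok
sub-field of rx_pkt_v2_cmpl_hi.flags2 under both values of cs_all_ok_mode,
as described in the comments above. The helper and its return convention are
hypothetical; the bit positions inside cs_ok follow the ranges quoted in the
comment, and <rte_byteorder.h> is assumed for rte_le_to_cpu_32.

static inline int
bnxt_rx_v2_l4_csum_ok(const struct rx_pkt_v2_cmpl_hi *ch)
{
	uint32_t flags2 = rte_le_to_cpu_32(ch->flags2);
	uint32_t cs_ok = (flags2 & RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_MASK) >>
			 RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_SFT;

	if (flags2 & RX_PKT_V2_CMPL_HI_FLAGS2_CS_ALL_OK_MODE)
		return !!(cs_ok & 0x10);	/* l4_cs_all_ok is bit 4 */
	return ((cs_ok >> 3) & 0x7) != 0;	/* l4_cs_ok[5:3] count */
}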
+
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_cmpl (size:128b/16B) */
+struct rx_tpa_start_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
+	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	/*
+	 * RX L2 TPA Start Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 */
+	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
+	#define RX_TPA_START_CMPL_TYPE_LAST \
+		RX_TPA_START_CMPL_TYPE_RX_TPA_START
+	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
+	#define RX_TPA_START_CMPL_FLAGS_SFT                6
+	/* This bit will always be '0' for TPA start completions. */
+	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
+	/* unused is 1 b */
+	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
 	/*
 	 * This is the RSS hash type for the packet. The value is packed
 	 * {tuple_extrac_op[1:0],rss_profile_id[4:0],tuple_extrac_op[2]}.
@@ -3285,6 +4280,430 @@ struct rx_tpa_start_cmpl_hi {
 	#define RX_TPA_START_CMPL_INNER_L4_SIZE_SFT   27
 } __rte_packed;
 
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ * RX L2 TPA Start V2 Completion Record (32 bytes split to 2 16-byte
+ * struct)
+ */
+/* rx_tpa_start_v2_cmpl (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define RX_TPA_START_V2_CMPL_TYPE_SFT                       0
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2 \
+		UINT32_C(0xd)
+	#define RX_TPA_START_V2_CMPL_TYPE_LAST \
+		RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2
+	#define RX_TPA_START_V2_CMPL_FLAGS_MASK \
+		UINT32_C(0xffc0)
+	#define RX_TPA_START_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an error
+	 * of some type. Type of error is indicated in error_flags.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ERROR \
+		UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_MASK \
+		UINT32_C(0x380)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_RSS_VALID \
+		UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement. It
+	 * starts at the first 32B boundary after the end of the header for
+	 * HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PKT_METADATA_PRESENT \
+		UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to. If the VNIC is configured to not use an Rx BD for
+	 * the TPA Start completion, then this is a copy of the opaque field
+	 * from the first BD used to place the TPA Start packet.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_LAST RX_TPA_START_V2_CMPL_V1
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extract_op[1:0],rss_profile_id[4:0],tuple_extract_op[2]}.
+	 *
+	 * The value of tuple_extract_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that 4-tuples values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	uint16_t	agg_id;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	#define RX_TPA_START_V2_CMPL_AGG_ID_MASK            UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_AGG_ID_SFT             0
+	#define RX_TPA_START_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN valid. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC.
+	 * When vee_cmpl_mode is set in VNIC context, this is the lower
+	 * 32b of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
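A minimal sketch (not part of the applied diff): as the comments above note,
agg_id is what ties a TPA start completion to the matching TPA end
completion, so a driver would typically index its per-aggregation state by
this value. The helper name is hypothetical; only the AGG_ID macros are real.

static inline uint16_t
bnxt_tpa_start_v2_agg_id(const struct rx_tpa_start_v2_cmpl *c)
{
	/* Extract the aggregation ID used to look up the TPA context later. */
	return (rte_le_to_cpu_16(c->agg_id) &
		RX_TPA_START_V2_CMPL_AGG_ID_MASK) >>
	       RX_TPA_START_V2_CMPL_AGG_ID_SFT;
}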
+
+/*
+ * Last 16 bytes of RX L2 TPA Start V2 Completion Record
+ *
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_v2_cmpl_hi (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl_hi {
+	uint32_t	flags2;
+	/* This indicates that the aggregation was done using GRO rules. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_AGG_GRO \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:
+	 * - ip_cs_ok[2:0] = The number of header groups with a valid IP
+	 *   checksum in the delivered packet, counted from the outer-most
+	 *   header group to the inner-most header group, stopping at the
+	 *   first error.
+	 * - l4_cs_ok[5:3] = The number of header groups with a valid L4
+	 *   checksum in the delivered packet, counted from the outer-most
+	 *   header group to the inner-most header group, stopping at the
+	 *   first error.
+	 * When this bit is '1', the cs_ok field has the following definition:
+	 * - hdr_cnt[2:0] = The number of header groups that were parsed by
+	 *   the chip and passed in the delivered packet.
+	 * - ip_cs_all_ok[3] = This bit will be '1' if all the parsed header
+	 *   groups with an IP checksum are valid.
+	 * - l4_cs_all_ok[4] = This bit will be '1' if all the parsed header
+	 *   groups with an L4 checksum are valid.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information: - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],
+	 * de, vid[11:0]} The metadata2 field contains the table scope
+	 * and action record pointer. - metadata2[25:0] contains the
+	 * action record pointer. - metadata2[31:26] contains the table
+	 * scope.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet in the aggregation.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 * CS status for TPA packets is always valid. This means that "all_ok"
+	 * status will always be set. The ok count status will be set
+	 * appropriately for the packet header, such that all existing CS
+	 * values are ok.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set. For TPA Start completions,
+	 * the complete checksum is calculated for the first packet in the
+	 * aggregation only.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V2 \
+		UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_SFT                     1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	/* No buffer error */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit:
+	 * Packet did not fit into packet buffer provided. This means
+	 * that the TPA Start packet was too big to be placed into the
+	 * per-packet maximum number of physical buffers configured for
+	 * the VNIC, or that it was too big to be placed into the
+	 * per-aggregation maximum number of physical buffers configured
+	 * for the VNIC. This error only occurs when the VNIC is
+	 * configured for variable size receive buffers.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_LAST \
+		RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format != 0, this value is the VLAN VID. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_SFT 0
+	/* When meta_format != 0, this value is the VLAN DE. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format != 0, this value is the VLAN PRI. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_SFT 13
+	/*
+	 * This field contains the outer_l3_offset, inner_l2_offset,
+	 * inner_l3_offset, and inner_l4_size.
+	 *
+	 * hdr_offsets[8:0] contains the outer_l3_offset.
+	 * hdr_offsets[17:9] contains the inner_l2_offset.
+	 * hdr_offsets[26:18] contains the inner_l3_offset.
+	 * hdr_offsets[31:27] contains the inner_l4_size.
+	 */
+	uint32_t	hdr_offsets;
+} __rte_packed;
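A minimal sketch (not part of the applied diff) unpacking the hdr_offsets
word of rx_tpa_start_v2_cmpl_hi using the bit layout documented in the
comment above. The helper and the small output struct are hypothetical.

struct bnxt_tpa_hdr_offsets {
	uint16_t outer_l3_off;	/* hdr_offsets[8:0]   */
	uint16_t inner_l2_off;	/* hdr_offsets[17:9]  */
	uint16_t inner_l3_off;	/* hdr_offsets[26:18] */
	uint16_t inner_l4_size;	/* hdr_offsets[31:27] */
};

static inline void
bnxt_tpa_v2_hdr_offsets(const struct rx_tpa_start_v2_cmpl_hi *ch,
			struct bnxt_tpa_hdr_offsets *o)
{
	uint32_t v = rte_le_to_cpu_32(ch->hdr_offsets);

	o->outer_l3_off  = v & 0x1ff;
	o->inner_l2_off  = (v >> 9) & 0x1ff;
	o->inner_l3_off  = (v >> 18) & 0x1ff;
	o->inner_l4_size = (v >> 27) & 0x1f;
}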
+
 /*
  * This TPA completion structure is used on devices where the
  * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
@@ -3299,27 +4718,27 @@ struct rx_tpa_end_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_END_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_END_CMPL_TYPE_SFT                 0
+	#define RX_TPA_END_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_TPA_END_CMPL_TYPE_SFT                       0
 	/*
 	 * RX L2 TPA End Completion:
 	 * Completion at the end of a TPA operation.
 	 * Length = 32B
 	 */
-	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END            UINT32_C(0x15)
+	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END                  UINT32_C(0x15)
 	#define RX_TPA_END_CMPL_TYPE_LAST \
 		RX_TPA_END_CMPL_TYPE_RX_TPA_END
-	#define RX_TPA_END_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_END_CMPL_FLAGS_SFT                6
+	#define RX_TPA_END_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_TPA_END_CMPL_FLAGS_SFT                      6
 	/*
 	 * When this bit is '1', it indicates a packet that has an
 	 * error of some type. Type of error is indicated in
 	 * error_flags.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_TPA_END_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT             7
 	/*
 	 * Jumbo:
 	 * TPA Packet was placed using jumbo algorithm. This means
@@ -3337,6 +4756,15 @@ struct rx_tpa_end_cmpl {
 	 */
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
+	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
 	/*
 	 * GRO/Jumbo:
 	 * Packet will be placed using GRO/Jumbo where the first
@@ -3358,11 +4786,28 @@ struct rx_tpa_end_cmpl {
 	 */
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS \
 		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* unused is 2 b */
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_MASK         UINT32_C(0xc00)
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_SFT          10
+		RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* unused is 1 b */
+	#define RX_TPA_END_CMPL_FLAGS_UNUSED                    UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
@@ -3372,8 +4817,9 @@ struct rx_tpa_end_cmpl {
 	 *     field is valid and contains the TCP checksum.
 	 *     This also indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT                 12
 	/*
 	 * This value is zero for TPA End completions.
 	 * There is no data in the buffer that corresponds to the opaque
@@ -4243,6 +5689,52 @@ struct rx_abuf_cmpl {
 	uint32_t	unused_2;
 } __rte_packed;
 
+/* VEE FLUSH Completion Record (16 bytes) */
+/* vee_flush (size:128b/16B) */
+struct vee_flush {
+	uint32_t	downstream_path_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define VEE_FLUSH_TYPE_MASK           UINT32_C(0x3f)
+	#define VEE_FLUSH_TYPE_SFT            0
+	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by the Primate and processed
+	 * by the VEE hardware to ensure that all completions on a VEE
+	 * function have been processed by the VEE hardware before FLR
+	 * process is completed.
+	 */
+	#define VEE_FLUSH_TYPE_VEE_FLUSH        UINT32_C(0x1c)
+	#define VEE_FLUSH_TYPE_LAST            VEE_FLUSH_TYPE_VEE_FLUSH
+	/* downstream_path is 1 b */
+	#define VEE_FLUSH_DOWNSTREAM_PATH     UINT32_C(0x40)
+	/* This completion is associated with VEE Transmit */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_TX    (UINT32_C(0x0) << 6)
+	/* This completion is associated with VEE Receive */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_RX    (UINT32_C(0x1) << 6)
+	#define VEE_FLUSH_DOWNSTREAM_PATH_LAST VEE_FLUSH_DOWNSTREAM_PATH_RX
+	/*
+	 * This is an opaque value that is passed through the completion
+	 * to the VEE handler SW and is used to indicate what VEE VQ or
+	 * function has completed FLR processing.
+	 */
+	uint32_t	opaque;
+	uint32_t	v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes will
+	 * write 1. The odd passes will write 0.
+	 */
+	#define VEE_FLUSH_V     UINT32_C(0x1)
+	/* unused3 is 32 b */
+	uint32_t	unused_3;
+} __rte_packed;
+
 /* eject_cmpl (size:128b/16B) */
 struct eject_cmpl {
 	uint16_t	type;
@@ -6562,7 +8054,7 @@ struct hwrm_async_event_cmpl_deferred_response {
 	/*
 	 * The PF's mailbox is clear to issue another command.
 	 * A command with this seq_id is still in progress
-	 * and will return a regular HWRM completion when done.
+	 * and will return a regular HWRM completion when done.
 	 * 'event_data1' field, if non-zero, contains the estimated
 	 * execution time for the command.
 	 */
@@ -7476,6 +8968,8 @@ struct hwrm_func_qcaps_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID) This is a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -7729,6 +9223,12 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PFC_WD_STATS_SUPPORTED \
 		UINT32_C(0x40000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_QCAPS command
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_DBG_QCAPS_CMD_SUPPORTED \
+		UINT32_C(0x80000000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -7854,6 +9354,19 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_STATS_SUPPORTED \
 		UINT32_C(0x2)
+	/*
+	 * If 1, the device can report extended hw statistics (including
+	 * additional tpa statistics).
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_EXT_HW_STATS_SUPPORTED \
+		UINT32_C(0x4)
+	/*
+	 * If set to 1, then the core firmware has support to enable/
+	 * disable hot reset support for interface dynamically through
+	 * HWRM_FUNC_CFG.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x8)
 	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -7904,6 +9417,8 @@ struct hwrm_func_qcfg_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID) This is a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -8013,6 +9528,15 @@ struct hwrm_func_qcfg_output {
 	 */
 	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x100)
+	/*
+	 * If set to 1, then the firmware and all currently registered driver
+	 * instances support hot reset. The hot reset support will be updated
+	 * dynamically based on the driver interface advertisement.
+	 * If set to 0, then the adapter is not currently able to initiate
+	 * hot reset.
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_HOT_RESET_ALLOWED \
+		UINT32_C(0x200)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -8565,6 +10089,17 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x2000000)
+	/*
+	 * If this bit is set to 0, then the interface does not support the
+	 * hot reset capability that it advertised with the hot_reset_support
+	 * flag in HWRM_FUNC_DRV_RGTR. If any of the functions has set this
+	 * flag to 0, the adapter cannot do a hot reset. In this state, if the
+	 * firmware receives a hot reset request, the firmware must fail the
+	 * request. If this bit is set to 1, then the interface is re-enabling
+	 * the hot reset capability.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_FLAGS_HOT_RESET_IF_EN_DIS \
+		UINT32_C(0x4000000)
 	uint32_t	enables;
 	/*
 	 * This bit must be '1' for the mtu field to be
@@ -8704,6 +10239,12 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_ENABLES_ADMIN_LINK_STATE \
 		UINT32_C(0x400000)
+	/*
+	 * This bit must be '1' for the hot_reset_if_en_dis field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x800000)
 	/*
 	 * The maximum transmission unit of the function.
 	 * The HWRM should make sure that the mtu of
@@ -9036,15 +10577,21 @@ struct hwrm_func_qstats_input {
 	/* This flags indicates the type of statistics request. */
 	uint8_t	flags;
 	/* This value is not used to avoid backward compatibility issues. */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED    UINT32_C(0x0)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
 	/*
 	 * flags should be set to 1 when request is for only RoCE statistics.
 	 * This will be honored only if the caller_fid is a privileged PF.
 	 * In all other cases FID and caller_fid should be the same.
 	 */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY UINT32_C(0x1)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
 	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_LAST \
-		HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY
+		HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK
 	uint8_t	unused_0[5];
 } __rte_packed;
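
One common use of this kind of COUNTER_MASK query is to make counter deltas
safe against hardware counter rollover: when the flag is set, the response
reports the width of each counter instead of its value, and the driver applies
that mask when subtracting successive samples. A minimal sketch, assuming the
mask response is laid out like the normal statistics response:

#include <stdint.h>

/*
 * Illustrative helper only: 'mask' is the value returned for a counter when
 * the COUNTER_MASK flag is set, e.g. 0xffffffffffff for a 48-bit hardware
 * counter, so the subtraction wraps correctly on rollover.
 */
static inline uint64_t
stats_delta(uint64_t cur, uint64_t prev, uint64_t mask)
{
	return (cur - prev) & mask;
}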
 
@@ -9130,6 +10677,132 @@ struct hwrm_func_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/************************
+ * hwrm_func_qstats_ext *
+ ************************/
+
+
+/* hwrm_func_qstats_ext_input (size:192b/24B) */
+struct hwrm_func_qstats_ext_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Function ID of the function that is being queried.
+	 * 0xFF... (All Fs) if the query is for the requesting
+	 * function.
+	 * A privileged PF can query for another function's statistics.
+	 */
+	uint16_t	fid;
+	/* This flag indicates the type of statistics request. */
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * flags should be set to 1 when request is for only RoCE statistics.
+	 * This will be honored only if the caller_fid is a privileged PF.
+	 * In all other cases FID and caller_fid should be the same.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
+} __rte_packed;
+
+/* hwrm_func_qstats_ext_output (size:1472b/184B) */
+struct hwrm_func_qstats_ext_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on received path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /***********************
  * hwrm_func_clr_stats *
  ***********************/
@@ -10116,7 +11789,7 @@ struct hwrm_func_backing_store_qcaps_output {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
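
As a worked example of the updated sizing formula, with purely illustrative
numbers (num_vnics = 8, num_l2_tx_rings = 8, num_roce_qps = 64 and a
firmware-reported tqm_min_size of 256):

	num_entries = 8 + 8 + 2 * 64 + 256 = 400

i.e. RoCE QPs are now counted twice relative to the previous formula.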
@@ -11039,7 +12712,7 @@ struct hwrm_func_backing_store_cfg_input {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
@@ -16149,7 +17822,18 @@ struct hwrm_port_qstats_input {
 	uint64_t	resp_addr;
 	/* Port ID of port that is being queried. */
 	uint16_t	port_id;
-	uint8_t	unused_0[6];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -16382,7 +18066,7 @@ struct rx_port_stats_ext {
  * Port Rx Statistics extended PFC WatchDog Format.
  * StormDetect and StormRevert event determination is based
  * on an integration period and a percentage threshold.
- * StormDetect event - when percentage of XOFF frames received
+ * StormDetect event - when percentage of XOFF frames received
  * within an integration period exceeds the configured threshold.
  * StormRevert event - when percentage of XON frames received
  * within an integration period exceeds the configured threshold.
@@ -16843,7 +18527,18 @@ struct hwrm_port_qstats_ext_input {
 	 * statistics block in bytes
 	 */
 	uint16_t	rx_stat_size;
-	uint8_t	unused_0[2];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for the counter mask,
+	 * representing width of each of the stats counters, rather than
+	 * counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0;
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -25312,95 +27007,104 @@ struct hwrm_ring_free_input {
 	/* Ring Type. */
 	uint8_t	ring_type;
 	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	/* TX Ring (TR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	/* RX Ring (RR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	/* RoCE Notification Completion Ring (ROCE_CR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	/* RX Aggregation Ring */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
+	/* Notification Queue */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
+		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
+	uint8_t	unused_0;
+	/* Physical number of ring allocated. */
+	uint16_t	ring_id;
+	uint8_t	unused_1[4];
+} __rte_packed;
+
+/* hwrm_ring_free_output (size:128b/16B) */
+struct hwrm_ring_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/*******************
+ * hwrm_ring_reset *
+ *******************/
+
+
+/* hwrm_ring_reset_input (size:192b/24B) */
+struct hwrm_ring_reset_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Ring Type. */
+	uint8_t	ring_type;
+	/* L2 Completion Ring (CR) */
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL     UINT32_C(0x0)
 	/* TX Ring (TR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX          UINT32_C(0x1)
 	/* RX Ring (RR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX          UINT32_C(0x2)
 	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
-	/* RX Aggregation Ring */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
-	/* Notification Queue */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
-		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
-	uint8_t	unused_0;
-	/* Physical number of ring allocated. */
-	uint16_t	ring_id;
-	uint8_t	unused_1[4];
-} __rte_packed;
-
-/* hwrm_ring_free_output (size:128b/16B) */
-struct hwrm_ring_free_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	uint8_t	unused_0[7];
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL   UINT32_C(0x3)
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM.  This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * Rx Ring Group. This is used to reset the rx and aggregation rings
+	 * in an atomic operation. The completion ring associated with this
+	 * ring group is not reset.
 	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/*******************
- * hwrm_ring_reset *
- *******************/
-
-
-/* hwrm_ring_reset_input (size:192b/24B) */
-struct hwrm_ring_reset_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
-	 */
-	uint64_t	resp_addr;
-	/* Ring Type. */
-	uint8_t	ring_type;
-	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
-	/* TX Ring (TR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX        UINT32_C(0x1)
-	/* RX Ring (RR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX        UINT32_C(0x2)
-	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP UINT32_C(0x6)
 	#define HWRM_RING_RESET_INPUT_RING_TYPE_LAST \
-		HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL
+		HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP
 	uint8_t	unused_0;
-	/* Physical number of the ring. */
+	/*
+	 * Physical number of the ring. When the ring type is rx_ring_grp,
+	 * the ring id refers to the ring group id.
+	 */
 	uint16_t	ring_id;
 	uint8_t	unused_1[4];
 } __rte_packed;
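
A minimal sketch of using the new RX_RING_GRP ring type; the send helper is
hypothetical, and the usual HWRM request-header setup and endian conversion
are omitted:

#include <string.h>

/* Hypothetical transport helper; not part of this patch. */
int bnxt_hwrm_send(void *req, size_t len);

static int reset_rx_ring_grp(uint16_t ring_grp_id)
{
	struct hwrm_ring_reset_input req;

	memset(&req, 0, sizeof(req));
	req.ring_type = HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP;
	/* For this ring type, ring_id carries the ring group id. */
	req.ring_id = ring_grp_id;
	return bnxt_hwrm_send(&req, sizeof(req));
}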
@@ -25615,7 +27319,18 @@ struct hwrm_ring_cmpl_ring_qaggint_params_input {
 	uint64_t	resp_addr;
 	/* Physical number of completion ring. */
 	uint16_t	ring_id;
-	uint8_t	unused_0[6];
+	uint16_t	flags;
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_MASK \
+		UINT32_C(0x3)
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_SFT 0
+	/*
+	 * Set this flag to 1 when querying parameters on a notification
+	 * queue. Set this flag to 0 when querying parameters on a
+	 * completion queue or completion ring.
+	 */
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
+		UINT32_C(0x4)
+	uint8_t	unused_0[4];
 } __rte_packed;
 
 /* hwrm_ring_cmpl_ring_qaggint_params_output (size:256b/32B) */
@@ -25652,19 +27367,19 @@ struct hwrm_ring_cmpl_ring_qaggint_params_output {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA when in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
+	 * Maximum wait time spent aggregating
 	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
@@ -25738,7 +27453,7 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	/*
 	 * Set this flag to 1 when configuring parameters on a
 	 * notification queue. Set this flag to 0 when configuring
-	 * parameters on a completion queue.
+	 * parameters on a completion queue or completion ring.
 	 */
 	#define HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
 		UINT32_C(0x4)
@@ -25753,20 +27468,20 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA while in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
-	 * cmpls before signaling the interrupt after the
+	 * Maximum wait time spent aggregating
+	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
 	uint16_t	int_lat_tmr_max;
@@ -33339,78 +35054,246 @@ struct hwrm_tf_version_get_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-} __rte_packed;
-
-/* hwrm_tf_version_get_output (size:128b/16B) */
-struct hwrm_tf_version_get_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	/* Version Major number. */
-	uint8_t	major;
-	/* Version Minor number. */
-	uint8_t	minor;
-	/* Version Update number. */
-	uint8_t	update;
-	/* unused. */
-	uint8_t	unused0[4];
+} __rte_packed;
+
+/* hwrm_tf_version_get_output (size:128b/16B) */
+struct hwrm_tf_version_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Version Major number. */
+	uint8_t	major;
+	/* Version Minor number. */
+	uint8_t	minor;
+	/* Version Update number. */
+	uint8_t	update;
+	/* unused. */
+	uint8_t	unused0[4];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/************************
+ * hwrm_tf_session_open *
+ ************************/
+
+
+/* hwrm_tf_session_open_input (size:640b/80B) */
+struct hwrm_tf_session_open_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Name of the session. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_open_output (size:192b/24B) */
+struct hwrm_tf_session_open_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware.
+	 */
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the first client on
+	 * the newly created session.
+	 */
+	uint32_t	fw_session_client_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* unused. */
+	uint8_t	unused1[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
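
A rough usage sketch of the session open exchange: the caller supplies a
session name and the firmware returns both a session id and the id of the
first session client. bnxt_hwrm_send_resp() is a hypothetical send-and-wait
helper, and request-header and endian handling are omitted:

#include <stdint.h>
#include <string.h>

/* Hypothetical send-and-wait helper; not part of this patch. */
int bnxt_hwrm_send_resp(void *req, size_t req_len, void *resp, size_t resp_len);

static int tf_open_session(const char *name, uint32_t *sid, uint32_t *scid)
{
	struct hwrm_tf_session_open_input req;
	struct hwrm_tf_session_open_output resp;

	memset(&req, 0, sizeof(req));
	strncpy((char *)req.session_name, name, sizeof(req.session_name) - 1);
	if (bnxt_hwrm_send_resp(&req, sizeof(req), &resp, sizeof(resp)))
		return -1;
	*sid = resp.fw_session_id;		/* TruFlow session */
	*scid = resp.fw_session_client_id;	/* first client on that session */
	return 0;
}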
+
+/**************************
+ * hwrm_tf_session_attach *
+ **************************/
+
+
+/* hwrm_tf_session_attach_input (size:704b/88B) */
+struct hwrm_tf_session_attach_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Unique session identifier for the session that the attach
+	 * request wants to attach to. This value originates from the
+	 * shared session memory that the attach request opened by
+	 * way of the 'attach name' that was passed in to the core
+	 * attach API.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
+	 */
+	uint32_t	attach_fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session itself. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_attach_output (size:128b/16B) */
+struct hwrm_tf_session_attach_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session. This fw_session_id is unique to the attach
+	 * request.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/****************************
+ * hwrm_tf_session_register *
+ ****************************/
+
+
+/* hwrm_tf_session_register_input (size:704b/88B) */
+struct hwrm_tf_session_register_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal
-	 * processor, the order of writes has to be such that this field is
-	 * written last.
-	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/************************
- * hwrm_tf_session_open *
- ************************/
-
-
-/* hwrm_tf_session_open_input (size:640b/80B) */
-struct hwrm_tf_session_open_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
+	 * Unique session identifier for the session that the
+	 * register request wants to create a new client on. This
+	 * value originates from the first open request.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
 	 */
-	uint64_t	resp_addr;
-	/* Name of the session. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session client. */
+	uint8_t	session_client_name[64];
 } __rte_packed;
 
-/* hwrm_tf_session_open_output (size:128b/16B) */
-struct hwrm_tf_session_open_output {
+/* hwrm_tf_session_register_output (size:128b/16B) */
+struct hwrm_tf_session_register_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33420,12 +35303,11 @@ struct hwrm_tf_session_open_output {
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
 	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session.
+	 * Unique session client identifier for the session created
+	 * by the firmware. It includes the session the client is
+	 * attached to and the session client info.
 	 */
-	uint32_t	fw_session_id;
+	uint32_t	fw_session_client_id;
 	/* unused. */
 	uint8_t	unused0[3];
 	/*
@@ -33439,13 +35321,13 @@ struct hwrm_tf_session_open_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/**************************
- * hwrm_tf_session_attach *
- **************************/
+/******************************
+ * hwrm_tf_session_unregister *
+ ******************************/
 
 
-/* hwrm_tf_session_attach_input (size:704b/88B) */
-struct hwrm_tf_session_attach_input {
+/* hwrm_tf_session_unregister_input (size:192b/24B) */
+struct hwrm_tf_session_unregister_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -33475,24 +35357,19 @@ struct hwrm_tf_session_attach_input {
 	 */
 	uint64_t	resp_addr;
 	/*
-	 * Unique session identifier for the session that the attach
-	 * request want to attach to. This value originates from the
-	 * shared session memory that the attach request opened by
-	 * way of the 'attach name' that was passed in to the core
-	 * attach API.
-	 * The fw_session_id of the attach session includes PCIe bus
-	 * info to distinguish the PF and session info to identify
-	 * the associated TruFlow session.
+	 * Unique session identifier for the session that the
+	 * unregister request wants to close a session client on.
 	 */
-	uint32_t	attach_fw_session_id;
-	/* unused. */
-	uint32_t	unused0;
-	/* Name of the session it self. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the session that the
+	 * unregister request wants to close.
+	 */
+	uint32_t	fw_session_client_id;
 } __rte_packed;
 
-/* hwrm_tf_session_attach_output (size:128b/16B) */
-struct hwrm_tf_session_attach_output {
+/* hwrm_tf_session_unregister_output (size:128b/16B) */
+struct hwrm_tf_session_unregister_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33501,16 +35378,8 @@ struct hwrm_tf_session_attach_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session. This fw_session_id is unique to the attach
-	 * request.
-	 */
-	uint32_t	fw_session_id;
 	/* unused. */
-	uint8_t	unused0[3];
+	uint8_t	unused0[7];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33746,15 +35615,17 @@ struct hwrm_tf_session_resc_qcaps_input {
 	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided qcaps_addr
+	 * Defines the size of the provided qcaps_addr array
 	 * buffer. The size should be set to the Resource Manager
-	 * provided max qcaps value that is device specific. This is
-	 * the max size possible.
+	 * provided max number of qcaps entries which is device
+	 * specific. Resource Manager gets the max size from HCAPI
+	 * RM.
 	 */
-	uint16_t	size;
+	uint16_t	qcaps_size;
 	/*
-	 * This is the DMA address for the qcaps output data
-	 * array. Array is of tf_rm_cap type and is device specific.
+	 * This is the DMA address for the qcaps output data array
+	 * buffer. Array is of tf_rm_resc_req_entry type and is
+	 * device specific.
 	 */
 	uint64_t	qcaps_addr;
 } __rte_packed;
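
The qcaps exchange follows a DMA-array convention: the caller provides an
array of tf_rm_resc_req_entry, passes its element count in qcaps_size and its
DMA address in qcaps_addr, and the firmware reports how many entries it
actually filled in the output size field. A sketch under those assumptions;
the DMA mapping and send helpers are hypothetical, and the session id and
direction flag setup is only hinted at:

#include <stdint.h>
#include <string.h>

/* Hypothetical helpers; not part of this patch. */
uint64_t dma_map(void *va, size_t len);
int bnxt_hwrm_send_resp(void *req, size_t req_len, void *resp, size_t resp_len);

static int tf_resc_qcaps(struct tf_rm_resc_req_entry *caps, uint16_t max_caps)
{
	struct hwrm_tf_session_resc_qcaps_input req;
	struct hwrm_tf_session_resc_qcaps_output resp;

	memset(&req, 0, sizeof(req));
	/* fw_session_id and the direction flag are set here in a real driver. */
	req.qcaps_size = max_caps;
	req.qcaps_addr = dma_map(caps, sizeof(*caps) * max_caps);
	if (bnxt_hwrm_send_resp(&req, sizeof(req), &resp, sizeof(resp)))
		return -1;
	return resp.size;	/* number of caps[] entries the firmware filled */
}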
@@ -33772,29 +35643,28 @@ struct hwrm_tf_session_resc_qcaps_output {
 	/* Control flags. */
 	uint32_t	flags;
 	/* Session reservation strategy. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_SFT \
 		0
 	/* Static partitioning. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_STATIC \
 		UINT32_C(0x0)
 	/* Strategy 1. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_1 \
 		UINT32_C(0x1)
 	/* Strategy 2. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_2 \
 		UINT32_C(0x2)
 	/* Strategy 3. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3 \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_LAST \
-		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3
 	/*
-	 * Size of the returned tf_rm_cap data array. The value
-	 * cannot exceed the size defined by the input msg. The data
-	 * array is returned using the qcaps_addr specified DMA
-	 * address also provided by the input msg.
+	 * Size of the returned qcaps_addr data array buffer. The
+	 * value cannot exceed the size defined by the input msg,
+	 * qcaps_size.
 	 */
 	uint16_t	size;
 	/* unused. */
@@ -33817,7 +35687,7 @@ struct hwrm_tf_session_resc_qcaps_output {
  ******************************/
 
 
-/* hwrm_tf_session_resc_alloc_input (size:256b/32B) */
+/* hwrm_tf_session_resc_alloc_input (size:320b/40B) */
 struct hwrm_tf_session_resc_alloc_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -33860,16 +35730,25 @@ struct hwrm_tf_session_resc_alloc_input {
 	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided num_addr
-	 * buffer.
+	 * Defines the array size of the provided req_addr and
+	 * resc_addr array buffers. Should be set to the number of
+	 * request entries.
 	 */
-	uint16_t	size;
+	uint16_t	req_size;
+	/*
+	 * This is the DMA address for the request input data array
+	 * buffer. Array is of tf_rm_resc_req_entry type. Size of the
+	 * array buffer is provided by the 'req_size' field in this
+	 * message.
+	 */
+	uint64_t	req_addr;
 	/*
-	 * This is the DMA address for the num input data array
-	 * buffer. Array is of tf_rm_num type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * This is the DMA address for the resc output data array
+	 * buffer. Array is of tf_rm_resc_entry type. Size of the array
+	 * buffer is provided by the 'req_size' field in this
+	 * message.
 	 */
-	uint64_t	num_addr;
+	uint64_t	resc_addr;
 } __rte_packed;
 
 /* hwrm_tf_session_resc_alloc_output (size:128b/16B) */
@@ -33882,8 +35761,15 @@ struct hwrm_tf_session_resc_alloc_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
+	/*
+	 * Size of the returned tf_rm_resc_entry data array. The value
+	 * cannot exceed the req_size defined by the input msg. The data
+	 * array is returned using the resc_addr specified DMA
+	 * address also provided by the input msg.
+	 */
+	uint16_t	size;
 	/* unused. */
-	uint8_t	unused0[7];
+	uint8_t	unused0[5];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33946,11 +35832,12 @@ struct hwrm_tf_session_resc_free_input {
 	 * Defines the size, in bytes, of the provided free_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	free_size;
 	/*
 	 * This is the DMA address for the free input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size field of this message.
+	 * buffer.  Array is of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'free_size' field of this
+	 * message.
 	 */
 	uint64_t	free_addr;
 } __rte_packed;
@@ -34029,11 +35916,12 @@ struct hwrm_tf_session_resc_flush_input {
 	 * Defines the size, in bytes, of the provided flush_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	flush_size;
 	/*
 	 * This is the DMA address for the flush input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * buffer.  Array of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'flush_size' field in this
+	 * message.
 	 */
 	uint64_t	flush_addr;
 } __rte_packed;
@@ -34062,12 +35950,9 @@ struct hwrm_tf_session_resc_flush_output {
 } __rte_packed;
 
 /* TruFlow RM capability of a resource. */
-/* tf_rm_cap (size:64b/8B) */
-struct tf_rm_cap {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_req_entry (size:64b/8B) */
+struct tf_rm_resc_req_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Minimum value. */
 	uint16_t	min;
@@ -34075,25 +35960,10 @@ struct tf_rm_cap {
 	uint16_t	max;
 } __rte_packed;
 
-/* TruFlow RM number of a resource. */
-/* tf_rm_num (size:64b/8B) */
-struct tf_rm_num {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
-	uint32_t	type;
-	/* Number of resources. */
-	uint32_t	num;
-} __rte_packed;
-
 /* TruFlow RM reservation information. */
-/* tf_rm_res (size:64b/8B) */
-struct tf_rm_res {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_entry (size:64b/8B) */
+struct tf_rm_resc_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Start offset. */
 	uint16_t	start;
@@ -34925,6 +36795,162 @@ struct hwrm_tf_ext_em_qcfg_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/*********************
+ * hwrm_tf_em_insert *
+ *********************/
+
+
+/* hwrm_tf_em_insert_input (size:832b/104B) */
+struct hwrm_tf_em_insert_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware Session Id. */
+	uint32_t	fw_session_id;
+	/* Control Flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX
+	/* Reported match strength. */
+	uint16_t	strength;
+	/* Index to action. */
+	uint32_t	action_ptr;
+	/* Index of EM record. */
+	uint32_t	em_record_idx;
+	/* EM Key value. */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
+/* hwrm_tf_em_insert_output (size:128b/16B) */
+struct hwrm_tf_em_insert_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* EM record pointer index. */
+	uint16_t	rptr_index;
+	/* EM record offset 0~3. */
+	uint8_t	rptr_entry;
+	/* Number of word entries consumed by the key. */
+	uint8_t	num_of_entries;
+	/* unused. */
+	uint32_t	unused0;
+} __rte_packed;
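
A minimal sketch of building an exact-match insert with these structures; the
send helper is hypothetical, the strength value is only an example, and
request-header and endian handling are omitted:

#include <stdint.h>
#include <string.h>

/* Hypothetical send-and-wait helper; not part of this patch. */
int bnxt_hwrm_send_resp(void *req, size_t req_len, void *resp, size_t resp_len);

static int tf_em_insert_rx(uint32_t fw_session_id, const uint8_t *key,
			   uint16_t key_bits, uint32_t action_ptr,
			   uint32_t record_idx)
{
	struct hwrm_tf_em_insert_input req;
	struct hwrm_tf_em_insert_output resp;
	size_t klen = (key_bits + 7) / 8;

	if (klen > sizeof(req.em_key))
		return -1;
	memset(&req, 0, sizeof(req));
	req.fw_session_id = fw_session_id;
	req.flags = HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX;
	req.strength = 3;			/* example match strength */
	req.action_ptr = action_ptr;
	req.em_record_idx = record_idx;
	memcpy(req.em_key, key, klen);
	req.em_key_bitlen = key_bits;
	if (bnxt_hwrm_send_resp(&req, sizeof(req), &resp, sizeof(resp)))
		return -1;
	/*
	 * resp.rptr_index and resp.rptr_entry locate the record;
	 * resp.num_of_entries is the number of key words it consumed.
	 */
	return 0;
}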
+
+/*********************
+ * hwrm_tf_em_delete *
+ *********************/
+
+
+/* hwrm_tf_em_delete_input (size:832b/104B) */
+struct hwrm_tf_em_delete_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Session Id. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX
+	/* Unused0 */
+	uint16_t	unused0;
+	/* EM internal flow handle. */
+	uint64_t	flow_handle;
+	/* EM Key value */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused1[3];
+} __rte_packed;
+
+/* hwrm_tf_em_delete_output (size:128b/16B) */
+struct hwrm_tf_em_delete_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Original stack allocation index. */
+	uint16_t	em_index;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
 /********************
  * hwrm_tf_tcam_set *
  ********************/
@@ -35582,10 +37608,10 @@ struct ctx_hw_stats {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35598,10 +37624,10 @@ struct ctx_hw_stats {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35618,7 +37644,11 @@ struct ctx_hw_stats {
 	uint64_t	tpa_aborts;
 } __rte_packed;
 
-/* Periodic statistics context DMA to host. */
+/*
+ * Extended periodic statistics context DMA to host. On cards that
+ * support TPA v2, additional TPA-related stats exist and can be retrieved
+ * by DMA of ctx_hw_stats_ext rather than the legacy ctx_hw_stats structure.
+ */
 /* ctx_hw_stats_ext (size:1344b/168B) */
 struct ctx_hw_stats_ext {
 	/* Number of received unicast packets */
@@ -35627,10 +37657,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35643,10 +37673,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35912,7 +37942,14 @@ struct hwrm_stat_ctx_query_input {
 	uint64_t	resp_addr;
 	/* ID of the statistics context that is being queried. */
 	uint32_t	stat_ctx_id;
-	uint8_t	unused_0[4];
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_STAT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK     UINT32_C(0x1)
+	uint8_t	unused_0[3];
 } __rte_packed;
 
 /* hwrm_stat_ctx_query_output (size:1408b/176B) */
@@ -35949,7 +37986,7 @@ struct hwrm_stat_ctx_query_output {
 	uint64_t	rx_bcast_pkts;
 	/* Number of received packets with error */
 	uint64_t	rx_err_pkts;
-	/* Number of dropped packets on received path */
+	/* Number of dropped packets on receive path */
 	uint64_t	rx_drop_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
@@ -35976,6 +38013,117 @@ struct hwrm_stat_ctx_query_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/***************************
+ * hwrm_stat_ext_ctx_query *
+ ***************************/
+
+
+/* hwrm_stat_ext_ctx_query_input (size:192b/24B) */
+struct hwrm_stat_ext_ctx_query_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* ID of the extended statistics context that is being queried. */
+	uint32_t	stat_ctx_id;
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_STAT_EXT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK \
+		UINT32_C(0x1)
+	uint8_t	unused_0[3];
+} __rte_packed;
+
+/* hwrm_stat_ext_ctx_query_output (size:1472b/184B) */
+struct hwrm_stat_ext_ctx_query_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on receive path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /***************************
  * hwrm_stat_ctx_eng_query *
  ***************************/
@@ -37565,6 +39713,13 @@ struct hwrm_nvm_install_update_input {
 	 */
 	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_ALLOWED_TO_DEFRAG \
 		UINT32_C(0x4)
+	/*
+	 * If set to 1, FW will verify the package in the "UPDATE" NVM item
+	 * without installing it. This flag is for FW internal use only.
+	 * Users should not set this flag. The request will otherwise fail.
+	 */
+	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_VERIFY_ONLY \
+		UINT32_C(0x8)
 	uint8_t	unused_0[2];
 } __rte_packed;
 
@@ -38115,6 +40270,72 @@ struct hwrm_nvm_validate_option_cmd_err {
 	uint8_t	unused_0[7];
 } __rte_packed;
 
+/****************
+ * hwrm_oem_cmd *
+ ****************/
+
+
+/* hwrm_oem_cmd_input (size:1024b/128B) */
+struct hwrm_oem_cmd_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific command data. */
+	uint32_t	oem_data[26];
+} __rte_packed;
+
+/* hwrm_oem_cmd_output (size:768b/96B) */
+struct hwrm_oem_cmd_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific response data. */
+	uint32_t	oem_data[18];
+	uint8_t	unused_1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /*****************
  * hwrm_fw_reset *
  ******************/
@@ -38338,6 +40559,55 @@ struct hwrm_port_ts_query_output {
 	uint8_t		valid;
 } __rte_packed;
 
+/*
+ * This structure is fixed at the beginning of the ChiMP SRAM (GRC
+ * offset: 0x31001F0). Host software is expected to read from this
+ * location for a defined signature. If it exists, the software can
+ * assume the presence of this structure and the validity of the
+ * FW_STATUS location in the next field.
+ */
+/* hcomm_status (size:64b/8B) */
+struct hcomm_status {
+	uint32_t	sig_ver;
+	/*
+	 * This field defines the version of the structure. The latest
+	 * version value is 1.
+	 */
+	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
+	#define HCOMM_STATUS_VER_SFT		0
+	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
+	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
+	/*
+	 * This field is to store the signature value to indicate the
+	 * presence of the structure.
+	 */
+	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
+	#define HCOMM_STATUS_SIGNATURE_SFT	8
+	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
+	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
+	uint32_t	fw_status_loc;
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
+	/* PCIE configuration space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
+	/* GRC space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
+	/* BAR0 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
+	/* BAR1 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
+		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
+	/*
+	 * This is the offset where the fw_status register is located. The value
+	 * is generally 4-byte aligned.
+	 */
+	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
+	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
+} __rte_packed;
+/* This is the GRC offset where the hcomm_status struct resides. */
+#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
+
 /**************************
  * hwrm_cfa_counter_qcaps *
  **************************/
@@ -38622,53 +40892,4 @@ struct hwrm_cfa_counter_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/*
- * This structure is fixed at the beginning of the ChiMP SRAM (GRC
- * offset: 0x31001F0). Host software is expected to read from this
- * location for a defined signature. If it exists, the software can
- * assume the presence of this structure and the validity of the
- * FW_STATUS location in the next field.
- */
-/* hcomm_status (size:64b/8B) */
-struct hcomm_status {
-	uint32_t	sig_ver;
-	/*
-	 * This field defines the version of the structure. The latest
-	 * version value is 1.
-	 */
-	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
-	#define HCOMM_STATUS_VER_SFT		0
-	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
-	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
-	/*
-	 * This field is to store the signature value to indicate the
-	 * presence of the structure.
-	 */
-	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
-	#define HCOMM_STATUS_SIGNATURE_SFT	8
-	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
-	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
-	uint32_t	fw_status_loc;
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
-	/* PCIE configuration space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
-	/* GRC space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
-	/* BAR0 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
-	/* BAR1 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
-		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
-	/*
-	 * This offset where the fw_status register is located. The value
-	 * is generally 4-byte aligned.
-	 */
-	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
-	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
-} __rte_packed;
-/* This is the GRC offset where the hcomm_status struct resides. */
-#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
-
 #endif /* _HSI_STRUCT_DEF_DPDK_H_ */
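As a quick stand-alone illustration of the hcomm_status words added above: the mask
and signature values below mirror the new defines, while the two sample register
values are invented purely for this sketch (they are not read from real hardware).

#include <stdint.h>
#include <stdio.h>

#define HCOMM_STATUS_VER_MASK             0xffu
#define HCOMM_STATUS_SIGNATURE_MASK       0xffffff00u
#define HCOMM_STATUS_SIGNATURE_VAL        (0x484353u << 8)
#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK 0x3u
#define HCOMM_STATUS_TRUE_OFFSET_MASK     0xfffffffcu

int main(void)
{
	/* Invented sample values: version 1, fw_status in GRC space @ 0x1000 */
	uint32_t sig_ver = HCOMM_STATUS_SIGNATURE_VAL | 0x1u;
	uint32_t fw_status_loc = 0x1000u | 0x1u;

	if ((sig_ver & HCOMM_STATUS_SIGNATURE_MASK) != HCOMM_STATUS_SIGNATURE_VAL) {
		printf("hcomm_status signature not present\n");
		return 1;
	}
	printf("struct version : %u\n",
	       (unsigned)(sig_ver & HCOMM_STATUS_VER_MASK));
	printf("address space  : %u (1 == GRC)\n",
	       (unsigned)(fw_status_loc & HCOMM_STATUS_TRUE_ADDR_SPACE_MASK));
	printf("fw_status off  : 0x%x\n",
	       (unsigned)(fw_status_loc & HCOMM_STATUS_TRUE_OFFSET_MASK));
	return 0;
}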
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 341909573..439950e02 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -86,6 +86,7 @@ struct tf_tbl_type_get_output;
 struct tf_em_internal_insert_input;
 struct tf_em_internal_insert_output;
 struct tf_em_internal_delete_input;
+struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -949,6 +950,8 @@ typedef struct tf_em_internal_insert_output {
 	uint16_t			 rptr_index;
 	/* EM record offset 0~3 */
 	uint8_t			  rptr_entry;
+	/* Number of word entries consumed by the key */
+	uint8_t			  num_of_entries;
 } tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
 
 /* Input params for EM INTERNAL rule delete */
@@ -969,4 +972,10 @@ typedef struct tf_em_internal_delete_input {
 	uint16_t			 em_key_bitlen;
 } tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
 
+/* Output params for EM INTERNAL rule delete */
+typedef struct tf_em_internal_delete_output {
+	/* Original stack allocation index */
+	uint16_t			 em_index;
+} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
index e5abcc2f2..b1fd2cd43 100644
--- a/drivers/net/bnxt/tf_core/lookup3.h
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -152,7 +152,6 @@ static inline uint32_t hashword(const uint32_t *k,
 		final(a, b, c);
 		/* Falls through. */
 	case 0:	    /* case 0: nothing left to add */
-		/* FALLTHROUGH */
 		break;
 	}
 	/*------------------------------------------------- report the result */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
index 9cfbd244f..954806377 100644
--- a/drivers/net/bnxt/tf_core/stack.c
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -27,6 +27,14 @@ stack_init(int num_entries, uint32_t *items, struct stack *st)
 	return 0;
 }
 
+/*
+ * Return the address of the items
+ */
+uint32_t *stack_items(struct stack *st)
+{
+	return st->items;
+}
+
 /* Return the size of the stack
  */
 int32_t
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
index ebd055592..6732e0313 100644
--- a/drivers/net/bnxt/tf_core/stack.h
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -36,6 +36,16 @@ int stack_init(int num_entries,
 	       uint32_t *items,
 	       struct stack *st);
 
+/** Return the address of the stack contents
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    pointer to the stack contents
+ */
+uint32_t *stack_items(struct stack *st);
+
 /** Return the size of the stack
  *
  *  [in] st
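A simplified stand-alone model of the ownership pattern behind the new
stack_items() accessor (this is not the driver's stack.c; the mini_stack
names are made up for the sketch): the caller allocates the items array,
hands it to the init routine, and later needs the same pointer back in
order to free it, which is what tf_free_em_pool() does in tf_core.c below.

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

struct mini_stack {
	int max;
	int top;
	uint32_t *items;	/* caller-owned backing array */
};

static int mini_stack_init(int num, uint32_t *items, struct mini_stack *st)
{
	if (items == NULL || num <= 0)
		return -1;
	st->max = num;
	st->top = -1;
	st->items = items;
	return 0;
}

/* Mirrors the new stack_items(): hand back the caller-owned backing array. */
static uint32_t *mini_stack_items(struct mini_stack *st)
{
	return st->items;
}

int main(void)
{
	struct mini_stack st;
	uint32_t *pool = calloc(16, sizeof(uint32_t));

	if (mini_stack_init(16, pool, &st) != 0)
		return 1;
	/* ... push/pop EM indexes here ... */
	free(mini_stack_items(&st));	/* same pointer that was allocated */
	printf("pool freed via stack_items()\n");
	return 0;
}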
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index cf9f36adb..1f6c33ab5 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -45,6 +45,100 @@ static void tf_seeds_init(struct tf_session *session)
 	}
 }
 
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   number of entries to write
+ *
+ * Return:
+ *  0       - Success, pool created
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, pool not created, out of resources
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ *
+ * Return: none
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	tfp_free(ptr);
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -54,6 +148,7 @@ tf_open_session(struct tf                    *tfp,
 	struct tfp_calloc_parms alloc_parms;
 	unsigned int domain, bus, slot, device;
 	uint8_t fw_session_id;
+	int dir;
 
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
@@ -110,7 +205,7 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup;
 	}
 
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
+	tfp->session = alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -175,6 +270,16 @@ tf_open_session(struct tf                    *tfp,
 	/* Setup hash seeds */
 	tf_seeds_init(session);
 
+	/* Initialize EM pool */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EM Pool initialization failed\n");
+			goto cleanup_close;
+		}
+	}
+
 	session->ref_count++;
 
 	/* Return session ID */
@@ -239,6 +344,7 @@ tf_close_session(struct tf *tfp)
 	int rc_close = 0;
 	struct tf_session *tfs;
 	union tf_session_id session_id;
+	int dir;
 
 	if (tfp == NULL || tfp->session == NULL)
 		return -EINVAL;
@@ -268,6 +374,10 @@ tf_close_session(struct tf *tfp)
 
 	/* Final cleanup as we're last user of the session */
 	if (tfs->ref_count == 0) {
+		/* Free EM pool */
+		for (dir = 0; dir < TF_DIR_MAX; dir++)
+			tf_free_em_pool(tfs, dir);
+
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
 		tfp->session = NULL;
@@ -301,16 +411,25 @@ int tf_insert_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
+					 (tfp->session->core_data),
+					 parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
 	/* Process the EM entry per Table Scope type */
-	return tf_insert_eem_entry((struct tf_session *)tfp->session->core_data,
-				   tbl_scope_cb,
-				   parms);
+	if (parms->mem == TF_MEM_EXTERNAL) {
+		/* External EEM */
+		return tf_insert_eem_entry((struct tf_session *)
+					   (tfp->session->core_data),
+					   tbl_scope_cb,
+					   parms);
+	} else if (parms->mem == TF_MEM_INTERNAL) {
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp, parms);
+	}
+
+	return -EINVAL;
 }
 
 /** Delete EM hash entry API
@@ -327,13 +446,16 @@ int tf_delete_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
+					 (tfp->session->core_data),
+					 parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
-	return tf_delete_eem_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tfp, parms);
+	else
+		return tf_delete_em_internal_entry(tfp, parms);
 }
 
 /** allocate identifier resource
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 1eedd80e7..81ff7602f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -44,44 +44,7 @@ enum tf_mem {
 };
 
 /**
- * The size of the external action record (Wh+/Brd2)
- *
- * Currently set to 512.
- *
- * AR (16B) + encap (256B) + stats_ptrs (8) + resvd (8)
- * + stats (16) = 304 aligned on a 16B boundary
- *
- * Theoretically, the size should be smaller. ~304B
- */
-#define TF_ACTION_RECORD_SZ 512
-
-/**
- * External pool size
- *
- * Defines a single pool of external action records of
- * fixed size.  Currently, this is an index.
- */
-#define TF_EXT_POOL_ENTRY_SZ_BYTES 1
-
-/**
- *  External pool entry count
- *
- *  Defines the number of entries in the external action pool
- */
-#define TF_EXT_POOL_ENTRY_CNT (1 * 1024)
-
-/**
- * Number of external pools
- */
-#define TF_EXT_POOL_CNT_MAX 1
-
-/**
- * External pool Id
- */
-#define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
-#define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
-
-/** EEM record AR helper
+ * EEM record AR helper
  *
  * Helper to handle the Action Record Pointer in the EEM Record Entry.
  *
@@ -109,7 +72,8 @@ enum tf_mem {
  */
 
 
-/** Session Version defines
+/**
+ * Session Version defines
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -119,7 +83,8 @@ enum tf_mem {
 #define TF_SESSION_VER_MINOR  0   /**< Minor Version */
 #define TF_SESSION_VER_UPDATE 0   /**< Update Version */
 
-/** Session Name
+/**
+ * Session Name
  *
  * Name of the TruFlow control channel interface.  Expects
  * format to be RTE Name specific, i.e. rte_eth_dev_get_name_by_port()
@@ -128,7 +93,8 @@ enum tf_mem {
 
 #define TF_FW_SESSION_ID_INVALID  0xFF  /**< Invalid FW Session ID define */
 
-/** Session Identifier
+/**
+ * Session Identifier
  *
  * Unique session identifier which includes PCIe bus info to
  * distinguish the PF and session info to identify the associated
@@ -146,7 +112,8 @@ union tf_session_id {
 	} internal;
 };
 
-/** Session Version
+/**
+ * Session Version
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -160,8 +127,8 @@ struct tf_session_version {
 	uint8_t update;
 };
 
-/** Session supported device types
- *
+/**
+ * Session supported device types
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
@@ -171,6 +138,147 @@ enum tf_device_type {
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
+/** Identifier resource types
+ */
+enum tf_identifier_type {
+	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	 *  and can be used in WC TCAM or EM keys to virtualize further
+	 *  lookups.
+	 */
+	TF_IDENT_TYPE_L2_CTXT,
+	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	 *  to enable virtualization of the profile TCAM.
+	 */
+	TF_IDENT_TYPE_PROF_FUNC,
+	/** The WC profile ID is included in the WC lookup key
+	 *  to enable virtualization of the WC TCAM hardware.
+	 */
+	TF_IDENT_TYPE_WC_PROF,
+	/** The EM profile ID is included in the EM lookup key
+	 *  to enable virtualization of the EM hardware. (not required for SR2
+	 *  as it has table scope)
+	 */
+	TF_IDENT_TYPE_EM_PROF,
+	/** The L2 func is included in the ILT result and from recycling to
+	 *  enable virtualization of further lookups.
+	 */
+	TF_IDENT_TYPE_L2_FUNC,
+	TF_IDENT_TYPE_MAX
+};
+
+/**
+ * Enumeration of TruFlow table types. A table type is used to identify a
+ * resource object.
+ *
+ * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
+ * the only table type that is connected with a table scope.
+ */
+enum tf_tbl_type {
+	/* Internal */
+
+	/** Wh+/SR Action Record */
+	TF_TBL_TYPE_FULL_ACT_RECORD,
+	/** Wh+/SR/Th Multicast Groups */
+	TF_TBL_TYPE_MCAST_GROUPS,
+	/** Wh+/SR Action Encap 8 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_8B,
+	/** Wh+/SR Action Encap 16 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_16B,
+	/** Action Encap 32 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_32B,
+	/** Wh+/SR Action Encap 64 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_64B,
+	/** Action Source Properties SMAC */
+	TF_TBL_TYPE_ACT_SP_SMAC,
+	/** Wh+/SR Action Source Properties SMAC IPv4 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	/** Action Source Properties SMAC IPv6 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
+	/** Wh+/SR Action Statistics 64 Bits */
+	TF_TBL_TYPE_ACT_STATS_64,
+	/** Wh+/SR Action Modify L4 Src Port */
+	TF_TBL_TYPE_ACT_MODIFY_SPORT,
+	/** Wh+/SR Action Modify L4 Dest Port */
+	TF_TBL_TYPE_ACT_MODIFY_DPORT,
+	/** Wh+/SR Action Modify IPv4 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
+	/** Wh+/SR Action Modify IPv4 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
+	/** Action Modify IPv6 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
+	/** Action Modify IPv6 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
+	/** Meter Profiles */
+	TF_TBL_TYPE_METER_PROF,
+	/** Meter Instance */
+	TF_TBL_TYPE_METER_INST,
+	/** Mirror Config */
+	TF_TBL_TYPE_MIRROR_CONFIG,
+	/** UPAR */
+	TF_TBL_TYPE_UPAR,
+	/** SR2 Epoch 0 table */
+	TF_TBL_TYPE_EPOCH0,
+	/** SR2 Epoch 1 table  */
+	TF_TBL_TYPE_EPOCH1,
+	/** SR2 Metadata  */
+	TF_TBL_TYPE_METADATA,
+	/** SR2 CT State  */
+	TF_TBL_TYPE_CT_STATE,
+	/** SR2 Range Profile  */
+	TF_TBL_TYPE_RANGE_PROF,
+	/** SR2 Range Entry  */
+	TF_TBL_TYPE_RANGE_ENTRY,
+	/** SR2 LAG Entry  */
+	TF_TBL_TYPE_LAG,
+	/** SR2 VNIC/SVIF Table */
+	TF_TBL_TYPE_VNIC_SVIF,
+	/** Th/SR2 EM Flexible Key builder */
+	TF_TBL_TYPE_EM_FKB,
+	/** Th/SR2 WC Flexible Key builder */
+	TF_TBL_TYPE_WC_FKB,
+
+	/* External */
+
+	/** External table type - initially 1 poolsize entries.
+	 * All External table types are associated with a table
+	 * scope. Internal types are not.
+	 */
+	TF_TBL_TYPE_EXT,
+	TF_TBL_TYPE_MAX
+};
+
+/**
+ * TCAM table type
+ */
+enum tf_tcam_tbl_type {
+	/** L2 Context TCAM */
+	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	/** Profile TCAM */
+	TF_TCAM_TBL_TYPE_PROF_TCAM,
+	/** Wildcard TCAM */
+	TF_TCAM_TBL_TYPE_WC_TCAM,
+	/** Source Properties TCAM */
+	TF_TCAM_TBL_TYPE_SP_TCAM,
+	/** Connection Tracking Rule TCAM */
+	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
+	/** Virtual Edge Bridge TCAM */
+	TF_TCAM_TBL_TYPE_VEB_TCAM,
+	TF_TCAM_TBL_TYPE_MAX
+};
+
+/**
+ * EM Resources
+ * These defines are provisioned during
+ * tf_open_session()
+ */
+enum tf_em_tbl_type {
+	/** The number of internal EM records for the session */
+	TF_EM_TBL_TYPE_EM_RECORD,
+	/** The number of table scopes requested */
+	TF_EM_TBL_TYPE_TBL_SCOPE,
+	TF_EM_TBL_TYPE_MAX
+};
+
 /** TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
@@ -309,6 +417,30 @@ struct tf_open_session_parms {
 	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
 	 */
 	enum tf_device_type device_type;
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
 };
 
 /**
@@ -417,31 +549,6 @@ int tf_close_session(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
-	 *  and can be used in WC TCAM or EM keys to virtualize further
-	 *  lookups.
-	 */
-	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
-	 *  to enable virtualization of the profile TCAM.
-	 */
-	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
-	 *  to enable virtualization of the WC TCAM hardware.
-	 */
-	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
-	 *  to enable virtualization of the EM hardware. (not required for Brd4
-	 *  as it has table scope)
-	 */
-	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
-	 *  enable virtualization of further lookups.
-	 */
-	TF_IDENT_TYPE_L2_FUNC
-};
-
 /** tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
@@ -631,19 +738,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
 
-/**
- * TCAM table type
- */
-enum tf_tcam_tbl_type {
-	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	TF_TCAM_TBL_TYPE_PROF_TCAM,
-	TF_TCAM_TBL_TYPE_WC_TCAM,
-	TF_TCAM_TBL_TYPE_SP_TCAM,
-	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
-	TF_TCAM_TBL_TYPE_VEB_TCAM,
-	TF_TCAM_TBL_TYPE_MAX
-
-};
 
 /**
  * @page tcam TCAM Access
@@ -813,7 +907,8 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** get TCAM entry
+/*
+ * get TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -824,7 +919,8 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/** tf_free_tcam_entry parameter definition
+/*
+ * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
 	/**
@@ -845,8 +941,7 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free TCAM entry
- *
+/*
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -873,84 +968,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  */
 
 /**
- * Enumeration of TruFlow table types. A table type is used to identify a
- * resource object.
- *
- * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
- * the only table type that is connected with a table scope.
- */
-enum tf_tbl_type {
-	/** Wh+/Brd2 Action Record */
-	TF_TBL_TYPE_FULL_ACT_RECORD,
-	/** Multicast Groups */
-	TF_TBL_TYPE_MCAST_GROUPS,
-	/** Action Encap 8 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_8B,
-	/** Action Encap 16 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_16B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_32B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_64B,
-	/** Action Source Properties SMAC */
-	TF_TBL_TYPE_ACT_SP_SMAC,
-	/** Action Source Properties SMAC IPv4 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
-	/** Action Source Properties SMAC IPv6 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
-	/** Action Statistics 64 Bits */
-	TF_TBL_TYPE_ACT_STATS_64,
-	/** Action Modify L4 Src Port */
-	TF_TBL_TYPE_ACT_MODIFY_SPORT,
-	/** Action Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_DPORT,
-	/** Action Modify IPv4 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
-	/** Action _Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
-
-	/* HW */
-
-	/** Meter Profiles */
-	TF_TBL_TYPE_METER_PROF,
-	/** Meter Instance */
-	TF_TBL_TYPE_METER_INST,
-	/** Mirror Config */
-	TF_TBL_TYPE_MIRROR_CONFIG,
-	/** UPAR */
-	TF_TBL_TYPE_UPAR,
-	/** Brd4 Epoch 0 table */
-	TF_TBL_TYPE_EPOCH0,
-	/** Brd4 Epoch 1 table  */
-	TF_TBL_TYPE_EPOCH1,
-	/** Brd4 Metadata  */
-	TF_TBL_TYPE_METADATA,
-	/** Brd4 CT State  */
-	TF_TBL_TYPE_CT_STATE,
-	/** Brd4 Range Profile  */
-	TF_TBL_TYPE_RANGE_PROF,
-	/** Brd4 Range Entry  */
-	TF_TBL_TYPE_RANGE_ENTRY,
-	/** Brd4 LAG Entry  */
-	TF_TBL_TYPE_LAG,
-	/** Brd4 only VNIC/SVIF Table */
-	TF_TBL_TYPE_VNIC_SVIF,
-
-	/* External */
-
-	/** External table type - initially 1 poolsize entries.
-	 * All External table types are associated with a table
-	 * scope. Internal types are not.
-	 */
-	TF_TBL_TYPE_EXT,
-	TF_TBL_TYPE_MAX
-};
-
-/** tf_alloc_tbl_entry parameter definition
+ * tf_alloc_tbl_entry parameter definition
  */
 struct tf_alloc_tbl_entry_parms {
 	/**
@@ -993,7 +1011,8 @@ struct tf_alloc_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** allocate index table entries
+/**
+ * allocate index table entries
  *
  * Internal types:
  *
@@ -1023,7 +1042,8 @@ struct tf_alloc_tbl_entry_parms {
 int tf_alloc_tbl_entry(struct tf *tfp,
 		       struct tf_alloc_tbl_entry_parms *parms);
 
-/** tf_free_tbl_entry parameter definition
+/**
+ * tf_free_tbl_entry parameter definition
  */
 struct tf_free_tbl_entry_parms {
 	/**
@@ -1049,7 +1069,8 @@ struct tf_free_tbl_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free index table entry
+/**
+ * free index table entry
  *
  * Used to free a previously allocated table entry.
  *
@@ -1075,7 +1096,8 @@ struct tf_free_tbl_entry_parms {
 int tf_free_tbl_entry(struct tf *tfp,
 		      struct tf_free_tbl_entry_parms *parms);
 
-/** tf_set_tbl_entry parameter definition
+/**
+ * tf_set_tbl_entry parameter definition
  */
 struct tf_set_tbl_entry_parms {
 	/**
@@ -1104,7 +1126,8 @@ struct tf_set_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** set index table entry
+/**
+ * set index table entry
  *
  * Used to insert an application programmed index table entry into a
  * previous allocated table location.  A shadow copy of the table
@@ -1115,7 +1138,8 @@ struct tf_set_tbl_entry_parms {
 int tf_set_tbl_entry(struct tf *tfp,
 		     struct tf_set_tbl_entry_parms *parms);
 
-/** tf_get_tbl_entry parameter definition
+/**
+ * tf_get_tbl_entry parameter definition
  */
 struct tf_get_tbl_entry_parms {
 	/**
@@ -1140,7 +1164,8 @@ struct tf_get_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** get index table entry
+/**
+ * get index table entry
  *
  * Used to retrieve a previous set index table entry.
  *
@@ -1163,7 +1188,8 @@ int tf_get_tbl_entry(struct tf *tfp,
  * @ref tf_search_em_entry
  *
  */
-/** tf_insert_em_entry parameter definition
+/**
+ * tf_insert_em_entry parameter definition
  */
 struct tf_insert_em_entry_parms {
 	/**
@@ -1239,6 +1265,10 @@ struct tf_delete_em_entry_parms {
 	 * 2 element array with 2 ids. (Brd4 only)
 	 */
 	uint16_t *epochs;
+	/**
+	 * [out] The index of the entry
+	 */
+	uint16_t index;
 	/**
 	 * [in] structure containing flow delete handle information
 	 */
@@ -1291,7 +1321,8 @@ struct tf_search_em_entry_parms {
 	uint64_t flow_handle;
 };
 
-/** insert em hash entry in internal table memory
+/**
+ * insert em hash entry in internal table memory
  *
  * Internal:
  *
@@ -1328,7 +1359,8 @@ struct tf_search_em_entry_parms {
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms);
 
-/** delete em hash entry table memory
+/**
+ * delete em hash entry table memory
  *
  * Internal:
  *
@@ -1353,7 +1385,8 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms);
 
-/** search em hash entry table memory
+/**
+ * search em hash entry table memory
  *
  * Internal:
 
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index bd8e2ba8a..fd1797e39 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -287,7 +287,7 @@ static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
 }
 
 static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t	       *in_key,
+				    uint8_t *in_key,
 				    struct tf_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
@@ -308,7 +308,7 @@ static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
  * EEXIST  - Key does exist in table at "index" in table "table".
  * TF_ERR     - Something went horribly wrong.
  */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 					  enum tf_dir dir,
 					  struct tf_eem_64b_entry *entry,
 					  uint32_t key0_hash,
@@ -368,8 +368,8 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session	   *session,
-			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
@@ -457,6 +457,96 @@ int tf_insert_eem_entry(struct tf_session	   *session,
 	return -EINVAL;
 }
 
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success
+ */
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms)
+{
+	int       rc;
+	uint32_t  gfid;
+	uint16_t  rptr_index = 0;
+	uint8_t   rptr_entry = 0;
+	uint8_t   num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG
+		   (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc != 0)
+		return -1;
+
+	PMD_DRV_LOG
+		   (ERR,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0
+ * -EINVAL
+ */
+int tf_delete_em_internal_entry(struct tf *tfp,
+				struct tf_delete_em_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
+
+
 /** delete EEM hash entry API
  *
  * returns:
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 8a3584fbd..c1805df73 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -12,6 +12,20 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+/*
+ * Used to build GFID:
+ *
+ *   15           2  0
+ *  +--------------+--+
+ *  |   Index      |E |
+ *  +--------------+--+
+ *
+ * E = Entry (bucket index)
+ */
+#define TF_EM_INTERNAL_INDEX_SHIFT 2
+#define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
+#define TF_EM_INTERNAL_ENTRY_MASK  0x3
+
 /** EEM Entry header
  *
  */
@@ -53,6 +67,17 @@ struct tf_eem_64b_entry {
 	struct tf_eem_entry_hdr hdr;
 };
 
+/** EM Entry
+ *  Each EM entry is 512-bit (64-bytes) but ordered differently to
+ *  EEM.
+ */
+struct tf_em_64b_entry {
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+};
+
 /**
  * Allocates EEM Table scope
  *
@@ -106,9 +131,15 @@ int tf_insert_eem_entry(struct tf_session *session,
 			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms);
 
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms);
+
 int tf_delete_eem_entry(struct tf *tfp,
 			struct tf_delete_em_entry_parms *parms);
 
+int tf_delete_em_internal_entry(struct tf                       *tfp,
+				struct tf_delete_em_entry_parms *parms);
+
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
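The internal-EM index packing described in the GFID comment above (and used by
tf_insert_em_internal_entry() in tf_em.c) can be checked with this stand-alone
snippet; the shift and mask values mirror the new defines, the sample index and
bucket-entry numbers are arbitrary.

#include <stdint.h>
#include <stdio.h>

#define TF_EM_INTERNAL_INDEX_SHIFT 2
#define TF_EM_INTERNAL_INDEX_MASK  0xFFFC
#define TF_EM_INTERNAL_ENTRY_MASK  0x3

int main(void)
{
	uint16_t rptr_index = 40;	/* e.g. EM pool index 10 * 4 blocks */
	uint8_t rptr_entry = 3;		/* bucket entry 0..3 */
	uint16_t packed;

	packed = (uint16_t)((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
			    rptr_entry);
	printf("packed value : 0x%04x\n", (unsigned)packed);
	printf("record index : %u\n",
	       (unsigned)((packed & TF_EM_INTERNAL_INDEX_MASK) >>
			  TF_EM_INTERNAL_INDEX_SHIFT));
	printf("bucket entry : %u\n",
	       (unsigned)(packed & TF_EM_INTERNAL_ENTRY_MASK));
	return 0;
}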
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
index 417a99cda..1491539ca 100644
--- a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -90,6 +90,18 @@ do {									\
 		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
 } while (0)
 
+#define TF_GET_NUM_KEY_ENTRIES_FROM_FLOW_HANDLE(flow_handle,		\
+					  num_key_entries)		\
+	(num_key_entries =						\
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		     TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT))		\
+
+#define TF_GET_ENTRY_NUM_FROM_FLOW_HANDLE(flow_handle,		\
+					  entry_num)		\
+	(entry_num =						\
+		(((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT))		\
+
 /*
  * 32 bit Flow ID handlers
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index beecafdeb..554a8491d 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -16,6 +16,7 @@
 #include "tf_msg.h"
 #include "hsi_struct_def_dpdk.h"
 #include "hwrm_tf.h"
+#include "tf_em.h"
 
 /**
  * Endian converts min and max values from the HW response to the query
@@ -1013,15 +1014,94 @@ int tf_msg_em_cfg(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_insert_input req = { 0 };
+	struct tf_em_internal_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
+		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_INSERT,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends EM internal delete request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_delete_input req = { 0 };
+	struct tf_em_internal_delete_output resp = { 0 };
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.tf_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_DELETE,
+		 req,
+		resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 /**
  * Sends EM operation request to Firmware
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op)
+		 int dir,
+		 uint16_t op)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_input req = {0};
 	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
 	struct tfp_send_msg_parms parms = { 0 };
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 030d1881e..89f7370cc 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -121,6 +121,19 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				    struct tf_insert_em_entry_parms *params,
+				    uint16_t *rptr_index,
+				    uint8_t *rptr_entry,
+				    uint8_t *num_of_entries);
+/**
+ * Sends EM internal delete request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms);
 /**
  * Sends EM mem register request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 50ef2d530..c9f4f8f04 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -13,12 +13,25 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "stack.h"
 
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
+/**
+ * Number of EM entries. Static for now; this will be removed
+ * when a parameter is added at a later date. At this stage we
+ * are using fixed size entries so that each stack entry
+ * represents 4 RT (f/n)blocks. So we take the total block
+ * allocation for truflow and divide that by 4.
+ */
+#define TF_SESSION_TOTAL_FN_BLOCKS (1024 * 8) /* 8K blocks */
+#define TF_SESSION_EM_ENTRY_SIZE 4 /* 4 blocks per entry */
+#define TF_SESSION_EM_POOL_SIZE \
+	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
+
 /** Session
  *
  * Shared memory containing private TruFlow session information.
@@ -289,6 +302,11 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+	/**
+	 * EM Pools
+	 */
+	struct stack em_pool[TF_DIR_MAX];
 };
 
 #endif /* _TF_SESSION_H_ */
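A small stand-alone check of the pool sizing above and of the index arithmetic
used by the EM insert/delete paths in tf_em.c (pool index * TF_SESSION_EM_ENTRY_SIZE
on insert, divided back on delete). The defines are repeated from the hunk above;
since tf_create_em_pool() pushes indexes from num_entries - 1 down to 0, the first
stack_pop() returns index 0, which is the sample value used here.

#include <stdio.h>

#define TF_SESSION_TOTAL_FN_BLOCKS (1024 * 8)
#define TF_SESSION_EM_ENTRY_SIZE   4
#define TF_SESSION_EM_POOL_SIZE \
	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)

int main(void)
{
	int pool_idx = 0;	/* first index popped from a freshly filled pool */
	int rptr_base = pool_idx * TF_SESSION_EM_ENTRY_SIZE;

	printf("EM pool entries per direction  : %d\n", TF_SESSION_EM_POOL_SIZE);
	printf("record base for pool index %d   : %d\n", pool_idx, rptr_base);
	printf("pool index recovered on delete : %d\n",
	       rptr_base / TF_SESSION_EM_ENTRY_SIZE);
	return 0;
}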
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d900c9c09..dda72c3d5 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -156,7 +156,7 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
 		if (tfp_calloc(&parms) != 0)
 			goto cleanup;
 
-		tp->pg_pa_tbl[i] = (uint64_t)(uintptr_t)parms.mem_pa;
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
 		tp->pg_va_tbl[i] = parms.mem_va;
 
 		memset(tp->pg_va_tbl[i], 0, pg_size);
@@ -792,7 +792,8 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	index = parms->idx;
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1179,7 +1180,8 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1330,7 +1332,8 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1801,3 +1804,91 @@ tf_free_tbl_entry(struct tf *tfp,
 			    rc);
 	return rc;
 }
+
+
+static void
+tf_dump_link_page_table(struct tf_em_page_tbl *tp,
+			struct tf_em_page_tbl *tp_next)
+{
+	uint64_t *pg_va;
+	uint32_t i;
+	uint32_t j;
+	uint32_t k = 0;
+
+	printf("pg_count:%d pg_size:0x%x\n",
+	       tp->pg_count,
+	       tp->pg_size);
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+		printf("\t%p\n", (void *)pg_va);
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
+			if (((pg_va[j] & 0x7) ==
+			     tfp_cpu_to_le_64(PTU_PTE_LAST |
+					      PTU_PTE_VALID)))
+				return;
+
+			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
+				printf("** Invalid entry **\n");
+				return;
+			}
+
+			if (++k >= tp_next->pg_count) {
+				printf("** Shouldn't get here **\n");
+				return;
+			}
+		}
+	}
+}
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
+{
+	struct tf_session      *session;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_page_tbl *tp;
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_table *tbl;
+	int i;
+	int j;
+	int dir;
+
+	printf("called %s\n", __func__);
+
+	/* find session struct */
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* find control block for table scope */
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR, "No table scope\n");
+		return;
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
+
+		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
+			printf
+	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
+			       j,
+			       tbl->type,
+			       tbl->num_entries,
+			       tbl->entry_size,
+			       tbl->num_lvl);
+			if (tbl->pg_tbl[0].pg_va_tbl &&
+			    tbl->pg_tbl[0].pg_pa_tbl)
+				printf("%p %p\n",
+			       tbl->pg_tbl[0].pg_va_tbl[0],
+			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
+			for (i = 0; i < tbl->num_lvl - 1; i++) {
+				printf("Level:%d\n", i);
+				tp = &tbl->pg_tbl[i];
+				tp_next = &tbl->pg_tbl[i + 1];
+				tf_dump_link_page_table(tp, tp_next);
+			}
+			printf("\n");
+		}
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index bdc6288ee..6cda4875b 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -76,38 +76,51 @@ struct tf_tbl_scope_cb {
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Round-down other page sizes to the lower hardware page
+ * size supported.
  */
-#define BNXT_PAGE_SHIFT 22 /** 2M */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define PAGE_SIZE TF_EM_PAGE_SIZE_2M
 
-#if (BNXT_PAGE_SHIFT < 12)				/** < 4K >> 4K */
-#define TF_EM_PAGE_SHIFT 12
+#if (PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (BNXT_PAGE_SHIFT <= 13)			/** 4K, 8K */
-#define TF_EM_PAGE_SHIFT 13
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (BNXT_PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
-#define TF_EM_PAGE_SHIFT 15
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
-#elif (BNXT_PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
-#define TF_EM_PAGE_SHIFT 16
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (BNXT_PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
-#define TF_EM_PAGE_SHIFT 18
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (BNXT_PAGE_SHIFT <= 21)			/** 1M */
-#define TF_EM_PAGE_SHIFT 20
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (BNXT_PAGE_SHIFT <= 22)			/** 2M, 4M */
-#define TF_EM_PAGE_SHIFT 21
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (BNXT_PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
-#define TF_EM_PAGE_SHIFT 22
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#else						/** >= 1G >> 1G */
-#define TF_EM_PAGE_SHIFT	30
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
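A tiny stand-alone check of the page-size selection above: with PAGE_SIZE set to
TF_EM_PAGE_SIZE_2M the chain resolves TF_EM_PAGE_SHIFT to 21 and TF_EM_PAGE_SIZE
to 2 MiB. Only the defines needed for the arithmetic are repeated here; the
HWRM page-size enum selection is left out of the sketch.

#include <stdio.h>

#define TF_EM_PAGE_SIZE_2M 21
#define PAGE_SIZE TF_EM_PAGE_SIZE_2M
#define TF_EM_PAGE_SHIFT PAGE_SIZE
#define TF_EM_PAGE_SIZE (1 << TF_EM_PAGE_SHIFT)

int main(void)
{
	printf("TF_EM_PAGE_SHIFT : %d\n", TF_EM_PAGE_SHIFT);
	printf("TF_EM_PAGE_SIZE  : %d bytes (%d MiB)\n",
	       TF_EM_PAGE_SIZE, TF_EM_PAGE_SIZE >> 20);
	return 0;
}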
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8d5e94e1a..fe49b6304 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -3,14 +3,23 @@
  * All rights reserved.
  */
 
-/* This header file defines the Portability structures and APIs for
+/*
+ * This header file defines the Portability structures and APIs for
  * TruFlow.
  */
 
 #ifndef _TFP_H_
 #define _TFP_H_
 
+#include <rte_config.h>
 #include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_byteorder.h>
+
+/**
+ * DPDK/Driver specific log level for the BNXT Eth driver.
+ */
+extern int bnxt_logtype_driver;
 
 /** Spinlock
  */
@@ -18,13 +27,21 @@ struct tfp_spinlock_parms {
 	rte_spinlock_t slock;
 };
 
+#define TFP_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define TFP_DRV_LOG(level, fmt, args...) \
+	TFP_DRV_LOG_RAW(level, fmt, ## args)
+
 /**
  * @file
  *
  * TrueFlow Portability API Header File
  */
 
-/** send message parameter definition
+/**
+ * send message parameter definition
  */
 struct tfp_send_msg_parms {
 	/**
@@ -62,7 +79,8 @@ struct tfp_send_msg_parms {
 	uint32_t *resp_data;
 };
 
-/** calloc parameter definition
+/**
+ * calloc parameter definition
  */
 struct tfp_calloc_parms {
 	/**
@@ -96,43 +114,15 @@ struct tfp_calloc_parms {
  * @ref tfp_send_msg_tunneled
  *
  * @ref tfp_calloc
- * @ref tfp_free
  * @ref tfp_memcpy
+ * @ref tfp_free
  *
  * @ref tfp_spinlock_init
  * @ref tfp_spinlock_lock
  * @ref tfp_spinlock_unlock
  *
- * @ref tfp_cpu_to_le_16
- * @ref tfp_le_to_cpu_16
- * @ref tfp_cpu_to_le_32
- * @ref tfp_le_to_cpu_32
- * @ref tfp_cpu_to_le_64
- * @ref tfp_le_to_cpu_64
- * @ref tfp_cpu_to_be_16
- * @ref tfp_be_to_cpu_16
- * @ref tfp_cpu_to_be_32
- * @ref tfp_be_to_cpu_32
- * @ref tfp_cpu_to_be_64
- * @ref tfp_be_to_cpu_64
  */
 
-#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
-#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
-#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
-#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
-#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
-#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
-#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
-#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
-#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
-#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
-#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
-#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
-#define tfp_bswap_16(val) rte_bswap16(val)
-#define tfp_bswap_32(val) rte_bswap32(val)
-#define tfp_bswap_64(val) rte_bswap64(val)
-
 /**
  * Provides communication capability from the TrueFlow API layer to
  * the TrueFlow firmware. The portability layer internally provides
@@ -162,9 +152,24 @@ int tfp_send_msg_direct(struct tf *tfp,
  *   -1             - Global error like not supported
  *   -EINVAL        - Parameter Error
  */
-int tfp_send_msg_tunneled(struct tf                 *tfp,
+int tfp_send_msg_tunneled(struct tf *tfp,
 			  struct tfp_send_msg_parms *parms);
 
+/**
+ * Sends OEM command message to Chimp
+ *
+ * [in] tfp, pointer to session handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
 /**
  * Allocates zero'ed memory from the heap.
  *
@@ -179,10 +184,58 @@ int tfp_send_msg_tunneled(struct tf                 *tfp,
  *   -EINVAL        - Parameter error
  */
 int tfp_calloc(struct tfp_calloc_parms *parms);
-
-void tfp_free(void *addr);
 void tfp_memcpy(void *dest, void *src, size_t n);
+void tfp_free(void *addr);
+
 void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
+
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
+
+/*
+ * @ref tfp_cpu_to_le_16
+ * @ref tfp_le_to_cpu_16
+ * @ref tfp_cpu_to_le_32
+ * @ref tfp_le_to_cpu_32
+ * @ref tfp_cpu_to_le_64
+ * @ref tfp_le_to_cpu_64
+ * @ref tfp_cpu_to_be_16
+ * @ref tfp_be_to_cpu_16
+ * @ref tfp_cpu_to_be_32
+ * @ref tfp_be_to_cpu_32
+ * @ref tfp_cpu_to_be_64
+ * @ref tfp_be_to_cpu_64
+ */
+
+#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
+#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
+#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
+#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
+#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
+#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
+#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
+#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
+#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
+#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
+#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
+#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
+#define tfp_bswap_16(val) rte_bswap16(val)
+#define tfp_bswap_32(val) rte_bswap32(val)
+#define tfp_bswap_64(val) rte_bswap64(val)
+
 #endif /* _TFP_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 10/51] net/bnxt: modify EM insert and delete to use HWRM direct
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (8 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 09/51] net/bnxt: add support for exact match Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 11/51] net/bnxt: add multi device support Ajit Khaparde
                     ` (41 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

Modify Exact Match insert and delete to use the HWRM messages directly.
Remove tunneled EM insert and delete message types.

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/hwrm_tf.h | 70 ++----------------------------
 drivers/net/bnxt/tf_core/tf_msg.c  | 66 ++++++++++++++++------------
 2 files changed, 43 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 439950e02..d342c695c 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -23,8 +23,6 @@ typedef enum tf_subtype {
 	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
 	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
 	HWRM_TFT_TBL_SCOPE_CFG = 731,
-	HWRM_TFT_EM_RULE_INSERT = 739,
-	HWRM_TFT_EM_RULE_DELETE = 740,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
@@ -83,10 +81,6 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_em_internal_insert_input;
-struct tf_em_internal_insert_output;
-struct tf_em_internal_delete_input;
-struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -351,7 +345,7 @@ typedef struct tf_session_hw_resc_alloc_output {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -453,7 +447,7 @@ typedef struct tf_session_hw_resc_free_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -555,7 +549,7 @@ typedef struct tf_session_hw_resc_flush_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -922,60 +916,4 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
-/* Input params for EM internal rule insert */
-typedef struct tf_em_internal_insert_input {
-	/* Firmware Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* strength */
-	uint16_t			 strength;
-	/* index to action */
-	uint32_t			 action_ptr;
-	/* index of em record */
-	uint32_t			 em_record_idx;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_insert_input_t, *ptf_em_internal_insert_input_t;
-
-/* Output params for EM internal rule insert */
-typedef struct tf_em_internal_insert_output {
-	/* EM record pointer index */
-	uint16_t			 rptr_index;
-	/* EM record offset 0~3 */
-	uint8_t			  rptr_entry;
-	/* Number of word entries consumed by the key */
-	uint8_t			  num_of_entries;
-} tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_input {
-	/* Session Id */
-	uint32_t			 tf_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* EM internal flow hanndle */
-	uint64_t			 flow_handle;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_output {
-	/* Original stack allocation index */
-	uint16_t			 em_index;
-} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
-
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 554a8491d..c8f6b88d3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1023,32 +1023,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				uint8_t *rptr_entry,
 				uint8_t *num_of_entries)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_insert_input req = { 0 };
-	struct tf_em_internal_insert_output resp = { 0 };
+	int                         rc;
+	struct tfp_send_msg_parms        parms = { 0 };
+	struct hwrm_tf_em_insert_input   req = { 0 };
+	struct hwrm_tf_em_insert_output  resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
 		TF_LKUP_RECORD_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_INSERT,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1056,7 +1062,7 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 	*rptr_index = resp.rptr_index;
 	*num_of_entries = resp.num_of_entries;
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
@@ -1065,32 +1071,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_delete_input req = { 0 };
-	struct tf_em_internal_delete_output resp = { 0 };
+	int                             rc;
+	struct tfp_send_msg_parms       parms = { 0 };
+	struct hwrm_tf_em_delete_input  req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
-	req.tf_session_id =
+	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_DELETE,
-		 req,
-		resp);
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
 	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 11/51] net/bnxt: add multi device support
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (9 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
                     ` (40 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Randy Schacher, Venkat Duvvuru

From: Michael Wildt <michael.wildt@broadcom.com>

Introduce new modules for Device, Resource Manager, Identifier,
Table Types, and TCAM for multi device support.
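
A simplified sketch of the flow these modules enable is shown below. The
structure and function names come from the headers added in this patch;
the wrapper function itself is hypothetical and error handling is
trimmed, so treat it as an illustration rather than driver code.

#include "tf_core.h"
#include "tf_device.h"

/* Hypothetical illustration: bind a device at session open time and
 * dispatch a later TF request through the per-device ops table.
 */
static int
example_session_flow(struct tf *tfp,
		     struct tf_ident_alloc_parms *ident_parms)
{
	struct tf_dev_info dev;
	struct tf_session_resources res = { 0 };
	int rc;

	/* Bind the device type requested at session open. */
	rc = dev_bind(tfp, TF_DEVICE_TYPE_WH, &res, &dev);
	if (rc)
		return rc;

	/* Later TF APIs dispatch through the bound operation table. */
	rc = dev.ops->tf_dev_alloc_ident(tfp, ident_parms);

	/* Release the device when the session closes. */
	dev_unbind(tfp, &dev);

	return rc;
}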

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   8 +
 drivers/net/bnxt/tf_core/Makefile             |   9 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 266 +++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c            |   2 +
 drivers/net/bnxt/tf_core/tf_core.h            |  56 +--
 drivers/net/bnxt/tf_core/tf_device.c          |  50 +++
 drivers/net/bnxt/tf_core/tf_device.h          | 331 ++++++++++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  24 ++
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  64 +++
 drivers/net/bnxt/tf_core/tf_identifier.c      |  47 +++
 drivers/net/bnxt/tf_core/tf_identifier.h      | 140 +++++++
 drivers/net/bnxt/tf_core/tf_rm.c              |  54 +--
 drivers/net/bnxt/tf_core/tf_rm.h              |  18 -
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 102 +++++
 drivers/net/bnxt/tf_core/tf_rm_new.h          | 368 ++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_session.c         |  31 ++
 drivers/net/bnxt/tf_core/tf_session.h         |  54 +++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |  63 +++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      | 240 ++++++++++++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |  63 +++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h     | 239 ++++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.c             |   1 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  78 ++++
 drivers/net/bnxt/tf_core/tf_tbl_type.h        | 309 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_tcam.c            |  78 ++++
 drivers/net/bnxt/tf_core/tf_tcam.h            | 314 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_util.c            | 145 +++++++
 drivers/net/bnxt/tf_core/tf_util.h            |  41 ++
 28 files changed, 3101 insertions(+), 94 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 5c7859cb5..a50cb261d 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,14 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_device_p4.c',
+	'tf_core/tf_identifier.c',
+	'tf_core/tf_shadow_tbl.c',
+	'tf_core/tf_shadow_tcam.c',
+	'tf_core/tf_tbl_type.c',
+	'tf_core/tf_tcam.c',
+	'tf_core/tf_util.c',
+	'tf_core/tf_rm_new.c',
 
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index aa2d964e9..7a3c325a6 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,3 +14,12 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
new file mode 100644
index 000000000..c0c1e754e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -0,0 +1,266 @@
+/*
+ * Copyright(c) 2001-2020, Broadcom. All rights reserved. The
+ * term Broadcom refers to Broadcom Inc. and/or its subsidiaries.
+ * Proprietary and Confidential Information.
+ *
+ * This source file is the property of Broadcom Corporation, and
+ * may not be copied or distributed in any isomorphic form without
+ * the prior written consent of Broadcom Corporation.
+ *
+ * DO NOT MODIFY!!! This file is automatically generated.
+ */
+
+#ifndef _CFA_RESOURCE_TYPES_H_
+#define _CFA_RESOURCE_TYPES_H_
+
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+/* Wildcard TCAM Profile Id */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+/* Meter Profile */
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+/* Source Properties TCAM */
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+/* L2 Func */
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+/* EPOCH */
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+/* Metadata */
+#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+/* Connection Tracking Rule TCAM */
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+/* Range Profile */
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+/* Range */
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+/* Link Aggrigation */
+#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+
+
+#endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 1f6c33ab5..6e15a4c5c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -6,6 +6,7 @@
 #include <stdio.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
 #include "tf_em.h"
@@ -229,6 +230,7 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize Session */
 	session->device_type = parms->device_type;
+	session->dev = NULL;
 	tf_rm_init(tfp);
 
 	/* Construct the Session ID */
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 81ff7602f..becc50c7f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -371,6 +371,35 @@ struct tf {
 	struct tf_session_info *session;
 };
 
+/**
+ * tf_session_resources parameter definition.
+ */
+struct tf_session_resources {
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+};
 
 /**
  * tf_open_session parameters definition.
@@ -414,33 +443,14 @@ struct tf_open_session_parms {
 	union tf_session_id session_id;
 	/** [in] device type
 	 *
-	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
+	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
 	enum tf_device_type device_type;
-	/** [in] Requested Identifier Resources
-	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
-	 */
-	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
-	/** [in] Requested Index Table resource counts
-	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
-	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
-	/** [in] Requested TCAM Table resource counts
-	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
-	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
-	/** [in] Requested EM resource counts
+	/** [in] resources
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * Resource allocation
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
+	struct tf_session_resources resources;
 };
 
 /**
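
As a rough usage sketch of the reworked parameters, a caller now groups
its resource requests per direction inside the new structure instead of
passing flat per-type arrays. The example_open() wrapper is hypothetical,
the counts are arbitrary example values, and the remaining open
parameters are omitted for brevity.

#include "tf_core.h"

/* Illustrative only: request per-direction resources at session open. */
static int
example_open(struct tf *tfp)
{
	struct tf_open_session_parms parms = { 0 };

	parms.device_type = TF_DEVICE_TYPE_WH;

	/* Two L2 context identifiers per direction. */
	parms.resources.identifer_cnt[TF_DIR_RX][TF_IDENT_TYPE_L2_CTXT] = 2;
	parms.resources.identifer_cnt[TF_DIR_TX][TF_IDENT_TYPE_L2_CTXT] = 2;

	/* One L2 context TCAM entry per direction. */
	parms.resources.tcam_tbl_cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM][TF_DIR_RX] = 1;
	parms.resources.tcam_tbl_cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM][TF_DIR_TX] = 1;

	return tf_open_session(tfp, &parms);
}
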
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
new file mode 100644
index 000000000..3b368313e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_device_p4.h"
+#include "tfp.h"
+#include "bnxt.h"
+
+struct tf;
+
+/**
+ * Device specific bind function
+ */
+static int
+dev_bind_p4(struct tf *tfp __rte_unused,
+	    struct tf_session_resources *resources __rte_unused,
+	    struct tf_dev_info *dev_info)
+{
+	/* Initialize the modules */
+
+	dev_info->ops = &tf_dev_ops_p4;
+	return 0;
+}
+
+int
+dev_bind(struct tf *tfp __rte_unused,
+	 enum tf_device_type type,
+	 struct tf_session_resources *resources,
+	 struct tf_dev_info *dev_info)
+{
+	switch (type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_bind_p4(tfp,
+				   resources,
+				   dev_info);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "Device type not supported\n");
+		return -ENOTSUP;
+	}
+}
+
+int
+dev_unbind(struct tf *tfp __rte_unused,
+	   struct tf_dev_info *dev_handle __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
new file mode 100644
index 000000000..8b63ff178
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -0,0 +1,331 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_H_
+#define _TF_DEVICE_H_
+
+#include "tf_core.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+struct tf;
+struct tf_session;
+
+/**
+ * The Device module provides a general device template. A supported
+ * device type should implement one or more of the listed function
+ * pointers according to its capabilities.
+ *
+ * If a device function pointer is NULL the device capability is not
+ * supported.
+ */
+
+/**
+ * TF device information
+ */
+struct tf_dev_info {
+	const struct tf_dev_ops *ops;
+};
+
+/**
+ * @page device Device
+ *
+ * @ref tf_dev_bind
+ *
+ * @ref tf_dev_unbind
+ */
+
+/**
+ * Device bind handles the initialization of the specified device
+ * type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] type
+ *   Device type
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int dev_bind(struct tf *tfp,
+	     enum tf_device_type type,
+	     struct tf_session_resources *resources,
+	     struct tf_dev_info *dev_handle);
+
+/**
+ * Device release handles cleanup of the device specific information.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dev_handle
+ *   Device handle
+ */
+int dev_unbind(struct tf *tfp,
+	       struct tf_dev_info *dev_handle);
+
+/**
+ * Truflow device specific function hooks structure
+ *
+ * The following device hooks can be defined; unless noted otherwise,
+ * they are optional and can be filled with a null pointer. The
+ * purpose of these hooks is to support Truflow device operations for
+ * different device variants.
+ */
+struct tf_dev_ops {
+	/**
+	 * Allocation of an identifier element.
+	 *
+	 * This API allocates the specified identifier element from a
+	 * device specific identifier DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ident)(struct tf *tfp,
+				  struct tf_ident_alloc_parms *parms);
+
+	/**
+	 * Free of an identifier element.
+	 *
+	 * This API frees a previously allocated identifier element from a
+	 * device specific identifier DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ident)(struct tf *tfp,
+				 struct tf_ident_free_parms *parms);
+
+	/**
+	 * Allocation of a table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
+				     struct tf_tbl_type_alloc_parms *parms);
+
+	/**
+	 * Free of a table type element.
+	 *
+	 * This API frees a previously allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tbl_type)(struct tf *tfp,
+				    struct tf_tbl_type_free_parms *parms);
+
+	/**
+	 * Searches for the specified table type element in a shadow DB.
+	 *
+	 * This API searches for the specified table type element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the table type
+	 * DB and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tbl_type)
+			(struct tf *tfp,
+			struct tf_tbl_type_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_set_parms *parms);
+
+	/**
+	 * Retrieves the specified table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_get_parms *parms);
+
+	/**
+	 * Allocation of a tcam element.
+	 *
+	 * This API allocates the specified tcam element from a device
+	 * specific tcam DB. The allocated element is returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tcam)(struct tf *tfp,
+				 struct tf_tcam_alloc_parms *parms);
+
+	/**
+	 * Free of a tcam element.
+	 *
+	 * This API frees a previously allocated tcam element from a
+	 * device specific tcam DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tcam)(struct tf *tfp,
+				struct tf_tcam_free_parms *parms);
+
+	/**
+	 * Searches for the specified tcam element in a shadow DB.
+	 *
+	 * This API searches for the specified tcam element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the tcam DB
+	 * and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tcam)
+			(struct tf *tfp,
+			struct tf_tcam_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified tcam element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tcam)(struct tf *tfp,
+			       struct tf_tcam_set_parms *parms);
+
+	/**
+	 * Retrieves the specified tcam element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tcam)(struct tf *tfp,
+			       struct tf_tcam_get_parms *parms);
+};
+
+/**
+ * Supported device operation structures
+ */
+extern const struct tf_dev_ops tf_dev_ops_p4;
+
+#endif /* _TF_DEVICE_H_ */
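
Because any hook may be NULL for a given device, core code is expected to
probe the pointer before dispatching. A minimal sketch of that pattern
follows; the wrapper name is illustrative and not part of the patch.

#include <errno.h>

#include "tf_device.h"

/* Hypothetical dispatch helper: a NULL hook means the bound device
 * does not support the capability.
 */
static int
example_dev_set_tcam(struct tf *tfp,
		     struct tf_dev_info *dev,
		     struct tf_tcam_set_parms *parms)
{
	if (dev->ops->tf_dev_set_tcam == NULL)
		return -ENOTSUP;

	return dev->ops->tf_dev_set_tcam(tfp, parms);
}
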
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
new file mode 100644
index 000000000..c3c4d1e05
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_alloc_ident = tf_ident_alloc,
+	.tf_dev_free_ident = tf_ident_free,
+	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
+	.tf_dev_free_tbl_type = tf_tbl_type_free,
+	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
+	.tf_dev_set_tbl_type = tf_tbl_type_set,
+	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tcam = tf_tcam_alloc,
+	.tf_dev_free_tcam = tf_tcam_free,
+	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_set_tcam = tf_tcam_set,
+	.tf_dev_get_tcam = tf_tcam_get,
+};
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
new file mode 100644
index 000000000..84d90e3a7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_P4_H_
+#define _TF_DEVICE_P4_H_
+
+#include <cfa_resource_types.h>
+
+#include "tf_core.h"
+#include "tf_rm_new.h"
+
+struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+};
+
+struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
+	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+};
+
+struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+};
+
+#endif /* _TF_DEVICE_P4_H_ */
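
These per-type arrays give device-independent code a single place to
translate a TF type into the corresponding chip-specific HCAPI resource
type. A rough lookup sketch is shown below; the helper itself is
illustrative only.

#include <errno.h>
#include <stdint.h>

#include "tf_device_p4.h"

/* Illustrative lookup: map a TF TCAM table type to its P4 HCAPI
 * resource type, or report that the element is not present on this
 * device generation.
 */
static int
example_p4_tcam_hcapi_type(enum tf_tcam_tbl_type type, uint16_t *hcapi_type)
{
	if (type >= TF_TCAM_TBL_TYPE_MAX)
		return -EINVAL;

	if (tf_tcam_p4[type].cfg == TF_RM_ELEM_CFG_NULL)
		return -ENOTSUP;

	*hcapi_type = tf_tcam_p4[type].hcapi_type;
	return 0;
}
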
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
new file mode 100644
index 000000000..726d0b406
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_identifier.h"
+
+struct tf;
+
+/**
+ * Identifier DBs.
+ */
+/* static void *ident_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+int
+tf_ident_bind(struct tf *tfp __rte_unused,
+	      struct tf_ident_cfg *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_alloc(struct tf *tfp __rte_unused,
+	       struct tf_ident_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_free(struct tf *tfp __rte_unused,
+	      struct tf_ident_free_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
new file mode 100644
index 000000000..b77c91b9d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_IDENTIFIER_H_
+#define _TF_IDENTIFIER_H_
+
+#include "tf_core.h"
+
+/**
+ * The Identifier module provides processing of Identifiers.
+ */
+
+struct tf_ident_cfg {
+	/**
+	 * Number of identifier types in each of the configuration
+	 * arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Identifier allocation parameter definition
+ */
+struct tf_ident_alloc_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [out] Identifier allocated
+	 */
+	uint16_t id;
+};
+
+/**
+ * Identifier free parameter definition
+ */
+struct tf_ident_free_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [in] ID to free
+	 */
+	uint16_t id;
+};
+
+/**
+ * @page ident Identity Management
+ *
+ * @ref tf_ident_bind
+ *
+ * @ref tf_ident_unbind
+ *
+ * @ref tf_ident_alloc
+ *
+ * @ref tf_ident_free
+ */
+
+/**
+ * Initializes the Identifier module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_bind(struct tf *tfp,
+		  struct tf_ident_cfg *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_unbind(struct tf *tfp);
+
+/**
+ * Allocates a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_alloc(struct tf *tfp,
+		   struct tf_ident_alloc_parms *parms);
+
+/**
+ * Frees a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_free(struct tf *tfp,
+		  struct tf_ident_free_parms *parms);
+
+#endif /* _TF_IDENTIFIER_H_ */
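
The intended calling sequence, once the stubs are filled in, looks
roughly like the following. The wrapper is hypothetical and the direction
and identifier type are arbitrary example choices.

#include "tf_identifier.h"

/* Illustrative allocate/free cycle for the identifier module. The
 * implementations are still stubs at this point in the series.
 */
static int
example_ident_cycle(struct tf *tfp)
{
	struct tf_ident_alloc_parms aparms = { 0 };
	struct tf_ident_free_parms fparms = { 0 };
	int rc;

	aparms.dir = TF_DIR_RX;
	aparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	rc = tf_ident_alloc(tfp, &aparms);
	if (rc)
		return rc;

	/* ... use the allocated aparms.id ... */

	fparms.dir = TF_DIR_RX;
	fparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	fparms.id = aparms.id;
	return tf_ident_free(tfp, &fparms);
}
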
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 38b1e71cd..2264704d2 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -9,6 +9,7 @@
 
 #include "tf_rm.h"
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_resources.h"
 #include "tf_msg.h"
@@ -76,59 +77,6 @@
 			(dtype) = type ## _TX;	\
 	} while (0)
 
-const char
-*tf_dir_2_str(enum tf_dir dir)
-{
-	switch (dir) {
-	case TF_DIR_RX:
-		return "RX";
-	case TF_DIR_TX:
-		return "TX";
-	default:
-		return "Invalid direction";
-	}
-}
-
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
-{
-	switch (id_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		return "l2_ctxt_remap";
-	case TF_IDENT_TYPE_PROF_FUNC:
-		return "prof_func";
-	case TF_IDENT_TYPE_WC_PROF:
-		return "wc_prof";
-	case TF_IDENT_TYPE_EM_PROF:
-		return "em_prof";
-	case TF_IDENT_TYPE_L2_FUNC:
-		return "l2_func";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
-{
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		return "l2_ctxt_tcam";
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		return "prof_tcam";
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		return "wc_tcam";
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		return "veb_tcam";
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		return "sp_tcam";
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		return "ct_rule_tcam";
-	default:
-		return "Invalid tcam table type";
-	}
-}
-
 const char
 *tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index e69d443a8..1a09f13a7 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -124,24 +124,6 @@ struct tf_rm_db {
 	struct tf_rm_resc tx;
 };
 
-/**
- * Helper function converting direction to text string
- */
-const char
-*tf_dir_2_str(enum tf_dir dir);
-
-/**
- * Helper function converting identifier to text string
- */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
-
-/**
- * Helper function converting tcam type to text string
- */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
-
 /**
  * Helper function used to convert HW HCAPI resource type to a string.
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
new file mode 100644
index 000000000..51bb9ba3a
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_rm_new.h"
+
+/**
+ * Resource query single entry. Used when accessing HCAPI RM on the
+ * firmware.
+ */
+struct tf_rm_query_entry {
+	/** Minimum guaranteed number of elements */
+	uint16_t min;
+	/** Maximum non-guaranteed number of elements */
+	uint16_t max;
+};
+
+/**
+ * Generic RM Element data type that an RM DB is built upon.
+ */
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type type;
+
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
+
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
+
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
+
+/**
+ * TF RM DB definition
+ */
+struct tf_rm_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
+
+int
+tf_rm_create_db(struct tf *tfp __rte_unused,
+		struct tf_rm_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free_db(struct tf *tfp __rte_unused,
+	      struct tf_rm_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
new file mode 100644
index 000000000..72dba0984
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -0,0 +1,368 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_RM_H_
+#define TF_RM_H_
+
+#include "tf_core.h"
+#include "bitalloc.h"
+
+struct tf;
+
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exists within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
+ *
+ * The RM DB operates on its initially allocated sizes, so dynamically
+ * growing a particular resource is not possible. If this capability
+ * later becomes a requirement then the MAX pool size of the Chip
+ * needs to be added to the tf_rm_elem_info
+ * structure and several new APIs would need to be added to allow for
+ * growth of a single TF resource type.
+ */
+
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
+
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements the DB consists of are to be
+ * configured at time of DB creation. The TF may present types to the
+ * ULP layer that are not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
+	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	TF_RM_TYPE_MAX
+};
+
+/**
+ * RM Element configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI RM
+ * of same type.
+ */
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Allocation information for a single element.
+ */
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_entry entry;
+};
+
+/**
+ * Create RM DB parameters
+ */
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements in the parameter structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure
+	 */
+	struct tf_rm_element_cfg *parms;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Free RM DB parameters
+ */
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Allocate RM parameters for a single element
+ */
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Pointer to the allocated index in normalized
+	 * form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+};
+
+/**
+ * Free RM parameters for a single element
+ */
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t index;
+};
+
+/**
+ * Is Allocated parameters for a single element
+ */
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [in] Pointer to flag that indicates the state of the query
+	 */
+	uint8_t *allocated;
+};
+
+/**
+ * Get Allocation information for a single element
+ */
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * @page rm Resource Manager
+ *
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ */
+
+/**
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if DB already exist
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
+
+/**
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
+
+/**
+ * Allocates a single element for the type specified, within the DB.
+ *
+ * [in] parms
+ *   Pointer to allocate parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
+
+/**
+ * Frees a single element for the type specified, within the DB.
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free(struct tf_rm_free_parms *parms);
+
+/**
+ * Performs an allocation verification check on a specified element.
+ *
+ * [in] parms
+ *   Pointer to is allocated parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
+
+/**
+ * Retrieves an elements allocation information from the Resource
+ * Manager (RM) DB.
+ *
+ * [in] parms
+ *   Pointer to get info parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
+
+/**
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
+
+#endif /* TF_RM_H_ */
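
Taken together, the expected life cycle of an RM DB is: create the DB
from the device's element configuration, allocate and free individual
elements, then free the DB. A condensed sketch follows; the wrapper, the
db_index of 0 and the shortened error handling are illustrative only.

#include "tf_rm_new.h"

/* Illustrative RM DB life cycle; db_index 0 simply refers to the
 * first element type in the supplied configuration.
 */
static int
example_rm_cycle(struct tf *tfp,
		 struct tf_rm_element_cfg *cfg,
		 uint16_t num_elements)
{
	struct tf_rm_create_db_parms cparms = { 0 };
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_free_parms fparms = { 0 };
	struct tf_rm_free_db_parms dparms = { 0 };
	uint32_t index;
	int rc;

	cparms.dir = TF_DIR_RX;
	cparms.num_elements = num_elements;
	cparms.parms = cfg;
	rc = tf_rm_create_db(tfp, &cparms);
	if (rc)
		return rc;

	aparms.tf_rm_db = cparms.tf_rm_db;
	aparms.db_index = 0;
	aparms.index = &index;
	rc = tf_rm_allocate(&aparms);

	if (!rc) {
		fparms.tf_rm_db = cparms.tf_rm_db;
		fparms.db_index = 0;
		fparms.index = index;
		rc = tf_rm_free(&fparms);
	}

	dparms.dir = TF_DIR_RX;
	dparms.tf_rm_db = cparms.tf_rm_db;
	tf_rm_free_db(tfp, &dparms);

	return rc;
}
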
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
new file mode 100644
index 000000000..c74994546
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session *tfs)
+{
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		TFP_DRV_LOG(ERR, "Session not created\n");
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	return 0;
+}
+
+int
+tf_session_get_device(struct tf_session *tfs,
+		      struct tf_device *tfd)
+{
+	if (tfs->dev == NULL) {
+		TFP_DRV_LOG(ERR, "Device not created\n");
+		return -EINVAL;
+	}
+	tfd = tfs->dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index c9f4f8f04..b1cc7a4a7 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -11,10 +11,21 @@
 
 #include "bitalloc.h"
 #include "tf_core.h"
+#include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
 #include "stack.h"
 
+/**
+ * The Session module provides session control support. To the ULP
+ * layer a session is known as a session_info instance. The session
+ * private data is the actual session.
+ *
+ * Session manages:
+ *   - The device and all the resources related to the device.
+ *   - Any session sharing between ULP applications
+ */
+
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
@@ -90,6 +101,9 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
+	/** Device */
+	struct tf_dev_info *dev;
+
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
 
@@ -309,4 +323,44 @@ struct tf_session {
 	struct stack em_pool[TF_DIR_MAX];
 };
 
+/**
+ * @page session Session Management
+ *
+ * @ref tf_session_get_session
+ *
+ * @ref tf_session_get_device
+ */
+
+/**
+ * Looks up the private session information from the TF session info.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer to where the session pointer will be returned
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session(struct tf *tfp,
+			   struct tf_session **tfs);
+
+/**
+ * Looks up the device information from the TF Session.
+ *
+ * [in] tfs
+ *   Pointer to session
+ *
+ * [out] tfd
+ *   Pointer to where the device pointer will be returned
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_device(struct tf_session *tfs,
+			  struct tf_dev_info **tfd);
+
 #endif /* _TF_SESSION_H_ */
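
For context, the expected calling pattern for these accessors is that a
module-level API resolves the private session data first and then the
device bound to it before doing any device-specific work. A minimal
sketch, assuming only the accessors declared above (the function name
below is hypothetical):

#include "tf_session.h"

/* Hypothetical example of the session/device lookup pattern */
static int
tf_example_module_op(struct tf *tfp)
{
	int rc;
	struct tf_session *tfs = NULL;
	struct tf_dev_info *dev = NULL;

	/* Resolve the session private data from the public handle */
	rc = tf_session_get_session(tfp, &tfs);
	if (rc)
		return rc;

	/* Resolve the device bound to the session */
	rc = tf_session_get_device(tfs, &dev);
	if (rc)
		return rc;

	/* tfs and dev are now usable by the module implementation */
	return 0;
}
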
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
new file mode 100644
index 000000000..8f2b6de70
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tbl.h"
+
+/**
+ * Shadow table DB element
+ */
+struct tf_shadow_tbl_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count, array of number of table type entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow table DB definition
+ */
+struct tf_shadow_tbl_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tbl_element *db;
+};
+
+int
+tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
new file mode 100644
index 000000000..dfd336e53
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TBL_H_
+#define _TF_SHADOW_TBL_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow Table module provides shadow DB handling for table based
+ * TF types. A shadow DB makes it possible to reuse TF resources.
+ *
+ * A Shadow table DB is intended to be used by the Table Type module
+ * only.
+ */
+
+/**
+ * Shadow DB configuration information for a single table type.
+ *
+ * During device initialization the HCAPI device specifics are learned
+ * and the RM DB is created. From those initial steps this structure
+ * can be populated.
+ *
+ * NOTE:
+ * If used in an array of table types then the array must be ordered
+ * by the TF type it represents.
+ */
+struct tf_shadow_tbl_cfg_parms {
+	/**
+	 * TF Table type
+	 */
+	enum tf_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow table DB creation parameters
+ */
+struct tf_shadow_tbl_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tbl_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table DB free parameters
+ */
+struct tf_shadow_tbl_free_db_parms {
+	/**
+	 * Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table search parameters
+ */
+struct tf_shadow_tbl_search_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Table type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table insert parameters
+ */
+struct tf_shadow_tbl_insert_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table remove parameters
+ */
+struct tf_shadow_tbl_remove_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tbl Shadow table DB
+ *
+ * @ref tf_shadow_tbl_create_db
+ *
+ * @ref tf_shadow_tbl_free_db
+ *
+ * @ref tf_shadow_tbl_search
+ *
+ * @ref tf_shadow_tbl_insert
+ *
+ * @ref tf_shadow_tbl_remove
+ */
+
+/**
+ * Creates and fills a Shadow table DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms);
+
+/**
+ * Closes the Shadow table DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms);
+
+/**
+ * Search Shadow table db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow table DB. Will fail if the
+ * element's ref_count is different from 0. The ref_count is
+ * incremented after a successful insert.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow table DB. Will fail if the
+ * element's ref_count is 0. The ref_count is decremented after a
+ * successful removal.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TBL_H_ */
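
To make the intended reuse flow concrete, a sketch of how the Table
Type module might drive the search/insert pair is shown below. The
field usage follows the parameter structures above; the helper name,
the DB handle and the hit/miss handling are illustrative assumptions,
and the implementations in tf_shadow_tbl.c are currently stubs that
always return 0.

#include "tf_shadow_tbl.h"

/* Hypothetical helper: reuse an existing table entry when the same
 * result blob is already present, otherwise record the new index.
 */
static int
example_tbl_reuse(void *shadow_db, enum tf_tbl_type type,
		  uint8_t *blob, uint16_t blob_sz, uint16_t new_idx)
{
	int rc;
	uint16_t idx, ref_cnt;
	struct tf_shadow_tbl_search_parms sparms = { 0 };
	struct tf_shadow_tbl_insert_parms iparms = { 0 };

	sparms.tf_shadow_tbl_db = shadow_db;
	sparms.type = type;
	sparms.entry = blob;
	sparms.entry_sz = blob_sz;
	sparms.index = &idx;
	sparms.ref_cnt = &ref_cnt;

	rc = tf_shadow_tbl_search(&sparms);
	if (rc == 0)
		return 0; /* hit: idx and ref_cnt were updated */

	/* Treat a non-zero return as a miss for this sketch */
	iparms.tf_shadow_tbl_db = shadow_db;
	iparms.type = type;
	iparms.entry = blob;
	iparms.entry_sz = blob_sz;
	iparms.index = new_idx;
	iparms.ref_cnt = &ref_cnt;

	return tf_shadow_tbl_insert(&iparms);
}
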
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.c b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
new file mode 100644
index 000000000..c61b833d7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tcam.h"
+
+/**
+ * Shadow tcam DB element
+ */
+struct tf_shadow_tcam_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count, array of number of tcam entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow tcam DB definition
+ */
+struct tf_shadow_tcam_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tcam_element *db;
+};
+
+int
+tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.h b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
new file mode 100644
index 000000000..e2c4e06c0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TCAM_H_
+#define _TF_SHADOW_TCAM_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow tcam module provides shadow DB handling for tcam based
+ * TF types. A shadow DB makes it possible to reuse TF resources.
+ *
+ * A Shadow tcam DB is intended to be used by the Tcam module only.
+ */
+
+/**
+ * Shadow DB configuration information for a single tcam type.
+ *
+ * During device initialization the HCAPI device specifics are learned
+ * and the RM DB is created. From those initial steps this structure
+ * can be populated.
+ *
+ * NOTE:
+ * If used in an array of tcam types then the array must be ordered
+ * by the TF type it represents.
+ */
+struct tf_shadow_tcam_cfg_parms {
+	/**
+	 * TF tcam type
+	 */
+	enum tf_tcam_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow tcam DB creation parameters
+ */
+struct tf_shadow_tcam_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tcam_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam DB free parameters
+ */
+struct tf_shadow_tcam_free_db_parms {
+	/**
+	 * Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam search parameters
+ */
+struct tf_shadow_tcam_search_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam insert parameters
+ */
+struct tf_shadow_tcam_insert_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam remove parameters
+ */
+struct tf_shadow_tcam_remove_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tcam Shadow tcam DB
+ *
+ * @ref tf_shadow_tcam_create_db
+ *
+ * @ref tf_shadow_tcam_free_db
+ *
+ * @ref tf_shadow_tcam_search
+ *
+ * @ref tf_shadow_tcam_insert
+ *
+ * @ref tf_shadow_tcam_remove
+ */
+
+/**
+ * Creates and fills a Shadow tcam DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms);
+
+/**
+ * Closes the Shadow tcam DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms);
+
+/**
+ * Search Shadow tcam db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow tcam DB. Will fail if the
+ * element's ref_count is different from 0. The ref_count is
+ * incremented after a successful insert.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow tcam DB. Will fail if the
+ * element's ref_count is 0. The ref_count is decremented after a
+ * successful removal.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TCAM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index dda72c3d5..17399a5b2 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,6 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
new file mode 100644
index 000000000..a57a5ddf2
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tbl_type.h"
+
+struct tf;
+
+/**
+ * Table Type DBs.
+ */
+/* static void *tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Table Type Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_type_bind(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc(struct tf *tfp __rte_unused,
+		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_free(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
+			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_set(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_get(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
new file mode 100644
index 000000000..c880b368b
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -0,0 +1,309 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Table Type module provides processing of Internal TF table types.
+ */
+
+/**
+ * Table Type configuration parameters
+ */
+struct tf_tbl_type_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Table Type allocation parameters
+ */
+struct tf_tbl_type_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type free parameters
+ */
+struct tf_tbl_type_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+struct tf_tbl_type_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type set parameters
+ */
+struct tf_tbl_type_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type get parameters
+ */
+struct tf_tbl_type_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tbl_type Table Type
+ *
+ * @ref tf_tbl_type_bind
+ *
+ * @ref tf_tbl_type_unbind
+ *
+ * @ref tf_tbl_type_alloc
+ *
+ * @ref tf_tbl_type_free
+ *
+ * @ref tf_tbl_type_alloc_search
+ *
+ * @ref tf_tbl_type_set
+ *
+ * @ref tf_tbl_type_get
+ */
+
+/**
+ * Initializes the Table Type module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_bind(struct tf *tfp,
+		     struct tf_tbl_type_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc(struct tf *tfp,
+		      struct tf_tbl_type_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the
+ * shadow DB is enabled it is searched first and, if found, the element
+ * refcount is decremented. If the refcount goes to 0 the element is
+ * returned to the table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_free(struct tf *tfp,
+		     struct tf_tbl_type_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc_search(struct tf *tfp,
+			     struct tf_tbl_type_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_set(struct tf *tfp,
+		    struct tf_tbl_type_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_get(struct tf *tfp,
+		    struct tf_tbl_type_get_parms *parms);
+
+#endif /* TF_TBL_TYPE_H_ */
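
As an illustration of how these APIs fit together, the sketch below
allocates an internal table entry and then programs it. The structure
fields match the definitions above; the helper name, direction and
table type choices are assumptions, and the tf_tbl_type_* functions
are currently stubs.

#include "tf_tbl_type.h"

/* Hypothetical example: allocate a 64b stats entry and program it */
static int
example_stats_alloc_and_set(struct tf *tfp, uint8_t *rec, uint16_t rec_sz)
{
	int rc;
	struct tf_tbl_type_alloc_parms aparms = { 0 };
	struct tf_tbl_type_set_parms sparms = { 0 };

	/* Allocate an index from the internal RM DB */
	aparms.dir = TF_DIR_RX;
	aparms.type = TF_TBL_TYPE_ACT_STATS_64;
	rc = tf_tbl_type_alloc(tfp, &aparms);
	if (rc)
		return rc;

	/* Program the allocated index via firmware */
	sparms.dir = TF_DIR_RX;
	sparms.type = TF_TBL_TYPE_ACT_STATS_64;
	sparms.data = rec;
	sparms.data_sz_in_bytes = rec_sz;
	sparms.idx = aparms.idx;

	return tf_tbl_type_set(tfp, &sparms);
}
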
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
new file mode 100644
index 000000000..3ad99dd0d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tcam.h"
+
+struct tf;
+
+/**
+ * TCAM DBs.
+ */
+/* static void *tcam_db[TF_DIR_MAX]; */
+
+/**
+ * TCAM Shadow DBs
+ */
+/* static void *shadow_tcam_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tcam_bind(struct tf *tfp __rte_unused,
+	     struct tf_tcam_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc(struct tf *tfp __rte_unused,
+	      struct tf_tcam_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_free(struct tf *tfp __rte_unused,
+	     struct tf_tcam_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc_search(struct tf *tfp __rte_unused,
+		     struct tf_tcam_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_set(struct tf *tfp __rte_unused,
+	    struct tf_tcam_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_get(struct tf *tfp __rte_unused,
+	    struct tf_tcam_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
new file mode 100644
index 000000000..1420c9ed5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -0,0 +1,314 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_TCAM_H_
+#define _TF_TCAM_H_
+
+#include "tf_core.h"
+
+/**
+ * The TCAM module provides processing of Internal TCAM types.
+ */
+
+/**
+ * TCAM configuration parameters
+ */
+struct tf_tcam_cfg_parms {
+	/**
+	 * Number of tcam types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * TCAM configuration array
+	 */
+	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow tcam configuration array
+	 */
+	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * TCAM allocation parameters
+ */
+struct tf_tcam_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM free parameters
+ */
+struct tf_tcam_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/**
+ * TCAM allocate search parameters
+ */
+struct tf_tcam_alloc_search_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Enable search for matching entry
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Key data to match on (if search)
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] Mask data to match on (if search)
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
+	 * [out] If search, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current refcnt after allocation
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx allocated
+	 *
+	 */
+	uint16_t idx;
+};
+
+/**
+ * TCAM set parameters
+ */
+struct tf_tcam_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM get parameters
+ */
+struct tf_tcam_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tcam TCAM
+ *
+ * @ref tf_tcam_bind
+ *
+ * @ref tf_tcam_unbind
+ *
+ * @ref tf_tcam_alloc
+ *
+ * @ref tf_tcam_free
+ *
+ * @ref tf_tcam_alloc_search
+ *
+ * @ref tf_tcam_set
+ *
+ * @ref tf_tcam_get
+ *
+ */
+
+/**
+ * Initializes the TCAM module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_bind(struct tf *tfp,
+		 struct tf_tcam_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested tcam type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc(struct tf *tfp,
+		  struct tf_tcam_alloc_parms *parms);
+
+/**
+ * Frees the requested tcam type and returns it to the DB. If the
+ * shadow DB is enabled it is searched first and, if found, the element
+ * refcount is decremented. If the refcount goes to 0 the element is
+ * returned to the TCAM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_free(struct tf *tfp,
+		 struct tf_tcam_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc_search(struct tf *tfp,
+			 struct tf_tcam_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_set(struct tf *tfp,
+		struct tf_tcam_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_get(struct tf *tfp,
+		struct tf_tcam_get_parms *parms);
+
+#endif /* _TF_TCAM_H_ */
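
A short sketch of the allocate-with-search flow follows; it mirrors
tf_tcam_alloc_search_parms above. The helper name, direction and TCAM
table type are assumptions, and tf_tcam_alloc_search() is currently a
stub.

#include "tf_tcam.h"

/* Hypothetical example: search-allocate a wildcard TCAM entry */
static int
example_wc_tcam_alloc(struct tf *tfp, uint8_t *key, uint8_t *mask,
		      uint16_t key_sz_in_bits, uint16_t *idx)
{
	int rc;
	struct tf_tcam_alloc_search_parms parms = { 0 };

	parms.dir = TF_DIR_RX;
	parms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	parms.search_enable = 1;
	parms.key = key;
	parms.mask = mask;
	parms.key_sz_in_bits = key_sz_in_bits;
	parms.priority = 0;

	rc = tf_tcam_alloc_search(tfp, &parms);
	if (rc)
		return rc;

	/* parms.hit reports whether an identical entry already existed;
	 * parms.idx is the found or newly allocated entry index.
	 */
	*idx = parms.idx;
	return 0;
}
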
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
new file mode 100644
index 000000000..a9010543d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+
+#include "tf_util.h"
+
+const char
+*tf_dir_2_str(enum tf_dir dir)
+{
+	switch (dir) {
+	case TF_DIR_RX:
+		return "RX";
+	case TF_DIR_TX:
+		return "TX";
+	default:
+		return "Invalid direction";
+	}
+}
+
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type)
+{
+	switch (id_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		return "l2_ctxt_remap";
+	case TF_IDENT_TYPE_PROF_FUNC:
+		return "prof_func";
+	case TF_IDENT_TYPE_WC_PROF:
+		return "wc_prof";
+	case TF_IDENT_TYPE_EM_PROF:
+		return "em_prof";
+	case TF_IDENT_TYPE_L2_FUNC:
+		return "l2_func";
+	default:
+		return "Invalid identifier";
+	}
+}
+
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+{
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		return "l2_ctxt_tcam";
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		return "prof_tcam";
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		return "wc_tcam";
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		return "veb_tcam";
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		return "sp_tcam";
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		return "ct_rule_tcam";
+	default:
+		return "Invalid tcam table type";
+	}
+}
+
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+{
+	switch (tbl_type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		return "Full Action record";
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		return "Multicast Groups";
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		return "Encap 8B";
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		return "Encap 16B";
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+		return "Encap 32B";
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		return "Encap 64B";
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		return "Source Properties SMAC";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		return "Source Properties SMAC IPv4";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		return "Source Properties SMAC IPv6";
+	case TF_TBL_TYPE_ACT_STATS_64:
+		return "Stats 64B";
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		return "NAT Source Port";
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+		return "NAT Destination Port";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		return "NAT IPv4 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		return "NAT IPv4 Destination";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+		return "NAT IPv6 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+		return "NAT IPv6 Destination";
+	case TF_TBL_TYPE_METER_PROF:
+		return "Meter Profile";
+	case TF_TBL_TYPE_METER_INST:
+		return "Meter";
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		return "Mirror";
+	case TF_TBL_TYPE_UPAR:
+		return "UPAR";
+	case TF_TBL_TYPE_EPOCH0:
+		return "EPOCH0";
+	case TF_TBL_TYPE_EPOCH1:
+		return "EPOCH1";
+	case TF_TBL_TYPE_METADATA:
+		return "Metadata";
+	case TF_TBL_TYPE_CT_STATE:
+		return "Connection State";
+	case TF_TBL_TYPE_RANGE_PROF:
+		return "Range Profile";
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		return "Range";
+	case TF_TBL_TYPE_LAG:
+		return "Link Aggregation";
+	case TF_TBL_TYPE_VNIC_SVIF:
+		return "VNIC SVIF";
+	case TF_TBL_TYPE_EM_FKB:
+		return "EM Flexible Key Builder";
+	case TF_TBL_TYPE_WC_FKB:
+		return "WC Flexible Key Builder";
+	case TF_TBL_TYPE_EXT:
+		return "External";
+	default:
+		return "Invalid tbl type";
+	}
+}
+
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+{
+	switch (em_type) {
+	case TF_EM_TBL_TYPE_EM_RECORD:
+		return "EM Record";
+	case TF_EM_TBL_TYPE_TBL_SCOPE:
+		return "Table Scope";
+	default:
+		return "Invalid EM type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
new file mode 100644
index 000000000..4099629ea
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_UTIL_H_
+#define _TF_UTIL_H_
+
+#include "tf_core.h"
+
+/**
+ * Helper function converting direction to text string
+ */
+const char
+*tf_dir_2_str(enum tf_dir dir);
+
+/**
+ * Helper function converting identifier to text string
+ */
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type);
+
+/**
+ * Helper function converting tcam type to text string
+ */
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+
+/**
+ * Helper function converting tbl type to text string
+ */
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+
+/**
+ * Helper function converting em tbl type to text string
+ */
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+
+#endif /* _TF_UTIL_H_ */
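
These helpers are primarily intended for log messages, so error paths
can report human readable names without per-call-site string tables. A
small hedged example (TFP_DRV_LOG is assumed to come from tfp.h):

#include "tf_util.h"
#include "tfp.h"	/* assumed home of TFP_DRV_LOG */

/* Hypothetical logging snippet using the string helpers */
static void
example_log_failure(enum tf_dir dir, enum tf_tbl_type type, int rc)
{
	TFP_DRV_LOG(ERR, "%s, %s set failed, rc:%d\n",
		    tf_dir_2_str(dir),
		    tf_tbl_type_2_str(type),
		    rc);
}
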
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 12/51] net/bnxt: support bulk table get and mirror
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (10 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 11/51] net/bnxt: add multi device support Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 13/51] net/bnxt: update multi device design support Ajit Khaparde
                     ` (39 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Venkat Duvvuru

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add new bulk table type get using FW
  to DMA the data back to host.
- Add flag to allow records to be cleared if possible
- Set mirror using tf_alloc_tbl_entry
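
As a rough usage sketch (not part of the diff below), a caller could
exercise the new bulk get along these lines. The 8 byte entry size for
TF_TBL_TYPE_ACT_STATS_64, the helper name and the error handling are
assumptions for illustration; the DMA buffer follows the
tfp_calloc()/mem_pa convention documented in
tf_get_bulk_tbl_entry_parms.

#include <stdbool.h>
#include "tf_core.h"
#include "tfp.h"

/* Hypothetical example: read back and clear a block of 64b counters */
static int
example_read_stats(struct tf *tfp, uint32_t start_idx, uint16_t num)
{
	int rc;
	struct tfp_calloc_parms mem = { 0 };
	struct tf_get_bulk_tbl_entry_parms parms = { 0 };

	/* DMA-able buffer; firmware writes the entries here */
	mem.nitems = num;
	mem.size = 8;		/* one 64-bit counter per entry (assumed) */
	mem.alignment = 4096;
	rc = tfp_calloc(&mem);
	if (rc)
		return rc;

	parms.dir = TF_DIR_RX;
	parms.type = TF_TBL_TYPE_ACT_STATS_64;
	parms.clear_on_read = true;
	parms.starting_idx = start_idx;
	parms.num_entries = num;
	parms.entry_sz_in_bytes = 8;
	parms.physical_mem_addr = (uintptr_t)mem.mem_pa;

	rc = tf_get_bulk_tbl_entry(tfp, &parms);

	/* On success the counters are available at mem.mem_va */
	tfp_free(mem.mem_va);
	return rc;
}
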

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/hwrm_tf.h      |  37 ++++++++-
 drivers/net/bnxt/tf_core/tf_common.h    |  54 +++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c      |   2 +
 drivers/net/bnxt/tf_core/tf_core.h      |  55 ++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.c       |  70 ++++++++++++----
 drivers/net/bnxt/tf_core/tf_msg.h       |  15 ++++
 drivers/net/bnxt/tf_core/tf_resources.h |   5 +-
 drivers/net/bnxt/tf_core/tf_tbl.c       | 103 ++++++++++++++++++++++++
 8 files changed, 319 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index d342c695c..c04d1034a 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Broadcom
+ * Copyright(c) 2019-2020 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -27,7 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET,
+	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -81,6 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
+struct tf_tbl_type_get_bulk_input;
+struct tf_tbl_type_get_bulk_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -902,6 +905,8 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* When set to 1, indicates the entry is cleared on read */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -916,4 +921,32 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
+/* Input params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get apply to RX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+	/* When set to 1, indicates the get apply to TX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+	/* When set to 1, indicates the entry is cleared on read */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+	/* Type of the object to set */
+	uint32_t			 type;
+	/* Starting index to get from */
+	uint32_t			 start_index;
+	/* Number of entries to get */
+	uint32_t			 num_entries;
+	/* Host memory where data will be stored */
+	uint64_t			 host_addr;
+} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+
+/* Output params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
new file mode 100644
index 000000000..2aa4b8640
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_COMMON_H_
+#define _TF_COMMON_H_
+
+/* Helper to check the parms */
+#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "%s: session error\n", \
+				    tf_dir_2_str((parms)->dir)); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_TFP_SESSION(tfp) do { \
+		if ((tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#endif /* _TF_COMMON_H_ */
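
These checks are meant to sit at the very top of the public API entry
points. Note that TF_CHECK_PARMS_SESSION dereferences (parms)->dir for
its error message, so it is only usable where the parameter structure
carries a direction; TF_CHECK_PARMS_SESSION_NO_DIR covers the rest. A
hypothetical sketch (the function and parameter structure below are
illustrative only):

#include "tf_common.h"
#include "tf_core.h"	/* enum tf_dir */
#include "tf_util.h"	/* tf_dir_2_str(), used by the macro */
#include "tfp.h"	/* TFP_DRV_LOG, used by the macro */

/* Hypothetical parameter structure; only the dir field matters here */
struct tf_example_parms {
	enum tf_dir dir;
};

/* Hypothetical API entry point using the parameter-check helpers */
int
tf_example_api(struct tf *tfp, struct tf_example_parms *parms)
{
	/* Validates tfp, parms and the session in one place; relies on
	 * parms->dir being present for the log message.
	 */
	TF_CHECK_PARMS_SESSION(tfp, parms);

	/* ... implementation ... */
	return 0;
}
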
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6e15a4c5c..a8236aec9 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -16,6 +16,8 @@
 #include "bitalloc.h"
 #include "bnxt.h"
 #include "rand.h"
+#include "tf_common.h"
+#include "hwrm_tf.h"
 
 static inline uint32_t SWAP_WORDS32(uint32_t val32)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index becc50c7f..96a1a794f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1165,7 +1165,7 @@ struct tf_get_tbl_entry_parms {
 	 */
 	uint8_t *data;
 	/**
-	 * [out] Entry size
+	 * [in] Entry size
 	 */
 	uint16_t data_sz_in_bytes;
 	/**
@@ -1188,6 +1188,59 @@ struct tf_get_tbl_entry_parms {
 int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
+/**
+ * tf_get_bulk_tbl_entry parameter definition
+ */
+struct tf_get_bulk_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Clear hardware entries on read; only
+	 * supported for TF_TBL_TYPE_ACT_STATS_64
+	 */
+	bool clear_on_read;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [out] Host physical address, where the data
+	 * will be copied to by the firmware.
+	 * Use tfp_calloc() API and mem_pa
+	 * variable of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * Bulk get index table entry
+ *
+ * Used to retrieve a previous set index table entry.
+ *
+ * Reads and compares with the shadow table copy (if enabled) (only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_bulk_tbl_entry(struct tf *tfp,
+		     struct tf_get_bulk_tbl_entry_parms *parms);
+
 /**
  * @page exact_match Exact Match Table
  *
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c8f6b88d3..c755c8555 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1216,12 +1216,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 static int
-tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
 {
 	struct tfp_calloc_parms alloc_parms;
 	int rc;
@@ -1229,15 +1225,10 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 	/* Allocate session */
 	alloc_parms.nitems = 1;
 	alloc_parms.size = size;
-	alloc_parms.alignment = 0;
+	alloc_parms.alignment = 4096;
 	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate tcam dma entry, rc:%d\n",
-			    rc);
+	if (rc)
 		return -ENOMEM;
-	}
 
 	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
 	buf->va_addr = alloc_parms.mem_va;
@@ -1245,6 +1236,52 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 	return 0;
 }
 
+int
+tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *params)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_get_bulk_input req = { 0 };
+	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	int data_size = 0;
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16((params->dir) |
+		((params->clear_on_read) ?
+		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+
+	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	if (resp.size < data_size)
+		return -EINVAL;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+#define TF_BYTES_PER_SLICE(tfp) 12
+#define NUM_SLICES(tfp, bytes) \
+	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
+
 int
 tf_msg_tcam_entry_set(struct tf *tfp,
 		      struct tf_set_tcam_entry_parms *parms)
@@ -1282,9 +1319,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	} else {
 		/* use dma buffer */
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_get_dma_buf(&buf, data_size);
-		if (rc != 0)
-			return rc;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
 		data = buf.va_addr;
 		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
 	}
@@ -1303,8 +1340,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
+cleanup:
 	if (buf.va_addr != NULL)
 		tfp_free(buf.va_addr);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 89f7370cc..8d050c402 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -267,4 +267,19 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 			 uint8_t *data,
 			 uint32_t index);
 
+/**
+ * Sends bulk get message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to table get bulk parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 05e131f8b..9b7f5a069 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -149,11 +149,10 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
+#define TF_RSVD_MIRROR_RX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
+#define TF_RSVD_MIRROR_TX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 17399a5b2..26313ed3c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
@@ -794,6 +795,7 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -915,6 +917,76 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Internal function to bulk get Table Entries. Supports all Table
+ * Types except TF_TBL_TYPE_EXT as that is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_bulk_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->starting_idx;
+
+	/*
+	 * Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->starting_idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
 #if (TF_SHADOW == 1)
 /**
  * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
@@ -1182,6 +1254,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -1663,6 +1736,36 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
+/* API defined in tf_core.h */
+int
+tf_get_bulk_tbl_entry(struct tf *tfp,
+		 struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type,
+				    strerror(-rc));
+	}
+
+	return rc;
+}
+
 /* API defined in tf_core.h */
 int
 tf_alloc_tbl_scope(struct tf *tfp,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 13/51] net/bnxt: update multi device design support
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (11 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
                     ` (38 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Implement the modules RM, Device (WH+), Identifier.
- Update Session module.
- Implement new HWRMs for RM direct messaging.
- Add new parameter check macros and clean up the header includes
  (e.g. tfp) so that bnxt.h is not directly included in the new modules.
- Add cfa_resource_types, required for RM design.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   2 +
 drivers/net/bnxt/tf_core/Makefile             |   1 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 291 ++++++------
 drivers/net/bnxt/tf_core/tf_common.h          |  24 +
 drivers/net/bnxt/tf_core/tf_core.c            | 286 +++++++++++-
 drivers/net/bnxt/tf_core/tf_core.h            |  12 +-
 drivers/net/bnxt/tf_core/tf_device.c          | 150 +++++-
 drivers/net/bnxt/tf_core/tf_device.h          |  79 +++-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  78 +++-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  79 ++--
 drivers/net/bnxt/tf_core/tf_identifier.c      | 142 +++++-
 drivers/net/bnxt/tf_core/tf_identifier.h      |  25 +-
 drivers/net/bnxt/tf_core/tf_msg.c             | 268 +++++++++--
 drivers/net/bnxt/tf_core/tf_msg.h             |  59 +++
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 434 ++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h          |  72 ++-
 drivers/net/bnxt/tf_core/tf_session.c         | 256 ++++++++++-
 drivers/net/bnxt/tf_core/tf_session.h         | 118 ++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   4 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  30 +-
 drivers/net/bnxt/tf_core/tf_tbl_type.h        |  95 ++--
 drivers/net/bnxt/tf_core/tf_tcam.h            |  14 +-
 22 files changed, 2120 insertions(+), 399 deletions(-)

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index a50cb261d..1f7df9d06 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,8 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_session.c',
+	'tf_core/tf_device.c',
 	'tf_core/tf_device_p4.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 7a3c325a6..2c02e29e7 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,6 +14,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index c0c1e754e..11e8892f4 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -12,6 +12,11 @@
 
 #ifndef _CFA_RESOURCE_TYPES_H_
 #define _CFA_RESOURCE_TYPES_H_
+/*
+ * This is the constant used to define invalid CFA
+ * resource types across all devices.
+ */
+#define CFA_RESOURCE_TYPE_INVALID 65535
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
@@ -58,209 +63,205 @@
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
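Note: CFA_RESOURCE_TYPE_INVALID gives the per-device RM tables an explicit
"not backed by HCAPI" marker instead of overloading 0. A minimal sketch of
guarding on it; the helper name is illustrative and not part of this patch:

#include <stdbool.h>
#include <stdint.h>
#include "cfa_resource_types.h"

/* Illustrative: true only when an element maps to a real HCAPI type. */
static bool cfa_hcapi_type_is_valid(uint16_t hcapi_type)
{
	return hcapi_type != CFA_RESOURCE_TYPE_INVALID;
}
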
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index 2aa4b8640..ec3bca835 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -51,4 +51,28 @@
 		} \
 	} while (0)
 
+
+#define TF_CHECK_PARMS1(parms) do {					\
+		if ((parms) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS2(parms1, parms2) do {				\
+		if ((parms1) == NULL || (parms2) == NULL) {		\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
+		if ((parms1) == NULL ||					\
+		    (parms2) == NULL ||					\
+		    (parms3) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
 #endif /* _TF_COMMON_H_ */
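Note: the TF_CHECK_PARMS1/2/3 macros centralize NULL-argument validation and
return -EINVAL from the calling function after logging. A minimal usage
sketch; the function and parameter struct below are illustrative only:

#include "tf_common.h"
#include "tfp.h"                 /* TFP_DRV_LOG used by the macros */

struct tf;                       /* opaque handle, defined elsewhere */
struct tf_example_parms;         /* hypothetical parameter struct */

/* Illustrative: reject NULL handles before doing any work. */
static int tf_example_op(struct tf *tfp, struct tf_example_parms *parms)
{
	TF_CHECK_PARMS2(tfp, parms);   /* returns -EINVAL if either is NULL */

	/* ... actual processing ... */
	return 0;
}
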
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index a8236aec9..81a88e211 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -85,7 +85,7 @@ tf_create_em_pool(struct tf_session *session,
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
 
 	if (rc != 0) {
 		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
@@ -231,7 +231,6 @@ tf_open_session(struct tf                    *tfp,
 		   TF_SESSION_NAME_MAX);
 
 	/* Initialize Session */
-	session->device_type = parms->device_type;
 	session->dev = NULL;
 	tf_rm_init(tfp);
 
@@ -276,7 +275,9 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		rc = tf_create_em_pool(session,
+				       (enum tf_dir)dir,
+				       TF_SESSION_EM_POOL_SIZE);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "EM Pool initialization failed\n");
@@ -313,6 +314,64 @@ tf_open_session(struct tf                    *tfp,
 	return -EINVAL;
 }
 
+int
+tf_open_session_new(struct tf *tfp,
+		    struct tf_open_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_open_session_parms oparms;
+
+	TF_CHECK_PARMS(tfp, parms);
+
+	/* Filter out any non-supported device types on the Core
+	 * side. Firmware support is assumed if the firmware session
+	 * open succeeds.
+	 */
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
+		return -ENOTSUP;
+	}
+
+	/* Verify control channel and build the beginning of session_id */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	oparms.open_cfg = parms;
+
+	rc = tf_session_open_session(tfp, &oparms);
+	/* Logging handled by tf_session_open_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Session created, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return 0;
+}
+
 int
 tf_attach_session(struct tf *tfp __rte_unused,
 		  struct tf_attach_session_parms *parms __rte_unused)
@@ -341,6 +400,69 @@ tf_attach_session(struct tf *tfp __rte_unused,
 	return -1;
 }
 
+int
+tf_attach_session_new(struct tf *tfp,
+		      struct tf_attach_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_attach_session_parms aparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Verify control channel */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Verify 'attach' channel */
+	rc = sscanf(parms->attach_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device attach_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Prepare the session_id return value from the ctrl_chan_name
+	 * device values, as they become the session id.
+	 */
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	aparms.attach_cfg = parms;
+	rc = tf_session_attach_session(tfp,
+				       &aparms);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Attached to session, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return rc;
+}
+
 int
 tf_close_session(struct tf *tfp)
 {
@@ -380,7 +502,7 @@ tf_close_session(struct tf *tfp)
 	if (tfs->ref_count == 0) {
 		/* Free EM pool */
 		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, dir);
+			tf_free_em_pool(tfs, (enum tf_dir)dir);
 
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
@@ -401,6 +523,39 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+int
+tf_close_session_new(struct tf *tfp)
+{
+	int rc;
+	struct tf_session_close_session_parms cparms = { 0 };
+	union tf_session_id session_id = { 0 };
+	uint8_t ref_count;
+
+	TF_CHECK_PARMS1(tfp);
+
+	cparms.ref_count = &ref_count;
+	cparms.session_id = &session_id;
+	rc = tf_session_close_session(tfp,
+				      &cparms);
+	/* Logging handled by tf_session_close_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    cparms.session_id->id,
+		    *cparms.ref_count);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    cparms.session_id->internal.domain,
+		    cparms.session_id->internal.bus,
+		    cparms.session_id->internal.device,
+		    cparms.session_id->internal.fw_session_id);
+
+	return rc;
+}
+
 /** insert EM hash entry API
  *
  *    returns:
@@ -539,10 +694,67 @@ int tf_alloc_identifier(struct tf *tfp,
 	return 0;
 }
 
-/** free identifier resource
- *
- * Returns success or failure code.
- */
+int
+tf_alloc_identifier_new(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_alloc_parms aparms;
+	uint16_t id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_ident_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.ident_type = parms->ident_type;
+	aparms.id = &id;
+	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->id = id;
+
+	return 0;
+}
+
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms)
 {
@@ -618,6 +830,64 @@ int tf_free_identifier(struct tf *tfp,
 	return 0;
 }
 
+int
+tf_free_identifier_new(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_ident_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.ident_type = parms->ident_type;
+	fparms.id = parms->id;
+	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
 int
 tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
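Note: the *_new entry points defer all heavy lifting to the session module.
A hedged sketch of the open/close flow they expect; the PCI address is an
arbitrary example and ctrl_chan_name is assumed to be a fixed-size buffer of
TF_SESSION_NAME_MAX bytes:

#include <string.h>
#include "tf_core.h"

static int example_session_cycle(struct tf *tfp)
{
	struct tf_open_session_parms oparms = { 0 };
	int rc;

	/* "0000:0a:00.0" is only an example control channel (PCI BDF). */
	strncpy(oparms.ctrl_chan_name, "0000:0a:00.0",
		TF_SESSION_NAME_MAX - 1);
	oparms.device_type = TF_DEVICE_TYPE_WH; /* only WH+ accepted here */

	rc = tf_open_session_new(tfp, &oparms);
	if (rc)
		return rc;

	return tf_close_session_new(tfp);
}
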
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 96a1a794f..74ed24e5a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -380,7 +380,7 @@ struct tf_session_resources {
 	 * The number of identifier resources requested for the session.
 	 * The index used is tf_identifier_type.
 	 */
-	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
 	/** [in] Requested Index Table resource counts
 	 *
 	 * The number of index table resources requested for the session.
@@ -480,6 +480,9 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+int tf_open_session_new(struct tf *tfp,
+			struct tf_open_session_parms *parms);
+
 struct tf_attach_session_parms {
 	/** [in] ctrl_chan_name
 	 *
@@ -542,6 +545,8 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
+int tf_attach_session_new(struct tf *tfp,
+			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -551,6 +556,7 @@ int tf_attach_session(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
+int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -602,6 +608,8 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
+int tf_alloc_identifier_new(struct tf *tfp,
+			    struct tf_alloc_identifier_parms *parms);
 
 /** free identifier resource
  *
@@ -613,6 +621,8 @@ int tf_alloc_identifier(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
+int tf_free_identifier_new(struct tf *tfp,
+			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
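Note the index-order change on identifier_cnt: the array is now indexed
[identifier type][direction] rather than [direction][type]. A small sketch of
filling the request counts using only the public enums; the function name is
illustrative:

#include "tf_core.h"

static void example_fill_ident_counts(struct tf_session_resources *res)
{
	int type;

	/* New [type][dir] ordering: one identifier of each type per dir. */
	for (type = 0; type < TF_IDENT_TYPE_MAX; type++) {
		res->identifier_cnt[type][TF_DIR_RX] = 1;
		res->identifier_cnt[type][TF_DIR_TX] = 1;
	}
}
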
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 3b368313e..4c46cadc6 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,45 +6,169 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
-#include "bnxt.h"
 
 struct tf;
 
+/* Forward declarations */
+static int dev_unbind_p4(struct tf *tfp);
+
 /**
- * Device specific bind function
+ * Device specific bind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] shadow_copy
+ *   Flag controlling shadow copy DB creation
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp __rte_unused,
-	    struct tf_session_resources *resources __rte_unused,
-	    struct tf_dev_info *dev_info)
+dev_bind_p4(struct tf *tfp,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
+	int rc;
+	int frc;
+	struct tf_ident_cfg_parms ident_cfg;
+	struct tf_tbl_cfg_parms tbl_cfg;
+	struct tf_tcam_cfg_parms tcam_cfg;
+
 	/* Initialize the modules */
 
-	dev_info->ops = &tf_dev_ops_p4;
+	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
+	ident_cfg.cfg = tf_ident_p4;
+	ident_cfg.shadow_copy = shadow_copy;
+	ident_cfg.resources = resources;
+	rc = tf_ident_bind(tfp, &ident_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier initialization failure\n");
+		goto fail;
+	}
+
+	tbl_cfg.num_elements = TF_TBL_TYPE_MAX;
+	tbl_cfg.cfg = tf_tbl_p4;
+	tbl_cfg.shadow_copy = shadow_copy;
+	tbl_cfg.resources = resources;
+	rc = tf_tbl_bind(tfp, &tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Table initialization failure\n");
+		goto fail;
+	}
+
+	tcam_cfg.num_elements = TF_TCAM_TBL_TYPE_MAX;
+	tcam_cfg.cfg = tf_tcam_p4;
+	tcam_cfg.shadow_copy = shadow_copy;
+	tcam_cfg.resources = resources;
+	rc = tf_tcam_bind(tfp, &tcam_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM initialization failure\n");
+		goto fail;
+	}
+
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	dev_handle->ops = &tf_dev_ops_p4;
+
 	return 0;
+
+ fail:
+	/* Cleanup of already created modules */
+	frc = dev_unbind_p4(tfp);
+	if (frc)
+		return frc;
+
+	return rc;
+}
+
+/**
+ * Device specific unbind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+dev_unbind_p4(struct tf *tfp)
+{
+	int rc = 0;
+	bool fail = false;
+
+	/* Unbind all the support modules. As this is only done on
+	 * close, errors are only reported since everything has to be
+	 * cleaned up regardless.
+	 */
+	rc = tf_ident_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Identifier\n");
+		fail = true;
+	}
+
+	rc = tf_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Table Type\n");
+		fail = true;
+	}
+
+	rc = tf_tcam_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, TCAM\n");
+		fail = true;
+	}
+
+	if (fail)
+		return -1;
+
+	return rc;
 }
 
 int
 dev_bind(struct tf *tfp __rte_unused,
 	 enum tf_device_type type,
+	 bool shadow_copy,
 	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_info)
+	 struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
 		return dev_bind_p4(tfp,
+				   shadow_copy,
 				   resources,
-				   dev_info);
+				   dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
-			    "Device type not supported\n");
-		return -ENOTSUP;
+			    "No such device\n");
+		return -ENODEV;
 	}
 }
 
 int
-dev_unbind(struct tf *tfp __rte_unused,
-	   struct tf_dev_info *dev_handle __rte_unused)
+dev_unbind(struct tf *tfp,
+	   struct tf_dev_info *dev_handle)
 {
-	return 0;
+	switch (dev_handle->type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_unbind_p4(tfp);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "No such device\n");
+		return -ENODEV;
+	}
 }
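Note: dev_bind() now carries the shadow-copy flag and dispatches per device
type, while dev_unbind() keys off the dev_handle->type recorded at bind time.
A sketch of the lifecycle as driven from the session layer; tfp and resources
are assumed to have been prepared by the caller:

static int example_device_cycle(struct tf *tfp,
				struct tf_session_resources *resources)
{
	struct tf_dev_info dev_handle;
	int rc;

	rc = dev_bind(tfp, TF_DEVICE_TYPE_WH, false /* no shadow copy */,
		      resources, &dev_handle);
	if (rc)
		return rc;       /* -ENODEV for unsupported device types */

	/* ... drive the device through dev_handle.ops ... */

	return dev_unbind(tfp, &dev_handle);
}
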
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 8b63ff178..6aeb6fedb 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -27,6 +27,7 @@ struct tf_session;
  * TF device information
  */
 struct tf_dev_info {
+	enum tf_device_type type;
 	const struct tf_dev_ops *ops;
 };
 
@@ -56,10 +57,12 @@ struct tf_dev_info {
  *
  * Returns
  *   - (0) if successful.
- *   - (-EINVAL) on failure.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_bind(struct tf *tfp,
 	     enum tf_device_type type,
+	     bool shadow_copy,
 	     struct tf_session_resources *resources,
 	     struct tf_dev_info *dev_handle);
 
@@ -71,6 +74,11 @@ int dev_bind(struct tf *tfp,
  *
  * [in] dev_handle
  *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_unbind(struct tf *tfp,
 	       struct tf_dev_info *dev_handle);
@@ -84,6 +92,44 @@ int dev_unbind(struct tf *tfp,
  * different device variants.
  */
 struct tf_dev_ops {
+	/**
+	 * Retrieves the MAX number of resource types that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] max_types
+	 *   Pointer to MAX number of types the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_max_types)(struct tf *tfp,
+				    uint16_t *max_types);
+
+	/**
+	 * Retrieves the WC TCAM slice information that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] slice_size
+	 *   Pointer to slice size the device supports
+	 *
+	 * [out] num_slices_per_row
+	 *   Pointer to number of slices per row the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
+					 uint16_t *slice_size,
+					 uint16_t *num_slices_per_row);
+
 	/**
 	 * Allocation of an identifier element.
 	 *
@@ -134,14 +180,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation parameters
+	 *   Pointer to table allocation parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
-				     struct tf_tbl_type_alloc_parms *parms);
+	int (*tf_dev_alloc_tbl)(struct tf *tfp,
+				struct tf_tbl_alloc_parms *parms);
 
 	/**
 	 * Free of a table type element.
@@ -153,14 +199,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type free parameters
+	 *   Pointer to table free parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_free_tbl_type)(struct tf *tfp,
-				    struct tf_tbl_type_free_parms *parms);
+	int (*tf_dev_free_tbl)(struct tf *tfp,
+			       struct tf_tbl_free_parms *parms);
 
 	/**
 	 * Searches for the specified table type element in a shadow DB.
@@ -175,15 +221,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation and search parameters
+	 *   Pointer to table allocation and search parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_search_tbl_type)
-			(struct tf *tfp,
-			struct tf_tbl_type_alloc_search_parms *parms);
+	int (*tf_dev_alloc_search_tbl)(struct tf *tfp,
+				       struct tf_tbl_alloc_search_parms *parms);
 
 	/**
 	 * Sets the specified table type element.
@@ -195,14 +240,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type set parameters
+	 *   Pointer to table set parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_set_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_set_parms *parms);
+	int (*tf_dev_set_tbl)(struct tf *tfp,
+			      struct tf_tbl_set_parms *parms);
 
 	/**
 	 * Retrieves the specified table type element.
@@ -214,14 +259,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type get parameters
+	 *   Pointer to table get parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_get_parms *parms);
+	int (*tf_dev_get_tbl)(struct tf *tfp,
+			       struct tf_tbl_get_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c3c4d1e05..c235976fe 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -3,19 +3,87 @@
  * All rights reserved.
  */
 
+#include <rte_common.h>
+#include <cfa_resource_types.h>
+
 #include "tf_device.h"
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
 
+/**
+ * Device specific function that retrieves the MAX number of HCAPI
+ * types the device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] max_types
+ *   Pointer to the MAX number of HCAPI types supported
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
+			uint16_t *max_types)
+{
+	if (max_types == NULL)
+		return -EINVAL;
+
+	*max_types = CFA_RESOURCE_TYPE_P4_LAST + 1;
+
+	return 0;
+}
+
+/**
+ * Device specific function that retrieves the WC TCAM slices the
+ * device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] slice_size
+ *   Pointer to the WC TCAM slice size
+ *
+ * [out] num_slices_per_row
+ *   Pointer to the WC TCAM row slice configuration
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
+			     uint16_t *slice_size,
+			     uint16_t *num_slices_per_row)
+{
+#define CFA_P4_WC_TCAM_SLICE_SIZE       12
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+
+	if (slice_size == NULL || num_slices_per_row == NULL)
+		return -EINVAL;
+
+	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
+	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+
+	return 0;
+}
+
+/**
+ * Truflow P4 device specific functions
+ */
 const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
-	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
-	.tf_dev_free_tbl_type = tf_tbl_type_free,
-	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
-	.tf_dev_set_tbl_type = tf_tbl_type_set,
-	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
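Note: capability queries now go through the bound device ops instead of
hard-coded constants. A sketch, assuming tfp and dev were obtained via
tf_session_get_session()/tf_session_get_device():

uint16_t max_types, slice_size, num_slices_per_row;
int rc;

rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
if (rc == 0)
	rc = dev->ops->tf_dev_get_wc_tcam_slices(tfp, &slice_size,
						 &num_slices_per_row);
/* On WH+ (P4) this reports 12-byte slices with 2 slices per row. */
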
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 84d90e3a7..5cd02b298 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,11 +12,12 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
@@ -24,41 +25,57 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
-	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
-	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+	/* CFA_RESOURCE_TYPE_P4_UPAR */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EPOC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_METADATA */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_CT_STATE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_PROF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_LAG */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EM_FBK */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_WC_FKB */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EXT */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 #endif /* _TF_DEVICE_P4_H_ */
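Note: every entry in these tables pairs an RM config mode with its HCAPI
resource type, and unsupported slots are now marked explicitly with
CFA_RESOURCE_TYPE_INVALID. A sketch of walking the table; the field names
cfg_type and hcapi_type are assumptions inferred from the initializers above,
not verified against tf_rm_new.h:

int i;

for (i = 0; i < TF_TBL_TYPE_MAX; i++) {
	/* Assumed field names; skip elements the device does not back. */
	if (tf_tbl_p4[i].cfg_type == TF_RM_ELEM_CFG_NULL)
		continue;
	/* tf_tbl_p4[i].hcapi_type then holds a CFA_RESOURCE_TYPE_P4_* id */
}
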
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 726d0b406..e89f9768b 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -6,42 +6,172 @@
 #include <rte_common.h>
 
 #include "tf_identifier.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Identifier DBs.
  */
-/* static void *ident_db[TF_DIR_MAX]; */
+static void *ident_db[TF_DIR_MAX];
 
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 int
-tf_ident_bind(struct tf *tfp __rte_unused,
-	      struct tf_ident_cfg *parms __rte_unused)
+tf_ident_bind(struct tf *tfp,
+	      struct tf_ident_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
+		db_cfg.rm_db = ident_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Identifier DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_ident_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail silently if nothing has been initialized, to allow
+	 * cleanup during creation failure paths.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = ident_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		ident_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_ident_alloc(struct tf *tfp __rte_unused,
-	       struct tf_ident_alloc_parms *parms __rte_unused)
+	       struct tf_ident_alloc_parms *parms)
 {
+	int rc;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = (uint32_t *)&parms->id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type);
+		return rc;
+	}
+
 	return 0;
 }
 
 int
 tf_ident_free(struct tf *tfp __rte_unused,
-	      struct tf_ident_free_parms *parms __rte_unused)
+	      struct tf_ident_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = parms->id;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = ident_db[parms->dir];
+	fparms.db_index = parms->ident_type;
+	fparms.index = parms->id;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
 	return 0;
 }
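Note: allocation now returns the id through a pointer out-parameter and free
checks the RM allocation state first. A short sketch of the pairing, assuming
tf_ident_bind() has already been called and ident_type holds a valid
tf_identifier_type value:

struct tf_ident_alloc_parms aparms = { 0 };
struct tf_ident_free_parms fparms = { 0 };
uint16_t id;
int rc;

aparms.dir = TF_DIR_RX;
aparms.ident_type = ident_type;
aparms.id = &id;                   /* id is returned via pointer now */
rc = tf_ident_alloc(tfp, &aparms);

if (rc == 0) {
	fparms.dir = TF_DIR_RX;
	fparms.ident_type = ident_type;
	fparms.id = id;
	rc = tf_ident_free(tfp, &fparms);
}
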
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index b77c91b9d..1c5319b5e 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -12,21 +12,28 @@
  * The Identifier module provides processing of Identifiers.
  */
 
-struct tf_ident_cfg {
+struct tf_ident_cfg_parms {
 	/**
-	 * Number of identifier types in each of the configuration
-	 * arrays
+	 * [in] Number of identifier types in each of the
+	 * configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
-	 * TCAM configuration array
+	 * [in] Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Boolean controlling whether a shadow copy is requested.
 	 */
-	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+	bool shadow_copy;
+	/**
+	 * [in] Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Identifier allcoation parameter definition
+ * Identifier allocation parameter definition
  */
 struct tf_ident_alloc_parms {
 	/**
@@ -40,7 +47,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [out] Identifier allocated
 	 */
-	uint16_t id;
+	uint16_t *id;
 };
 
 /**
@@ -88,7 +95,7 @@ struct tf_ident_free_parms {
  *   - (-EINVAL) on failure.
  */
 int tf_ident_bind(struct tf *tfp,
-		  struct tf_ident_cfg *parms);
+		  struct tf_ident_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c755c8555..e08a96f23 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -6,15 +6,13 @@
 #include <inttypes.h>
 #include <stdbool.h>
 #include <stdlib.h>
-
-#include "bnxt.h"
-#include "tf_core.h"
-#include "tf_session.h"
-#include "tfp.h"
+#include <string.h>
 
 #include "tf_msg_common.h"
 #include "tf_msg.h"
-#include "hsi_struct_def_dpdk.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tfp.h"
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
@@ -140,6 +138,51 @@ tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
 	return rc;
 }
 
+/**
+ * Allocates a DMA buffer that can be used for message transfer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ *
+ * [in] size
+ *   Requested size of the buffer in bytes
+ *
+ * Returns:
+ *    0      - Success
+ *   -ENOMEM - Unable to allocate buffer, no memory
+ */
+static int
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
+{
+	struct tfp_calloc_parms alloc_parms;
+	int rc;
+
+	/* Allocate the DMA buffer */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = size;
+	alloc_parms.alignment = 4096;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc)
+		return -ENOMEM;
+
+	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
+	buf->va_addr = alloc_parms.mem_va;
+
+	return 0;
+}
+
+/**
+ * Frees a previously allocated DMA buffer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ */
+static void
+tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
+{
+	tfp_free(buf->va_addr);
+}
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -154,7 +197,7 @@ tf_msg_session_open(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
+	tfp_memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
 
 	parms.tf_type = HWRM_TF_SESSION_OPEN;
 	parms.req_data = (uint32_t *)&req;
@@ -870,6 +913,180 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+int
+tf_msg_session_resc_qcaps(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *query,
+			  enum tf_rm_resc_resv_strategy *resv_strategy)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_qcaps_input req = { 0 };
+	struct hwrm_tf_session_resc_qcaps_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf qcaps_buf = { 0 };
+	struct tf_rm_resc_req_entry *data;
+	int dma_size;
+
+	if (size == 0 || query == NULL || resv_strategy == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Resource QCAPS parameter error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffer */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&qcaps_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.qcaps_size = size;
+	req.qcaps_addr = qcaps_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: QCAPS message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].min = tfp_le_to_cpu_16(data[i].min);
+		query[i].max = tfp_le_to_cpu_16(data[i].max);
+	}
+
+	*resv_strategy = resp.flags &
+	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
+
+	tf_msg_free_dma_buf(&qcaps_buf);
+
+	return rc;
+}
+
+int
+tf_msg_session_resc_alloc(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *request,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_alloc_input req = { 0 };
+	struct hwrm_tf_session_resc_alloc_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf req_buf = { 0 };
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_req_entry *req_data;
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&req_buf, dma_size);
+	if (rc)
+		return rc;
+
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.req_size = size;
+
+	req_data = (struct tf_rm_resc_req_entry *)req_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		req_data[i].type = tfp_cpu_to_le_32(request[i].type);
+		req_data[i].min = tfp_cpu_to_le_16(request[i].min);
+		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
+	}
+
+	req.req_addr = req_buf.pa_addr;
+	req.resp_addr = resv_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
+		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
+		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+	}
+
+	tf_msg_free_dma_buf(&req_buf);
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1034,7 +1251,9 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
@@ -1216,26 +1435,6 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-static int
-tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
-{
-	struct tfp_calloc_parms alloc_parms;
-	int rc;
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = size;
-	alloc_parms.alignment = 4096;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc)
-		return -ENOMEM;
-
-	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
-	buf->va_addr = alloc_parms.mem_va;
-
-	return 0;
-}
-
 int
 tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 			  struct tf_get_bulk_tbl_entry_parms *params)
@@ -1323,12 +1522,14 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		if (rc)
 			goto cleanup;
 		data = buf.va_addr;
-		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
 	}
 
-	memcpy(&data[0], parms->key, key_bytes);
-	memcpy(&data[key_bytes], parms->mask, key_bytes);
-	memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, key_bytes);
+	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
+	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1343,8 +1544,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		goto cleanup;
 
 cleanup:
-	if (buf.va_addr != NULL)
-		tfp_free(buf.va_addr);
+	tf_msg_free_dma_buf(&buf);
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8d050c402..06f52ef00 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -6,8 +6,12 @@
 #ifndef _TF_MSG_H_
 #define _TF_MSG_H_
 
+#include <rte_common.h>
+#include <hsi_struct_def_dpdk.h>
+
 #include "tf_tbl.h"
 #include "tf_rm.h"
+#include "tf_rm_new.h"
 
 struct tf;
 
@@ -121,6 +125,61 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the query. Should be set to the max
+ *   elements for the device type
+ *
+ * [out] query
+ *   Pointer to an array of query elements
+ *
+ * [out] resv_strategy
+ *   Pointer to the reservation strategy
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_qcaps(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *query,
+			      enum tf_rm_resc_resv_strategy *resv_strategy);
+
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the request and resv arrays
+ *
+ * [in] request
+ *   Pointer to an array of request elements
+ *
+ * [out] resv
+ *   Pointer to an array of reservation entries filled in by firmware
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_alloc(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *request,
+			      struct tf_rm_resc_entry *resv);
+
 /**
  * Sends EM internal insert request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 51bb9ba3a..7cadb231f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -3,20 +3,18 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
 #include <rte_common.h>
 
-#include "tf_rm_new.h"
+#include <cfa_resource_types.h>
 
-/**
- * Resource query single entry. Used when accessing HCAPI RM on the
- * firmware.
- */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
-};
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_msg.h"
 
 /**
  * Generic RM Element data type that an RM DB is built upon.
@@ -27,7 +25,7 @@ struct tf_rm_element {
 	 * hcapi_type can be ignored. If Null then the element is not
 	 * valid for the device.
 	 */
-	enum tf_rm_elem_cfg_type type;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/**
 	 * HCAPI RM Type for the element.
@@ -50,53 +48,435 @@ struct tf_rm_element {
 /**
  * TF RM DB definition
  */
-struct tf_rm_db {
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
+
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
+
 	/**
 	 * The DB consists of an array of elements
 	 */
 	struct tf_rm_element *db;
 };
 
+
+/**
+ * Resource Manager Adjust of base index definitions.
+ */
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
+
+/**
+ * Adjust an index according to the allocation information.
+ *
+ * All resources are controlled in a 0-based pool. Some resources, by
+ * design, are not 0 based, e.g. Full Action Records (SRAM), and thus
+ * need to be adjusted before they are handed out.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
+{
+	int rc = 0;
+	uint32_t base_index;
+
+	base_index = db[db_index].alloc.entry.start;
+
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return rc;
+}
+
 int
-tf_rm_create_db(struct tf *tfp __rte_unused,
-		struct tf_rm_create_db_parms *parms __rte_unused)
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
+
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against db requirements */
+
+	/* Alloc request, alignment already set */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0; i < parms->num_elements; i++) {
+		/* Skip any non HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			req[i].type = parms->cfg[i].hcapi_type;
+			/* Check that we can get the full amount allocated */
+			if (parms->alloc_num[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[i].min = parms->alloc_num[i];
+				req[i].max = parms->alloc_num[i];
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_num[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		} else {
+			/* Skip the element */
+			req[i].type = CFA_RESOURCE_TYPE_INVALID;
+		}
+	}
+
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       parms->num_elements,
+				       req,
+				       resv);
+	if (rc)
+		return rc;
+
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db = (void *)cparms.mem_va;
+
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
+
+	db = rm_db->db;
+	for (i = 0; i < parms->num_elements; i++) {
+		/* If allocation failed for a single entry the DB
+		 * creation is considered a failure.
+		 */
+		if (parms->alloc_num[i] != resv[i].stride) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    i);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_num[i],
+				    resv[i].stride);
+			goto fail;
+		}
+
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+		db[i].alloc.entry.start = resv[i].start;
+		db[i].alloc.entry.stride = resv[i].stride;
+
+		/* Create pool */
+		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
+			     sizeof(struct bitalloc));
+		/* Alloc request, alignment already set */
+		cparms.nitems = pool_size;
+		cparms.size = sizeof(struct bitalloc);
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			return rc;
+		db[i].pool = (struct bitalloc *)cparms.mem_va;
+	}
+
+	rm_db->num_entries = i;
+	rm_db->dir = parms->dir;
+	parms->rm_db = (void *)rm_db;
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
 	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
 
 int
 tf_rm_free_db(struct tf *tfp __rte_unused,
-	      struct tf_rm_free_db_parms *parms __rte_unused)
+	      struct tf_rm_free_db_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int i;
+	struct tf_rm_new_db *rm_db;
+
+	/* Traverse the DB and clear each pool.
+	 * NOTE:
+	 *   Firmware is not cleared. It will be cleared on close only.
+	 */
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db->pool);
+
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
 
 int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int id;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -ENOMEM;
+	}
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				parms->index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -1;
+	}
+
+	return rc;
 }
 
 int
-tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; the direction needed for the message is not available here */
+	if (rc)
+		return rc;
+
+	return rc;
 }
 
 int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	parms->info = &rm_db->db[parms->db_index].alloc;
+
+	return rc;
 }
 
 int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 72dba0984..6d8234ddc 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
 #include "tf_core.h"
 #include "bitalloc.h"
@@ -32,13 +32,16 @@ struct tf;
  * MAX pool size of the Chip needs to be added to the tf_rm_elem_info
  * structure and several new APIs would need to be added to allow for
  * growth of a single TF resource type.
+ *
+ * The access functions do not check for NULL pointers as this is a
+ * support module that is not called directly.
  */
 
 /**
  * Resource reservation single entry result. Used when accessing HCAPI
  * RM on the firmware.
  */
-struct tf_rm_entry {
+struct tf_rm_new_entry {
 	/** Starting index of the allocated resource */
 	uint16_t start;
 	/** Number of allocated elements */
@@ -52,12 +55,32 @@ struct tf_rm_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
-	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
-	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
 	TF_RM_TYPE_MAX
 };
 
+/**
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
+ */
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
+};
+
 /**
  * RM Element configuration structure, used by the Device to configure
  * how an individual TF type is configured in regard to the HCAPI RM
@@ -68,7 +91,7 @@ struct tf_rm_element_cfg {
 	 * RM Element config controls how the DB for that element is
 	 * processed.
 	 */
-	enum tf_rm_elem_cfg_type cfg;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/* If a HCAPI to TF type conversion is required then TF type
 	 * can be added here.
@@ -92,7 +115,7 @@ struct tf_rm_alloc_info {
 	 * In case of dynamic allocation support this would have
 	 * to be changed to linked list of tf_rm_entry instead.
 	 */
-	struct tf_rm_entry entry;
+	struct tf_rm_new_entry entry;
 };
 
 /**
@@ -104,17 +127,21 @@ struct tf_rm_create_db_parms {
 	 */
 	enum tf_dir dir;
 	/**
-	 * [in] Number of elements in the parameter structure
+	 * [in] Number of elements.
 	 */
 	uint16_t num_elements;
 	/**
-	 * [in] Parameter structure
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Allocation number array. Array size is num_elements.
 	 */
-	struct tf_rm_element_cfg *parms;
+	uint16_t *alloc_num;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -128,7 +155,7 @@ struct tf_rm_free_db_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -138,7 +165,7 @@ struct tf_rm_allocate_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -159,7 +186,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -168,7 +195,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t index;
+	uint16_t index;
 };
 
 /**
@@ -178,7 +205,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -191,7 +218,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] Pointer to flag that indicates the state of the query
 	 */
-	uint8_t *allocated;
+	int *allocated;
 };
 
 /**
@@ -201,7 +228,7 @@ struct tf_rm_get_alloc_info_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -221,7 +248,7 @@ struct tf_rm_get_hcapi_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -306,6 +333,7 @@ int tf_rm_free_db(struct tf *tfp,
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
 int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
@@ -317,7 +345,7 @@ int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
  *
  * Returns
  *   - (0) if successful.
- *   - (-EpINVAL) on failure.
+ *   - (-EINVAL) on failure.
  */
 int tf_rm_free(struct tf_rm_free_parms *parms);
 
@@ -365,4 +393,4 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index c74994546..1bf30c996 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -3,29 +3,269 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
+#include <rte_common.h>
+
+#include "tf_session.h"
+#include "tf_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tfp_calloc_parms cparms;
+	uint8_t fw_session_id;
+	union tf_session_id *session_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Open FW session and get a new session_id */
+	rc = tf_msg_session_open(tfp,
+				 parms->open_cfg->ctrl_chan_name,
+				 &fw_session_id);
+	if (rc) {
+		/* Log error */
+		if (rc == -EEXIST)
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
+		else
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
+
+		parms->open_cfg->session_id.id = TF_FW_SESSION_ID_INVALID;
+		return rc;
+	}
+
+	/* Allocate session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_info);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session = (struct tf_session_info *)cparms.mem_va;
+
+	/* Allocate core data for the session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session->core_data = cparms.mem_va;
+
+	/* Initialize Session and Device */
+	session = (struct tf_session *)tfp->session->core_data;
+	session->ver.major = 0;
+	session->ver.minor = 0;
+	session->ver.update = 0;
+
+	session_id = &parms->open_cfg->session_id;
+	session->session_id.internal.domain = session_id->internal.domain;
+	session->session_id.internal.bus = session_id->internal.bus;
+	session->session_id.internal.device = session_id->internal.device;
+	session->session_id.internal.fw_session_id = fw_session_id;
+	/* Return the allocated fw session id */
+	session_id->internal.fw_session_id = fw_session_id;
+
+	session->shadow_copy = parms->open_cfg->shadow_copy;
+
+	tfp_memcpy(session->ctrl_chan_name,
+		   parms->open_cfg->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	rc = dev_bind(tfp,
+		      parms->open_cfg->device_type,
+		      session->shadow_copy,
+		      &parms->open_cfg->resources,
+		      session->dev);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	/* Query for Session Config
+	 */
+	rc = tf_msg_session_qcfg(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup_close;
+	}
+
+	session->ref_count++;
+
+	return 0;
+
+ cleanup:
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
+	return rc;
+
+ cleanup_close:
+	tf_close_session(tfp);
+	return -EINVAL;
+}
+
+int
+tf_session_attach_session(struct tf *tfp __rte_unused,
+			  struct tf_session_attach_session_parms *parms __rte_unused)
+{
+	int rc = -EOPNOTSUPP;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	TFP_DRV_LOG(ERR,
+		    "Attach not yet supported, rc:%s\n",
+		    strerror(-rc));
+	return rc;
+}
+
+int
+tf_session_close_session(struct tf *tfp,
+			 struct tf_session_close_session_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *tfd;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (tfs->session_id.id == TF_SESSION_ID_INVALID) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid session id, unable to close, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Record the session we're closing so the caller knows the
+	 * details.
+	 */
+	*parms->session_id = tfs->session_id;
+
+	rc = tf_session_get_device(tfs, &tfd);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* In case we're attached, only the session client gets closed */
+	rc = tf_msg_session_close(tfp);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "FW Session close failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	tfs->ref_count--;
+
+	/* Final cleanup as we're last user of the session */
+	if (tfs->ref_count == 0) {
+		/* Unbind the device */
+		rc = dev_unbind(tfp, tfd);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "Device unbind failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		tfp_free(tfp->session->core_data);
+		tfp_free(tfp->session);
+		tfp->session = NULL;
+	}
+
+	return 0;
+}
+
 int
 tf_session_get_session(struct tf *tfp,
-		       struct tf_session *tfs)
+		       struct tf_session **tfs)
 {
+	int rc;
+
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		TFP_DRV_LOG(ERR, "Session not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	*tfs = (struct tf_session *)(tfp->session->core_data);
 
 	return 0;
 }
 
 int
 tf_session_get_device(struct tf_session *tfs,
-		      struct tf_device *tfd)
+		      struct tf_dev_info **tfd)
 {
+	int rc;
+
 	if (tfs->dev == NULL) {
-		TFP_DRV_LOG(ERR, "Device not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Device not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	*tfd = tfs->dev;
+
+	return 0;
+}
+
+int
+tf_session_get_fw_session_id(struct tf *tfp,
+			     uint8_t *fw_session_id)
+{
+	int rc;
+	struct tf_session *tfs;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
-	tfd = tfs->dev;
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*fw_session_id = tfs->session_id.internal.fw_session_id;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index b1cc7a4a7..92792518b 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -63,12 +63,7 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Device type, provided by tf_open_session().
-	 */
-	enum tf_device_type device_type;
-
-	/** Session ID, allocated by FW on tf_open_session().
-	 */
+	/** Session ID, allocated by FW on tf_open_session() */
 	union tf_session_id session_id;
 
 	/**
@@ -101,7 +96,7 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
-	/** Device */
+	/** Device handle */
 	struct tf_dev_info *dev;
 
 	/** Session HW and SRAM resources */
@@ -323,13 +318,97 @@ struct tf_session {
 	struct stack em_pool[TF_DIR_MAX];
 };
 
+/**
+ * Session open parameter definition
+ */
+struct tf_session_open_session_parms {
+	/**
+	 * [in] Pointer to the TF open session configuration
+	 */
+	struct tf_open_session_parms *open_cfg;
+};
+
+/**
+ * Session attach parameter definition
+ */
+struct tf_session_attach_session_parms {
+	/**
+	 * [in] Pointer to the TF attach session configuration
+	 */
+	struct tf_attach_session_parms *attach_cfg;
+};
+
+/**
+ * Session close parameter definition
+ */
+struct tf_session_close_session_parms {
+	uint8_t *ref_count;
+	union tf_session_id *session_id;
+};
+
 /**
  * @page session Session Management
  *
+ * @ref tf_session_open_session
+ *
+ * @ref tf_session_attach_session
+ *
+ * @ref tf_session_close_session
+ *
  * @ref tf_session_get_session
  *
  * @ref tf_session_get_device
+ *
+ * @ref tf_session_get_fw_session_id
+ */
+
+/**
+ * Creates a host session with a corresponding firmware session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
+int tf_session_open_session(struct tf *tfp,
+			    struct tf_session_open_session_parms *parms);
+
+/**
+ * Attaches to a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session attach parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_attach_session(struct tf *tfp,
+			      struct tf_session_attach_session_parms *parms);
+
+/**
+ * Closes a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in/out] parms
+ *   Pointer to the session close parameters.
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_close_session(struct tf *tfp,
+			     struct tf_session_close_session_parms *parms);
 
 /**
  * Looks up the private session information from the TF session info.
@@ -338,14 +417,14 @@ struct tf_session {
  *   Pointer to TF handle
  *
  * [out] tfs
- *   Pointer to the session
+ *   Pointer to a pointer to the session
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_session(struct tf *tfp,
-			   struct tf_session *tfs);
+			   struct tf_session **tfs);
 
 /**
  * Looks up the device information from the TF Session.
@@ -354,13 +433,30 @@ int tf_session_get_session(struct tf *tfp,
  *   Pointer to TF handle
  *
  * [out] tfd
- *   Pointer to the device
+ *   Pointer to a pointer to the device
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_device(struct tf_session *tfs,
-			  struct tf_dev_info *tfd);
+			  struct tf_dev_info **tfd);
+
+/**
+ * Looks up the FW session id of the firmware connection for the
+ * requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] session_id
+ *   Pointer to the session_id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_fw_session_id(struct tf *tfp,
+				 uint8_t *fw_session_id);
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 6cda4875b..b335a9cf4 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,8 +7,12 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+
+#include "tf_core.h"
 #include "stack.h"
 
+struct tf_session;
+
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
 	PT_LVL_1,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index a57a5ddf2..b79706f97 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -10,12 +10,12 @@
 struct tf;
 
 /**
- * Table Type DBs.
+ * Table DBs.
  */
 /* static void *tbl_db[TF_DIR_MAX]; */
 
 /**
- * Table Type Shadow DBs
+ * Table Shadow DBs
  */
 /* static void *shadow_tbl_db[TF_DIR_MAX]; */
 
@@ -30,49 +30,49 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_type_bind(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp __rte_unused,
+	    struct tf_tbl_cfg_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc(struct tf *tfp __rte_unused,
-		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_free(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_free_parms *parms __rte_unused)
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
-			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_set(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp __rte_unused,
+	   struct tf_tbl_set_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_get(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp __rte_unused,
+	   struct tf_tbl_get_parms *parms __rte_unused)
 {
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index c880b368b..11f2aa333 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -11,33 +11,39 @@
 struct tf;
 
 /**
- * The Table Type module provides processing of Internal TF table types.
+ * The Table module provides processing of Internal TF table types.
  */
 
 /**
- * Table Type configuration parameters
+ * Table configuration parameters
  */
-struct tf_tbl_type_cfg_parms {
+struct tf_tbl_cfg_parms {
 	/**
 	 * Number of table types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * Table Type element configuration array
 	 */
-	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Table Type allocation parameters
+ * Table allocation parameters
  */
-struct tf_tbl_type_alloc_parms {
+struct tf_tbl_alloc_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -53,9 +59,9 @@ struct tf_tbl_type_alloc_parms {
 };
 
 /**
- * Table Type free parameters
+ * Table free parameters
  */
-struct tf_tbl_type_free_parms {
+struct tf_tbl_free_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -75,7 +81,10 @@ struct tf_tbl_type_free_parms {
 	uint16_t ref_cnt;
 };
 
-struct tf_tbl_type_alloc_search_parms {
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -117,9 +126,9 @@ struct tf_tbl_type_alloc_search_parms {
 };
 
 /**
- * Table Type set parameters
+ * Table set parameters
  */
-struct tf_tbl_type_set_parms {
+struct tf_tbl_set_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -143,9 +152,9 @@ struct tf_tbl_type_set_parms {
 };
 
 /**
- * Table Type get parameters
+ * Table get parameters
  */
-struct tf_tbl_type_get_parms {
+struct tf_tbl_get_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -169,39 +178,39 @@ struct tf_tbl_type_get_parms {
 };
 
 /**
- * @page tbl_type Table Type
+ * @page tbl Table
  *
- * @ref tf_tbl_type_bind
+ * @ref tf_tbl_bind
  *
- * @ref tf_tbl_type_unbind
+ * @ref tf_tbl_unbind
  *
- * @ref tf_tbl_type_alloc
+ * @ref tf_tbl_alloc
  *
- * @ref tf_tbl_type_free
+ * @ref tf_tbl_free
  *
- * @ref tf_tbl_type_alloc_search
+ * @ref tf_tbl_alloc_search
  *
- * @ref tf_tbl_type_set
+ * @ref tf_tbl_set
  *
- * @ref tf_tbl_type_get
+ * @ref tf_tbl_get
  */
 
 /**
- * Initializes the Table Type module with the requested DBs. Must be
+ * Initializes the Table module with the requested DBs. Must be
  * invoked as the first thing before any of the access functions.
  *
  * [in] tfp
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table configuration parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_bind(struct tf *tfp,
-		     struct tf_tbl_type_cfg_parms *parms);
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
@@ -216,7 +225,7 @@ int tf_tbl_type_bind(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_unbind(struct tf *tfp);
+int tf_tbl_unbind(struct tf *tfp);
 
 /**
  * Allocates the requested table type from the internal RM DB.
@@ -225,14 +234,14 @@ int tf_tbl_type_unbind(struct tf *tfp);
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table allocation parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc(struct tf *tfp,
-		      struct tf_tbl_type_alloc_parms *parms);
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
 
 /**
  * Frees the requested table type and returns it to the DB. If shadow
@@ -244,14 +253,14 @@ int tf_tbl_type_alloc(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_free(struct tf *tfp,
-		     struct tf_tbl_type_free_parms *parms);
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
 
 /**
  * Supported if Shadow DB is configured. Searches the Shadow DB for
@@ -269,8 +278,8 @@ int tf_tbl_type_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc_search(struct tf *tfp,
-			     struct tf_tbl_type_alloc_search_parms *parms);
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
 
 /**
  * Configures the requested element by sending a firmware request which
@@ -280,14 +289,14 @@ int tf_tbl_type_alloc_search(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table set parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_set(struct tf *tfp,
-		    struct tf_tbl_type_set_parms *parms);
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
 
 /**
  * Retrieves the requested element by sending a firmware request to get
@@ -297,13 +306,13 @@ int tf_tbl_type_set(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table get parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_get(struct tf *tfp,
-		    struct tf_tbl_type_get_parms *parms);
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
 
 #endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 1420c9ed5..68c25eb1b 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -20,16 +20,22 @@ struct tf_tcam_cfg_parms {
 	 * Number of tcam types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * TCAM configuration array
 	 */
-	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tcam_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 14/51] net/bnxt: support two-level priority for TCAMs
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (12 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 13/51] net/bnxt: update multi device design support Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
                     ` (37 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Venkat Duvvuru, Randy Schacher

From: Shahaji Bhosle <sbhosle@broadcom.com>

Allow TCAM indexes to be allocated from the top or the bottom of the
table. If the requested priority is 0, allocate from the lowest TCAM
indexes, i.e. from the top. For any other priority value, allocate
from the highest TCAM indexes, i.e. from the bottom.
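
For reference, the selection policy reduces to the sketch below. It is
illustrative only and not part of the patch: it assumes the driver's
bitalloc helpers (ba_alloc(), ba_alloc_index(), ba_inuse(), BA_FAIL,
BA_ENTRY_FREE) and the pool's size field behave as they are used in
tf_alloc_tcam_entry() further down, and the helper name
tcam_alloc_by_priority() is hypothetical.

/* Illustrative sketch only; assumes "bitalloc.h" and <errno.h>. */
static int tcam_alloc_by_priority(struct bitalloc *pool, uint32_t priority)
{
	int index;

	if (priority == 0) {
		/* Highest priority: take the lowest free index (the top). */
		index = ba_alloc(pool);
		return index == BA_FAIL ? -ENOMEM : index;
	}

	/* Other priorities: walk down to the highest free index (bottom). */
	for (index = pool->size - 1; index >= 0; index--) {
		if (ba_inuse(pool, index) == BA_ENTRY_FREE) {
			if (ba_alloc_index(pool, index) == BA_FAIL)
				return -ENOMEM;
			return index;
		}
	}

	return -ENOMEM;	/* pool exhausted */
}

With that convention, priority-0 requests occupy the entries the TCAM
matches first, while all other requests fill in from the low-priority end.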

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 36 ++++++++++++++++++++++++------
 drivers/net/bnxt/tf_core/tf_core.h |  4 +++-
 drivers/net/bnxt/tf_core/tf_em.c   |  6 ++---
 drivers/net/bnxt/tf_core/tf_tbl.c  |  2 +-
 4 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 81a88e211..eac57e7bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -893,7 +893,7 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
+	int index = 0;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
 
@@ -916,12 +916,34 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+	/*
+	 * priority  0: allocate from the top of the TCAM, i.e. highest priority
+	 * priority !0: allocate from the bottom of the TCAM, i.e. lowest priority
+	 */
+	if (parms->priority) {
+		for (index = session_pool->size - 1; index >= 0; index--) {
+			if (ba_inuse(session_pool,
+					  index) == BA_ENTRY_FREE) {
+				break;
+			}
+		}
+		if (ba_alloc_index(session_pool,
+				   index) == BA_FAIL) {
+			TFP_DRV_LOG(ERR,
+				    "%s: %s: ba_alloc index %d failed\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+				    index);
+			return -ENOMEM;
+		}
+	} else {
+		index = ba_alloc(session_pool);
+		if (index == BA_FAIL) {
+			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+			return -ENOMEM;
+		}
 	}
 
 	parms->idx = index;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 74ed24e5a..f1ef00b30 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -799,7 +799,9 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested (definition TBD)
+	 * [in] Priority of entry requested
+	 * 0: index from top i.e. highest priority first
+	 * !0: index from bottom i.e. lowest priority first
 	 */
 	uint32_t priority;
 	/**
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index fd1797e39..91cbc6299 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -479,8 +479,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG
-		   (ERR,
+		TFP_DRV_LOG(ERR,
 		   "dir:%d, EM entry index allocation failed\n",
 		   parms->dir);
 		return rc;
@@ -495,8 +494,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	PMD_DRV_LOG
-		   (ERR,
+	TFP_DRV_LOG(INFO,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 26313ed3c..4e236d56c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -1967,7 +1967,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
+		TFP_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 15/51] net/bnxt: add HCAPI interface support
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (13 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
                     ` (36 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

Add new hardware shim APIs to support multiple
device generations.
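
As a rough usage illustration of the generic field accessors introduced by
this shim (hcapi_cfa_put_field()/hcapi_cfa_get_field() in
hcapi_cfa_common.c below), the sketch packs a value into an object buffer
through a layout descriptor and reads it back. It is not part of the
patch: the layout and field initializers only touch the members the
accessors dereference (is_msb_order, array_sz, field_array[].bitpos and
.bitlen), struct hcapi_cfa_field and the accessor prototypes are assumed
to come from hcapi_cfa_defs.h, and cfa_field_roundtrip() is a hypothetical
helper.

#include <stdint.h>
#include "hcapi_cfa_defs.h"	/* assumed to declare the accessors */

/* Illustrative sketch only, not part of the patch. */
static int cfa_field_roundtrip(void)
{
	uint64_t obj[2] = { 0, 0 };
	uint64_t val = 0;
	struct hcapi_cfa_field fields[2] = {
		{ .bitpos = 0,  .bitlen = 12 },	/* field id 0 */
		{ .bitpos = 12, .bitlen = 20 },	/* field id 1 */
	};
	struct hcapi_cfa_layout layout = {
		.is_msb_order = false,
		.field_array = fields,
		.array_sz = 2,
	};

	/* Pack field 1, then read it back through the same layout. */
	if (hcapi_cfa_put_field(obj, &layout, 1, 0xABCDE))
		return -1;
	if (hcapi_cfa_get_field(obj, &layout, 1, &val))
		return -1;

	return val == 0xABCDE ? 0 : -1;	/* 0 on success */
}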

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/Makefile           |  10 +
 drivers/net/bnxt/hcapi/hcapi_cfa.h        | 271 +++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 +++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h   | 672 ++++++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     | 399 +++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h     | 451 +++++++++++++++
 drivers/net/bnxt/meson.build              |   2 +
 drivers/net/bnxt/tf_core/tf_em.c          |  28 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  94 +--
 drivers/net/bnxt/tf_core/tf_tbl.h         |  24 +-
 10 files changed, 1970 insertions(+), 73 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h

diff --git a/drivers/net/bnxt/hcapi/Makefile b/drivers/net/bnxt/hcapi/Makefile
new file mode 100644
index 000000000..65cddd789
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/Makefile
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019-2020 Broadcom Limited.
+# All rights reserved.
+
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += hcapi/hcapi_cfa_p4.c
+
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_defs.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_p4.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/cfa_p40_hw.h
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
new file mode 100644
index 000000000..f60af4e56
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -0,0 +1,271 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _HCAPI_CFA_H_
+#define _HCAPI_CFA_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#include "hcapi_cfa_defs.h"
+
+#define SUPPORT_CFA_HW_P4  1
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL  1
+#endif
+
+/**
+ * Index used for the sram_entries field
+ */
+enum hcapi_cfa_resc_type_sram {
+	HCAPI_CFA_RESC_TYPE_SRAM_FULL_ACTION,
+	HCAPI_CFA_RESC_TYPE_SRAM_MCG,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_8B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_16B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	HCAPI_CFA_RESC_TYPE_SRAM_COUNTER_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_SPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_DPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_S_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_D_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_MAX
+};
+
+/**
+ * Index used for the hw_entries field in struct cfa_rm_db
+ */
+enum hcapi_cfa_resc_type_hw {
+	/* common HW resources for all chip variants */
+	HCAPI_CFA_RESC_TYPE_HW_L2_CTXT_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_EM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_EM_REC,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_METER_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_METER_INST,
+	HCAPI_CFA_RESC_TYPE_HW_MIRROR,
+	HCAPI_CFA_RESC_TYPE_HW_UPAR,
+	/* Wh+/SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_SP_TCAM,
+	/* Thor, SR2 common HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_FKB,
+	/* SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_TBL_SCOPE,
+	HCAPI_CFA_RESC_TYPE_HW_L2_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH0,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH1,
+	HCAPI_CFA_RESC_TYPE_HW_METADATA,
+	HCAPI_CFA_RESC_TYPE_HW_CT_STATE,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_LAG_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_MAX
+};
+
+struct hcapi_cfa_key_result {
+	uint64_t bucket_mem_ptr;
+	uint8_t bucket_idx;
+};
+
+/* common CFA register access macros */
+#define CFA_REG(x)		OFFSETOF(cfa_reg_t, cfa_##x)
+
+#ifndef REG_WR
+#define REG_WR(_p, x, y)  (*((uint32_t volatile *)(x)) = (y))
+#endif
+#ifndef REG_RD
+#define REG_RD(_p, x)  (*((uint32_t volatile *)(x)))
+#endif
+#define CFA_REG_RD(_p, x)	\
+	REG_RD(0, (uint32_t)(_p)->base_addr + CFA_REG(x))
+#define CFA_REG_WR(_p, x, y)	\
+	REG_WR(0, (uint32_t)(_p)->base_addr + CFA_REG(x), y)
+
+
+/* Constants used by Resource Manager Registration*/
+#define RM_CLIENT_NAME_MAX_LEN          32
+
+/**
+ *  Resource Manager Data Structures used for resource requests
+ */
+struct hcapi_cfa_resc_req_entry {
+	uint16_t min;
+	uint16_t max;
+};
+
+struct hcapi_cfa_resc_req {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_req_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of
+	 * resources in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_req_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_req_db {
+	struct hcapi_cfa_resc_req rx;
+	struct hcapi_cfa_resc_req tx;
+};
+
+struct hcapi_cfa_resc_entry {
+	uint16_t start;
+	uint16_t stride;
+	uint16_t tag;
+};
+
+struct hcapi_cfa_resc {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of resources
+	 * in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_db {
+	struct hcapi_cfa_resc rx;
+	struct hcapi_cfa_resc tx;
+};
+
+/**
+ * This is the main data structure used by the CFA Resource
+ * Manager.  This data structure holds all the state and table
+ * management information.
+ */
+typedef struct hcapi_cfa_rm_data {
+	uint32_t dummy_data;
+} hcapi_cfa_rm_data_t;
+
+/* End RM support */
+
+struct hcapi_cfa_devops;
+
+struct hcapi_cfa_devinfo {
+	uint8_t global_cfg_data[CFA_GLOBAL_CFG_DATA_SZ];
+	struct hcapi_cfa_layout_tbl layouts;
+	struct hcapi_cfa_devops *devops;
+};
+
+int hcapi_cfa_dev_bind(enum hcapi_cfa_ver hw_ver,
+		       struct hcapi_cfa_devinfo *dev_info);
+
+int hcapi_cfa_key_compile_layout(struct hcapi_cfa_key_template *key_template,
+				 struct hcapi_cfa_key_layout *key_layout);
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data, uint16_t bitlen);
+int
+hcapi_cfa_action_compile_layout(struct hcapi_cfa_action_template *act_template,
+				struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_init_obj(uint64_t *act_obj,
+			      struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_compute_ptr(uint64_t *act_obj,
+				 struct hcapi_cfa_action_layout *act_layout,
+				 uint32_t base_ptr);
+
+int hcapi_cfa_action_hw_op(struct hcapi_cfa_hwop *op,
+			   uint8_t *act_tbl,
+			   struct hcapi_cfa_data *act_obj);
+int hcapi_cfa_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_rm_register_client(hcapi_cfa_rm_data_t *data,
+				 const char *client_name,
+				 int *client_id);
+int hcapi_cfa_rm_unregister_client(hcapi_cfa_rm_data_t *data,
+				   int client_id);
+int hcapi_cfa_rm_query_resources(hcapi_cfa_rm_data_t *data,
+				 int client_id,
+				 uint16_t chnl_id,
+				 struct hcapi_cfa_resc_req_db *req_db);
+int hcapi_cfa_rm_query_resources_one(hcapi_cfa_rm_data_t *data,
+				     int clien_id,
+				     struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_reserve_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_release_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_initialize(hcapi_cfa_rm_data_t *data);
+
+#if SUPPORT_CFA_HW_P4
+
+int hcapi_cfa_p4_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxt_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxtrmp_hwop(struct hcapi_cfa_hwop *op,
+				      struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcam_hwop(struct hcapi_cfa_hwop *op,
+				 struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcamrmp_hwop(struct hcapi_cfa_hwop *op,
+				    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
+			       struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+#endif /* SUPPORT_CFA_HW_P4 */
+/**
+ *  HCAPI CFA device HW operation function callback definition
+ *  This is standardized function callback hook to install different
+ *  CFA HW table programming function callback.
+ */
+
+struct hcapi_cfa_tbl_cb {
+	/**
+	 * This function callback provides the functionality to read/write
+	 * HW table entry from a HW table.
+	 *
+	 * @param[in] op
+	 *   A pointer to the Hardware operation parameter
+	 *
+	 * @param[in] obj_data
+	 *   A pointer to the HW data object for the hardware operation
+	 *
+	 * @return
+	 *   0 for SUCCESS, negative value for FAILURE
+	 */
+	int (*hwop_cb)(struct hcapi_cfa_hwop *op,
+		       struct hcapi_cfa_data *obj_data);
+};
+
+#endif  /* HCAPI_CFA_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
new file mode 100644
index 000000000..39afd4dbc
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
@@ -0,0 +1,92 @@
+/*
+ *   Copyright(c) 2019-2020 Broadcom Limited.
+ *   All rights reserved.
+ */
+
+#include "bitstring.h"
+#include "hcapi_cfa_defs.h"
+#include <errno.h>
+#include "assert.h"
+
+/* HCAPI CFA common PUT APIs */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val)
+{
+	assert(layout);
+
+	if (field_id >= layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		bs_put_msb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	else
+		bs_put_lsb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	return 0;
+}
+
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz)
+{
+	int i;
+	uint16_t bitpos;
+	uint8_t bitlen;
+	uint16_t field_id;
+
+	assert(layout);
+	assert(field_tbl);
+
+	if (layout->is_msb_order) {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id >= layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_msb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	} else {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id >= layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_lsb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	}
+	return 0;
+}
+
+/* HCAPI CFA common GET APIs */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id,
+			uint64_t *val)
+{
+	assert(layout);
+	assert(val);
+
+	if (field_id >= layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		*val = bs_get_msb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	else
+		*val = bs_get_lsb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	return 0;
+}
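For illustration, a minimal usage sketch of the PUT/GET helpers above; the
two-field layout is made up for the example, and only the hcapi_cfa_field and
hcapi_cfa_layout types plus the helper signatures come from this patch:

/* Two 8-bit fields packed LSB-first into one 64-bit object. */
static const struct hcapi_cfa_field demo_fields[] = {
	{ .bitpos = 0, .bitlen = 8 },	/* field id 0 */
	{ .bitpos = 8, .bitlen = 8 },	/* field id 1 */
};

static const struct hcapi_cfa_layout demo_layout = {
	.is_msb_order = false,
	.total_sz_in_bits = 16,
	.field_array = demo_fields,
	.array_sz = 2,
};

static void demo_put_get(void)
{
	uint64_t obj = 0;
	uint64_t val = 0;

	hcapi_cfa_put_field(&obj, &demo_layout, 1, 0xAB); /* write field 1 */
	hcapi_cfa_get_field(&obj, &demo_layout, 1, &val); /* val is now 0xAB */
}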
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
new file mode 100644
index 000000000..ea8d99d01
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -0,0 +1,672 @@
+
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+/*!
+ *   \file
+ *   \brief Exported functions for CFA HW programming
+ */
+#ifndef _HCAPI_CFA_DEFS_H_
+#define _HCAPI_CFA_DEFS_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#define SUPPORT_CFA_HW_ALL 0
+#define SUPPORT_CFA_HW_P4  1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+
+#define CFA_BITS_PER_BYTE (8)
+#define __CFA_ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
+#define CFA_ALIGN(x, a) __CFA_ALIGN_MASK(x, (a) - 1)
+#define CFA_ALIGN_128(x) CFA_ALIGN(x, 128)
+#define CFA_ALIGN_32(x) CFA_ALIGN(x, 32)
+
+#define NUM_WORDS_ALIGN_32BIT(x)                                               \
+	(CFA_ALIGN_32(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+#define NUM_WORDS_ALIGN_128BIT(x)                                              \
+	(CFA_ALIGN_128(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+
+#define CFA_GLOBAL_CFG_DATA_SZ (100)
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL (1)
+#endif
+
+#include "hcapi_cfa_p4.h"
+#define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+#define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
+#define CFA_PROF_MAX_KEY_CFG_SZ sizeof(struct cfa_p4_prof_key_cfg)
+#define CFA_KEY_MAX_FIELD_CNT 41
+#define CFA_ACT_MAX_TEMPLATE_SZ sizeof(struct cfa_p4_action_template)
+
+/**
+ * CFA HW version definition
+ */
+enum hcapi_cfa_ver {
+	HCAPI_CFA_P40 = 0, /**< CFA phase 4.0 */
+	HCAPI_CFA_P45 = 1, /**< CFA phase 4.5 */
+	HCAPI_CFA_P58 = 2, /**< CFA phase 5.8 */
+	HCAPI_CFA_P59 = 3, /**< CFA phase 5.9 */
+	HCAPI_CFA_PMAX = 4
+};
+
+/**
+ * CFA direction definition
+ */
+enum hcapi_cfa_dir {
+	HCAPI_CFA_DIR_RX = 0, /**< Receive */
+	HCAPI_CFA_DIR_TX = 1, /**< Transmit */
+	HCAPI_CFA_DIR_MAX = 2
+};
+
+/**
+ * CFA HW OPCODE definition
+ */
+enum hcapi_cfa_hwops {
+	HCAPI_CFA_HWOPS_PUT, /**< Write to HW operation */
+	HCAPI_CFA_HWOPS_GET, /**< Read from HW operation */
+	HCAPI_CFA_HWOPS_ADD, /**< Used for operations which require more than
+			      * a simple write to HW. Unlike the PUT op, an
+			      * ADD is paired with a later HCAPI_CFA_HWOPS_DEL
+			      * op, which removes whatever the ADD op
+			      * installed.
+			      */
+	HCAPI_CFA_HWOPS_DEL, /**< Issues operations to clear the hardware.
+			      * Used in conjunction with the
+			      * HCAPI_CFA_HWOPS_ADD op as the way to
+			      * undo/clear the ADD op.
+			      */
+	HCAPI_CFA_HWOPS_MAX
+};
+
+/**
+ * CFA HW KEY CONTROL OPCODE definition
+ */
+enum hcapi_cfa_key_ctrlops {
+	HCAPI_CFA_KEY_CTRLOPS_INSERT, /**< insert control bits */
+	HCAPI_CFA_KEY_CTRLOPS_STRIP, /**< strip control bits */
+	HCAPI_CFA_KEY_CTRLOPS_MAX
+};
+
+/**
+ * CFA HW field structure definition
+ */
+struct hcapi_cfa_field {
+	/** [in] Starting bit position of the HW field within a HW table
+	 *  entry.
+	 */
+	uint16_t bitpos;
+	/** [in] Number of bits for the HW field. */
+	uint8_t bitlen;
+};
+
+/**
+ * CFA HW table entry layout structure definition
+ */
+struct hcapi_cfa_layout {
+	/** [out] Bit order of layout */
+	bool is_msb_order;
+	/** [out] Size in bits of entry */
+	uint32_t total_sz_in_bits;
+	/** [out] data pointer of the HW layout fields array */
+	const struct hcapi_cfa_field *field_array;
+	/** [out] number of HW field entries in the HW layout field array */
+	uint32_t array_sz;
+};
+
+/**
+ * CFA HW data object definition
+ */
+struct hcapi_cfa_data_obj {
+	/** [in] HW field identifier. Used as an index to a HW table layout */
+	uint16_t field_id;
+	/** [in] Value of the HW field */
+	uint64_t val;
+};
+
+/**
+ * CFA HW definition
+ */
+struct hcapi_cfa_hw {
+	/** [in] HW table base address for the operation with optional device
+	 *  handle. For on-chip HW table operation, this is either the TX
+	 *  or RX CFA HW base address. For off-chip table, this field is the
+	 *  base memory address of the off-chip table.
+	 */
+	uint64_t base_addr;
+	/** [in] Optional opaque device handle. It is generally used to access
+	 *  a GRC register space through a PCIe BAR and is passed to the BAR memory
+	 *  accessor routine.
+	 */
+	void *handle;
+};
+
+/**
+ * CFA HW operation definition
+ *
+ */
+struct hcapi_cfa_hwop {
+	/** [in] HW opcode */
+	enum hcapi_cfa_hwops opcode;
+	/** [in] CFA HW information used by accessor routines.
+	 */
+	struct hcapi_cfa_hw hw;
+};
+
+/**
+ * CFA HW data structure definition
+ */
+struct hcapi_cfa_data {
+	/** [in] physical offset to the HW table for the data to be
+	 *  written to.  If this is an array of registers, this is the
+	 *  index into the array of registers.  For writing keys, this
+	 *  is the byte offset into the memory where the key should be
+	 *  written.
+	 */
+	union {
+		uint32_t index;
+		uint32_t byte_offset;
+	} u;
+	/** [in] HW data buffer pointer */
+	uint8_t *data;
+	/** [in] HW data mask buffer pointer */
+	uint8_t *data_mask;
+	/** [in] size of the HW data buffer in bytes */
+	uint16_t data_sz;
+};
+
+/*********************** Truflow start ***************************/
+enum hcapi_cfa_pg_tbl_lvl {
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
+};
+
+enum hcapi_cfa_em_table_type {
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
+};
+
+struct hcapi_cfa_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct hcapi_cfa_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct hcapi_cfa_em_page_tbl    pg_tbl[TF_PT_LVL_MAX];
+};
+
+struct hcapi_cfa_em_ctx_mem_info {
+	struct hcapi_cfa_em_table		em_tables[TF_MAX_TABLE];
+};
+
+/*********************** Truflow end ****************************/
+
+/**
+ * CFA HW key table definition
+ *
+ * Applicable to EEM and off-chip EM table only.
+ */
+struct hcapi_cfa_key_tbl {
+	/** [in] For EEM, this is the KEY0 base mem pointer. For off-chip EM,
+	 *  this is the base mem pointer of the key table.
+	 */
+	uint8_t *base0;
+	/** [in] total size of the key table in bytes. For EEM, this size is
+	 *  the same for both the KEY0 and KEY1 tables.
+	 */
+	uint32_t size;
+	/** [in] number of key buckets, applicable for newer chips */
+	uint32_t num_buckets;
+	/** [in] For EEM, this is the KEY1 base mem pointer. For off-chip EM,
+	 *  this is the key record memory base pointer within the key table,
+	 *  applicable for newer chips.
+	 */
+	uint8_t *base1;
+};
+
+/**
+ * CFA HW key buffer definition
+ */
+struct hcapi_cfa_key_obj {
+	/** [in] pointer to the key data buffer */
+	uint32_t *data;
+	/** [in] buffer len in bits */
+	uint32_t len;
+	/** [in] Pointer to the key layout */
+	struct hcapi_cfa_key_layout *layout;
+};
+
+/**
+ * CFA HW key data definition
+ */
+struct hcapi_cfa_key_data {
+	/** [in] For an on-chip key table, this is the offset in units of the
+	 *  smallest key. For an off-chip key table, it is the byte offset
+	 *  relative to the key record memory base.
+	 */
+	uint32_t offset;
+	/** [in] HW key data buffer pointer */
+	uint8_t *data;
+	/** [in] size of the key in bytes */
+	uint16_t size;
+};
+
+/**
+ * CFA HW key location definition
+ */
+struct hcapi_cfa_key_loc {
+	/** [out] on-chip EM bucket offset or off-chip EM bucket mem pointer */
+	uint64_t bucket_mem_ptr;
+	/** [out] index within the EM bucket */
+	uint8_t bucket_idx;
+};
+
+/**
+ * CFA HW layout table definition
+ */
+struct hcapi_cfa_layout_tbl {
+	/** [out] data pointer to an array of fixed-format layouts supported.
+	 *  The index to the array is the CFA HW table ID
+	 */
+	const struct hcapi_cfa_layout *tbl;
+	/** [out] number of fixed-format layouts in the layout array */
+	uint16_t num_layouts;
+};
+
+/**
+ * Key template consists of key fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_key_template {
+	/** [in] key field enable array; set the corresponding
+	 *  entry to 1 to make a field valid
+	 */
+	uint8_t field_en[CFA_KEY_MAX_FIELD_CNT];
+	/** [in] Identifies if the key template is for TCAM. If false,
+	 *  the key template is for EM. This field is mandatory for devices
+	 *  that only support fixed key formats.
+	 */
+	bool is_wc_tcam_key;
+};
+
+/**
+ * A key layout consists of a field array, key bitlen, key ID, and other
+ * metadata pertaining to a key
+ */
+struct hcapi_cfa_key_layout {
+	/** [out] key layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual key size in number of bits */
+	uint16_t bitlen;
+	/** [out] key identifier; this field is only valid for devices
+	 *  that support fixed key formats
+	 */
+	uint16_t id;
+	/** [out] Identifies whether the key layout is a WC TCAM key */
+	bool is_wc_tcam_key;
+	/** [out] total slices size, valid for WC TCAM key only. It can be
+	 *  used by the user to determine the total size of WC TCAM key slices
+	 *  in bytes.
+	 */
+	uint16_t slices_size;
+};
+
+/**
+ * key layout memory contents
+ */
+struct hcapi_cfa_key_layout_contents {
+	/** key layouts */
+	struct hcapi_cfa_key_layout key_layout;
+
+	/** layout */
+	struct hcapi_cfa_layout layout;
+
+	/** fields */
+	struct hcapi_cfa_field field_array[CFA_KEY_MAX_FIELD_CNT];
+};
+
+/**
+ * Action template consists of action fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_action_template {
+	/** [in] CFA version for the action template */
+	enum hcapi_cfa_ver hw_ver;
+	/** [in] action field enable array; set the corresponding
+	 *  entry to 1 to make a field valid
+	 */
+	uint8_t data[CFA_ACT_MAX_TEMPLATE_SZ];
+};
+
+/**
+ * An action layout consists of a field array, action wordlen, and action format ID
+ */
+struct hcapi_cfa_action_layout {
+	/** [in] action identifier */
+	uint16_t id;
+	/** [out] action layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual action record size in number of bits */
+	uint16_t wordlen;
+};
+
+/**
+ *  \defgroup CFA_HCAPI_PUT_API
+ *  HCAPI used for writing to the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to program a specified value to a
+ * HW field based on the provided programming layout.
+ *
+ * @param[in,out] data_buf
+ *   A data pointer to CFA HW key/mask data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be programmed
+ *
+ * @param[in] val
+ *   Value of the HW field to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val);
+
+/**
+ * This API provides the functionality to program an array of field values
+ * with corresponding field IDs to a number of profiler sub-block fields
+ * based on the fixed profiler sub-block hardware programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * This API provides the functionality to write a value to a
+ * field within the bit position and bit length of a HW data
+ * object based on a provided programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to the HW data object to be written
+ *
+ * @param[in] layout
+ *   A pointer of the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[in] val
+ *   HW field value to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t val);
+
+/*@}*/
+
+/**
+ *  \defgroup CFA_HCAPI_GET_API
+ *  HCAPI used for reading from the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to get the word length of
+ * a layout object.
+ *
+ * @param[in] layout
+ *   A pointer of the HW layout
+ *
+ * @return
+ *   Word length of the layout object
+ */
+uint16_t hcapi_cfa_get_wordlen(const struct hcapi_cfa_layout *layout);
+
+/**
+ * The API provides the functionality to get bit offset and bit
+ * length information of a field from a programming layout.
+ *
+ * @param[in] layout
+ *   A pointer to the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[out] slice
+ *   A pointer to the returned field bit position/length info
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_slice(const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, struct hcapi_cfa_field *slice);
+
+/**
+ * This API provides the functionality to read the value of a
+ * CFA HW field from CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA HW key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be read
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t *val);
+
+/**
+ * This API provides the functionality to read a number of
+ * HW fields from a CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in, out] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * Get a value from a specific location relative to a HW field
+ *
+ * This API provides the functionality to read HW field from
+ * a section of a HW data object identified by the bit position
+ * and bit length from a given programming layout in order to avoid
+ * reading the entire HW data object.
+ *
+ * @param[in] obj_data
+ *   A pointer of the data object to read from
+ *
+ * @param[in] layout
+ *   A pointer of the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t *val);
+
+/**
+ * This function is used to initialize a layout_contents structure
+ *
+ * The struct hcapi_cfa_key_layout_contents is complex as there are
+ * three layers of abstraction.  Each of those layers needs to be
+ * properly initialized.
+ *
+ * @param[in] layout_contents
+ *  A pointer of the layout contents to initialize
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int
+hcapi_cfa_init_key_layout_contents(struct hcapi_cfa_key_layout_contents *cont);
+
+/**
+ * This function is used to validate a key template
+ *
+ * The struct hcapi_cfa_key_template is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_template
+ *  A pointer of the key template contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int
+hcapi_cfa_is_valid_key_template(struct hcapi_cfa_key_template *key_template);
+
+/**
+ * This function is used to validate a key layout
+ *
+ * The struct hcapi_cfa_key_layout is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_layout
+ *  A pointer of the key layout contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_is_valid_key_layout(struct hcapi_cfa_key_layout *key_layout);
+
+/**
+ * This function is used to hash EM/EEM keys
+ *
+ * @param[in] key_data
+ *  A pointer of the key
+ *
+ * @param[in] bitlen
+ *  Number of bits in the key
+ *
+ * @return
+ *   CRC32 and Lookup3 hashes of the input key
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen);
+
+/**
+ * This function is used to execute an operation
+ *
+ *
+ * @param[in] op
+ *  Operation
+ *
+ * @param[in] key_tbl
+ *  Table
+ *
+ * @param[in] key_obj
+ *  Key data
+ *
+ * @param[in] key_loc
+ *  Key location
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc);
+
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset);
+#endif /* _HCAPI_CFA_DEFS_H_ */
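For illustration, a sketch of how the key hash declared above is typically
consumed: the 64-bit result is split into its CRC32 (KEY0) and Lookup3 (KEY1)
halves and masked down to table indices, mirroring what tf_insert_eem_entry()
does later in this patch. The key buffer, bit length and mask below are
placeholders:

static void demo_key_hash(void)
{
	uint64_t key[8] = { 0 };	/* placeholder key buffer */
	uint32_t mask = 0xFFFF;		/* placeholder: table size - 1 */
	uint64_t hash;
	uint32_t key0_index, key1_index;

	hash = hcapi_cfa_key_hash(key, 448);		/* key length in bits */
	key0_index = (uint32_t)(hash >> 32) & mask;	/* CRC32 half -> KEY0 */
	key1_index = (uint32_t)hash & mask;		/* Lookup3 half -> KEY1 */
	(void)key0_index;
	(void)key1_index;
}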
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
new file mode 100644
index 000000000..ca0b1c923
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <string.h>
+#include "lookup3.h"
+#include "rand.h"
+
+#include "hcapi_cfa_defs.h"
+
+#define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
+#define TF_EM_PAGE_SIZE (1 << 21)
+uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
+uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
+bool hcapi_cfa_lkup_init;
+
+static inline uint32_t SWAP_WORDS32(uint32_t val32)
+{
+	return (((val32 & 0x0000ffff) << 16) |
+		((val32 & 0xffff0000) >> 16));
+}
+
+static void hcapi_cfa_seeds_init(void)
+{
+	int i;
+	uint32_t r;
+
+	if (hcapi_cfa_lkup_init)
+		return;
+
+	hcapi_cfa_lkup_init = true;
+
+	/* Initialize the lfsr */
+	rand_init();
+
+	/* RX and TX use the same seed values */
+	hcapi_cfa_lkup_lkup3_init_cfg = SWAP_WORDS32(rand32());
+
+	for (i = 0; i < HCAPI_CFA_LKUP_SEED_MEM_SIZE / 2; i++) {
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2] = r;
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2 + 1] = (r & 0x1);
+	}
+}
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t hcapi_cfa_crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "CRC2:");
+#endif
+	for (l = (len - 1); l >= 0; l--) {
+		crc = ucrc32(buf[l], crc);
+#ifdef TF_EEM_DEBUG
+		TFP_DRV_LOG(DEBUG,
+			    "%02X %08X %08X\n",
+			    (buf[l] & 0xff),
+			    crc,
+			    ~crc);
+#endif
+	}
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "\n");
+#endif
+
+	return ~crc;
+}
+
+static uint32_t hcapi_cfa_crc32_hash(uint8_t *key)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* Do byte-wise XOR of the 52-byte HASH key first. */
+	index = *key;
+	kptr--;
+
+	for (i = CFA_P4_EEM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = hcapi_cfa_lkup_em_seed_mem[index * 2];
+	val2 = hcapi_cfa_lkup_em_seed_mem[index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	val1 = hcapi_cfa_crc32i(~val1,
+		      (key - (CFA_P4_EEM_KEY_MAX_SIZE - 1)),
+		      CFA_P4_EEM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((const uint32_t *)(uintptr_t *)in_key) + 1,
+			 CFA_P4_EEM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 hcapi_cfa_lkup_lkup3_init_cfg);
+
+	return val1;
+}
+
+
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	uint64_t addr;
+
+	if (mem == NULL)
+		return 0;
+
+	/*
+	 * Use the level according to the num_level of page table
+	 */
+	level = mem->num_lvl - 1;
+
+	addr = (uintptr_t)mem->pg_tbl[level].pg_va_tbl[page];
+
+	return addr;
+}
+
+/** Approximation of HCAPI hcapi_cfa_key_hash()
+ *
+ * Return: 64-bit hash, with the CRC32 (KEY0) hash in the upper 32 bits
+ * and the Lookup3 (KEY1) hash in the lower 32 bits.
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen)
+{
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+
+	/*
+	 * Init the seeds if needed
+	 */
+	if (!hcapi_cfa_lkup_init)
+		hcapi_cfa_seeds_init();
+
+	key0_hash = hcapi_cfa_crc32_hash(((uint8_t *)key_data) +
+					      (bitlen / 8) - 1);
+
+	key1_hash = hcapi_cfa_lookup3_hash((uint8_t *)key_data);
+
+	return ((uint64_t)key0_hash) << 32 | (uint64_t)key1_hash;
+}
+
+static int hcapi_cfa_key_hw_op_put(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_get(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy(key_obj->data,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_add(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Is entry free?
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If this entry is valid then report failure
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT))
+		return -1;
+
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_del(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Read entry
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If this is not a valid entry then report failure.
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT)) {
+		/*
+		 * If a key has been provided then verify the key matches
+		 * before deleting the entry.
+		 */
+		if (key_obj->data != NULL) {
+			if (memcmp(&table_entry,
+				   key_obj->data,
+				   key_obj->size) != 0)
+				return -1;
+		}
+	} else {
+		return -1;
+	}
+
+
+	/*
+	 * Delete entry
+	 */
+	memset((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       0,
+	       key_obj->size);
+
+	return rc;
+}
+
+
+/** Approximation of hcapi_cfa_key_hw_op()
+ *
+ *
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc)
+{
+	int rc = 0;
+
+	if (op == NULL ||
+	    key_tbl == NULL ||
+	    key_obj == NULL ||
+	    key_loc == NULL)
+		return -1;
+
+	op->hw.base_addr =
+		hcapi_get_table_page((struct hcapi_cfa_em_table *)
+				     key_tbl->base0,
+				     key_obj->offset);
+
+	if (op->hw.base_addr == 0)
+		return -1;
+
+	switch (op->opcode) {
+	case HCAPI_CFA_HWOPS_PUT: /**< Write to HW operation */
+		rc = hcapi_cfa_key_hw_op_put(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_GET: /**< Read from HW operation */
+		rc = hcapi_cfa_key_hw_op_get(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_ADD:
+		/**< Used for operations which require more than
+		 * a simple write to HW. An ADD is paired with a
+		 * later HCAPI_CFA_HWOPS_DEL op, which removes
+		 * whatever the ADD op installed.
+		 */
+
+		rc = hcapi_cfa_key_hw_op_add(op, key_obj);
+
+		break;
+	case HCAPI_CFA_HWOPS_DEL:
+		rc = hcapi_cfa_key_hw_op_del(op, key_obj);
+		break;
+	default:
+		rc = -1;
+		break;
+	}
+
+	return rc;
+}
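For illustration, a sketch of how the approximation above might be driven for
an insert; the EM table pointer, entry buffer, offset and size are
placeholders that the table scope code in tf_em.c/tf_tbl.c would normally
supply:

static int demo_eem_insert(struct hcapi_cfa_em_table *key_table,
			   uint8_t *entry, uint32_t offset, uint16_t size)
{
	struct hcapi_cfa_hwop op = { .opcode = HCAPI_CFA_HWOPS_ADD };
	struct hcapi_cfa_key_tbl key_tbl = {
		/* base0 carries the EM table pointer in this approximation */
		.base0 = (uint8_t *)key_table,
	};
	struct hcapi_cfa_key_data key_obj = {
		.offset = offset,
		.data = entry,
		.size = size,
	};
	struct hcapi_cfa_key_loc key_loc = { 0 };

	/* Fails if the target slot already holds a valid entry. */
	return hcapi_cfa_key_hw_op(&op, &key_tbl, &key_obj, &key_loc);
}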
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
new file mode 100644
index 000000000..0661d6363
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -0,0 +1,451 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _HCAPI_CFA_P4_H_
+#define _HCAPI_CFA_P4_H_
+
+#include "cfa_p40_hw.h"
+
+/** CFA phase 4 fixed-format table (layout) ID definition
+ *
+ */
+enum cfa_p4_tbl_id {
+	CFA_P4_TBL_L2CTXT_TCAM = 0,
+	CFA_P4_TBL_L2CTXT_REMAP,
+	CFA_P4_TBL_PROF_TCAM,
+	CFA_P4_TBL_PROF_TCAM_REMAP,
+	CFA_P4_TBL_WC_TCAM,
+	CFA_P4_TBL_WC_TCAM_REC,
+	CFA_P4_TBL_WC_TCAM_REMAP,
+	CFA_P4_TBL_VEB_TCAM,
+	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_MAX
+};
+
+#define CFA_P4_PROF_MAX_KEYS 4
+enum cfa_p4_mac_sel_mode {
+	CFA_P4_MAC_SEL_MODE_FIRST = 0,
+	CFA_P4_MAC_SEL_MODE_LOWEST = 1,
+};
+
+struct cfa_p4_prof_key_cfg {
+	uint8_t mac_sel[CFA_P4_PROF_MAX_KEYS];
+#define CFA_P4_PROF_MAC_SEL_DMAC0 (1 << 0)
+#define CFA_P4_PROF_MAC_SEL_T_MAC0 (1 << 1)
+#define CFA_P4_PROF_MAC_SEL_OUTERMOST_MAC0 (1 << 2)
+#define CFA_P4_PROF_MAC_SEL_DMAC1 (1 << 3)
+#define CFA_P4_PROF_MAC_SEL_T_MAC1 (1 << 4)
+#define CFA_P4_PROF_MAC_OUTERMOST_MAC1 (1 << 5)
+	uint8_t pass_cnt;
+	enum cfa_p4_mac_sel_mode mode;
+};
+
+/**
+ * CFA action layout definition
+ */
+
+#define CFA_P4_ACTION_MAX_LAYOUT_SIZE 184
+
+/**
+ * Action object template structure
+ *
+ * Template structure presents data fields that are necessary to know
+ * at the beginning of Action Builder (AB) processing. Like before the
+ * AB compilation. One such example could be a template that is
+ * flexible in size (Encap Record) and the presence of these fields
+ * allows for determining the template size as well as where the
+ * fields are located in the record.
+ *
+ * The template may also present fields that are not made visible to
+ * the caller by way of the action fields.
+ *
+ * Template fields also allow for additional checking on user visible
+ * fields. One such example could be the encap pointer behavior on a
+ * CFA_P4_ACT_OBJ_TYPE_ACT or CFA_P4_ACT_OBJ_TYPE_ACT_SRAM.
+ */
+struct cfa_p4_action_template {
+	/** Action Object type
+	 *
+	 * Controls the type of the Action Template
+	 */
+	enum {
+		/** Select this type to build an Action Record Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT,
+		/** Select this type to build an Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT,
+		/** Select this type to build a SRAM Action Record
+		 * Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT_SRAM,
+		/** Select this type to build a SRAM Action
+		 * Encapsulation Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ENCAP_SRAM,
+		/** Select this type to build a SRAM Action Modify
+		 * Object, with IPv4 capability.
+		 */
+		/* In case of Stingray the term Modify is used for the 'NAT
+		 * action'. Action builder is leveraged to fill in the NAT
+		 * object which then can be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_MODIFY_IPV4_SRAM,
+		/** Select this type to build a SRAM Action Source
+		 * Property Object.
+		 */
+		/* In case of Stingray this is not a 'pure' action record.
+		 * Action builder is leveraged to fill in the Source Property
+		 * object which can then be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM,
+		/** Select this type to build a SRAM Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT_SRAM,
+	} obj_type;
+
+	/** Action Control
+	 *
+	 * Controls the internals of the Action Template
+	 *
+	 * act is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_ACT)
+	 */
+	/*
+	 * Stat and encap are always inline for EEM as table scope
+	 * allocation does not allow for separate Stats allocation,
+	 * but has the xx_inline flags as to be forward compatible
+	 * with Stingray 2, always treated as TRUE.
+	 */
+	struct {
+		/** Set to CFA_HCAPI_TRUE to enable statistics
+		 */
+		uint8_t stat_enable;
+		/** Set to CFA_HCAPI_TRUE to enable statistics to be inlined
+		 */
+		uint8_t stat_inline;
+
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation
+		 */
+		uint8_t encap_enable;
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation to be inlined
+		 */
+		uint8_t encap_inline;
+	} act;
+
+	/** Modify Setting
+	 *
+	 * Controls the type of the Modify Action the template is
+	 * describing
+	 *
+	 * modify is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_MODIFY_SRAM)
+	 */
+	enum {
+		/** Set to enable Modify of Source IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_SOURCE_IPV4 = 0,
+		/** Set to enable Modify of Destination IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_DEST_IPV4
+	} modify;
+
+	/** Encap Control
+	 * Controls the type of encapsulation the template is
+	 * describing
+	 *
+	 * encap is valid when:
+	 * ((obj_type == CFA_P4_ACT_OBJ_TYPE_ACT) &&
+	 *   act.encap_enable) ||
+	 * ((obj_type == CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM)
+	 */
+	struct {
+		/* Direction is required as Stingray Encap on RX is
+		 * limited to l2 and VTAG only.
+		 */
+		/** Receive or Transmit direction
+		 */
+		uint8_t direction;
+		/** Set to CFA_HCAPI_TRUE to enable L2 capability in the
+		 *  template
+		 */
+		uint8_t l2_enable;
+		/** vtag controls the Encap Vector - VTAG Encoding, 4 bits
+		 *
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_0, default, no VLAN
+		 *      Tags applied
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_1, adds capability to
+		 *      set 1 VLAN Tag. Action Template compile adds
+		 *      the following field to the action object
+		 *      ::TF_ER_VLAN1
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_2, adds capability to
+		 *      set 2 VLAN Tags. Action Template compile adds
+		 *      the following fields to the action object
+		 *      ::TF_ER_VLAN1 and ::TF_ER_VLAN2
+		 * </ul>
+		 */
+		enum { CFA_P4_ACT_ENCAP_VTAGS_PUSH_0 = 0,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_1,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_2 } vtag;
+
+		/*
+		 * The remaining fields are NOT supported when
+		 * direction is RX and ((obj_type ==
+		 * CFA_P4_ACT_OBJ_TYPE_ACT) && act.encap_enable).
+		 * ab_compile_layout will perform the checking and
+		 * skip remaining fields.
+		 */
+		/** L3 Encap controls the Encap Vector - L3 Encoding,
+		 *  3 bits. Defines the type of L3 Encapsulation the
+		 *  template is describing.
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_L3_NONE, default, no L3
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV4, enables L3 IPv4
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV6, enables L3 IPv6
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8847, enables L3 MPLS
+		 *      8847 Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8848, enables L3 MPLS
+		 *      8848 Encapsulation.
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable any L3 encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_L3_NONE = 0,
+			/** Set to enable L3 IPv4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV4 = 4,
+			/** Set to enable L3 IPv6 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV6 = 5,
+			/** Set to enable L3 MPLS 8847 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8847 = 6,
+			/** Set to enable L3 MPLS 8848 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8848 = 7
+		} l3;
+
+#define CFA_P4_ACT_ENCAP_MAX_MPLS_LABELS 8
+		/** 1-8 labels, valid when
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8847) ||
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8848)
+		 *
+		 * MAX number of MPLS Labels 8.
+		 */
+		uint8_t l3_num_mpls_labels;
+
+		/** Set to CFA_HCAPI_TRUE to enable L4 capability in the
+		 * template.
+		 *
+		 * CFA_HCAPI_TRUE adds ::TF_EN_UDP_SRC_PORT and
+		 * ::TF_EN_UDP_DST_PORT to the template.
+		 */
+		uint8_t l4_enable;
+
+		/** Tunnel Encap controls the Encap Vector - Tunnel
+		 *  Encap, 3 bits. Defines the type of Tunnel
+		 *  encapsulation the template is describing
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NONE, default, no Tunnel
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL
+		 * <li> CFA_P4_ACT_ENCAP_TNL_VXLAN. NOTE: Expects
+		 *      l4_enable set to CFA_HCAPI_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NGE. NOTE: Expects l4_enable
+		 *      set to CFA_HCAPI_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NVGRE. NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GRE. NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable Tunnel header encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NONE = 0,
+			/** Set to enable Tunnel Generic Full header
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL,
+			/** Set to enable VXLAN header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_VXLAN,
+			/** Set to enable NGE (VXLAN2) header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NGE,
+			/** Set to enable NVGRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NVGRE,
+			/** Set to enable GRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GRE,
+			/** Set to enable Generic header after Tunnel
+			 * L4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4,
+			/** Set to enable Generic header after Tunnel
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		} tnl;
+
+		/** Number of bytes of generic tunnel header,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL)
+		 */
+		uint8_t tnl_generic_size;
+		/** Number of 32b words of nge options,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_NGE)
+		 */
+		uint8_t tnl_nge_op_len;
+		/* Currently not planned */
+		/* Custom Header */
+		/*	uint8_t custom_enable; */
+	} encap;
+};
+
+/**
+ * Enumeration of SRAM entry types, used for allocation of
+ * fixed SRAM entities. The memory model for CFA HCAPI
+ * determines if an SRAM entry type is supported.
+ */
+enum cfa_p4_action_sram_entry_type {
+	/* NOTE: Any additions to this enum must be reflected on FW
+	 * side as well.
+	 */
+
+	/** SRAM Action Record */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	/** SRAM Action Encap 8 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
+	/** SRAM Action Encap 16 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
+	/** SRAM Action Encap 64 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+	/** SRAM Action Modify IPv4 Source */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
+	/** SRAM Action Modify IPv4 Destination */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+	/** SRAM Action Source Properties SMAC */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
+	/** SRAM Action Source Properties SMAC IPv4 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV4,
+	/** SRAM Action Source Properties SMAC IPv6 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV6,
+	/** SRAM Action Statistics 64 Bits */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_STATS_64,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MAX
+};
+
+/**
+ * SRAM Action Record structure holding either an action index or an
+ * action ptr.
+ */
+union cfa_p4_action_sram_act_record {
+	/** SRAM Action idx specifies the offset of the SRAM
+	 * element within its SRAM Entry Type block. This
+	 * index can be written into i.e. an L2 Context. Use
+	 * this type for all SRAM Action Record types except
+	 * SRAM Full Action records. Use act_ptr instead.
+	 */
+	uint16_t act_idx;
+	/** SRAM Full Action is special in that it needs an
+	 * action record pointer. This pointer can be written
+	 * into i.e. a Wildcard TCAM entry.
+	 */
+	uint32_t act_ptr;
+};
+
+/**
+ * cfa_p4_action_param parameter definition
+ */
+struct cfa_p4_action_param {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	uint8_t dir;
+	/**
+	 * [in] type of the sram allocation type
+	 */
+	enum cfa_p4_action_sram_entry_type type;
+	/**
+	 * [in] action record to set. The 'type' specified lists the
+	 *	record definition to use in the passed in record.
+	 */
+	union cfa_p4_action_sram_act_record record;
+	/**
+	 * [in] number of elements in act_data
+	 */
+	uint32_t act_size;
+	/**
+	 * [in] ptr to array of action data
+	 */
+	uint64_t *act_data;
+};
+
+/**
+ * EEM Key entry sizes
+ */
+#define CFA_P4_EEM_KEY_MAX_SIZE 52
+#define CFA_P4_EEM_KEY_RECORD_SIZE 64
+
+/**
+ * cfa_eem_entry_hdr
+ */
+struct cfa_p4_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is made up of two words,
+			  * this is the first word. This field has multiple
+			  * subfields, there is no suitable single name for
+			  * it so just going with word1.
+			  */
+#define CFA_P4_EEM_ENTRY_VALID_SHIFT 31
+#define CFA_P4_EEM_ENTRY_VALID_MASK 0x80000000
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_SHIFT 30
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_MASK 0x40000000
+#define CFA_P4_EEM_ENTRY_STRENGTH_SHIFT 28
+#define CFA_P4_EEM_ENTRY_STRENGTH_MASK 0x30000000
+#define CFA_P4_EEM_ENTRY_RESERVED_SHIFT 17
+#define CFA_P4_EEM_ENTRY_RESERVED_MASK 0x0FFE0000
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT 8
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_MASK 0x0001FF00
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_SHIFT 3
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_MASK 0x000000F8
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_SHIFT 2
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK 0x00000004
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_SHIFT 1
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_MASK 0x00000002
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_SHIFT 0
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_MASK 0x00000001
+};
+
+/**
+ *  cfa_p4_eem_key_entry
+ */
+struct cfa_p4_eem_64b_entry {
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[CFA_P4_EEM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct cfa_p4_eem_entry_hdr hdr;
+};
+
+#endif /* _HCAPI_CFA_P4_H_ */
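For illustration, small hypothetical helpers showing how the word1 shift/mask
pairs above are intended to be used; the patch itself tests the valid bit
directly in hcapi_cfa_key_hw_op_add()/_del():

static inline uint32_t
cfa_p4_eem_entry_valid(const struct cfa_p4_eem_entry_hdr *hdr)
{
	return (hdr->word1 & CFA_P4_EEM_ENTRY_VALID_MASK) >>
		CFA_P4_EEM_ENTRY_VALID_SHIFT;
}

static inline uint32_t
cfa_p4_eem_entry_key_size(const struct cfa_p4_eem_entry_hdr *hdr)
{
	return (hdr->word1 & CFA_P4_EEM_ENTRY_KEY_SIZE_MASK) >>
		CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT;
}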
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 1f7df9d06..33e6ebd66 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,6 +43,8 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_rm_new.c',
 
+	'hcapi/hcapi_cfa_p4.c',
+
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
 	'tf_ulp/ulp_flow_db.c',
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index 91cbc6299..38f7fe419 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -189,7 +189,7 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
 		return NULL;
 
-	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
 		return NULL;
 
 	/*
@@ -325,7 +325,7 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key0_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key0_hash,
-					 KEY0_TABLE,
+					 TF_KEY0_TABLE,
 					 dir);
 
 	/*
@@ -334,23 +334,23 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key1_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key1_hash,
-					 KEY1_TABLE,
+					 TF_KEY1_TABLE,
 					 dir);
 
 	if (key0_entry == -EEXIST) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return -EEXIST;
 	} else if (key1_entry == -EEXIST) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return -EEXIST;
 	} else if (key0_entry == 0) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return 0;
 	} else if (key1_entry == 0) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return 0;
 	}
@@ -384,7 +384,7 @@ int tf_insert_eem_entry(struct tf_session *session,
 	int		   num_of_entry;
 
 	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[KEY0_TABLE].num_entries);
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
 
 	if (!mask)
 		return -EINVAL;
@@ -392,13 +392,13 @@ int tf_insert_eem_entry(struct tf_session *session,
 	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
 
 	key0_hash = tf_em_lkup_get_crc32_hash(session,
-				      &parms->key[num_of_entry] - 1,
-				      parms->dir);
+					      &parms->key[num_of_entry] - 1,
+					      parms->dir);
 	key0_index = key0_hash & mask;
 
 	key1_hash =
 	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
-					parms->key);
+				       parms->key);
 	key1_index = key1_hash & mask;
 
 	/*
@@ -420,14 +420,14 @@ int tf_insert_eem_entry(struct tf_session *session,
 				      key1_index,
 				      &index,
 				      &table_type) == 0) {
-		if (table_type == KEY0_TABLE) {
+		if (table_type == TF_KEY0_TABLE) {
 			TF_SET_GFID(gfid,
 				    key0_index,
-				    KEY0_TABLE);
+				    TF_KEY0_TABLE);
 		} else {
 			TF_SET_GFID(gfid,
 				    key1_index,
-				    KEY1_TABLE);
+				    TF_KEY1_TABLE);
 		}
 
 		/*
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 4e236d56c..35a7cfab5 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -285,8 +285,8 @@ tf_em_setup_page_table(struct tf_em_table *tbl)
 		tf_em_link_page_table(tp, tp_next, set_pte_last);
 	}
 
-	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
 }
 
 /**
@@ -317,7 +317,7 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 			uint64_t *num_data_pages)
 {
 	uint64_t lvl_data_size = page_size;
-	int lvl = PT_LVL_0;
+	int lvl = TF_PT_LVL_0;
 	uint64_t data_size;
 
 	*num_data_pages = 0;
@@ -326,10 +326,10 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 	while (lvl_data_size < data_size) {
 		lvl++;
 
-		if (lvl == PT_LVL_1)
+		if (lvl == TF_PT_LVL_1)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				page_size;
-		else if (lvl == PT_LVL_2)
+		else if (lvl == TF_PT_LVL_2)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				MAX_PAGE_PTRS(page_size) * page_size;
 		else
@@ -386,18 +386,18 @@ tf_em_size_page_tbls(int max_lvl,
 		     uint32_t page_size,
 		     uint32_t *page_cnt)
 {
-	if (max_lvl == PT_LVL_0) {
-		page_cnt[PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == PT_LVL_1) {
-		page_cnt[PT_LVL_1] = num_data_pages;
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
-	} else if (max_lvl == PT_LVL_2) {
-		page_cnt[PT_LVL_2] = num_data_pages;
-		page_cnt[PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
 	} else {
 		return;
 	}
@@ -434,7 +434,7 @@ tf_em_size_table(struct tf_em_table *tbl)
 	/* Determine number of page table levels and the number
 	 * of data pages needed to process the given eem table.
 	 */
-	if (tbl->type == RECORD_TABLE) {
+	if (tbl->type == TF_RECORD_TABLE) {
 		/*
 		 * For action records just a memory size is provided. Work
 		 * backwards to resolve to number of entries
@@ -480,9 +480,9 @@ tf_em_size_table(struct tf_em_table *tbl)
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
 		    num_data_pages,
-		    page_cnt[PT_LVL_0],
-		    page_cnt[PT_LVL_1],
-		    page_cnt[PT_LVL_2]);
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
 
 	return 0;
 }
@@ -508,7 +508,7 @@ tf_em_ctx_unreg(struct tf *tfp,
 	struct tf_em_table *tbl;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
@@ -544,7 +544,7 @@ tf_em_ctx_reg(struct tf *tfp,
 	int rc = 0;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
@@ -719,41 +719,41 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		return -EINVAL;
 	}
 	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
 		parms->rx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries =
 		0;
 
 	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
 		parms->tx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries =
 		0;
 
 	return 0;
@@ -1572,11 +1572,11 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 
 		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
 		rc = tf_msg_em_cfg(tfp,
-				   em_tables[KEY0_TABLE].num_entries,
-				   em_tables[KEY0_TABLE].ctx_id,
-				   em_tables[KEY1_TABLE].ctx_id,
-				   em_tables[RECORD_TABLE].ctx_id,
-				   em_tables[EFC_TABLE].ctx_id,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
@@ -1600,9 +1600,9 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 * actions related to a single table scope.
 		 */
 		rc = tf_create_tbl_pool_external(dir,
-					    tbl_scope_cb,
-					    em_tables[RECORD_TABLE].num_entries,
-					    em_tables[RECORD_TABLE].entry_size);
+				    tbl_scope_cb,
+				    em_tables[TF_RECORD_TABLE].num_entries,
+				    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
 			PMD_DRV_LOG(ERR,
 				    "%d TBL: Unable to allocate idx pools %s\n",
@@ -1672,7 +1672,7 @@ tf_set_tbl_entry(struct tf *tfp,
 		base_addr = tf_em_get_table_page(tbl_scope_cb,
 						 parms->dir,
 						 offset,
-						 RECORD_TABLE);
+						 TF_RECORD_TABLE);
 		if (base_addr == NULL) {
 			PMD_DRV_LOG(ERR,
 				    "dir:%d, Base address lookup failed\n",
@@ -1972,7 +1972,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
 
-		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
 			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
 			printf
 	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index b335a9cf4..d78e4fe41 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -14,18 +14,18 @@
 struct tf_session;
 
 enum tf_pg_tbl_lvl {
-	PT_LVL_0,
-	PT_LVL_1,
-	PT_LVL_2,
-	PT_LVL_MAX
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
 };
 
 enum tf_em_table_type {
-	KEY0_TABLE,
-	KEY1_TABLE,
-	RECORD_TABLE,
-	EFC_TABLE,
-	MAX_TABLE
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
 };
 
 struct tf_em_page_tbl {
@@ -41,15 +41,15 @@ struct tf_em_table {
 	uint16_t			ctx_id;
 	uint32_t			entry_size;
 	int				num_lvl;
-	uint32_t			page_cnt[PT_LVL_MAX];
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
 	uint64_t			num_data_pages;
 	void				*l0_addr;
 	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
 };
 
 struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[MAX_TABLE];
+	struct tf_em_table		em_tables[TF_MAX_TABLE];
 };
 
 /** table scope control block content */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 16/51] net/bnxt: add core changes for EM and EEM lookups
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (14 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
                     ` (35 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher, Venkat Duvvuru, Shahaji Bhosle

From: Randy Schacher <stuart.schacher@broadcom.com>

- Move External Exact Match (EEM) and Exact Match (EM) entry add and
  delete into the device module, using HCAPI to program the entries.
- Make EM active through the device interface; a short sketch of the
  resulting dispatch pattern is shown below.
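
Below is a minimal, illustrative sketch of the dispatch pattern this
change moves EM/EEM handling behind. The names used here
(tf_dev_ops_sketch, insert_em_entry, em_insert_via_device) are
hypothetical stand-ins, not the actual definitions added to
tf_device.h; see the tf_core.c and tf_device.h hunks in this patch for
the real tf_dev_insert_em_entry/tf_dev_delete_em_entry hooks.

/* Hypothetical sketch: the core API no longer branches on internal EM
 * vs external EEM itself. It resolves the session, then the device,
 * and calls through a per-device ops table; the device-specific
 * implementation (e.g. P4/Whitney+) then uses HCAPI/HWRM to add or
 * delete the entry.
 */
struct tf;
struct tf_insert_em_entry_parms;
struct tf_delete_em_entry_parms;

struct tf_dev_ops_sketch {
	int (*insert_em_entry)(struct tf *tfp,
			       struct tf_insert_em_entry_parms *parms);
	int (*delete_em_entry)(struct tf *tfp,
			       struct tf_delete_em_entry_parms *parms);
};

static int
em_insert_via_device(struct tf *tfp,
		     struct tf_insert_em_entry_parms *parms,
		     const struct tf_dev_ops_sketch *ops)
{
	/* All EM/EEM policy is hidden behind the device ops pointer. */
	return ops->insert_em_entry(tfp, parms);
}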

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Shahaji Bhosle <shahaji.bhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                 |   3 +-
 drivers/net/bnxt/hcapi/cfa_p40_hw.h       | 781 ++++++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 ---
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     |   2 +-
 drivers/net/bnxt/tf_core/Makefile         |   8 +
 drivers/net/bnxt/tf_core/hwrm_tf.h        |  24 +-
 drivers/net/bnxt/tf_core/stack.c          |   2 +-
 drivers/net/bnxt/tf_core/tf_core.c        | 441 ++++++------
 drivers/net/bnxt/tf_core/tf_core.h        | 141 ++--
 drivers/net/bnxt/tf_core/tf_device.h      |  32 +
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   3 +
 drivers/net/bnxt/tf_core/tf_em.c          | 567 +++++-----------
 drivers/net/bnxt/tf_core/tf_em.h          |  72 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |  23 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |   4 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  25 +-
 drivers/net/bnxt/tf_core/tf_rm.c          | 156 +++--
 drivers/net/bnxt/tf_core/tf_tbl.c         | 437 +++++-------
 drivers/net/bnxt/tf_core/tf_tbl.h         |  49 +-
 19 files changed, 1628 insertions(+), 1234 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 delete mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 365627499..349b09c36 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -46,9 +46,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
-CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
+include $(SRCDIR)/hcapi/Makefile
 endif
 
 #
diff --git a/drivers/net/bnxt/hcapi/cfa_p40_hw.h b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
new file mode 100644
index 000000000..172706f12
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
@@ -0,0 +1,781 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_hw.h
+ *
+ * Description: header for SWE based on Truflow
+ *
+ * Date:  taken from 12/16/19 17:18:12
+ *
+ * Note:  This file was first generated using  tflib_decode.py.
+ *
+ *        Changes have been made due to lack of availability of xml for
+ *        additional tables at this time (EEM Record and union table fields)
+ *        Changes not autogenerated are noted in comments.
+ */
+
+#ifndef _CFA_P40_HW_H_
+#define _CFA_P40_HW_H_
+
+/**
+ * Valid TCAM entry. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Key type (pass). (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS 164
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS 2
+/**
+ * Tunnel HDR type. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS 160
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Number of VLAN tags in tunnel l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS 158
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Number of VLAN tags in l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS 156
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS    108
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS  48
+/**
+ * Tunnel Outer VLAN Tag ID. (for idx 3 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS  96
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS 12
+/**
+ * Tunnel Inner VLAN Tag ID. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS  84
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS 12
+/**
+ * Source Partition. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  80
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+/**
+ * Source Virtual I/F. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  8
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS    24
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS  48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS    12
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS  12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS    0
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS  12
+
+enum cfa_p40_prof_l2_ctxt_tcam_flds {
+	CFA_P40_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 1,
+	CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 2,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 3,
+	CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 4,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC1_FLD = 5,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_FLD = 6,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_FLD = 7,
+	CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_FLD = 8,
+	CFA_P40_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P40_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P40_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 167
+
+/**
+ * Valid entry. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_VALID_BITPOS        79
+#define CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS      1
+/**
+ * reserved program to 0. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS     78
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS   1
+/**
+ * PF Parif Number. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS     74
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS   4
+/**
+ * Number of VLAN Tags. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS    72
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS  2
+/**
+ * Dest. MAC Address.
+ */
+#define CFA_P40_ACT_VEB_TCAM_MAC_BITPOS          24
+#define CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS        48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_OVID_BITPOS         12
+#define CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS       12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_IVID_BITPOS         0
+#define CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS       12
+
+enum cfa_p40_act_veb_tcam_flds {
+	CFA_P40_ACT_VEB_TCAM_VALID_FLD = 0,
+	CFA_P40_ACT_VEB_TCAM_RESERVED_FLD = 1,
+	CFA_P40_ACT_VEB_TCAM_PARIF_IN_FLD = 2,
+	CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_FLD = 3,
+	CFA_P40_ACT_VEB_TCAM_MAC_FLD = 4,
+	CFA_P40_ACT_VEB_TCAM_OVID_FLD = 5,
+	CFA_P40_ACT_VEB_TCAM_IVID_FLD = 6,
+	CFA_P40_ACT_VEB_TCAM_MAX_FLD
+};
+
+#define CFA_P40_ACT_VEB_TCAM_TOTAL_NUM_BITS 80
+
+/**
+ * Entry is valid.
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS 18
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS 1
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS 2
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS 16
+/**
+ * for resolving TCAM/EM conflicts
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS 0
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS 2
+
+enum cfa_p40_lkup_tcam_record_mem_flds {
+	CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_FLD = 0,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_FLD = 1,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_FLD = 2,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_MAX_FLD
+};
+
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_TOTAL_NUM_BITS 19
+
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS 62
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_tpid_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_MAX = 0x3UL
+};
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS 60
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_pri_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_MAX = 0x3UL
+};
+/**
+ * Bypass Source Properties Lookup. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS 59
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS 1
+/**
+ * SP Record Pointer. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS 43
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS 16
+/**
+ * BD Action pointer passing enable. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS 42
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS 1
+/**
+ * Default VLAN TPID. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS 39
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS 3
+/**
+ * Allowed VLAN TPIDs. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS 33
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS 6
+/**
+ * Default VLAN PRI.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS 30
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS 3
+/**
+ * Allowed VLAN PRIs.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS 22
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS 8
+/**
+ * Partition.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS 18
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS 4
+/**
+ * Bypass Lookup.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS 17
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS 1
+
+/**
+ * L2 Context Remap Data. Action bypass mode (1) {7'd0,prof_vnic[9:0]} Note:
+ * should also set byp_lkup_en. Action bypass mode (0) byp_lkup_en(0) -
+ * {prof_func[6:0],l2_context[9:0]} byp_lkup_en(1) - {1'b0,act_rec_ptr[15:0]}
+ */
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS 12
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS 10
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS 7
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS 10
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS 16
+
+enum cfa_p40_prof_ctxt_remap_mem_flds {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_FLD = 0,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_FLD = 1,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_FLD = 2,
+	CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_FLD = 3,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_FLD = 4,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_FLD = 5,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_FLD = 6,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_FLD = 7,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_FLD = 8,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_FLD = 9,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_FLD = 10,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_FLD = 11,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_FLD = 12,
+	CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_FLD = 13,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ARP_FLD = 14,
+	CFA_P40_PROF_CTXT_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TOTAL_NUM_BITS 64
+
+/**
+ * Bypass action pointer look up (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS 37
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS 1
+/**
+ * Exact match search enable (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS 36
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS 1
+/**
+ * Exact match profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS 28
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS 8
+/**
+ * Exact match key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS 5
+/**
+ * Exact match key mask
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS 13
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS 10
+/**
+ * TCAM search enable
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS 12
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS 1
+/**
+ * TCAM profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS 4
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS 8
+/**
+ * TCAM key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS 4
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS 2
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS 16
+
+enum cfa_p40_prof_profile_tcam_remap_mem_flds {
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_FLD = 9,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TOTAL_NUM_BITS 38
+
+/**
+ * Valid TCAM entry (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS   80
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS 1
+/**
+ * Packet type (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS 76
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS 4
+/**
+ * Pass through CFA (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS 74
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS 2
+/**
+ * Aggregate error (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS 73
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS 1
+/**
+ * Profile function (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS 66
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS 7
+/**
+ * Reserved for future use. Set to 0.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS 57
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS 9
+/**
+ * non-tunnel(0)/tunneled(1) packet (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS 56
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS 1
+/**
+ * Tunnel L2 tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS 55
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L2 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS 53
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped tunnel L2 dest_type UC(0)/MC(2)/BC(3) (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS 51
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS 2
+/**
+ * Tunnel L2 1+ VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS 50
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * Tunnel L2 2 VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS 49
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS 1
+/**
+ * Tunnel L3 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS 48
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS 1
+/**
+ * Tunnel L3 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS 47
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS 1
+/**
+ * Tunnel L3 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS 43
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L3 header is IPV4 or IPV6. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS 42
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 src address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS 41
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 dest address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS 40
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * Tunnel L4 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS 39
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L4 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS 38
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS 1
+/**
+ * Tunnel L4 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS 34
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L4 header is UDP or TCP (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS 33
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS 1
+/**
+ * Tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS 32
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS 31
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS 1
+/**
+ * Tunnel header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS 27
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel header flags
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS 24
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS 3
+/**
+ * L2 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS 1
+/**
+ * L2 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS 22
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS 1
+/**
+ * L2 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS 20
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped L2 dest_type UC(0)/MC(2)/BC(3)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS 18
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS 2
+/**
+ * L2 header 1+ VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS 17
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * L2 header 2 VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS 1
+/**
+ * L3 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS 15
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS 1
+/**
+ * L3 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS 14
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS 1
+/**
+ * L3 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS 10
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS 4
+/**
+ * L3 header is IPV4 or IPV6.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS 9
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS 1
+/**
+ * L3 header IPV6 src address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS 8
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * L3 header IPV6 dest address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS 7
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * L4 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS 6
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS 1
+/**
+ * L4 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS 5
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS 1
+/**
+ * L4 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS 1
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS 4
+/**
+ * L4 header is UDP or TCP
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS 1
+
+enum cfa_p40_prof_profile_tcam_flds {
+	CFA_P40_PROF_PROFILE_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_RESERVED_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_FLD = 9,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_FLD = 10,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_FLD = 11,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_FLD = 12,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_FLD = 13,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_FLD = 14,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_FLD = 15,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_FLD = 16,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_FLD = 17,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_FLD = 18,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_FLD = 19,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_FLD = 20,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_FLD = 21,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_FLD = 22,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_FLD = 23,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_FLD = 24,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_FLD = 25,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_FLD = 26,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_FLD = 27,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_FLD = 28,
+	CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_FLD = 29,
+	CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_FLD = 30,
+	CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_FLD = 31,
+	CFA_P40_PROF_PROFILE_TCAM_L3_VALID_FLD = 32,
+	CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_FLD = 33,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_FLD = 34,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_FLD = 35,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_FLD = 36,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_FLD = 37,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_FLD = 38,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_FLD = 39,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_FLD = 40,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_FLD = 41,
+	CFA_P40_PROF_PROFILE_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_TOTAL_NUM_BITS 81
+
+/**
+ * CFA flexible key layout definition
+ */
+enum cfa_p40_key_fld_id {
+	CFA_P40_KEY_FLD_ID_MAX
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+/**
+ * Valid
+ */
+#define CFA_P40_EEM_KEY_TBL_VALID_BITPOS 0
+#define CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS 1
+
+/**
+ * L1 Cacheable
+ */
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS 1
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS 1
+
+/**
+ * Strength
+ */
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS 2
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS 2
+
+/**
+ * Key Size
+ */
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS 15
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS 9
+
+/**
+ * Record Size
+ */
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS 24
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS 5
+
+/**
+ * Action Record Internal
+ */
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS 29
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS 1
+
+/**
+ * External Flow Counter
+ */
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS 30
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS 31
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS 33
+
+/**
+ * EEM Key omitted - create using keybuilder
+ * Fields here cannot be larger than a uint64_t
+ */
+
+#define CFA_P40_EEM_KEY_TBL_TOTAL_NUM_BITS 64
+
+enum cfa_p40_eem_key_tbl_flds {
+	CFA_P40_EEM_KEY_TBL_VALID_FLD = 0,
+	CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_FLD = 1,
+	CFA_P40_EEM_KEY_TBL_STRENGTH_FLD = 2,
+	CFA_P40_EEM_KEY_TBL_KEY_SZ_FLD = 3,
+	CFA_P40_EEM_KEY_TBL_REC_SZ_FLD = 4,
+	CFA_P40_EEM_KEY_TBL_ACT_REC_INT_FLD = 5,
+	CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_FLD = 6,
+	CFA_P40_EEM_KEY_TBL_AR_PTR_FLD = 7,
+	CFA_P40_EEM_KEY_TBL_MAX_FLD
+};
+
+/**
+ * Mirror Destination 0 Source Property Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_SP_PTR_BITPOS 0
+#define CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS 11
+
+/**
+ * ignore or honor drop
+ */
+#define CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS 13
+#define CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS 1
+
+/**
+ * ingress or egress copy
+ */
+#define CFA_P40_MIRROR_TBL_COPY_BITPOS 14
+#define CFA_P40_MIRROR_TBL_COPY_NUM_BITS 1
+
+/**
+ * Mirror Destination enable.
+ */
+#define CFA_P40_MIRROR_TBL_EN_BITPOS 15
+#define CFA_P40_MIRROR_TBL_EN_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_AR_PTR_BITPOS 16
+#define CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS 16
+
+#define CFA_P40_MIRROR_TBL_TOTAL_NUM_BITS 32
+
+enum cfa_p40_mirror_tbl_flds {
+	CFA_P40_MIRROR_TBL_SP_PTR_FLD = 0,
+	CFA_P40_MIRROR_TBL_IGN_DROP_FLD = 1,
+	CFA_P40_MIRROR_TBL_COPY_FLD = 2,
+	CFA_P40_MIRROR_TBL_EN_FLD = 3,
+	CFA_P40_MIRROR_TBL_AR_PTR_FLD = 4,
+	CFA_P40_MIRROR_TBL_MAX_FLD
+};
+
+/**
+ * P45 Specific Updates (SR) - Non-autogenerated
+ */
+/**
+ * Valid TCAM entry.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Source Partition.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  166
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+
+/**
+ * Source Virtual I/F.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  12
+
+
+/* The SR layout of the l2 ctxt key is different from the Wh+.  Switch to
+ * cfa_p45_hw.h definition when available.
+ */
+enum cfa_p45_prof_l2_ctxt_tcam_flds {
+	CFA_P45_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_FLD = 1,
+	CFA_P45_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 2,
+	CFA_P45_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 3,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 4,
+	CFA_P45_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 5,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC1_FLD = 6,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_OVID_FLD = 7,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_IVID_FLD = 8,
+	CFA_P45_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P45_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P45_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P45_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 171
+
+#endif /* _CFA_P40_HW_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
deleted file mode 100644
index 39afd4dbc..000000000
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
+++ /dev/null
@@ -1,92 +0,0 @@
-/*
- *   Copyright(c) 2019-2020 Broadcom Limited.
- *   All rights reserved.
- */
-
-#include "bitstring.h"
-#include "hcapi_cfa_defs.h"
-#include <errno.h>
-#include "assert.h"
-
-/* HCAPI CFA common PUT APIs */
-int hcapi_cfa_put_field(uint64_t *data_buf,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id, uint64_t val)
-{
-	assert(layout);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		bs_put_msb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	else
-		bs_put_lsb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	return 0;
-}
-
-int hcapi_cfa_put_fields(uint64_t *obj_data,
-			 const struct hcapi_cfa_layout *layout,
-			 struct hcapi_cfa_data_obj *field_tbl,
-			 uint16_t field_tbl_sz)
-{
-	int i;
-	uint16_t bitpos;
-	uint8_t bitlen;
-	uint16_t field_id;
-
-	assert(layout);
-	assert(field_tbl);
-
-	if (layout->is_msb_order) {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_msb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	} else {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_lsb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	}
-	return 0;
-}
-
-/* HCAPI CFA common GET APIs */
-int hcapi_cfa_get_field(uint64_t *obj_data,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id,
-			uint64_t *val)
-{
-	assert(layout);
-	assert(val);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		*val = bs_get_msb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	else
-		*val = bs_get_lsb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	return 0;
-}
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index ca0b1c923..42b37da0f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2019-2020 Broadcom
  * All rights reserved.
  */
-
+#include <inttypes.h>
 #include <stdint.h>
 #include <stdlib.h>
 #include <stdbool.h>
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 2c02e29e7..5ed32f12a 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -24,3 +24,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
+
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_device.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index c04d1034a..1e78296c6 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -27,8 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
+	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -82,8 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_tbl_type_get_bulk_input;
-struct tf_tbl_type_get_bulk_output;
+struct tf_tbl_type_bulk_get_input;
+struct tf_tbl_type_bulk_get_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -905,8 +905,6 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -922,17 +920,17 @@ typedef struct tf_tbl_type_get_output {
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
-typedef struct tf_tbl_type_get_bulk_input {
+typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
 	uint16_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_TX	   (0x1)
 	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Starting index to get from */
@@ -941,12 +939,12 @@ typedef struct tf_tbl_type_get_bulk_input {
 	uint32_t			 num_entries;
 	/* Host memory where data will be stored */
 	uint64_t			 host_addr;
-} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+} tf_tbl_type_bulk_get_input_t, *ptf_tbl_type_bulk_get_input_t;
 
 /* Output params for table type get */
-typedef struct tf_tbl_type_get_bulk_output {
+typedef struct tf_tbl_type_bulk_get_output {
 	/* Size of the total data read in bytes */
 	uint16_t			 size;
-} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+} tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
index 954806377..240fbe2c1 100644
--- a/drivers/net/bnxt/tf_core/stack.c
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -81,7 +81,7 @@ int
 stack_pop(struct stack *st, uint32_t *x)
 {
 	if (stack_is_empty(st))
-		return -ENOENT;
+		return -ENODATA;
 
 	*x = st->items[st->top];
 	st->top--;
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index eac57e7bd..648d0d1bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,33 +19,41 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static inline uint32_t SWAP_WORDS32(uint32_t val32)
+static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
+			       enum tf_device_type device,
+			       uint16_t key_sz_in_bits,
+			       uint16_t *num_slice_per_row)
 {
-	return (((val32 & 0x0000ffff) << 16) |
-		((val32 & 0xffff0000) >> 16));
-}
+	uint16_t key_bytes;
+	uint16_t slice_sz = 0;
+
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
+
+	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
+		if (device == TF_DEVICE_TYPE_WH) {
+			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
+			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		} else {
+			TFP_DRV_LOG(ERR,
+				    "Unsupported device type %d\n",
+				    device);
+			return -ENOTSUP;
+		}
 
-static void tf_seeds_init(struct tf_session *session)
-{
-	int i;
-	uint32_t r;
-
-	/* Initialize the lfsr */
-	rand_init();
-
-	/* RX and TX use the same seed values */
-	session->lkup_lkup3_init_cfg[TF_DIR_RX] =
-		session->lkup_lkup3_init_cfg[TF_DIR_TX] =
-						SWAP_WORDS32(rand32());
-
-	for (i = 0; i < TF_LKUP_SEED_MEM_SIZE / 2; i++) {
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2] = r;
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2] = r;
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2 + 1] = (r & 0x1);
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2 + 1] = (r & 0x1);
+		if (key_bytes > *num_slice_per_row * slice_sz) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Key size %d is not supported\n",
+				    tf_tcam_tbl_2_str(tcam_tbl_type),
+				    key_bytes);
+			return -ENOTSUP;
+		}
+	} else { /* for other types of tcam */
+		*num_slice_per_row = 1;
 	}
+
+	return 0;
 }
 
 /**
@@ -153,15 +161,18 @@ tf_open_session(struct tf                    *tfp,
 	uint8_t fw_session_id;
 	int dir;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
 	 * firmware open session succeeds.
 	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH)
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
 		return -ENOTSUP;
+	}
 
 	/* Build the beginning of session_id */
 	rc = sscanf(parms->ctrl_chan_name,
@@ -171,7 +182,7 @@ tf_open_session(struct tf                    *tfp,
 		    &slot,
 		    &device);
 	if (rc != 4) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "Failed to scan device ctrl_chan_name\n");
 		return -EINVAL;
 	}
@@ -183,13 +194,13 @@ tf_open_session(struct tf                    *tfp,
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
-			PMD_DRV_LOG(ERR,
-				    "Session is already open, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
 		else
-			PMD_DRV_LOG(ERR,
-				    "Open message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
 
 		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
 		return rc;
@@ -202,13 +213,13 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
-	tfp->session = alloc_parms.mem_va;
+	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -217,9 +228,9 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -240,12 +251,13 @@ tf_open_session(struct tf                    *tfp,
 	session->session_id.internal.device = device;
 	session->session_id.internal.fw_session_id = fw_session_id;
 
+	/* Query for Session Config
+	 */
 	rc = tf_msg_session_qcfg(tfp);
 	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
@@ -256,9 +268,9 @@ tf_open_session(struct tf                    *tfp,
 #if (TF_SHADOW == 1)
 		rc = tf_rm_shadow_db_init(tfs);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%d",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Shadow DB Initialization failed, rc:%s\n",
+				    strerror(-rc));
 		/* Add additional processing */
 #endif /* TF_SHADOW */
 	}
@@ -266,13 +278,12 @@ tf_open_session(struct tf                    *tfp,
 	/* Adjust the Session with what firmware allowed us to get */
 	rc = tf_rm_allocate_validate(tfp);
 	if (rc) {
-		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Rm allocate validate failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
-	/* Setup hash seeds */
-	tf_seeds_init(session);
-
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		rc = tf_create_em_pool(session,
@@ -290,11 +301,11 @@ tf_open_session(struct tf                    *tfp,
 	/* Return session ID */
 	parms->session_id = session->session_id;
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session created, session_id:%d\n",
 		    parms->session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
@@ -379,8 +390,7 @@ tf_attach_session(struct tf *tfp __rte_unused,
 #if (TF_SHARED == 1)
 	int rc;
 
-	if (tfp == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	/* - Open the shared memory for the attach_chan_name
 	 * - Point to the shared session for this Device instance
@@ -389,12 +399,10 @@ tf_attach_session(struct tf *tfp __rte_unused,
 	 *   than one client of the session.
 	 */
 
-	if (tfp->session) {
-		if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-			rc = tf_msg_session_attach(tfp,
-						   parms->ctrl_chan_name,
-						   parms->session_id);
-		}
+	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_attach(tfp,
+					   parms->ctrl_chan_name,
+					   parms->session_id);
 	}
 #endif /* TF_SHARED */
 	return -1;
@@ -472,8 +480,7 @@ tf_close_session(struct tf *tfp)
 	union tf_session_id session_id;
 	int dir;
 
-	if (tfp == NULL || tfp->session == NULL)
-		return -EINVAL;
+	TF_CHECK_TFP_SESSION(tfp);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -487,9 +494,9 @@ tf_close_session(struct tf *tfp)
 		rc = tf_msg_session_close(tfp);
 		if (rc) {
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "Message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Message send failed, rc:%s\n",
+				    strerror(-rc));
 		}
 
 		/* Update the ref_count */
@@ -509,11 +516,11 @@ tf_close_session(struct tf *tfp)
 		tfp->session = NULL;
 	}
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session closed, session_id:%d\n",
 		    session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    session_id.internal.domain,
 		    session_id.internal.bus,
@@ -565,27 +572,39 @@ tf_close_session_new(struct tf *tfp)
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
-					 (tfp->session->core_data),
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL) {
-		/* External EEM */
-		return tf_insert_eem_entry((struct tf_session *)
-					   (tfp->session->core_data),
-					   tbl_scope_cb,
-					   parms);
-	} else if (parms->mem == TF_MEM_INTERNAL) {
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM insert failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
 	return -EINVAL;
@@ -600,27 +619,44 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
-					 (tfp->session->core_data),
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tfp, parms);
-	else
-		return tf_delete_em_internal_entry(tfp, parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM delete failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
 }
 
-/** allocate identifier resource
- *
- * Returns success or failure code.
- */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms)
 {
@@ -629,14 +665,7 @@ int tf_alloc_identifier(struct tf *tfp,
 	int id;
 	int rc;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -662,30 +691,31 @@ int tf_alloc_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: %s\n",
+		TFP_DRV_LOG(ERR, "%s: %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: identifier pool %s failure\n",
+		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	id = ba_alloc(session_pool);
 
 	if (id == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		return -ENOMEM;
@@ -763,14 +793,7 @@ int tf_free_identifier(struct tf *tfp,
 	int ba_rc;
 	struct tf_session *tfs;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -796,29 +819,31 @@ int tf_free_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: %s Identifier pool access failed\n",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s Identifier pool access failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	ba_rc = ba_inuse(session_pool, (int)parms->id);
 
 	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type),
 			    parms->id);
@@ -893,21 +918,30 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index = 0;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
+	/* TEMP, due to device design. When tcam is modularized, the
+	 * device should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -916,36 +950,16 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority) {
-		for (index = session_pool->size - 1; index >= 0; index--) {
-			if (ba_inuse(session_pool,
-					  index) == BA_ENTRY_FREE) {
-				break;
-			}
-		}
-		if (ba_alloc_index(session_pool,
-				   index) == BA_FAIL) {
-			TFP_DRV_LOG(ERR,
-				    "%s: %s: ba_alloc index %d failed\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-				    index);
-			return -ENOMEM;
-		}
-	} else {
-		index = ba_alloc(session_pool);
-		if (index == BA_FAIL) {
-			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-			return -ENOMEM;
-		}
+	index = ba_alloc(session_pool);
+	if (index == BA_FAIL) {
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+		return -ENOMEM;
 	}
 
+	index *= num_slice_per_row;
+
 	parms->idx = index;
 	return 0;
 }
@@ -956,26 +970,29 @@ tf_set_tcam_entry(struct tf *tfp,
 {
 	int rc;
 	int id;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. When tcam is modularized, the
+	 * device should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
-	/*
-	 * Each tcam send msg function should check for key sizes range
-	 */
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
 
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
@@ -985,11 +1002,12 @@ tf_set_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 		   "%s: %s: Invalid or not allocated index, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
@@ -1006,21 +1024,8 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	int rc = -EOPNOTSUPP;
-
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	return rc;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+	return -EOPNOTSUPP;
 }
 
 int
@@ -1028,20 +1033,29 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row = 1;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. When tcam is modularized, the
+	 * device should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 0,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -1050,24 +1064,27 @@ tf_free_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	rc = ba_inuse(session_pool, (int)parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	rc = ba_inuse(session_pool, index);
 	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    index);
 		return -EINVAL;
 	}
 
-	ba_free(session_pool, (int)parms->idx);
+	ba_free(session_pool, index);
 
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d free failed",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    parms->idx,
+			    strerror(-rc));
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index f1ef00b30..bb456bba7 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -10,7 +10,7 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <stdio.h>
-
+#include "hcapi/hcapi_cfa.h"
 #include "tf_project.h"
 
 /**
@@ -54,6 +54,7 @@ enum tf_mem {
 #define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
 #define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
 
+
 /*
  * Helper Macros
  */
@@ -132,34 +133,40 @@ struct tf_session_version {
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
-	TF_DEVICE_TYPE_BRD2,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD3,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD4,   /**< TBD       */
+	TF_DEVICE_TYPE_SR,     /**< Stingray  */
+	TF_DEVICE_TYPE_THOR,   /**< Thor      */
+	TF_DEVICE_TYPE_SR2,    /**< Stingray2 */
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
-/** Identifier resource types
+/**
+ * Identifier resource types
  */
 enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The L2 Context is returned from the L2 Ctxt TCAM lookup
 	 *  and can be used in WC TCAM or EM keys to virtualize further
 	 *  lookups.
 	 */
 	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The WC profile func is returned from the L2 Ctxt TCAM lookup
 	 *  to enable virtualization of the profile TCAM.
 	 */
 	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
+	/**
+	 *  The WC profile ID is included in the WC lookup key
 	 *  to enable virtualization of the WC TCAM hardware.
 	 */
 	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
+	/**
+	 *  The EM profile ID is included in the EM lookup key
 	 *  to enable virtualization of the EM hardware. (not required for SR2
 	 *  as it has table scope)
 	 */
 	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
+	/**
+	 *  The L2 func is included in the ILT result and from recycling to
 	 *  enable virtualization of further lookups.
 	 */
 	TF_IDENT_TYPE_L2_FUNC,
@@ -239,7 +246,8 @@ enum tf_tbl_type {
 
 	/* External */
 
-	/** External table type - initially 1 poolsize entries.
+	/**
+	 * External table type - initially 1 poolsize entries.
 	 * All External table types are associated with a table
 	 * scope. Internal types are not.
 	 */
@@ -279,13 +287,17 @@ enum tf_em_tbl_type {
 	TF_EM_TBL_TYPE_MAX
 };
 
-/** TruFlow Session Information
+/**
+ * TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
  * session. This structure is initialized at time of
  * tf_open_session(). It is passed to all of the TruFlow APIs as way
  * to prescribe and isolate resources between different TruFlow ULP
  * Applications.
+ *
+ * Ownership of the elements is split between ULP and TruFlow. Please
+ * see the individual elements.
  */
 struct tf_session_info {
 	/**
@@ -355,7 +367,8 @@ struct tf_session_info {
 	uint32_t              core_data_sz_bytes;
 };
 
-/** TruFlow handle
+/**
+ * TruFlow handle
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
@@ -405,7 +418,8 @@ struct tf_session_resources {
  * tf_open_session parameters definition.
  */
 struct tf_open_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -417,7 +431,8 @@ struct tf_open_session_parms {
 	 * shared memory allocation.
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
-	/** [in] shadow_copy
+	/**
+	 * [in] shadow_copy
 	 *
 	 * Boolean controlling the use and availability of shadow
 	 * copy. Shadow copy will allow the TruFlow to keep track of
@@ -430,7 +445,8 @@ struct tf_open_session_parms {
 	 * control channel.
 	 */
 	bool shadow_copy;
-	/** [in/out] session_id
+	/**
+	 * [in/out] session_id
 	 *
 	 * Session_id is unique per session.
 	 *
@@ -441,7 +457,8 @@ struct tf_open_session_parms {
 	 * The session_id allows a session to be shared between devices.
 	 */
 	union tf_session_id session_id;
-	/** [in] device type
+	/**
+	 * [in] device type
 	 *
 	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
@@ -484,7 +501,8 @@ int tf_open_session_new(struct tf *tfp,
 			struct tf_open_session_parms *parms);
 
 struct tf_attach_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -497,7 +515,8 @@ struct tf_attach_session_parms {
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] attach_chan_name
+	/**
+	 * [in] attach_chan_name
 	 *
 	 * String containing name of attach channel interface to be
 	 * used for this session.
@@ -510,7 +529,8 @@ struct tf_attach_session_parms {
 	 */
 	char attach_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] session_id
+	/**
+	 * [in] session_id
 	 *
 	 * Session_id is unique per session. For Attach the session_id
 	 * should be the session_id that was returned on the first
@@ -565,7 +585,8 @@ int tf_close_session_new(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-/** tf_alloc_identifier parameter definition
+/**
+ * tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
 	/**
@@ -582,7 +603,8 @@ struct tf_alloc_identifier_parms {
 	uint16_t id;
 };
 
-/** tf_free_identifier parameter definition
+/**
+ * tf_free_identifier parameter definition
  */
 struct tf_free_identifier_parms {
 	/**
@@ -599,7 +621,8 @@ struct tf_free_identifier_parms {
 	uint16_t id;
 };
 
-/** allocate identifier resource
+/**
+ * allocate identifier resource
  *
  * TruFlow core will allocate a free id from the per identifier resource type
  * pool reserved for the session during tf_open().  No firmware is involved.
@@ -611,7 +634,8 @@ int tf_alloc_identifier(struct tf *tfp,
 int tf_alloc_identifier_new(struct tf *tfp,
 			    struct tf_alloc_identifier_parms *parms);
 
-/** free identifier resource
+/**
+ * free identifier resource
  *
  * TruFlow core will return an id back to the per identifier resource type pool
  * reserved for the session.  No firmware is involved.  During tf_close, the
@@ -639,7 +663,8 @@ int tf_free_identifier_new(struct tf *tfp,
  */
 
 
-/** tf_alloc_tbl_scope_parms definition
+/**
+ * tf_alloc_tbl_scope_parms definition
  */
 struct tf_alloc_tbl_scope_parms {
 	/**
@@ -662,7 +687,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -684,7 +709,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -709,7 +734,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On Brd4 Firmware will allocate a scope ID.  On other devices, the scope
+ * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
  * divide the hash memory/buckets and records according to the device
  * device constraints based upon calculations using either the number of flows
@@ -719,7 +744,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -750,7 +775,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remains
- * (Brd4 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -758,7 +783,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
 
-
 /**
  * @page tcam TCAM Access
  *
@@ -771,7 +795,9 @@ int tf_free_tbl_scope(struct tf *tfp,
  * @ref tf_free_tcam_entry
  */
 
-/** tf_alloc_tcam_entry parameter definition
+
+/**
+ * tf_alloc_tcam_entry parameter definition
  */
 struct tf_alloc_tcam_entry_parms {
 	/**
@@ -799,9 +825,7 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested
-	 * 0: index from top i.e. highest priority first
-	 * !0: index from bottom i.e lowest priority first
+	 * [in] Priority of entry requested (definition TBD)
 	 */
 	uint32_t priority;
 	/**
@@ -819,7 +843,8 @@ struct tf_alloc_tcam_entry_parms {
 	uint16_t idx;
 };
 
-/** allocate TCAM entry
+/**
+ * allocate TCAM entry
  *
  * Allocate a TCAM entry - one of these types:
  *
@@ -844,7 +869,8 @@ struct tf_alloc_tcam_entry_parms {
 int tf_alloc_tcam_entry(struct tf *tfp,
 			struct tf_alloc_tcam_entry_parms *parms);
 
-/** tf_set_tcam_entry parameter definition
+/**
+ * tf_set_tcam_entry parameter definition
  */
 struct	tf_set_tcam_entry_parms {
 	/**
@@ -881,7 +907,8 @@ struct	tf_set_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** set TCAM entry
+/**
+ * set TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -892,7 +919,8 @@ struct	tf_set_tcam_entry_parms {
 int tf_set_tcam_entry(struct tf	*tfp,
 		      struct tf_set_tcam_entry_parms *parms);
 
-/** tf_get_tcam_entry parameter definition
+/**
+ * tf_get_tcam_entry parameter definition
  */
 struct tf_get_tcam_entry_parms {
 	/**
@@ -929,7 +957,7 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/*
+/**
  * get TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
@@ -941,7 +969,7 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/*
+/**
  * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
@@ -963,7 +991,9 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/*
+/**
+ * free TCAM entry
+ *
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -989,6 +1019,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_get_tbl_entry
  */
 
+
 /**
  * tf_alloc_tbl_entry parameter definition
  */
@@ -1201,9 +1232,9 @@ int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
 /**
- * tf_get_bulk_tbl_entry parameter definition
+ * tf_bulk_get_tbl_entry parameter definition
  */
-struct tf_get_bulk_tbl_entry_parms {
+struct tf_bulk_get_tbl_entry_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -1212,11 +1243,6 @@ struct tf_get_bulk_tbl_entry_parms {
 	 * [in] Type of object to get
 	 */
 	enum tf_tbl_type type;
-	/**
-	 * [in] Clear hardware entries on reads only
-	 * supported for TF_TBL_TYPE_ACT_STATS_64
-	 */
-	bool clear_on_read;
 	/**
 	 * [in] Starting index to read from
 	 */
@@ -1250,8 +1276,8 @@ struct tf_get_bulk_tbl_entry_parms {
  * Returns success or failure code. Failure will be returned if the
  * provided data buffer is too small for the data type requested.
  */
-int tf_get_bulk_tbl_entry(struct tf *tfp,
-		     struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_bulk_get_tbl_entry(struct tf *tfp,
+		     struct tf_bulk_get_tbl_entry_parms *parms);
 
 /**
  * @page exact_match Exact Match Table
@@ -1280,7 +1306,7 @@ struct tf_insert_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1332,12 +1358,12 @@ struct tf_delete_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
 	 * [in] epoch group IDs of entry to delete
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * 2 element array with 2 ids. (SR2 only)
 	 */
 	uint16_t *epochs;
 	/**
@@ -1366,7 +1392,7 @@ struct tf_search_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1387,7 +1413,7 @@ struct tf_search_em_entry_parms {
 	uint16_t em_record_sz_in_bits;
 	/**
 	 * [in] epoch group IDs of entry to lookup
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * 2 element array with 2 ids. (SR2 only)
 	 */
 	uint16_t *epochs;
 	/**
@@ -1415,7 +1441,7 @@ struct tf_search_em_entry_parms {
  * specified direction and table scope.
  *
  * When inserting an entry into an exact match table, the TruFlow library may
- * need to allocate a dynamic bucket for the entry (Brd4 only).
+ * need to allocate a dynamic bucket for the entry (SR2 only).
  *
  * The insertion of duplicate entries in an EM table is not permitted.	If a
  * TruFlow application can guarantee that it will never insert duplicates, it
@@ -1490,4 +1516,5 @@ int tf_delete_em_entry(struct tf *tfp,
  */
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
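
A hypothetical usage sketch for the renamed bulk read API (not part of the
patch): only the fields visible in this header change are populated, and
the dropped clear_on_read flag means counters are no longer cleared as a
side effect of the read. The table type chosen here is an assumption for
illustration; a session handle tfp is assumed to be open already.

	struct tf_bulk_get_tbl_entry_parms bparms = { 0 };
	int rc;

	bparms.dir = TF_DIR_RX;
	bparms.type = TF_TBL_TYPE_ACT_STATS_64;	/* assumed counter table */
	bparms.starting_idx = 0;
	bparms.num_entries = 16;

	rc = tf_bulk_get_tbl_entry(tfp, &bparms);
	if (rc)
		return rc;	/* e.g. caller-provided buffer too small */
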
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 6aeb6fedb..1501b20d9 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -366,6 +366,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_tcam)(struct tf *tfp,
 			       struct tf_tcam_get_parms *parms);
+
+	/**
+	 * Insert EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_em_entry)(struct tf *tfp,
+				      struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM delete parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_em_entry)(struct tf *tfp,
+				      struct tf_delete_em_entry_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c235976fe..f4bd95f1c 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
+#include "tf_em.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -89,4 +90,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_insert_em_entry = tf_em_insert_entry,
+	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
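
As a reading aid for the two new hooks (a sketch under assumptions, not the
driver's dispatch code): callers are expected to reach the EM insert and
delete implementations through the per-device ops table, roughly as below.
The helper name and the direct use of tf_dev_ops_p4 are illustrative only;
in the driver the ops pointer would come from the session/device lookup.

static int em_insert_via_dev(struct tf *tfp,
			     struct tf_insert_em_entry_parms *parms)
{
	/* Wh+ (P4) binding shown above; other devices supply their own */
	const struct tf_dev_ops *ops = &tf_dev_ops_p4;

	if (ops->tf_dev_insert_em_entry == NULL)
		return -EOPNOTSUPP;

	return ops->tf_dev_insert_em_entry(tfp, parms);
}
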
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index 38f7fe419..fcbbd7eca 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -17,11 +17,6 @@
 
 #include "bnxt.h"
 
-/* Enable EEM table dump
- */
-#define TF_EEM_DUMP
-
-static struct tf_eem_64b_entry zero_key_entry;
 
 static uint32_t tf_em_get_key_mask(int num_entries)
 {
@@ -36,326 +31,22 @@ static uint32_t tf_em_get_key_mask(int num_entries)
 	return mask;
 }
 
-/* CRC32i support for Key0 hash */
-#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
-#define crc32(x, y) crc32i(~0, x, y)
-
-static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
-0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
-0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
-0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
-0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
-0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
-0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
-0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
-0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
-0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
-0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
-0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
-0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
-0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
-0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
-0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
-0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
-0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
-0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
-0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
-0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
-0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
-0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
-0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
-0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
-0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
-0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
-0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
-0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
-0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
-0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
-0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
-0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
-0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
-0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
-0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
-0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
-0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
-0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
-0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
-0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
-0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
-0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
-0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
-0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
-0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
-0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
-0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
-0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
-0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
-0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
-0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
-0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
-0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
-0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
-0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
-0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
-0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
-0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
-0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
-0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
-0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
-0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
-0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
-0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
-};
-
-static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
-{
-	int l;
-
-	for (l = (len - 1); l >= 0; l--)
-		crc = ucrc32(buf[l], crc);
-
-	return ~crc;
-}
-
-static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
-					  uint8_t *key,
-					  enum tf_dir dir)
-{
-	int i;
-	uint32_t index;
-	uint32_t val1, val2;
-	uint8_t temp[4];
-	uint8_t *kptr = key;
-
-	/* Do byte-wise XOR of the 52-byte HASH key first. */
-	index = *key;
-	kptr--;
-
-	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
-		index = index ^ *kptr;
-		kptr--;
-	}
-
-	/* Get seeds */
-	val1 = session->lkup_em_seed_mem[dir][index * 2];
-	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
-
-	temp[3] = (uint8_t)(val1 >> 24);
-	temp[2] = (uint8_t)(val1 >> 16);
-	temp[1] = (uint8_t)(val1 >> 8);
-	temp[0] = (uint8_t)(val1 & 0xff);
-	val1 = 0;
-
-	/* Start with seed */
-	if (!(val2 & 0x1))
-		val1 = crc32i(~val1, temp, 4);
-
-	val1 = crc32i(~val1,
-		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
-		      TF_HW_EM_KEY_MAX_SIZE);
-
-	/* End with seed */
-	if (val2 & 0x1)
-		val1 = crc32i(~val1, temp, 4);
-
-	return val1;
-}
-
-static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
-					    uint8_t *in_key)
-{
-	uint32_t val1;
-
-	val1 = hashword(((uint32_t *)in_key) + 1,
-			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
-			 lookup3_init_value);
-
-	return val1;
-}
-
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum tf_em_table_type table_type)
-{
-	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
-	void *addr = NULL;
-	struct tf_em_ctx_mem_info *ctx = &tbl_scope_cb->em_ctx_info[dir];
-
-	if (ctx == NULL)
-		return NULL;
-
-	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
-		return NULL;
-
-	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
-		return NULL;
-
-	/*
-	 * Use the level according to the num_level of page table
-	 */
-	level = ctx->em_tables[table_type].num_lvl - 1;
-
-	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
-
-	return addr;
-}
-
-/** Read Key table entry
- *
- * Entry is read in to entry
- */
-static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
-	return 0;
-}
-
-static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
-
-	return 0;
-}
-
-static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_eem_64b_entry *entry,
-			       uint32_t index,
-			       enum tf_em_table_type table_type,
-			       enum tf_dir dir)
-{
-	int rc;
-	struct tf_eem_64b_entry table_entry;
-
-	rc = tf_em_read_entry(tbl_scope_cb,
-			      &table_entry,
-			      TF_EM_KEY_RECORD_SIZE,
-			      index,
-			      table_type,
-			      dir);
-
-	if (rc != 0)
-		return -EINVAL;
-
-	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
-		if (entry != NULL) {
-			if (memcmp(&table_entry,
-				   entry,
-				   TF_EM_KEY_RECORD_SIZE) == 0)
-				return -EEXIST;
-		} else {
-			return -EEXIST;
-		}
-
-		return -EBUSY;
-	}
-
-	return 0;
-}
-
-static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t *in_key,
-				    struct tf_eem_64b_entry *key_entry)
+static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+				   uint8_t	       *in_key,
+				   struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
 
-	if (result->word1 & TF_LKUP_RECORD_ACT_REC_INT_MASK)
+	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
 		key_entry->hdr.pointer = result->pointer;
 	else
 		key_entry->hdr.pointer = result->pointer;
 
 	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-}
-
-/* tf_em_select_inject_table
- *
- * Returns:
- * 0 - Key does not exist in either table and can be inserted
- *		at "index" in table "table".
- * EEXIST  - Key does exist in table at "index" in table "table".
- * TF_ERR     - Something went horribly wrong.
- */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
-					  enum tf_dir dir,
-					  struct tf_eem_64b_entry *entry,
-					  uint32_t key0_hash,
-					  uint32_t key1_hash,
-					  uint32_t *index,
-					  enum tf_em_table_type *table)
-{
-	int key0_entry;
-	int key1_entry;
-
-	/*
-	 * Check KEY0 table.
-	 */
-	key0_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key0_hash,
-					 TF_KEY0_TABLE,
-					 dir);
 
-	/*
-	 * Check KEY1 table.
-	 */
-	key1_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key1_hash,
-					 TF_KEY1_TABLE,
-					 dir);
-
-	if (key0_entry == -EEXIST) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return -EEXIST;
-	} else if (key1_entry == -EEXIST) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return -EEXIST;
-	} else if (key0_entry == 0) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return 0;
-	} else if (key1_entry == 0) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return 0;
-	}
-
-	return -EINVAL;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
 }
 
 /** insert EEM entry API
@@ -368,20 +59,24 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms)
+static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			       struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
 	uint32_t	   key0_hash;
 	uint32_t	   key1_hash;
 	uint32_t	   key0_index;
 	uint32_t	   key1_index;
-	struct tf_eem_64b_entry key_entry;
+	struct cfa_p4_eem_64b_entry key_entry;
 	uint32_t	   index;
-	enum tf_em_table_type table_type;
+	enum hcapi_cfa_em_table_type table_type;
 	uint32_t	   gfid;
-	int		   num_of_entry;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
 
 	/* Get mask to use on hash */
 	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
@@ -389,72 +84,84 @@ int tf_insert_eem_entry(struct tf_session *session,
 	if (!mask)
 		return -EINVAL;
 
-	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
 
-	key0_hash = tf_em_lkup_get_crc32_hash(session,
-					      &parms->key[num_of_entry] - 1,
-					      parms->dir);
-	key0_index = key0_hash & mask;
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
 
-	key1_hash =
-	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
-				       parms->key);
+	key0_index = key0_hash & mask;
 	key1_index = key1_hash & mask;
 
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
 	/*
 	 * Use the "result" arg to populate all of the key entry then
 	 * store the byte swapped "raw" entry in a local copy ready
 	 * for insertion in to the table.
 	 */
-	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
 				((uint8_t *)parms->key),
 				&key_entry);
 
 	/*
-	 * Find which table to use
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
 	 */
-	if (tf_em_select_inject_table(tbl_scope_cb,
-				      parms->dir,
-				      &key_entry,
-				      key0_index,
-				      key1_index,
-				      &index,
-				      &table_type) == 0) {
-		if (table_type == TF_KEY0_TABLE) {
-			TF_SET_GFID(gfid,
-				    key0_index,
-				    TF_KEY0_TABLE);
-		} else {
-			TF_SET_GFID(gfid,
-				    key1_index,
-				    TF_KEY1_TABLE);
-		}
-
-		/*
-		 * Inject
-		 */
-		if (tf_em_write_entry(tbl_scope_cb,
-				      &key_entry,
-				      TF_EM_KEY_RECORD_SIZE,
-				      index,
-				      table_type,
-				      parms->dir) == 0) {
-			TF_SET_FLOW_ID(parms->flow_id,
-				       gfid,
-				       TF_GFID_TABLE_EXTERNAL,
-				       parms->dir);
-			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-						     0,
-						     0,
-						     0,
-						     index,
-						     0,
-						     table_type);
-			return 0;
-		}
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
 	}
 
-	return -EINVAL;
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
 }
 
 /**
@@ -463,8 +170,8 @@ int tf_insert_eem_entry(struct tf_session *session,
  *  returns:
  *     0 - Success
  */
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms)
+static int tf_insert_em_internal_entry(struct tf                       *tfp,
+				       struct tf_insert_em_entry_parms *parms)
 {
 	int       rc;
 	uint32_t  gfid;
@@ -494,7 +201,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	TFP_DRV_LOG(INFO,
+	PMD_DRV_LOG(ERR,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
@@ -527,8 +234,8 @@ int tf_insert_em_internal_entry(struct tf *tfp,
  * 0
  * -EINVAL
  */
-int tf_delete_em_internal_entry(struct tf *tfp,
-				struct tf_delete_em_entry_parms *parms)
+static int tf_delete_em_internal_entry(struct tf                       *tfp,
+				       struct tf_delete_em_entry_parms *parms)
 {
 	int rc;
 	struct tf_session *session =
@@ -558,46 +265,96 @@ int tf_delete_em_internal_entry(struct tf *tfp,
  *   0
  *   TF_NO_EM_MATCH - entry not found
  */
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms)
+static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_session	   *session;
-	struct tf_tbl_scope_cb	   *tbl_scope_cb;
-	enum tf_em_table_type hash_type;
+	enum hcapi_cfa_em_table_type hash_type;
 	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
 
-	if (parms == NULL)
+	if (parms->flow_handle == 0)
 		return -EINVAL;
 
-	session = (struct tf_session *)tfp->session->core_data;
-	if (session == NULL)
-		return -EINVAL;
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
 
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
+							  TF_KEY0_TABLE :
+							  TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc != 0)
+		return rc;
 
-	if (parms->flow_handle == 0)
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
+	}
 
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+	/* Process the EM entry per Table Scope type */
+	if (parms->mem == TF_MEM_EXTERNAL)
+		/* External EEM */
+		return tf_insert_eem_entry
+			(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp, parms);
 
-	if (tf_em_entry_exists(tbl_scope_cb,
-			       NULL,
-			       index,
-			       hash_type,
-			       parms->dir) == -EEXIST) {
-		tf_em_write_entry(tbl_scope_cb,
-				  &zero_key_entry,
-				  TF_EM_KEY_RECORD_SIZE,
-				  index,
-				  hash_type,
-				  parms->dir);
+	return -EINVAL;
+}
 
-		return 0;
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
 	}
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		return tf_delete_em_internal_entry(tfp, parms);
 
 	return -EINVAL;
 }
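
To recap the reworked EEM insert path above (a sketch, not the driver
code): hcapi_cfa_key_hash() now produces one 64-bit value whose upper and
lower halves serve as the KEY0 and KEY1 hashes, each masked down to a
bucket index, and the record offset within the backing page follows from
the fixed 64-byte record size. The mask literal below is an assumed value;
in the driver it comes from tf_em_get_key_mask() on the KEY0 table size.

	uint64_t big_hash = hcapi_cfa_key_hash((uint64_t *)key,
					       (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
	uint32_t mask = 0x3FFFF;	/* assumed: 256K-entry key table */
	uint32_t key0_index = (uint32_t)(big_hash >> 32) & mask;
	uint32_t key1_index = (uint32_t)(big_hash & 0xFFFFFFFF) & mask;

	/* byte offset of the candidate record inside its key page */
	uint32_t offset = (key0_index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
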
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index c1805df73..2262ae7cc 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,13 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+#define SUPPORT_CFA_HW_P4 1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+#define SUPPORT_CFA_HW_ALL 0
+
+#include "hcapi/hcapi_cfa_defs.h"
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -26,56 +33,15 @@
 #define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
 #define TF_EM_INTERNAL_ENTRY_MASK  0x3
 
-/** EEM Entry header
- *
- */
-struct tf_eem_entry_hdr {
-	uint32_t pointer;
-	uint32_t word1;  /*
-			  * The header is made up of two words,
-			  * this is the first word. This field has multiple
-			  * subfields, there is no suitable single name for
-			  * it so just going with word1.
-			  */
-#define TF_LKUP_RECORD_VALID_SHIFT 31
-#define TF_LKUP_RECORD_VALID_MASK 0x80000000
-#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
-#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
-#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
-#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
-#define TF_LKUP_RECORD_RESERVED_SHIFT 17
-#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
-#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
-#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
-#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
-#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
-#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
-#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
-#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
-#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
-};
-
-/** EEM Entry
- *  Each EEM entry is 512-bit (64-bytes)
- */
-struct tf_eem_64b_entry {
-	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
-	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
-};
-
 /** EM Entry
  *  Each EM entry is 512-bit (64-bytes) but ordered differently to
  *  EEM.
  */
 struct tf_em_64b_entry {
 	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
+	struct cfa_p4_eem_entry_hdr hdr;
 	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
 /**
@@ -127,22 +93,14 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
 					  uint32_t tbl_scope_id);
 
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms);
-
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms);
-
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms);
-
-int tf_delete_em_internal_entry(struct tf                       *tfp,
-				struct tf_delete_em_entry_parms *parms);
-
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
-			   enum tf_em_table_type table_type);
+			   enum hcapi_cfa_em_table_type table_type);
+
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
 
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
 #endif /* _TF_EM_H_ */
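
Two points worth keeping in mind when reading this header (a sketch, not
part of the patch): the SUPPORT_CFA_HW_P4 defines select the Whitney+
flavour of the HCAPI definitions before hcapi_cfa_defs.h is included, and
the internal EM record keeps the 8-byte header first while the external
EEM record keeps it last, with both formats staying at 64 bytes. A small
compile-time check, assuming a C11 toolchain, illustrates that invariant:

	#include <assert.h>

	/* 8-byte cfa_p4_eem_entry_hdr + 56-byte key == 64-byte record */
	static_assert(sizeof(struct tf_em_64b_entry) == TF_EM_KEY_RECORD_SIZE,
		      "EM record must remain 64 bytes");
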
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index e08a96f23..60274eb35 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -183,6 +183,10 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 	tfp_free(buf->va_addr);
 }
 
+/**
+ * NEW HWRM direct messages
+ */
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -1259,8 +1263,9 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
 	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
-		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.strength =
+		(em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
@@ -1436,22 +1441,20 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 }
 
 int
-tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *params)
+tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *params)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_bulk_input req = { 0 };
-	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_tbl_type_bulk_get_input req = { 0 };
+	struct tf_tbl_type_bulk_get_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	int data_size = 0;
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16((params->dir) |
-		((params->clear_on_read) ?
-		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.flags = tfp_cpu_to_le_16(params->dir);
 	req.type = tfp_cpu_to_le_32(params->type);
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
@@ -1462,7 +1465,7 @@ tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 	MSG_PREP(parms,
 		 TF_KONG_MB,
 		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 HWRM_TFT_TBL_TYPE_BULK_GET,
 		 req,
 		 resp);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 06f52ef00..1dad2b9fb 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -338,7 +338,7 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  * Returns:
  *  0 on Success else internal Truflow error
  */
-int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 9b7f5a069..b7b445102 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -23,29 +23,27 @@
 					    * IDs
 					    */
 #define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        256      /*  Number slices per row in WC
-					    * TCAM. A slices is a WC TCAM entry.
-					    */
+#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
 #define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
 #define TF_NUM_METER             1024      /* < Number of meter instances */
 #define TF_NUM_MIRROR               2      /* < Number of mirror instances */
 #define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
 
-/* Wh+/Brd2 specific HW resources */
+/* Wh+/SR specific HW resources */
 #define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
 					    * entries
 					    */
 
-/* Brd2/Brd4 specific HW resources */
+/* SR/SR2 specific HW resources */
 #define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
 
 
-/* Brd3, Brd4 common HW resources */
+/* Thor, SR2 common HW resources */
 #define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
 					    * templates
 					    */
 
-/* Brd4 specific HW resources */
+/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
 #define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
 #define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
@@ -149,10 +147,11 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-#define TF_RSVD_MIRROR_RX                         1
+/* Not yet supported fully in the infra */
+#define TF_RSVD_MIRROR_RX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         1
+#define TF_RSVD_MIRROR_TX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
@@ -501,13 +500,13 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_METER_INST,
 	TF_RESC_TYPE_HW_MIRROR,
 	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/Brd2 specific HW resources */
+	/* Wh+/SR specific HW resources */
 	TF_RESC_TYPE_HW_SP_TCAM,
-	/* Brd2/Brd4 specific HW resources */
+	/* SR/SR2 specific HW resources */
 	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Brd3, Brd4 common HW resources */
+	/* Thor, SR2 common HW resources */
 	TF_RESC_TYPE_HW_FKB,
-	/* Brd4 specific HW resources */
+	/* SR2 specific HW resources */
 	TF_RESC_TYPE_HW_TBL_SCOPE,
 	TF_RESC_TYPE_HW_EPOCH0,
 	TF_RESC_TYPE_HW_EPOCH1,
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 2264704d2..b6fe2f1ad 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -14,6 +14,7 @@
 #include "tf_resources.h"
 #include "tf_msg.h"
 #include "bnxt.h"
+#include "tfp.h"
 
 /**
  * Internal macro to perform HW resource allocation check between what
@@ -329,13 +330,13 @@ tf_rm_print_hw_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors HW\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_hw_2_str(i),
 				    hw_query->hw_query[i].max,
 				    tf_rm_rsvd_hw_value(dir, i));
@@ -359,13 +360,13 @@ tf_rm_print_sram_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_sram_2_str(i),
 				    sram_query->sram_query[i].max,
 				    tf_rm_rsvd_sram_value(dir, i));
@@ -1700,7 +1701,7 @@ tf_rm_hw_alloc_validate(enum tf_dir dir,
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed id:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1727,7 +1728,7 @@ tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed idx:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1820,19 +1821,22 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, HW QCAPS validation failed, "
+			"error_flag:0x%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
 		goto cleanup;
 	}
@@ -1845,9 +1849,10 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1857,15 +1862,17 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW Resource validation failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
 
@@ -1903,19 +1910,22 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed, "
+			"error_flag:%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
 		goto cleanup;
 	}
@@ -1931,9 +1941,10 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 					    sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1943,15 +1954,18 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed,"
+			    " rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
 
@@ -2177,7 +2191,7 @@ tf_rm_hw_to_flush(struct tf_session *tfs,
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
 	} else {
-		PMD_DRV_LOG(ERR, "%s: TBL_SCOPE free_cnt:%d, entries:%d\n",
+		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
 			    tf_dir_2_str(dir),
 			    free_cnt,
 			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
@@ -2538,8 +2552,8 @@ tf_rm_log_hw_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_hw_2_str(i));
 	}
@@ -2564,8 +2578,8 @@ tf_rm_log_sram_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_sram_2_str(i));
 	}
@@ -2777,9 +2791,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering HW resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering HW resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
@@ -2789,9 +2804,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, HW flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2805,9 +2821,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering SRAM resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
@@ -2818,9 +2835,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, HW flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2828,18 +2846,20 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, HW free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, HW free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 
 		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, SRAM free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, SRAM free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 	}
 
@@ -2890,14 +2910,14 @@ tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Tcam type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "%s:, Tcam type lookup failed, type:%d\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type lookup failed, type:%d\n",
 			    tf_dir_2_str(dir),
 			    type);
 		return rc;
@@ -3057,15 +3077,15 @@ tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type lookup failed, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type lookup failed, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	}
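
A short note on the logging convention adopted throughout this file (an
illustrative fragment, not taken from the patch): TruFlow return codes
follow the negative-errno convention, which is why every message passes
the negated code to strerror().

	int rc = -EINVAL;

	TFP_DRV_LOG(ERR, "%s, HW alloc message send failed, rc:%s\n",
		    tf_dir_2_str(TF_DIR_RX),
		    strerror(-rc));	/* prints "Invalid argument" */
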
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 35a7cfab5..a68335304 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "stack.h"
 #include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
@@ -53,14 +54,14 @@
  *   Pointer to the page table to free
  */
 static void
-tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
 {
 	uint32_t i;
 
 	for (i = 0; i < tp->pg_count; i++) {
 		if (!tp->pg_va_tbl[i]) {
-			PMD_DRV_LOG(WARNING,
-				    "No map for page %d table %016" PRIu64 "\n",
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
 				    i,
 				    (uint64_t)(uintptr_t)tp);
 			continue;
@@ -84,15 +85,14 @@ tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
  *   Pointer to the EM table to free
  */
 static void
-tf_em_free_page_table(struct tf_em_table *tbl)
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int i;
 
 	for (i = 0; i < tbl->num_lvl; i++) {
 		tp = &tbl->pg_tbl[i];
-
-		PMD_DRV_LOG(INFO,
+		TFP_DRV_LOG(INFO,
 			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
 			   TF_EM_PAGE_SIZE,
 			    i,
@@ -124,7 +124,7 @@ tf_em_free_page_table(struct tf_em_table *tbl)
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
 		   uint32_t pg_count,
 		   uint32_t pg_size)
 {
@@ -183,9 +183,9 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_page_table(struct tf_em_table *tbl)
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int rc = 0;
 	int i;
 	uint32_t j;
@@ -197,14 +197,15 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
 					tbl->page_cnt[i],
 					TF_EM_PAGE_SIZE);
 		if (rc) {
-			PMD_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d\n",
-				i);
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
 			goto cleanup;
 		}
 
 		for (j = 0; j < tp->pg_count; j++) {
-			PMD_DRV_LOG(INFO,
+			TFP_DRV_LOG(INFO,
 				"EEM: Allocated page table: size %u lvl %d cnt"
 				" %u VA:%p PA:%p\n",
 				TF_EM_PAGE_SIZE,
@@ -234,8 +235,8 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
  *   Flag controlling if the page table is last
  */
 static void
-tf_em_link_page_table(struct tf_em_page_tbl *tp,
-		      struct tf_em_page_tbl *tp_next,
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
 		      bool set_pte_last)
 {
 	uint64_t *pg_pa = tp_next->pg_pa_tbl;
@@ -270,10 +271,10 @@ tf_em_link_page_table(struct tf_em_page_tbl *tp,
  *   Pointer to EM page table
  */
 static void
-tf_em_setup_page_table(struct tf_em_table *tbl)
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
 	bool set_pte_last = 0;
 	int i;
 
@@ -415,7 +416,7 @@ tf_em_size_page_tbls(int max_lvl,
  *   - ENOMEM - Out of memory
  */
 static int
-tf_em_size_table(struct tf_em_table *tbl)
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
 {
 	uint64_t num_data_pages;
 	uint32_t *page_cnt;
@@ -456,11 +457,10 @@ tf_em_size_table(struct tf_em_table *tbl)
 					  tbl->num_entries,
 					  &num_data_pages);
 	if (max_lvl < 0) {
-		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		PMD_DRV_LOG(WARNING,
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
 			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type,
-			    (uint64_t)num_entries * tbl->entry_size,
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
 			    TF_EM_PAGE_SIZE);
 		return -ENOMEM;
 	}
@@ -474,8 +474,8 @@ tf_em_size_table(struct tf_em_table *tbl)
 	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
 				page_cnt);
 
-	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
 		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
@@ -504,8 +504,9 @@ tf_em_ctx_unreg(struct tf *tfp,
 		struct tf_tbl_scope_cb *tbl_scope_cb,
 		int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 
 	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
@@ -539,8 +540,9 @@ tf_em_ctx_reg(struct tf *tfp,
 	      struct tf_tbl_scope_cb *tbl_scope_cb,
 	      int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int rc = 0;
 	int i;
 
@@ -601,7 +603,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 					TF_MEGABYTE) / (key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
 				    "%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
@@ -613,7 +615,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
 				    "%u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -625,7 +627,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx flows "
 				    "requested:%u max:%u\n",
 				    parms->rx_num_flows_in_k * TF_KILOBYTE,
@@ -642,7 +644,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx requested: %u\n",
 				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -658,7 +660,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			(key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Insufficient memory requested:%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
@@ -670,7 +672,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -682,7 +684,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx flows "
 				    "requested:%u max:%u\n",
 				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
@@ -696,7 +698,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -705,7 +707,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 	if (parms->rx_num_flows_in_k != 0 &&
 	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Rx key size required: %u\n",
 			    (parms->rx_max_key_sz_in_bits));
 		return -EINVAL;
@@ -713,7 +715,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 	if (parms->tx_num_flows_in_k != 0 &&
 	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Tx key size required: %u\n",
 			    (parms->tx_max_key_sz_in_bits));
 		return -EINVAL;
@@ -795,11 +797,10 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -817,9 +818,9 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -833,11 +834,11 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Set failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -891,9 +892,9 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -907,11 +908,11 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Get failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -932,8 +933,8 @@ tf_get_tbl_entry_internal(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 static int
-tf_get_bulk_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc;
 	int id;
@@ -975,7 +976,7 @@ tf_get_bulk_tbl_entry_internal(struct tf *tfp,
 	}
 
 	/* Get the entry */
-	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1006,10 +1007,9 @@ static int
 tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
 			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Alloc with search not supported\n",
-		    parms->dir);
-
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Alloc with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1032,9 +1032,9 @@ static int
 tf_free_tbl_entry_shadow(struct tf_session *tfs,
 			 struct tf_free_tbl_entry_parms *parms)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Free with search not supported\n",
-		    parms->dir);
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Free with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1074,8 +1074,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	parms.alignment = 0;
 
 	if (tfp_calloc(&parms) != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
-			    dir, strerror(-ENOMEM));
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
 		return -ENOMEM;
 	}
 
@@ -1084,8 +1084,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	rc = stack_init(num_entries, parms.mem_va, pool);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1101,13 +1101,13 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	for (i = 0; i < num_entries; i++) {
 		rc = stack_push(pool, j);
 		if (rc != 0) {
-			PMD_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
 				    tf_dir_2_str(dir), strerror(-rc));
 			goto cleanup;
 		}
 
 		if (j < 0) {
-			PMD_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
+			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
 				    dir, j);
 			goto cleanup;
 		}
@@ -1116,8 +1116,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 	return 0;
@@ -1168,18 +1168,7 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1188,9 +1177,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-					"%s, table scope not allocated\n",
-					tf_dir_2_str(parms->dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1200,9 +1189,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type);
 		return rc;
 	}
@@ -1233,18 +1222,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	struct bitalloc *session_pool;
 	struct tf_session *tfs;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1254,11 +1232,10 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1276,9 +1253,9 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	if (id == -1) {
 		free_cnt = ba_free_count(session_pool);
 
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d, free:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d, free:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   free_cnt);
 		return -ENOMEM;
@@ -1323,18 +1300,7 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1343,9 +1309,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1355,9 +1321,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_push(pool, index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 	}
@@ -1386,18 +1352,7 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	struct tf_session *tfs;
 	uint32_t index;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1408,9 +1363,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1439,9 +1394,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	/* Check if element was indeed allocated */
 	id = ba_inuse_free(session_pool, index);
 	if (id == -1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -ENOMEM;
@@ -1485,8 +1440,10 @@ tf_free_eem_tbl_scope_cb(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 parms->tbl_scope_id);
 
-	if (tbl_scope_cb == NULL)
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
 		return -EINVAL;
+	}
 
 	/* Free Table control block */
 	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
@@ -1516,23 +1473,17 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 	int rc;
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_table *em_tables;
+	struct hcapi_cfa_em_table *em_tables;
 	int index;
 	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 
-	/* check parameters */
-	if (parms == NULL || tfp->session == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
-
 	session = (struct tf_session *)tfp->session->core_data;
 
 	/* Get Table Scope control block from the session pool */
 	index = ba_alloc(session->tbl_scope_pool_rx);
 	if (index == -1) {
-		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
 			    "Control Block\n");
 		return -ENOMEM;
 	}
@@ -1547,8 +1498,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				     dir,
 				     &tbl_scope_cb->em_caps[dir]);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"EEM: Unable to query for EEM capability\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 	}
@@ -1565,8 +1518,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 */
 		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 
@@ -1580,8 +1535,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"TBL: Unable to configure EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1590,8 +1547,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
 
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1604,9 +1563,9 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				    em_tables[TF_RECORD_TABLE].num_entries,
 				    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "%d TBL: Unable to allocate idx pools %s\n",
-				    dir,
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup_full;
 		}
@@ -1634,13 +1593,12 @@ tf_set_tbl_entry(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct tf_session *session;
 
-	if (tfp == NULL || parms == NULL || parms->data == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 
@@ -1654,9 +1612,9 @@ tf_set_tbl_entry(struct tf *tfp,
 		tbl_scope_id = parms->tbl_scope_id;
 
 		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Table scope not allocated\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Table scope not allocated\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1665,18 +1623,21 @@ tf_set_tbl_entry(struct tf *tfp,
 		 */
 		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
 
-		if (tbl_scope_cb == NULL)
-			return -EINVAL;
+		if (tbl_scope_cb == NULL) {
+			TFP_DRV_LOG(ERR,
+				    "%s, table scope error\n",
+				    tf_dir_2_str(parms->dir));
+				return -EINVAL;
+		}
 
 		/* External table, implicitly the Action table */
-		base_addr = tf_em_get_table_page(tbl_scope_cb,
-						 parms->dir,
-						 offset,
-						 TF_RECORD_TABLE);
+		base_addr = (void *)(uintptr_t)
+		hcapi_get_table_page(&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE], offset);
+
 		if (base_addr == NULL) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Base address lookup failed\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Base address lookup failed\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1688,11 +1649,11 @@ tf_set_tbl_entry(struct tf *tfp,
 		/* Internal table type processing */
 		rc = tf_set_tbl_entry_internal(tfp, parms);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Set failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Set failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 		}
 	}
 
@@ -1706,31 +1667,24 @@ tf_get_tbl_entry(struct tf *tfp,
 {
 	int rc = 0;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	if (parms->type == TF_TBL_TYPE_EXT) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, External table type not supported\n",
-			    parms->dir);
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
 
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
 		rc = tf_get_tbl_entry_internal(tfp, parms);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Get failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 	}
 
 	return rc;
@@ -1738,8 +1692,8 @@ tf_get_tbl_entry(struct tf *tfp,
 
 /* API defined in tf_core.h */
 int
-tf_get_bulk_tbl_entry(struct tf *tfp,
-		 struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc = 0;
 
@@ -1754,7 +1708,7 @@ tf_get_bulk_tbl_entry(struct tf *tfp,
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
-		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
 		if (rc)
 			TFP_DRV_LOG(ERR,
 				    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1773,11 +1727,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	rc = tf_alloc_eem_tbl_scope(tfp, parms);
 
@@ -1791,11 +1741,7 @@ tf_free_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	/* free table scope and all associated resources */
 	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
@@ -1813,11 +1759,7 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	/*
 	 * No shadow copy support for external tables, allocate and return
 	 */
@@ -1827,13 +1769,6 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1849,9 +1784,9 @@ tf_alloc_tbl_entry(struct tf *tfp,
 
 	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 
 	return rc;
 }
@@ -1866,11 +1801,8 @@ tf_free_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
 	/*
 	 * No shadow of external tables so just free the entry
 	 */
@@ -1880,13 +1812,6 @@ tf_free_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1903,16 +1828,16 @@ tf_free_tbl_entry(struct tf *tfp,
 	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
 
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 	return rc;
 }
 
 
 static void
-tf_dump_link_page_table(struct tf_em_page_tbl *tp,
-			struct tf_em_page_tbl *tp_next)
+tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+			struct hcapi_cfa_em_page_tbl *tp_next)
 {
 	uint64_t *pg_va;
 	uint32_t i;
@@ -1951,9 +1876,9 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 {
 	struct tf_session      *session;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_page_tbl *tp;
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 	int j;
 	int dir;
@@ -1967,7 +1892,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		TFP_DRV_LOG(ERR, "No table scope\n");
+		PMD_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index d78e4fe41..2b7456faa 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -13,45 +13,6 @@
 
 struct tf_session;
 
-enum tf_pg_tbl_lvl {
-	TF_PT_LVL_0,
-	TF_PT_LVL_1,
-	TF_PT_LVL_2,
-	TF_PT_LVL_MAX
-};
-
-enum tf_em_table_type {
-	TF_KEY0_TABLE,
-	TF_KEY1_TABLE,
-	TF_RECORD_TABLE,
-	TF_EFC_TABLE,
-	TF_MAX_TABLE
-};
-
-struct tf_em_page_tbl {
-	uint32_t	pg_count;
-	uint32_t	pg_size;
-	void		**pg_va_tbl;
-	uint64_t	*pg_pa_tbl;
-};
-
-struct tf_em_table {
-	int				type;
-	uint32_t			num_entries;
-	uint16_t			ctx_id;
-	uint32_t			entry_size;
-	int				num_lvl;
-	uint32_t			page_cnt[TF_PT_LVL_MAX];
-	uint64_t			num_data_pages;
-	void				*l0_addr;
-	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
-};
-
-struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[TF_MAX_TABLE];
-};
-
 /** table scope control block content */
 struct tf_em_caps {
 	uint32_t flags;
@@ -74,18 +35,14 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps          em_caps[TF_DIR_MAX];
 	struct stack               ext_act_pool[TF_DIR_MAX];
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/**
- * Hardware Page sizes supported for EEM:
- *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- *
- * Round-down other page sizes to the lower hardware page
- * size supported.
+/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ * Round-down other page sizes to the lower hardware page size supported.
  */
 #define TF_EM_PAGE_SIZE_4K 12
 #define TF_EM_PAGE_SIZE_8K 13
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 17/51] net/bnxt: implement support for TCAM access
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (15 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 18/51] net/bnxt: multiple device implementation Ajit Khaparde
                     ` (34 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

Implement the TCAM alloc, free, bind, and unbind functions.
Update tf_core and tf_msg to route TCAM operations through the
per-device ops table.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
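[Reviewer note, not part of the patch: a minimal caller-side sketch of
the reworked TCAM path, for review context only. It uses just the public
tf_core.h API visible in the hunks below (tf_alloc_tcam_entry(),
tf_set_tcam_entry() and their parms structures); the helper name and the
RX/WC-TCAM choice are illustrative.]

#include <stdint.h>
#include "tf_core.h"

/* Illustrative only: allocate a WC TCAM row and program key/mask/result
 * at the returned index. Error handling is reduced to returning the
 * first failure.
 */
static int
example_wc_tcam_add(struct tf *tfp,
		    uint8_t *key, uint8_t *mask, uint8_t *result,
		    uint16_t key_sz_in_bits, uint16_t result_sz_in_bits)
{
	struct tf_alloc_tcam_entry_parms aparms = { 0 };
	struct tf_set_tcam_entry_parms sparms = { 0 };
	int rc;

	/* Allocation now resolves the session and device and invokes
	 * dev->ops->tf_dev_alloc_tcam() instead of touching the session
	 * pools directly.
	 */
	aparms.dir = TF_DIR_RX;
	aparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	aparms.key_sz_in_bits = key_sz_in_bits;
	rc = tf_alloc_tcam_entry(tfp, &aparms);
	if (rc)
		return rc;

	/* Program the entry at the slice-adjusted index returned above. */
	sparms.dir = TF_DIR_RX;
	sparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	sparms.idx = aparms.idx;
	sparms.key = key;
	sparms.mask = mask;
	sparms.key_sz_in_bits = key_sz_in_bits;
	sparms.result = result;
	sparms.result_sz_in_bits = result_sz_in_bits;
	return tf_set_tcam_entry(tfp, &sparms);
}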
 drivers/net/bnxt/tf_core/tf_core.c      | 258 ++++++++++-----------
 drivers/net/bnxt/tf_core/tf_device.h    |  14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |  25 ++-
 drivers/net/bnxt/tf_core/tf_msg.c       |  31 +--
 drivers/net/bnxt/tf_core/tf_msg.h       |   4 +-
 drivers/net/bnxt/tf_core/tf_tcam.c      | 285 +++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tcam.h      |  66 ++++--
 7 files changed, 480 insertions(+), 203 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 648d0d1bd..29522c66e 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,43 +19,6 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
-			       enum tf_device_type device,
-			       uint16_t key_sz_in_bits,
-			       uint16_t *num_slice_per_row)
-{
-	uint16_t key_bytes;
-	uint16_t slice_sz = 0;
-
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
-#define CFA_P4_WC_TCAM_SLICE_SIZE     12
-
-	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
-		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
-		if (device == TF_DEVICE_TYPE_WH) {
-			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
-			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
-		} else {
-			TFP_DRV_LOG(ERR,
-				    "Unsupported device type %d\n",
-				    device);
-			return -ENOTSUP;
-		}
-
-		if (key_bytes > *num_slice_per_row * slice_sz) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Key size %d is not supported\n",
-				    tf_tcam_tbl_2_str(tcam_tbl_type),
-				    key_bytes);
-			return -ENOTSUP;
-		}
-	} else { /* for other type of tcam */
-		*num_slice_per_row = 1;
-	}
-
-	return 0;
-}
-
 /**
  * Create EM Tbl pool of memory indexes.
  *
@@ -918,49 +881,56 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_alloc_parms aparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_alloc_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+	aparms.dir = parms->dir;
+	aparms.type = parms->tcam_tbl_type;
+	aparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	aparms.priority = parms->priority;
+	rc = dev->ops->tf_dev_alloc_tcam(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+			    strerror(-rc));
+		return rc;
 	}
 
-	index *= num_slice_per_row;
+	parms->idx = aparms.idx;
 
-	parms->idx = index;
 	return 0;
 }
 
@@ -969,55 +939,60 @@ tf_set_tcam_entry(struct tf *tfp,
 		  struct tf_set_tcam_entry_parms *parms)
 {
 	int rc;
-	int id;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_set_parms sparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_set_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	/* Verify that the entry has been previously allocated */
-	index = parms->idx / num_slice_per_row;
+	sparms.dir = parms->dir;
+	sparms.type = parms->tcam_tbl_type;
+	sparms.idx = parms->idx;
+	sparms.key = parms->key;
+	sparms.mask = parms->mask;
+	sparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	sparms.result = parms->result;
+	sparms.result_size = TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	rc = dev->ops->tf_dev_set_tcam(tfp, &sparms);
+	if (rc) {
 		TFP_DRV_LOG(ERR,
-		   "%s: %s: Invalid or not allocated index, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-		   parms->idx);
-		return -EINVAL;
+			    "%s: TCAM set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
-	rc = tf_msg_tcam_entry_set(tfp, parms);
-
-	return rc;
+	return 0;
 }
 
 int
@@ -1033,59 +1008,52 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row = 1;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_free_parms fparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 0,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = parms->idx / num_slice_per_row;
-
-	rc = ba_inuse(session_pool, index);
-	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+	if (dev->ops->tf_dev_free_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    index);
-		return -EINVAL;
+			    strerror(-rc));
+		return rc;
 	}
 
-	ba_free(session_pool, index);
-
-	rc = tf_msg_tcam_entry_free(tfp, parms);
+	fparms.dir = parms->dir;
+	fparms.type = parms->tcam_tbl_type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 1501b20d9..32d9a5442 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -116,8 +116,11 @@ struct tf_dev_ops {
 	 * [in] tfp
 	 *   Pointer to TF handle
 	 *
-	 * [out] slice_size
-	 *   Pointer to slice size the device supports
+	 * [in] type
+	 *   TCAM table type
+	 *
+	 * [in] key_sz
+	 *   Key size
 	 *
 	 * [out] num_slices_per_row
 	 *   Pointer to number of slices per row the device supports
@@ -126,9 +129,10 @@ struct tf_dev_ops {
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
-					 uint16_t *slice_size,
-					 uint16_t *num_slices_per_row);
+	int (*tf_dev_get_tcam_slice_info)(struct tf *tfp,
+					  enum tf_tcam_tbl_type type,
+					  uint16_t key_sz,
+					  uint16_t *num_slices_per_row);
 
 	/**
 	 * Allocation of an identifier element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index f4bd95f1c..77fb693dd 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -56,18 +56,21 @@ tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
  *   - (-EINVAL) on failure.
  */
 static int
-tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
-			     uint16_t *slice_size,
-			     uint16_t *num_slices_per_row)
+tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
+			      enum tf_tcam_tbl_type type,
+			      uint16_t key_sz,
+			      uint16_t *num_slices_per_row)
 {
-#define CFA_P4_WC_TCAM_SLICE_SIZE       12
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
 
-	if (slice_size == NULL || num_slices_per_row == NULL)
-		return -EINVAL;
-
-	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
-	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+	if (type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
+			return -ENOTSUP;
+	} else { /* for other type of tcam */
+		*num_slices_per_row = 1;
+	}
 
 	return 0;
 }
@@ -77,7 +80,7 @@ tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
  */
 const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
-	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 60274eb35..b50e1d48c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -9,6 +9,7 @@
 #include <string.h>
 
 #include "tf_msg_common.h"
+#include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
 #include "tf_session.h"
@@ -1480,27 +1481,19 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 int
 tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_set_tcam_entry_parms *parms)
+		      struct tf_tcam_set_parms *parms)
 {
 	int rc;
 	struct tfp_send_msg_parms mparms = { 0 };
 	struct hwrm_tf_tcam_set_input req = { 0 };
 	struct hwrm_tf_tcam_set_output resp = { 0 };
-	uint16_t key_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
-	uint16_t result_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
 	if (rc != 0)
 		return rc;
 
@@ -1508,11 +1501,11 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	if (parms->dir == TF_DIR_TX)
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	req.key_size = key_bytes;
-	req.mask_offset = key_bytes;
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
 	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * key_bytes;
-	req.result_size = result_bytes;
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
 	data_size = 2 * req.key_size + req.result_size;
 
 	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
@@ -1530,9 +1523,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 			   sizeof(buf.pa_addr));
 	}
 
-	tfp_memcpy(&data[0], parms->key, key_bytes);
-	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
-	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1554,7 +1547,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 int
 tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_free_tcam_entry_parms *in_parms)
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
 	struct hwrm_tf_tcam_free_input req =  { 0 };
@@ -1562,7 +1555,7 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
 	if (rc != 0)
 		return rc;
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1dad2b9fb..a3e0f7bba 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -247,7 +247,7 @@ int tf_msg_em_op(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_set(struct tf *tfp,
-			  struct tf_set_tcam_entry_parms *parms);
+			  struct tf_tcam_set_parms *parms);
 
 /**
  * Sends tcam entry 'free' to the Firmware.
@@ -262,7 +262,7 @@ int tf_msg_tcam_entry_set(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_free(struct tf *tfp,
-			   struct tf_free_tcam_entry_parms *parms);
+			   struct tf_tcam_free_parms *parms);
 
 /**
  * Sends Set message of a Table Type element to the firmware.
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 3ad99dd0d..b9dba5323 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -3,16 +3,24 @@
  * All rights reserved.
  */
 
+#include <string.h>
 #include <rte_common.h>
 
 #include "tf_tcam.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_rm_new.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_session.h"
+#include "tf_msg.h"
 
 struct tf;
 
 /**
  * TCAM DBs.
  */
-/* static void *tcam_db[TF_DIR_MAX]; */
+static void *tcam_db[TF_DIR_MAX];
 
 /**
  * TCAM Shadow DBs
@@ -22,7 +30,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -33,19 +41,131 @@ int
 tf_tcam_bind(struct tf *tfp __rte_unused,
 	     struct tf_tcam_cfg_parms *parms __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
+		db_cfg.rm_db = tcam_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: TCAM DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_tcam_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized done silent as to
+	 * allow for creation cleanup.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tcam_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tcam_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
-tf_tcam_alloc(struct tf *tfp __rte_unused,
-	      struct tf_tcam_alloc_parms *parms __rte_unused)
+tf_tcam_alloc(struct tf *tfp,
+	      struct tf_tcam_alloc_parms *parms)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Allocate requested element */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = (uint32_t *)&parms->idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed tcam, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	parms->idx *= num_slice_per_row;
+
 	return 0;
 }
 
@@ -53,6 +173,92 @@ int
 tf_tcam_free(struct tf *tfp __rte_unused,
 	     struct tf_tcam_free_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  0,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tcam_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx / num_slice_per_row;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	rc = tf_msg_tcam_entry_free(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
@@ -67,6 +273,77 @@ int
 tf_tcam_set(struct tf *tfp __rte_unused,
 	    struct tf_tcam_set_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry is not allocated, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	rc = tf_msg_tcam_entry_set(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 68c25eb1b..67c3bcb49 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -50,10 +50,18 @@ struct tf_tcam_alloc_parms {
 	 * [in] Type of the allocation
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint16_t idx;
 };
 
 /**
@@ -71,7 +79,7 @@ struct tf_tcam_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t idx;
+	uint16_t idx;
 	/**
 	 * [out] Reference count after free, only valid if session has been
 	 * created with shadow_copy.
@@ -90,7 +98,7 @@ struct tf_tcam_alloc_search_parms {
 	/**
 	 * [in] TCAM table type
 	 */
-	enum tf_tcam_tbl_type tcam_tbl_type;
+	enum tf_tcam_tbl_type type;
 	/**
 	 * [in] Enable search for matching entry
 	 */
@@ -100,9 +108,9 @@ struct tf_tcam_alloc_search_parms {
 	 */
 	uint8_t *key;
 	/**
-	 * [in] key size in bits (if search)
+	 * [in] key size (if search)
 	 */
-	uint16_t key_sz_in_bits;
+	uint16_t key_size;
 	/**
 	 * [in] Mask data to match on (if search)
 	 */
@@ -139,17 +147,29 @@ struct tf_tcam_set_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [in] Entry data
+	 * [in] Entry index to write to
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [in] Entry size
+	 * [in] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to write to
+	 * [in] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [in] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
@@ -165,17 +185,29 @@ struct tf_tcam_get_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [out] Entry data
+	 * [in] Entry index to read
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [out] Entry size
+	 * [out] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to read
+	 * [out] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [out] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [out] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [out] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 18/51] net/bnxt: multiple device implementation
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (16 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
                     ` (33 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

Implement the Identifier, Table Type and Resource Manager modules.
Integrate the Resource Manager with HCAPI.
Update session open/close accordingly.
Move to direct messages for the qcaps and resv (resource
reservation) exchanges.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
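[Reviewer note, not part of the patch: a short sketch of the simplified
session-open flow, using only tf_open_session() and the
tf_open_session_parms fields touched in this diff. The DBDF string and
helper name are placeholders, and ctrl_chan_name is assumed to be the
fixed-size array declared in tf_core.h.]

#include <stdio.h>
#include "tf_core.h"

/* Placeholder ctrl channel name; the PMD derives this from the
 * device's PCI address.
 */
#define EXAMPLE_CTRL_CHAN "0000:0d:00.0"

static int
example_open_session(struct tf *tfp)
{
	struct tf_open_session_parms oparms = { 0 };

	/* ctrl_chan_name is parsed as "%x:%x:%x.%d" to seed the
	 * session id before the FW open message is sent.
	 */
	snprintf(oparms.ctrl_chan_name, sizeof(oparms.ctrl_chan_name),
		 "%s", EXAMPLE_CTRL_CHAN);
	oparms.device_type = TF_DEVICE_TYPE_WH;

	/* On success the FW-assigned id is available in
	 * oparms.session_id.id for subsequent core calls.
	 */
	return tf_open_session(tfp, &oparms);
}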
 drivers/net/bnxt/tf_core/tf_core.c       | 751 ++++++++---------------
 drivers/net/bnxt/tf_core/tf_core.h       |  97 ++-
 drivers/net/bnxt/tf_core/tf_device.c     |  10 +-
 drivers/net/bnxt/tf_core/tf_device.h     |   1 +
 drivers/net/bnxt/tf_core/tf_device_p4.c  |  26 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |  29 +-
 drivers/net/bnxt/tf_core/tf_identifier.h |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  45 +-
 drivers/net/bnxt/tf_core/tf_msg.h        |   1 +
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 225 +++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  11 +-
 drivers/net/bnxt/tf_core/tf_session.c    |  28 +-
 drivers/net/bnxt/tf_core/tf_session.h    |   2 +-
 drivers/net/bnxt/tf_core/tf_tbl.c        | 611 +-----------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c   | 252 +++++++-
 drivers/net/bnxt/tf_core/tf_tbl_type.h   |   2 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |  12 +-
 drivers/net/bnxt/tf_core/tf_util.h       |  45 +-
 drivers/net/bnxt/tf_core/tfp.c           |   4 +-
 19 files changed, 880 insertions(+), 1276 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 29522c66e..3e23d0513 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,284 +19,15 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- * [in] num_entries
- *   number of entries to write
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i, j;
-	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
-			    strerror(-ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Fill pool with indexes
-	 */
-	j = num_entries - 1;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-		j--;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
-
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- *
- * Return:
- */
-static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
-{
-	struct stack *pool = &session->em_pool[dir];
-	uint32_t *ptr;
-
-	ptr = stack_items(pool);
-
-	tfp_free(ptr);
-}
-
 int
-tf_open_session(struct tf                    *tfp,
+tf_open_session(struct tf *tfp,
 		struct tf_open_session_parms *parms)
-{
-	int rc;
-	struct tf_session *session;
-	struct tfp_calloc_parms alloc_parms;
-	unsigned int domain, bus, slot, device;
-	uint8_t fw_session_id;
-	int dir;
-
-	TF_CHECK_PARMS(tfp, parms);
-
-	/* Filter out any non-supported device types on the Core
-	 * side. It is assumed that the Firmware will be supported if
-	 * firmware open session succeeds.
-	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH) {
-		TFP_DRV_LOG(ERR,
-			    "Unsupported device type %d\n",
-			    parms->device_type);
-		return -ENOTSUP;
-	}
-
-	/* Build the beginning of session_id */
-	rc = sscanf(parms->ctrl_chan_name,
-		    "%x:%x:%x.%d",
-		    &domain,
-		    &bus,
-		    &slot,
-		    &device);
-	if (rc != 4) {
-		TFP_DRV_LOG(ERR,
-			    "Failed to scan device ctrl_chan_name\n");
-		return -EINVAL;
-	}
-
-	/* open FW session and get a new session_id */
-	rc = tf_msg_session_open(tfp,
-				 parms->ctrl_chan_name,
-				 &fw_session_id);
-	if (rc) {
-		/* Log error */
-		if (rc == -EEXIST)
-			TFP_DRV_LOG(ERR,
-				    "Session is already open, rc:%s\n",
-				    strerror(-rc));
-		else
-			TFP_DRV_LOG(ERR,
-				    "Open message send failed, rc:%s\n",
-				    strerror(-rc));
-
-		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
-		return rc;
-	}
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session_info);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
-
-	/* Allocate core data for the session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session->core_data = alloc_parms.mem_va;
-
-	session = (struct tf_session *)tfp->session->core_data;
-	tfp_memcpy(session->ctrl_chan_name,
-		   parms->ctrl_chan_name,
-		   TF_SESSION_NAME_MAX);
-
-	/* Initialize Session */
-	session->dev = NULL;
-	tf_rm_init(tfp);
-
-	/* Construct the Session ID */
-	session->session_id.internal.domain = domain;
-	session->session_id.internal.bus = bus;
-	session->session_id.internal.device = device;
-	session->session_id.internal.fw_session_id = fw_session_id;
-
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Shadow DB configuration */
-	if (parms->shadow_copy) {
-		/* Ignore shadow_copy setting */
-		session->shadow_copy = 0;/* parms->shadow_copy; */
-#if (TF_SHADOW == 1)
-		rc = tf_rm_shadow_db_init(tfs);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%s",
-				    strerror(-rc));
-		/* Add additional processing */
-#endif /* TF_SHADOW */
-	}
-
-	/* Adjust the Session with what firmware allowed us to get */
-	rc = tf_rm_allocate_validate(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Rm allocate validate failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Initialize EM pool */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session,
-				       (enum tf_dir)dir,
-				       TF_SESSION_EM_POOL_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EM Pool initialization failed\n");
-			goto cleanup_close;
-		}
-	}
-
-	session->ref_count++;
-
-	/* Return session ID */
-	parms->session_id = session->session_id;
-
-	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    parms->session_id.internal.domain,
-		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
-
-	return 0;
-
- cleanup:
-	tfp_free(tfp->session->core_data);
-	tfp_free(tfp->session);
-	tfp->session = NULL;
-	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
-}
-
-int
-tf_open_session_new(struct tf *tfp,
-		    struct tf_open_session_parms *parms)
 {
 	int rc;
 	unsigned int domain, bus, slot, device;
 	struct tf_session_open_session_parms oparms;
 
-	TF_CHECK_PARMS(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
@@ -347,33 +78,8 @@ tf_open_session_new(struct tf *tfp,
 }
 
 int
-tf_attach_session(struct tf *tfp __rte_unused,
-		  struct tf_attach_session_parms *parms __rte_unused)
-{
-#if (TF_SHARED == 1)
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/* - Open the shared memory for the attach_chan_name
-	 * - Point to the shared session for this Device instance
-	 * - Check that session is valid
-	 * - Attach to the firmware so it can record there is more
-	 *   than one client of the session.
-	 */
-
-	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_attach(tfp,
-					   parms->ctrl_chan_name,
-					   parms->session_id);
-	}
-#endif /* TF_SHARED */
-	return -1;
-}
-
-int
-tf_attach_session_new(struct tf *tfp,
-		      struct tf_attach_session_parms *parms)
+tf_attach_session(struct tf *tfp,
+		  struct tf_attach_session_parms *parms)
 {
 	int rc;
 	unsigned int domain, bus, slot, device;
@@ -436,65 +142,6 @@ tf_attach_session_new(struct tf *tfp,
 
 int
 tf_close_session(struct tf *tfp)
-{
-	int rc;
-	int rc_close = 0;
-	struct tf_session *tfs;
-	union tf_session_id session_id;
-	int dir;
-
-	TF_CHECK_TFP_SESSION(tfp);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Cleanup if we're last user of the session */
-	if (tfs->ref_count == 1) {
-		/* Cleanup any outstanding resources */
-		rc_close = tf_rm_close(tfp);
-	}
-
-	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Message send failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		/* Update the ref_count */
-		tfs->ref_count--;
-	}
-
-	session_id = tfs->session_id;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Free EM pool */
-		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, (enum tf_dir)dir);
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
-
-	TFP_DRV_LOG(INFO,
-		    "Session closed, session_id:%d\n",
-		    session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    session_id.internal.domain,
-		    session_id.internal.bus,
-		    session_id.internal.device,
-		    session_id.internal.fw_session_id);
-
-	return rc_close;
-}
-
-int
-tf_close_session_new(struct tf *tfp)
 {
 	int rc;
 	struct tf_session_close_session_parms cparms = { 0 };
@@ -620,76 +267,9 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
-int tf_alloc_identifier(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	int id;
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	if (rc) {
-		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	id = ba_alloc(session_pool);
-
-	if (id == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		return -ENOMEM;
-	}
-	parms->id = id;
-	return 0;
-}
-
 int
-tf_alloc_identifier_new(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
+tf_alloc_identifier(struct tf *tfp,
+		    struct tf_alloc_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -732,7 +312,7 @@ tf_alloc_identifier_new(struct tf *tfp,
 	}
 
 	aparms.dir = parms->dir;
-	aparms.ident_type = parms->ident_type;
+	aparms.type = parms->ident_type;
 	aparms.id = &id;
 	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
 	if (rc) {
@@ -748,79 +328,9 @@ tf_alloc_identifier_new(struct tf *tfp,
 	return 0;
 }
 
-int tf_free_identifier(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	int rc;
-	int ba_rc;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: %s Identifier pool access failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	ba_rc = ba_inuse(session_pool, (int)parms->id);
-
-	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    parms->id);
-		return -EINVAL;
-	}
-
-	ba_free(session_pool, (int)parms->id);
-
-	return 0;
-}
-
 int
-tf_free_identifier_new(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
+tf_free_identifier(struct tf *tfp,
+		   struct tf_free_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -862,12 +372,12 @@ tf_free_identifier_new(struct tf *tfp,
 	}
 
 	fparms.dir = parms->dir;
-	fparms.ident_type = parms->ident_type;
+	fparms.type = parms->ident_type;
 	fparms.id = parms->id;
 	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Identifier allocation failed, rc:%s\n",
+			    "%s: Identifier free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
@@ -1057,3 +567,242 @@ tf_free_tcam_entry(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_alloc_parms aparms;
+	uint32_t idx;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_tbl_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.type = parms->type;
+	aparms.idx = &idx;
+	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_tbl_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.type = parms->type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_set_parms sparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&sparms, 0, sizeof(struct tf_tbl_set_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.data = parms->data;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_parms gparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&gparms, 0, sizeof(struct tf_tbl_get_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.data = parms->data;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_get_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index bb456bba7..a7a7bd38a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -384,34 +384,87 @@ struct tf {
 	struct tf_session_info *session;
 };
 
+/**
+ * Identifier resource definition
+ */
+struct tf_identifier_resources {
+	/**
+	 * Array of TF Identifiers where each entry is expected to be
+	 * set to the requested resource number of that specific type.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t cnt[TF_IDENT_TYPE_MAX];
+};
+
+/**
+ * Table type resource definition
+ */
+struct tf_tbl_resources {
+	/**
+	 * Array of TF Table types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tbl_type.
+	 */
+	uint16_t cnt[TF_TBL_TYPE_MAX];
+};
+
+/**
+ * TCAM type resource definition
+ */
+struct tf_tcam_resources {
+	/**
+	 * Array of TF TCAM types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t cnt[TF_TCAM_TBL_TYPE_MAX];
+};
+
+/**
+ * EM type resource definition
+ */
+struct tf_em_resources {
+	/**
+	 * Array of TF EM table types where each entry is expected to
+	 * be set to the requested resource number of that specific
+	 * type. The index used is tf_em_tbl_type.
+	 */
+	uint16_t cnt[TF_EM_TBL_TYPE_MAX];
+};
+
 /**
  * tf_session_resources parameter definition.
  */
 struct tf_session_resources {
-	/** [in] Requested Identifier Resources
+	/**
+	 * [in] Requested Identifier Resources
 	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
+	 * Number of identifier resources requested for the
+	 * session.
 	 */
-	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested Index Table resource counts
+	struct tf_identifier_resources ident_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested Index Table resource counts
 	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
+	 * The number of index table resources requested for the
+	 * session.
 	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested TCAM Table resource counts
+	struct tf_tbl_resources tbl_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested TCAM Table resource counts
 	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
+	 * The number of TCAM table resources requested for the
+	 * session.
 	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested EM resource counts
+
+	struct tf_tcam_resources tcam_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested EM resource counts
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * The number of internal EM table resources requested for the
+	 * session.
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+	struct tf_em_resources em_cnt[TF_DIR_MAX];
 };
 
 /**
@@ -497,9 +550,6 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
-int tf_open_session_new(struct tf *tfp,
-			struct tf_open_session_parms *parms);
-
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -565,8 +615,6 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
-int tf_attach_session_new(struct tf *tfp,
-			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -576,7 +624,6 @@ int tf_attach_session_new(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
-int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -631,8 +678,6 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
-int tf_alloc_identifier_new(struct tf *tfp,
-			    struct tf_alloc_identifier_parms *parms);
 
 /**
  * free identifier resource
@@ -645,8 +690,6 @@ int tf_alloc_identifier_new(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
-int tf_free_identifier_new(struct tf *tfp,
-			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
@@ -1277,7 +1320,7 @@ struct tf_bulk_get_tbl_entry_parms {
  * provided data buffer is too small for the data type requested.
  */
 int tf_bulk_get_tbl_entry(struct tf *tfp,
-		     struct tf_bulk_get_tbl_entry_parms *parms);
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 /**
  * @page exact_match Exact Match Table
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 4c46cadc6..b474e8c25 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -43,6 +43,14 @@ dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Initial function initialization */
+	dev_handle->ops = &tf_dev_ops_p4_init;
+
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Initial function initialization */
+	dev_handle->ops = &tf_dev_ops_p4_init;
+
 	/* Initialize the modules */
 
 	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
@@ -78,7 +86,7 @@ dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
-	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 32d9a5442..c31bf2357 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -407,6 +407,7 @@ struct tf_dev_ops {
 /**
  * Supported device operation structures
  */
+extern const struct tf_dev_ops tf_dev_ops_p4_init;
 extern const struct tf_dev_ops tf_dev_ops_p4;
 
 #endif /* _TF_DEVICE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 77fb693dd..9e332c594 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -75,6 +75,26 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 	return 0;
 }
 
+/**
+ * Truflow P4 device specific functions
+ */
+const struct tf_dev_ops tf_dev_ops_p4_init = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
+	.tf_dev_alloc_ident = NULL,
+	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_tbl = NULL,
+	.tf_dev_alloc_search_tbl = NULL,
+	.tf_dev_set_tbl = NULL,
+	.tf_dev_get_tbl = NULL,
+	.tf_dev_alloc_tcam = NULL,
+	.tf_dev_free_tcam = NULL,
+	.tf_dev_alloc_search_tcam = NULL,
+	.tf_dev_set_tcam = NULL,
+	.tf_dev_get_tcam = NULL,
+};
+
 /**
  * Truflow P4 device specific functions
  */
@@ -85,14 +105,14 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
-	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
-	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
-	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_get_tcam = NULL,
 	.tf_dev_insert_em_entry = tf_em_insert_entry,
 	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index e89f9768b..ee07a6aea 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -45,19 +45,22 @@ tf_ident_bind(struct tf *tfp,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
-		db_cfg.rm_db = ident_db[i];
+		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
+		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "%s: Identifier DB creation failed\n",
 				    tf_dir_2_str(i));
+
 			return rc;
 		}
 	}
 
 	init = 1;
 
+	printf("Identifier - initialized\n");
+
 	return 0;
 }
 
@@ -73,8 +76,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 	/* Bail if nothing has been initialized done silent as to
 	 * allow for creation cleanup.
 	 */
-	if (!init)
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Identifier DBs created\n");
 		return -EINVAL;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
@@ -96,6 +102,7 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 	       struct tf_ident_alloc_parms *parms)
 {
 	int rc;
+	uint32_t id;
 	struct tf_rm_allocate_parms aparms = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -109,17 +116,19 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 
 	/* Allocate requested element */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
-	aparms.index = (uint32_t *)&parms->id;
+	aparms.db_index = parms->type;
+	aparms.index = &id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Failed allocate, type:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type);
+			    parms->type);
 		return rc;
 	}
 
+	*parms->id = id;
+
 	return 0;
 }
 
@@ -143,7 +152,7 @@ tf_ident_free(struct tf *tfp __rte_unused,
 
 	/* Check if element is in use */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
+	aparms.db_index = parms->type;
 	aparms.index = parms->id;
 	aparms.allocated = &allocated;
 	rc = tf_rm_is_allocated(&aparms);
@@ -154,21 +163,21 @@ tf_ident_free(struct tf *tfp __rte_unused,
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
 
 	/* Free requested element */
 	fparms.rm_db = ident_db[parms->dir];
-	fparms.db_index = parms->ident_type;
+	fparms.db_index = parms->type;
 	fparms.index = parms->id;
 	rc = tf_rm_free(&fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Free failed, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index 1c5319b5e..6e36c525f 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -43,7 +43,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [out] Identifier allocated
 	 */
@@ -61,7 +61,7 @@ struct tf_ident_free_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [in] ID to free
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index b50e1d48c..a2e3840f0 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -12,6 +12,7 @@
 #include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
+#include "tf_common.h"
 #include "tf_session.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -935,13 +936,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	struct tf_rm_resc_req_entry *data;
 	int dma_size;
 
-	if (size == 0 || query == NULL || resv_strategy == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Resource QCAPS parameter error, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-EINVAL));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS3(tfp, query, resv_strategy);
 
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
@@ -962,7 +957,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.qcaps_size = size;
-	req.qcaps_addr = qcaps_buf.pa_addr;
+	req.qcaps_addr = tfp_cpu_to_le_64(qcaps_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
 	parms.req_data = (uint32_t *)&req;
@@ -980,18 +975,29 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: QCAPS message error, rc:%s\n",
+			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+
+	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_cpu_to_le_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
+
+		printf("type: %d(0x%x) %d %d\n",
+		       query[i].type,
+		       query[i].type,
+		       query[i].min,
+		       query[i].max);
+
 	}
 
 	*resv_strategy = resp.flags &
@@ -1021,6 +1027,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	struct tf_rm_resc_entry *resv_data;
 	int dma_size;
 
+	TF_CHECK_PARMS3(tfp, request, resv);
+
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1053,8 +1061,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
 	}
 
-	req.req_addr = req_buf.pa_addr;
-	req.resp_addr = resv_buf.pa_addr;
+	req.req_addr = tfp_cpu_to_le_64(req_buf.pa_addr);
+	req.resc_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
 	parms.req_data = (uint32_t *)&req;
@@ -1072,18 +1080,28 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Alloc message error, rc:%s\n",
+			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("\nRESV\n");
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
 		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
 		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
 		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+
+		printf("%d type: %d(0x%x) %d %d\n",
+		       i,
+		       resv[i].type,
+		       resv[i].type,
+		       resv[i].start,
+		       resv[i].stride);
 	}
 
 	tf_msg_free_dma_buf(&req_buf);
@@ -1460,7 +1478,8 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
 
-	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	data_size = params->num_entries * params->entry_sz_in_bytes;
+
 	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
 
 	MSG_PREP(parms,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index a3e0f7bba..fb635f6dc 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_rm_new.h"
+#include "tf_tcam.h"
 
 struct tf;
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 7cadb231f..6abf79aa1 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -10,6 +10,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_rm_new.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
 #include "tf_device.h"
@@ -65,6 +66,46 @@ struct tf_rm_new_db {
 	struct tf_rm_element *db;
 };
 
+/**
+ * Count the HCAPI-configured DB elements that carry a non-zero
+ * reservation request.
+ *
+ * Used to size the resource reservation request sent to firmware;
+ * non-HCAPI elements and elements with no request are skipped.
+ *
+ * [in] cfg
+ *   Pointer to the DB configuration
+ *
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
+ *
+ * [in] count
+ *   Number of DB configuration elements
+ *
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
+ *
+ * Returns:
+ *   Nothing (void); the number of qualifying entries is returned
+ *   through valid_count.
+ */
+static void
+tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
+{
+	int i;
+	uint16_t cnt = 0;
+
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
+	}
+
+	*valid_count = cnt;
+}
 
 /**
  * Resource Manager Adjust of base index definitions.
@@ -132,6 +173,7 @@ tf_rm_create_db(struct tf *tfp,
 {
 	int rc;
 	int i;
+	int j;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	uint16_t max_types;
@@ -143,6 +185,9 @@ tf_rm_create_db(struct tf *tfp,
 	struct tf_rm_new_db *rm_db;
 	struct tf_rm_element *db;
 	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -177,10 +222,19 @@ tf_rm_create_db(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/* Process capabilities against db requirements */
+	/* Process capabilities against DB requirements. The DB can
+	 * hold elements that are not HCAPI, so those are left out of
+	 * the request message while the DB still holds them all for
+	 * fast lookup. Elements for which no resources were requested
+	 * are also left out of the request.
+	 */
+	tf_rm_count_hcapi_reservations(parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
 
 	/* Alloc request, alignment already set */
-	cparms.nitems = parms->num_elements;
+	cparms.nitems = (size_t)hcapi_items;
 	cparms.size = sizeof(struct tf_rm_resc_req_entry);
 	rc = tfp_calloc(&cparms);
 	if (rc)
@@ -195,15 +249,24 @@ tf_rm_create_db(struct tf *tfp,
 	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
 
 	/* Build the request */
-	for (i = 0; i < parms->num_elements; i++) {
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
 		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			req[i].type = parms->cfg[i].hcapi_type;
-			/* Check that we can get the full amount allocated */
-			if (parms->alloc_num[i] <=
+			/* Only perform reservation for entries that
+			 * have been requested
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
 			    query[parms->cfg[i].hcapi_type].max) {
-				req[i].min = parms->alloc_num[i];
-				req[i].max = parms->alloc_num[i];
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
 			} else {
 				TFP_DRV_LOG(ERR,
 					    "%s: Resource failure, type:%d\n",
@@ -211,19 +274,16 @@ tf_rm_create_db(struct tf *tfp,
 					    parms->cfg[i].hcapi_type);
 				TFP_DRV_LOG(ERR,
 					"req:%d, avail:%d\n",
-					parms->alloc_num[i],
+					parms->alloc_cnt[i],
 					query[parms->cfg[i].hcapi_type].max);
 				return -EINVAL;
 			}
-		} else {
-			/* Skip the element */
-			req[i].type = CFA_RESOURCE_TYPE_INVALID;
 		}
 	}
 
 	rc = tf_msg_session_resc_alloc(tfp,
 				       parms->dir,
-				       parms->num_elements,
+				       hcapi_items,
 				       req,
 				       resv);
 	if (rc)
@@ -246,42 +306,74 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
 	db = rm_db->db;
-	for (i = 0; i < parms->num_elements; i++) {
-		/* If allocation failed for a single entry the DB
-		 * creation is considered a failure.
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+
+		/* Skip any non HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
+
+		/* If the element didn't request an allocation no need
+		 * to create a pool nor verify if we got a reservation.
 		 */
-		if (parms->alloc_num[i] != resv[i].stride) {
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element had requested an allocation and that
+		 * allocation was a success (full amount) then
+		 * allocate the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
 				    "%s: Alloc failed, type:%d\n",
 				    tf_dir_2_str(parms->dir),
-				    i);
+				    db[i].cfg_type);
 			TFP_DRV_LOG(ERR,
 				    "req:%d, alloc:%d\n",
-				    parms->alloc_num[i],
-				    resv[i].stride);
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
 			goto fail;
 		}
-
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-		db[i].alloc.entry.start = resv[i].start;
-		db[i].alloc.entry.stride = resv[i].stride;
-
-		/* Create pool */
-		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
-			     sizeof(struct bitalloc));
-		/* Alloc request, alignment already set */
-		cparms.nitems = pool_size;
-		cparms.size = sizeof(struct bitalloc);
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-		db[i].pool = (struct bitalloc *)cparms.mem_va;
 	}
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
-	parms->rm_db = (void *)rm_db;
+	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
@@ -307,13 +399,15 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 	int i;
 	struct tf_rm_new_db *rm_db;
 
+	TF_CHECK_PARMS1(parms);
+
 	/* Traverse the DB and clear each pool.
 	 * NOTE:
 	 *   Firmware is not cleared. It will be cleared on close only.
 	 */
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db->pool);
+		tfp_free((void *)rm_db->db[i].pool);
 
 	tfp_free((void *)parms->rm_db);
 
@@ -325,11 +419,11 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc = 0;
 	int id;
+	uint32_t index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -339,6 +433,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		TFP_DRV_LOG(ERR,
@@ -353,15 +458,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 				TF_RM_ADJUST_ADD_BASE,
 				parms->db_index,
 				id,
-				parms->index);
+				&index);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc adjust of base index failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -1;
+		return -EINVAL;
 	}
 
+	*parms->index = index;
+
 	return rc;
 }
 
@@ -373,8 +480,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -384,6 +490,17 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -409,8 +526,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -420,6 +536,17 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -442,8 +569,7 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -465,8 +591,7 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 6d8234ddc..ebf38c411 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -135,13 +135,16 @@ struct tf_rm_create_db_parms {
 	 */
 	struct tf_rm_element_cfg *cfg;
 	/**
-	 * Allocation number array. Array size is num_elements.
+	 * Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
 	 */
-	uint16_t *alloc_num;
+	uint16_t *alloc_cnt;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *rm_db;
+	void **rm_db;
 };
 
 /**
@@ -382,7 +385,7 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
  * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI type.
+ * requested HCAPI RM type.
  *
  * [in] parms
  *   Pointer to get hcapi parameters
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 1bf30c996..bac9c76af 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -95,21 +95,11 @@ tf_session_open_session(struct tf *tfp,
 		      parms->open_cfg->device_type,
 		      session->shadow_copy,
 		      &parms->open_cfg->resources,
-		      session->dev);
+		      &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
 
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
 	session->ref_count++;
 
 	return 0;
@@ -119,10 +109,6 @@ tf_session_open_session(struct tf *tfp,
 	tfp_free(tfp->session);
 	tfp->session = NULL;
 	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
 }
 
 int
@@ -231,17 +217,7 @@ int
 tf_session_get_device(struct tf_session *tfs,
 		      struct tf_dev_info **tfd)
 {
-	int rc;
-
-	if (tfs->dev == NULL) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR,
-			    "Device not created, rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	*tfd = tfs->dev;
+	*tfd = &tfs->dev;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 92792518b..705bb0955 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -97,7 +97,7 @@ struct tf_session {
 	uint8_t ref_count;
 
 	/** Device handle */
-	struct tf_dev_info *dev;
+	struct tf_dev_info dev;
 
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index a68335304..e594f0248 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -761,163 +761,6 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 	return 0;
 }
 
-/**
- * Internal function to set a Table Entry. Supports all internal Table Types
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_set_tbl_entry_internal(struct tf *tfp,
-			  struct tf_set_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
 /**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
@@ -1145,266 +988,6 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
-/**
- * Allocate External Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_external(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	uint32_t index;
-	struct tf_session *tfs;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	/* Allocate an element
-	 */
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type);
-		return rc;
-	}
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Allocate Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	int free_cnt;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	id = ba_alloc(session_pool);
-	if (id == -1) {
-		free_cnt = ba_free_count(session_pool);
-
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d, free:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   free_cnt);
-		return -ENOMEM;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_ADD_BASE,
-			    id,
-			    &index);
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Free External Tbl entry to the session pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry freed
- *
- * - Failure, entry not successfully freed for these reasons
- *  -ENOMEM
- *  -EOPNOTSUPP
- *  -EINVAL
- */
-static int
-tf_free_tbl_entry_pool_external(struct tf *tfp,
-				struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_session *tfs;
-	uint32_t index;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	index = parms->idx;
-
-	rc = stack_push(pool, index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, consistency error, stack full, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-	}
-	return rc;
-}
-
-/**
- * Free Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_free_tbl_entry_pool_internal(struct tf *tfp,
-		       struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	int id;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	uint32_t index;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Check if element was indeed allocated */
-	id = ba_inuse_free(session_pool, index);
-	if (id == -1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -ENOMEM;
-	}
-
-	return rc;
-}
-
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
 tbl_scope_cb_find(struct tf_session *session,
@@ -1584,113 +1167,7 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 	return -EINVAL;
 }
 
-/* API defined in tf_core.h */
-int
-tf_set_tbl_entry(struct tf *tfp,
-		 struct tf_set_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		void *base_addr;
-		uint32_t offset = parms->idx;
-		uint32_t tbl_scope_id;
-
-		session = (struct tf_session *)(tfp->session->core_data);
-
-		tbl_scope_id = parms->tbl_scope_id;
-
-		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			TFP_DRV_LOG(ERR,
-				    "%s, Table scope not allocated\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		/* Get the table scope control block associated with the
-		 * external pool
-		 */
-		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
-
-		if (tbl_scope_cb == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, table scope error\n",
-				    tf_dir_2_str(parms->dir));
-				return -EINVAL;
-		}
-
-		/* External table, implicitly the Action table */
-		base_addr = (void *)(uintptr_t)
-		hcapi_get_table_page(&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE], offset);
-
-		if (base_addr == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Base address lookup failed\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		offset %= TF_EM_PAGE_SIZE;
-		rte_memcpy((char *)base_addr + offset,
-			   parms->data,
-			   parms->data_sz_in_bytes);
-	} else {
-		/* Internal table type processing */
-		rc = tf_set_tbl_entry_internal(tfp, parms);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Set failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-		}
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_get_tbl_entry(struct tf *tfp,
-		 struct tf_get_tbl_entry_parms *parms)
-{
-	int rc = 0;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
-		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
-			    tf_dir_2_str(parms->dir));
-
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_get_tbl_entry_internal(tfp, parms);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s, Get failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
+/* API defined in tf_core.h */
 int
 tf_bulk_get_tbl_entry(struct tf *tfp,
 		 struct tf_bulk_get_tbl_entry_parms *parms)
@@ -1749,92 +1226,6 @@ tf_free_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_entry(struct tf *tfp,
-		   struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	/*
-	 * No shadow copy support for external tables, allocate and return
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
-		/* Entry found and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_entry(struct tf *tfp,
-		  struct tf_free_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/*
-	 * No shadow of external tables so just free the entry
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_free_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_free_tbl_entry_shadow(tfs, parms);
-		/* Entry free'ed and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
-
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-	return rc;
-}
-
-
 static void
 tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
 			struct hcapi_cfa_em_page_tbl *tp_next)
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index b79706f97..51f8f0740 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -6,13 +6,18 @@
 #include <rte_common.h>
 
 #include "tf_tbl_type.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Table DBs.
  */
-/* static void *tbl_db[TF_DIR_MAX]; */
+static void *tbl_db[TF_DIR_MAX];
 
 /**
  * Table Shadow DBs
@@ -22,7 +27,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -30,29 +35,164 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_bind(struct tf *tfp __rte_unused,
-	    struct tf_tbl_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
 	return 0;
 }
 
 int
 tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; done silently to
+	 * allow for creation cleanup.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Table DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms __rte_unused)
+	     struct tf_tbl_alloc_parms *parms)
 {
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
 	return 0;
 }
 
 int
 tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms __rte_unused)
+	    struct tf_tbl_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
 	return 0;
 }
 
@@ -64,15 +204,107 @@ tf_tbl_alloc_search(struct tf *tfp __rte_unused,
 }
 
 int
-tf_tbl_set(struct tf *tfp __rte_unused,
-	   struct tf_tbl_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
 int
-tf_tbl_get(struct tf *tfp __rte_unused,
-	   struct tf_tbl_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index 11f2aa333..3474489a6 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -55,7 +55,7 @@ struct tf_tbl_alloc_parms {
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint32_t *idx;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b9dba5323..e0fac31f2 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -38,8 +38,8 @@ static uint8_t init;
 /* static uint8_t shadow_init; */
 
 int
-tf_tcam_bind(struct tf *tfp __rte_unused,
-	     struct tf_tcam_cfg_parms *parms __rte_unused)
+tf_tcam_bind(struct tf *tfp,
+	     struct tf_tcam_cfg_parms *parms)
 {
 	int rc;
 	int i;
@@ -59,8 +59,8 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
-		db_cfg.rm_db = tcam_db[i];
+		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
+		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
@@ -72,11 +72,13 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 
 	init = 1;
 
+	printf("TCAM - initialized\n");
+
 	return 0;
 }
 
 int
-tf_tcam_unbind(struct tf *tfp __rte_unused)
+tf_tcam_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index 4099629ea..ad8edaf30 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -10,32 +10,57 @@
 
 /**
  * Helper function converting direction to text string
+ *
+ * [in] dir
+ *   Receive or transmit direction identifier
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the direction
  */
-const char
-*tf_dir_2_str(enum tf_dir dir);
+const char *tf_dir_2_str(enum tf_dir dir);
 
 /**
  * Helper function converting identifier to text string
+ *
+ * [in] id_type
+ *   Identifier type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the identifier
  */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
+const char *tf_ident_2_str(enum tf_identifier_type id_type);
 
 /**
  * Helper function converting tcam type to text string
+ *
+ * [in] tcam_type
+ *   TCAM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the tcam
  */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+const char *tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
 
 /**
  * Helper function converting tbl type to text string
+ *
+ * [in] tbl_type
+ *   Table type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the table type
  */
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
 
 /**
  * Helper function converting em tbl type to text string
+ *
+ * [in] em_type
+ *   EM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the EM type
  */
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
 #endif /* _TF_UTIL_H_ */
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 3bce3ade1..69d1c9a1f 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -102,13 +102,13 @@ tfp_calloc(struct tfp_calloc_parms *parms)
 				    (parms->nitems * parms->size),
 				    parms->alignment);
 	if (parms->mem_va == NULL) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_va\n");
 		return -ENOMEM;
 	}
 
 	parms->mem_pa = (void *)((uintptr_t)rte_mem_virt2iova(parms->mem_va));
 	if (parms->mem_pa == (void *)((uintptr_t)RTE_BAD_IOVA)) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_pa\n");
 		return -ENOMEM;
 	}
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread
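
The reworked tf_tbl_type.c/.h in the patch above routes table allocation
through the new RM DBs and changes tf_tbl_alloc() to return the allocated
index through a pointer. A minimal caller sketch, assuming only the
structures and functions shown in the diff (tf_tbl_alloc_parms,
tf_tbl_set_parms, tf_tbl_alloc, tf_tbl_set); the wrapper function and the
action-record payload are hypothetical and error handling is abbreviated:

/* Hypothetical caller; assumes the tf_tbl_* definitions from the
 * patch above are in scope.
 */
static int example_alloc_and_set_act_rec(struct tf *tfp, enum tf_dir dir)
{
	struct tf_tbl_alloc_parms aparms = { 0 };
	struct tf_tbl_set_parms sparms = { 0 };
	uint8_t act_rec[16] = { 0 };	/* illustrative payload */
	uint32_t idx;
	int rc;

	/* tf_tbl_alloc() now writes the allocated index through a pointer */
	aparms.dir = dir;
	aparms.type = TF_TBL_TYPE_FULL_ACT_RECORD;
	aparms.idx = &idx;
	rc = tf_tbl_alloc(tfp, &aparms);
	if (rc)
		return rc;

	/* tf_tbl_set() verifies the index via tf_rm_is_allocated() before
	 * sending the set message to FW.
	 */
	sparms.dir = dir;
	sparms.type = TF_TBL_TYPE_FULL_ACT_RECORD;
	sparms.idx = idx;
	sparms.data = act_rec;
	sparms.data_sz_in_bytes = sizeof(act_rec);
	return tf_tbl_set(tfp, &sparms);
}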

* [dpdk-dev] [PATCH v2 19/51] net/bnxt: update identifier with remap support
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (17 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 18/51] net/bnxt: multiple device implementation Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
                     ` (32 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add the Identifier L2 CTXT Remap to the P4 device and update
  cfa_resource_types.h to add support for the new type (see the
  sketch below).
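
Because the new remap type is inserted ahead of the existing IDs, every
subsequent CFA_RESOURCE_TYPE_* value in cfa_resource_types.h shifts by
one, so consumers must use the symbolic names rather than raw numbers.
A minimal sketch of the resulting P4 identifier mapping, taken from the
tf_device_p4.h hunk below (the array is abbreviated and the _excerpt
name is illustrative only):

/* Abbreviated view of tf_ident_p4[] after this patch: the L2 CTXT
 * entry is now HCAPI controlled and maps to the new remap resource
 * type instead of being private with an invalid HCAPI type.
 */
struct tf_rm_element_cfg tf_ident_p4_excerpt[] = {
	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
	/* ... remaining identifier entries unchanged ... */
};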

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 110 ++++++++++--------
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   2 +-
 2 files changed, 60 insertions(+), 52 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 11e8892f4..058d8cc88 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -20,46 +20,48 @@
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_REMAP   0x1UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x2UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x3UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x4UL
 /* Wildcard TCAM Profile Id */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x5UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x6UL
 /* Meter Profile */
-#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x7UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+#define CFA_RESOURCE_TYPE_P59_METER           0x8UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x9UL
 /* Source Properties TCAM */
-#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0xaUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xbUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xcUL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
-#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
 /* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
 /* Link Aggrigation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
@@ -105,30 +107,32 @@
 #define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
 #define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
@@ -176,26 +180,28 @@
 #define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
 #define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
@@ -243,24 +249,26 @@
 #define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
 #define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 5cd02b298..235d81f96 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,7 +12,7 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 20/51] net/bnxt: update RM with residual checker
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (18 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
                     ` (31 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add a residual checker to the TF Host RM as well as new RM APIs. On
  close it scans the DB and checks for any remaining elements. If any
  are found they are logged and a FW message is sent so the FW can
  scrub those specific resource types (see the sketch after this
  list).
- Update the module bind to be aware of the module type, for each of
  the modules.
- Add additional type-to-string util functions.
- Fix the device naming to be in compliance with TF.
- Update the device unbind order to ensure TCAMs get flushed first.
- Update the close functionality such that the session gets closed
  after the device is unbound.
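
A condensed sketch of the close-time flow this adds, based on the
tf_rm_free_db() and tf_rm_check_residuals() changes in the diff below;
logging, allocation details and some error paths are omitted:

/* Condensed sketch of tf_rm_free_db() after this patch. */
static int rm_free_db_sketch(struct tf *tfp, struct tf_rm_free_db_parms *parms)
{
	struct tf_rm_new_db *rm_db = (struct tf_rm_new_db *)parms->rm_db;
	struct tf_rm_resc_entry *resv = NULL;
	uint16_t resv_size = 0;
	bool residuals_found = false;
	int i, rc;

	/* Scan every element pool for entries the client never freed */
	rc = tf_rm_check_residuals(rm_db, &resv_size, &resv, &residuals_found);
	if (rc)
		return rc;

	/* Ask FW to invalidate (flush) whatever is still outstanding */
	if (residuals_found) {
		rc = tf_msg_session_resc_flush(tfp, parms->dir, resv_size, resv);
		tfp_free((void *)resv);
	}

	/* Release the local pools and the DB itself regardless */
	for (i = 0; i < rm_db->num_entries; i++)
		tfp_free((void *)rm_db->db[i].pool);
	tfp_free((void *)parms->rm_db);

	return rc;
}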

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device.c     |  53 +++--
 drivers/net/bnxt/tf_core/tf_device.h     |  25 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   1 -
 drivers/net/bnxt/tf_core/tf_identifier.c |  10 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  67 +++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |   7 +
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 287 +++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  45 +++-
 drivers/net/bnxt/tf_core/tf_session.c    |  58 +++--
 drivers/net/bnxt/tf_core/tf_tbl_type.c   |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.h       |   4 +
 drivers/net/bnxt/tf_core/tf_util.c       |  55 ++++-
 drivers/net/bnxt/tf_core/tf_util.h       |  32 +++
 14 files changed, 561 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index b474e8c25..441d0c678 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -10,7 +10,7 @@
 struct tf;
 
 /* Forward declarations */
-static int dev_unbind_p4(struct tf *tfp);
+static int tf_dev_unbind_p4(struct tf *tfp);
 
 /**
  * Device specific bind function, WH+
@@ -32,10 +32,10 @@ static int dev_unbind_p4(struct tf *tfp);
  *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp,
-	    bool shadow_copy,
-	    struct tf_session_resources *resources,
-	    struct tf_dev_info *dev_handle)
+tf_dev_bind_p4(struct tf *tfp,
+	       bool shadow_copy,
+	       struct tf_session_resources *resources,
+	       struct tf_dev_info *dev_handle)
 {
 	int rc;
 	int frc;
@@ -93,7 +93,7 @@ dev_bind_p4(struct tf *tfp,
 
  fail:
 	/* Cleanup of already created modules */
-	frc = dev_unbind_p4(tfp);
+	frc = tf_dev_unbind_p4(tfp);
 	if (frc)
 		return frc;
 
@@ -111,7 +111,7 @@ dev_bind_p4(struct tf *tfp,
  *   - (-EINVAL) on failure.
  */
 static int
-dev_unbind_p4(struct tf *tfp)
+tf_dev_unbind_p4(struct tf *tfp)
 {
 	int rc = 0;
 	bool fail = false;
@@ -119,25 +119,28 @@ dev_unbind_p4(struct tf *tfp)
 	/* Unbind all the support modules. As this is only done on
 	 * close we only report errors as everything has to be cleaned
 	 * up regardless.
+	 *
+	 * In case of residuals TCAMs are cleaned up first as to
+	 * invalidate the pipeline in a clean manner.
 	 */
-	rc = tf_ident_unbind(tfp);
+	rc = tf_tcam_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Identifier\n");
+			    "Device unbind failed, TCAM\n");
 		fail = true;
 	}
 
-	rc = tf_tbl_unbind(tfp);
+	rc = tf_ident_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Table Type\n");
+			    "Device unbind failed, Identifier\n");
 		fail = true;
 	}
 
-	rc = tf_tcam_unbind(tfp);
+	rc = tf_tbl_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, TCAM\n");
+			    "Device unbind failed, Table Type\n");
 		fail = true;
 	}
 
@@ -148,18 +151,18 @@ dev_unbind_p4(struct tf *tfp)
 }
 
 int
-dev_bind(struct tf *tfp __rte_unused,
-	 enum tf_device_type type,
-	 bool shadow_copy,
-	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_handle)
+tf_dev_bind(struct tf *tfp __rte_unused,
+	    enum tf_device_type type,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_bind_p4(tfp,
-				   shadow_copy,
-				   resources,
-				   dev_handle);
+		return tf_dev_bind_p4(tfp,
+				      shadow_copy,
+				      resources,
+				      dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
@@ -168,12 +171,12 @@ dev_bind(struct tf *tfp __rte_unused,
 }
 
 int
-dev_unbind(struct tf *tfp,
-	   struct tf_dev_info *dev_handle)
+tf_dev_unbind(struct tf *tfp,
+	      struct tf_dev_info *dev_handle)
 {
 	switch (dev_handle->type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_unbind_p4(tfp);
+		return tf_dev_unbind_p4(tfp);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c31bf2357..c8feac55d 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -14,6 +14,17 @@
 struct tf;
 struct tf_session;
 
+/**
+ * Device module types
+ */
+enum tf_device_module_type {
+	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	TF_DEVICE_MODULE_TYPE_TABLE,
+	TF_DEVICE_MODULE_TYPE_TCAM,
+	TF_DEVICE_MODULE_TYPE_EM,
+	TF_DEVICE_MODULE_TYPE_MAX
+};
+
 /**
  * The Device module provides a general device template. A supported
  * device type should implement one or more of the listed function
@@ -60,11 +71,11 @@ struct tf_dev_info {
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_bind(struct tf *tfp,
-	     enum tf_device_type type,
-	     bool shadow_copy,
-	     struct tf_session_resources *resources,
-	     struct tf_dev_info *dev_handle);
+int tf_dev_bind(struct tf *tfp,
+		enum tf_device_type type,
+		bool shadow_copy,
+		struct tf_session_resources *resources,
+		struct tf_dev_info *dev_handle);
 
 /**
  * Device release handles cleanup of the device specific information.
@@ -80,8 +91,8 @@ int dev_bind(struct tf *tfp,
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_unbind(struct tf *tfp,
-	       struct tf_dev_info *dev_handle);
+int tf_dev_unbind(struct tf *tfp,
+		  struct tf_dev_info *dev_handle);
 
 /**
  * Truflow device specific function hooks structure
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 235d81f96..411e21637 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -77,5 +77,4 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
-
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index ee07a6aea..b197bb271 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -39,12 +39,12 @@ tf_ident_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_IDENTIFIER;
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
 		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
@@ -86,8 +86,10 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 		fparms.dir = i;
 		fparms.rm_db = ident_db[i];
 		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "rm free failed on unbind\n");
+		}
 
 		ident_db[i] = NULL;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index a2e3840f0..c015b0ce2 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1110,6 +1110,69 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	return rc;
 }
 
+int
+tf_msg_session_resc_flush(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_flush_input req = { 0 };
+	struct hwrm_tf_session_resc_flush_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	TF_CHECK_PARMS2(tfp, resv);
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.flush_size = size;
+
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv_data[i].type = tfp_cpu_to_le_32(resv[i].type);
+		resv_data[i].start = tfp_cpu_to_le_16(resv[i].start);
+		resv_data[i].stride = tfp_cpu_to_le_16(resv[i].stride);
+	}
+
+	req.flush_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_FLUSH;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1512,9 +1575,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
-	if (rc != 0)
-		return rc;
+	req.type = parms->type;
 
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index fb635f6dc..1ff1044e8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -181,6 +181,13 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 			      struct tf_rm_resc_req_entry *request,
 			      struct tf_rm_resc_entry *resv);
 
+/**
+ * Sends session resource flush request to TF Firmware
+ */
+int tf_msg_session_resc_flush(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_entry *resv);
 /**
  * Sends EM internal insert request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 6abf79aa1..02b4b5c8f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -60,6 +60,11 @@ struct tf_rm_new_db {
 	 */
 	enum tf_dir dir;
 
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+
 	/**
 	 * The DB consists of an array of elements
 	 */
@@ -167,6 +172,178 @@ tf_rm_adjust_index(struct tf_rm_element *db,
 	return rc;
 }
 
+/**
+ * Logs an array of found residual entries to the console.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] type
+ *   Type of Device Module
+ *
+ * [in] count
+ *   Number of entries in the residual array
+ *
+ * [in] residuals
+ *   Pointer to an array of residual entries. The array is indexed the
+ *   same as the DB in which this function is used. Each entry holds the
+ *   residual count for that entry.
+ */
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
+{
+	int i;
+
+	/* Walk the residual array and log any types that were not
+	 * cleaned up to the console.
+	 */
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
+	}
+}
+
+/**
+ * Performs a check of the passed in DB for any lingering elements. If
+ * a resource type was found to not have been cleaned up by the caller
+ * then its residual values are recorded, logged and passed back in an
+ * allocated reservation array that the caller can pass to the FW for
+ * cleanup.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
+ *
+ * [in/out] resv
+ *   Pointer to a pointer to a reservation array. The reservation array is
+ *   allocated after the residual scan and holds any found residual
+ *   entries. Thus it can be smaller than the DB that the check was
+ *   performed on. Array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating if residual was present in the
+ *   DB
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
+{
+	int rc;
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
+	}
+
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			return rc;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
+		}
+		*resv_size = found;
+	}
+
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
+
 int
 tf_rm_create_db(struct tf *tfp,
 		struct tf_rm_create_db_parms *parms)
@@ -373,6 +550,7 @@ tf_rm_create_db(struct tf *tfp,
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
@@ -392,20 +570,69 @@ tf_rm_create_db(struct tf *tfp,
 }
 
 int
-tf_rm_free_db(struct tf *tfp __rte_unused,
+tf_rm_free_db(struct tf *tfp,
 	      struct tf_rm_free_db_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
 	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
 
-	TF_CHECK_PARMS1(parms);
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	/* Traverse the DB and clear each pool.
-	 * NOTE:
-	 *   Firmware is not cleared. It will be cleared on close only.
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will clean up each of
+	 * its support modules, e.g. the Identifier, which is how we end
+	 * up here to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in the 'cleanup checking' the DB is checked for any
+	 * remaining elements and logged if found to be the case.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previous allocated elements to the RM and then close the
+	 * HCAPI RM registration. That then saves several 'free' msgs
+	 * from being required.
 	 */
+
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
+	 */
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to cleanup so we can only
+		 * log that FW failed.
+		 */
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
+	}
+
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -417,7 +644,7 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 int
 tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int id;
 	uint32_t index;
 	struct tf_rm_new_db *rm_db;
@@ -446,11 +673,12 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
+		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
 			    "%s: Allocation failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -ENOMEM;
+		return rc;
 	}
 
 	/* Adjust for any non zero start value */
@@ -475,7 +703,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 int
 tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -521,7 +749,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 int
 tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -565,7 +793,6 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 int
 tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -579,15 +806,16 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
-	parms->info = &rm_db->db[parms->db_index].alloc;
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
-	return rc;
+	return 0;
 }
 
 int
 tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -603,5 +831,36 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
 
+	return 0;
+}
+
+int
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
+{
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail silently (no logging); if the pool is not valid there
+	 * were no elements allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
+	}
+
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
 	return rc;
+
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index ebf38c411..a40296ed2 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -8,6 +8,7 @@
 
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
 
@@ -57,9 +58,9 @@ struct tf_rm_new_entry {
 enum tf_rm_elem_cfg_type {
 	/** No configuration */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled' */
+	/** HCAPI 'controlled', uses a Pool for internal storage */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled' */
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
 	TF_RM_ELEM_CFG_PRIVATE,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
@@ -123,7 +124,11 @@ struct tf_rm_alloc_info {
  */
 struct tf_rm_create_db_parms {
 	/**
-	 * [in] Receive or transmit direction
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
 	 */
 	enum tf_dir dir;
 	/**
@@ -263,6 +268,25 @@ struct tf_rm_get_hcapi_parms {
 	uint16_t *hcapi_type;
 };
 
+/**
+ * Get InUse count parameters for single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
 /**
  * @page rm Resource Manager
  *
@@ -279,6 +303,8 @@ struct tf_rm_get_hcapi_parms {
  * @ref tf_rm_get_info
  *
  * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
 
 /**
@@ -396,4 +422,17 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
+/**
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type inuse count.
+ *
+ * [in] parms
+ *   Pointer to get inuse parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
+
 #endif /* TF_RM_NEW_H_ */
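
A minimal usage sketch for the new in-use count query, using only the
parameter fields declared above (rm_db, db_index, count); the reporting
wrapper and the way the result is printed are illustrative only:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: walk a module's RM DB and report how many
 * entries of each element type are still allocated.
 */
static void tf_rm_report_inuse(void *rm_db, uint16_t num_elements)
{
	struct tf_rm_get_inuse_count_parms iparms = { 0 };
	uint16_t count = 0;
	uint16_t i;

	iparms.rm_db = rm_db;
	iparms.count = &count;

	for (i = 0; i < num_elements; i++) {
		iparms.db_index = i;
		/* -ENOTSUP means the element is not RM controlled; skip it */
		if (tf_rm_get_inuse_count(&iparms) == 0 && count != 0)
			printf("element %u: %u entries still in use\n",
			       (unsigned)i, (unsigned)count);
	}
}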
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index bac9c76af..ab9e05f2d 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -91,11 +91,11 @@ tf_session_open_session(struct tf *tfp,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
-	rc = dev_bind(tfp,
-		      parms->open_cfg->device_type,
-		      session->shadow_copy,
-		      &parms->open_cfg->resources,
-		      &session->dev);
+	rc = tf_dev_bind(tfp,
+			 parms->open_cfg->device_type,
+			 session->shadow_copy,
+			 &parms->open_cfg->resources,
+			 &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
@@ -151,6 +151,8 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	tfs->ref_count--;
+
 	/* Record the session we're closing so the caller knows the
 	 * details.
 	 */
@@ -164,6 +166,32 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	if (tfs->ref_count > 0) {
+		/* In case we're attached only the session client gets
+		 * closed.
+		 */
+		rc = tf_msg_session_close(tfp);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "FW Session close failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		return 0;
+	}
+
+	/* Final cleanup as we're last user of the session */
+
+	/* Unbind the device */
+	rc = tf_dev_unbind(tfp, tfd);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
 	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
@@ -173,23 +201,9 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	tfs->ref_count--;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Unbind the device */
-		rc = dev_unbind(tfp, tfd);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Device unbind failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
 
 	return 0;
 }
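
A condensed sketch of the revised close ordering from the tf_session.c
hunk above: the reference count is dropped first, an attached client only
closes its FW session client, and the last user unbinds the device (which
flushes any residuals) before the final FW session close. Error handling
and logging are omitted and the wrapper name is illustrative:

static int session_close_sketch(struct tf *tfp, struct tf_session *tfs,
				struct tf_dev_info *tfd)
{
	tfs->ref_count--;

	if (tfs->ref_count > 0) {
		/* Attached client: only the FW session client is closed */
		return tf_msg_session_close(tfp);
	}

	/* Last user: unbind the device (TCAM first, so residuals are
	 * flushed), then close the FW session and free local state.
	 */
	tf_dev_unbind(tfp, tfd);
	tf_msg_session_close(tfp);

	tfp_free(tfp->session->core_data);
	tfp_free(tfp->session);
	tfp->session = NULL;

	return 0;
}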
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index 51f8f0740..bdf7d2089 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -51,11 +51,12 @@ tf_tbl_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
 		db_cfg.rm_db = &tbl_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index e0fac31f2..2f4441de8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -54,11 +54,12 @@ tf_tcam_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
 		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 67c3bcb49..5090dfd9f 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -146,6 +146,10 @@ struct tf_tcam_set_parms {
 	 * [in] Type of object to set
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
 	/**
 	 * [in] Entry index to write to
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index a9010543d..16c43eb67 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -7,8 +7,8 @@
 
 #include "tf_util.h"
 
-const char
-*tf_dir_2_str(enum tf_dir dir)
+const char *
+tf_dir_2_str(enum tf_dir dir)
 {
 	switch (dir) {
 	case TF_DIR_RX:
@@ -20,8 +20,8 @@ const char
 	}
 }
 
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
+const char *
+tf_ident_2_str(enum tf_identifier_type id_type)
 {
 	switch (id_type) {
 	case TF_IDENT_TYPE_L2_CTXT:
@@ -39,8 +39,8 @@ const char
 	}
 }
 
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+const char *
+tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
 {
 	switch (tcam_type) {
 	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
@@ -60,8 +60,8 @@ const char
 	}
 }
 
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+const char *
+tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 {
 	switch (tbl_type) {
 	case TF_TBL_TYPE_FULL_ACT_RECORD:
@@ -131,8 +131,8 @@ const char
 	}
 }
 
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+const char *
+tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
 {
 	switch (em_type) {
 	case TF_EM_TBL_TYPE_EM_RECORD:
@@ -143,3 +143,38 @@ const char
 		return "Invalid EM type";
 	}
 }
+
+const char *
+tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
+				    uint16_t mod_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return tf_ident_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tcam_tbl_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return tf_em_tbl_type_2_str(mod_type);
+	default:
+		return "Invalid Device Module type";
+	}
+}
+
+const char *
+tf_device_module_type_2_str(enum tf_device_module_type dm_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return "Identifier";
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return "Table";
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return "TCAM";
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return "EM";
+	default:
+		return "Invalid Device Module type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index ad8edaf30..c97e2a66a 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -7,6 +7,7 @@
 #define _TF_UTIL_H_
 
 #include "tf_core.h"
+#include "tf_device.h"
 
 /**
  * Helper function converting direction to text string
@@ -63,4 +64,35 @@ const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
  */
 const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
+/**
+ * Helper function converting device module type and module type to
+ * text string.
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * [in] mod_type
+ *   Module specific type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the module subtype
+ */
+const char *tf_device_module_type_subtype_2_str
+					(enum tf_device_module_type dm_type,
+					 uint16_t mod_type);
+
+/**
+ * Helper function converting device module type to text string
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * [in] mod_type
+ *   Module specific type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the device module type
+ */
+const char *tf_device_module_type_2_str(enum tf_device_module_type dm_type);
+
 #endif /* _TF_UTIL_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 21/51] net/bnxt: support two level priority for TCAMs
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (19 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
                     ` (30 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

Allow TCAM indexes to be allocated from the top or the bottom
of the range. If the priority is set to 0, allocate from the
lowest TCAM indexes, i.e. from the top. For any other value,
allocate from the highest TCAM indexes, i.e. from the bottom.
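
A minimal sketch of how a caller might exercise the new priority field
(rm_db here is a placeholder for the RM database handle of an already
bound module, as tf_tcam_alloc() passes via tcam_db[dir]; headers are
indicative):

#include "tf_core.h"
#include "tf_rm_new.h"

static int example_alloc_from_bottom(void *rm_db)
{
	struct tf_rm_allocate_parms aparms = { 0 };
	uint32_t idx;

	aparms.rm_db = rm_db;
	aparms.db_index = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
	aparms.index = &idx;
	aparms.priority = 1;	/* !0: take the highest free index */

	/* With priority 0 the same call returns the lowest free index */
	return tf_rm_allocate(&aparms);
}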

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/bitalloc.c  | 107 +++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/bitalloc.h  |   5 ++
 drivers/net/bnxt/tf_core/tf_rm_new.c |   9 ++-
 drivers/net/bnxt/tf_core/tf_rm_new.h |   8 ++
 drivers/net/bnxt/tf_core/tf_tcam.c   |   1 +
 5 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
index fb4df9a19..918cabf19 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.c
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -7,6 +7,40 @@
 
 #define BITALLOC_MAX_LEVELS 6
 
+
+/* Finds the last bit set plus 1, equivalent to gcc __builtin_fls */
+static int
+ba_fls(bitalloc_word_t v)
+{
+	int c = 32;
+
+	if (!v)
+		return 0;
+
+	if (!(v & 0xFFFF0000u)) {
+		v <<= 16;
+		c -= 16;
+	}
+	if (!(v & 0xFF000000u)) {
+		v <<= 8;
+		c -= 8;
+	}
+	if (!(v & 0xF0000000u)) {
+		v <<= 4;
+		c -= 4;
+	}
+	if (!(v & 0xC0000000u)) {
+		v <<= 2;
+		c -= 2;
+	}
+	if (!(v & 0x80000000u)) {
+		v <<= 1;
+		c -= 1;
+	}
+
+	return c;
+}
+
 /* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */
 static int
 ba_ffs(bitalloc_word_t v)
@@ -120,6 +154,79 @@ ba_alloc(struct bitalloc *pool)
 	return ba_alloc_helper(pool, 0, 1, 32, 0, &clear);
 }
 
+/**
+ * Helper function to allocate an entry from the highest available index
+ *
+ * Searches the pool from the highest index for an empty entry.
+ *
+ * [in] pool
+ *   Pointer to the resource pool
+ *
+ * [in] offset
+ *   Offset of the storage in the pool
+ *
+ * [in] words
+ *   Number of words in this level
+ *
+ * [in] size
+ *   Number of entries in this level
+ *
+ * [in] index
+ *   Index of words that has the entry
+ *
+ * [in] clear
+ *   Indicates whether a bit needs to be cleared because the entry was allocated
+ *
+ * Returns:
+ *     0 - Success
+ *    -1 - Failure
+ */
+static int
+ba_alloc_reverse_helper(struct bitalloc *pool,
+			int offset,
+			int words,
+			unsigned int size,
+			int index,
+			int *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int loc = ba_fls(storage[index]);
+	int r;
+
+	if (loc == 0)
+		return -1;
+
+	loc--;
+
+	if (pool->size > size) {
+		r = ba_alloc_reverse_helper(pool,
+					    offset + words + 1,
+					    storage[words],
+					    size * 32,
+					    index * 32 + loc,
+					    clear);
+	} else {
+		r = index * 32 + loc;
+		*clear = 1;
+		pool->free_count--;
+	}
+
+	if (*clear) {
+		storage[index] &= ~(1 << loc);
+		*clear = (storage[index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc_reverse(struct bitalloc *pool)
+{
+	int clear = 0;
+
+	return ba_alloc_reverse_helper(pool, 0, 1, 32, 0, &clear);
+}
+
 static int
 ba_alloc_index_helper(struct bitalloc *pool,
 		      int              offset,
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
index 563c8531a..2825bb37e 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.h
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -72,6 +72,11 @@ int ba_init(struct bitalloc *pool, int size);
 int ba_alloc(struct bitalloc *pool);
 int ba_alloc_index(struct bitalloc *pool, int index);
 
+/**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc_reverse(struct bitalloc *pool);
+
 /**
  * Query a particular index in a pool to check if its in use.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 02b4b5c8f..de8f11955 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -671,7 +671,14 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 		return rc;
 	}
 
-	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	/*
+	 * priority  0: allocate from the top of the TCAM, i.e. the lowest index
+	 * priority !0: allocate from the bottom, i.e. the highest free index
+	 */
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index a40296ed2..5cb68892a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -185,6 +185,14 @@ struct tf_rm_allocate_parms {
 	 * i.e. Full Action Record offsets.
 	 */
 	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the priority of the entry
+	 * priority  0: allocate from top of the tcam (from index 0
+	 *              or lowest available index)
+	 * priority !0: allocate from bottom of the tcam (from highest
+	 *              available index)
+	 */
+	uint32_t priority;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 2f4441de8..260fb15a6 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -157,6 +157,7 @@ tf_tcam_alloc(struct tf *tfp,
 	/* Allocate requested element */
 	aparms.rm_db = tcam_db[parms->dir];
 	aparms.db_index = parms->type;
+	aparms.priority = parms->priority;
 	aparms.index = (uint32_t *)&parms->idx;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 22/51] net/bnxt: support EM and TCAM lookup with table scope
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (20 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 23/51] net/bnxt: update table get to use new design Ajit Khaparde
                     ` (29 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Support for table scope within the EM module (see the sketch below)
- Support for host and system memory
- Update TCAM set/free
- Replace TF device type with HCAPI RM type
- Update TCAM set and free for HCAPI RM type
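
A minimal usage sketch of the new dispatch: the caller's mem field now
selects the internal EM path or the external, table-scope backed path
in tf_insert_em_entry(). The key/record/direction fields of parms, and
the table scope set up earlier via tf_alloc_tbl_scope() for the
external case, are assumed to be populated by the caller:

#include "tf_core.h"

static int example_em_insert(struct tf *tfp,
			     struct tf_insert_em_entry_parms *parms)
{
	/* On-chip (internal) exact match record */
	parms->mem = TF_MEM_INTERNAL;
	if (tf_insert_em_entry(tfp, parms) == 0)
		return 0;

	/* External EM record, backed by a table scope allocated earlier
	 * with tf_alloc_tbl_scope()
	 */
	parms->mem = TF_MEM_EXTERNAL;
	return tf_insert_em_entry(tfp, parms);
}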

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |    5 +-
 drivers/net/bnxt/tf_core/Makefile             |    5 +-
 drivers/net/bnxt/tf_core/cfa_resource_types.h |    8 +-
 drivers/net/bnxt/tf_core/hwrm_tf.h            |  864 +-----------
 drivers/net/bnxt/tf_core/tf_core.c            |  100 +-
 drivers/net/bnxt/tf_core/tf_device.c          |   50 +-
 drivers/net/bnxt/tf_core/tf_device.h          |   86 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |   14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   20 +-
 drivers/net/bnxt/tf_core/tf_em.c              |  360 -----
 drivers/net/bnxt/tf_core/tf_em.h              |  310 +++-
 drivers/net/bnxt/tf_core/tf_em_common.c       |  281 ++++
 drivers/net/bnxt/tf_core/tf_em_common.h       |  107 ++
 drivers/net/bnxt/tf_core/tf_em_host.c         | 1146 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_em_internal.c     |  312 +++++
 drivers/net/bnxt/tf_core/tf_em_system.c       |  118 ++
 drivers/net/bnxt/tf_core/tf_msg.c             | 1248 ++++-------------
 drivers/net/bnxt/tf_core/tf_msg.h             |  233 +--
 drivers/net/bnxt/tf_core/tf_rm.c              |   89 +-
 drivers/net/bnxt/tf_core/tf_rm_new.c          |   40 +-
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1134 ---------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |   39 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |   25 +-
 drivers/net/bnxt/tf_core/tf_tcam.h            |    4 +
 drivers/net/bnxt/tf_core/tf_util.c            |    4 +-
 25 files changed, 3030 insertions(+), 3572 deletions(-)
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 33e6ebd66..35038dc8b 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -28,7 +28,10 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_msg.c',
 	'tf_core/rand.c',
 	'tf_core/stack.c',
-	'tf_core/tf_em.c',
+	'tf_core/tf_em_common.c',
+	'tf_core/tf_em_host.c',
+	'tf_core/tf_em_internal.c',
+	'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 5ed32f12a..f186741e4 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -12,8 +12,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 058d8cc88..6e79facec 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -202,7 +202,9 @@
 #define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
 #define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
@@ -269,7 +271,9 @@
 #define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
 #define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 1e78296c6..26836e488 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -13,20 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_SESSION_ATTACH = 712,
-	HWRM_TFT_SESSION_HW_RESC_QCAPS = 721,
-	HWRM_TFT_SESSION_HW_RESC_ALLOC = 722,
-	HWRM_TFT_SESSION_HW_RESC_FREE = 723,
-	HWRM_TFT_SESSION_HW_RESC_FLUSH = 724,
-	HWRM_TFT_SESSION_SRAM_RESC_QCAPS = 725,
-	HWRM_TFT_SESSION_SRAM_RESC_ALLOC = 726,
-	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
-	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
-	HWRM_TFT_TBL_SCOPE_CFG = 731,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
-	HWRM_TFT_TBL_TYPE_SET = 823,
-	HWRM_TFT_TBL_TYPE_GET = 824,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
@@ -66,858 +54,8 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
-struct tf_session_attach_input;
-struct tf_session_hw_resc_qcaps_input;
-struct tf_session_hw_resc_qcaps_output;
-struct tf_session_hw_resc_alloc_input;
-struct tf_session_hw_resc_alloc_output;
-struct tf_session_hw_resc_free_input;
-struct tf_session_hw_resc_flush_input;
-struct tf_session_sram_resc_qcaps_input;
-struct tf_session_sram_resc_qcaps_output;
-struct tf_session_sram_resc_alloc_input;
-struct tf_session_sram_resc_alloc_output;
-struct tf_session_sram_resc_free_input;
-struct tf_session_sram_resc_flush_input;
-struct tf_tbl_type_set_input;
-struct tf_tbl_type_get_input;
-struct tf_tbl_type_get_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
-/* Input params for session attach */
-typedef struct tf_session_attach_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* Session Name */
-	char				 session_name[TF_SESSION_NAME_MAX];
-} tf_session_attach_input_t, *ptf_session_attach_input_t;
-
-/* Input params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_hw_resc_qcaps_input_t, *ptf_session_hw_resc_qcaps_input_t;
-
-/* Output params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_output {
-	/* Control Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Unused */
-	uint8_t			  unused[4];
-	/* Minimum guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_min;
-	/* Maximum non-guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_max;
-	/* Minimum guaranteed number of profile functions */
-	uint16_t			 prof_func_min;
-	/* Maximum non-guaranteed number of profile functions */
-	uint16_t			 prof_func_max;
-	/* Minimum guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_min;
-	/* Maximum non-guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_max;
-	/* Minimum guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_min;
-	/* Maximum non-guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_max;
-	/* Minimum guaranteed number of EM records entries */
-	uint16_t			 em_record_entries_min;
-	/* Maximum non-guaranteed number of EM record entries */
-	uint16_t			 em_record_entries_max;
-	/* Minimum guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_min;
-	/* Maximum non-guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_max;
-	/* Minimum guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_min;
-	/* Maximum non-guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_max;
-	/* Minimum guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_min;
-	/* Maximum non-guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_max;
-	/* Minimum guaranteed number of meter instances */
-	uint16_t			 meter_inst_min;
-	/* Maximum non-guaranteed number of meter instances */
-	uint16_t			 meter_inst_max;
-	/* Minimum guaranteed number of mirrors */
-	uint16_t			 mirrors_min;
-	/* Maximum non-guaranteed number of mirrors */
-	uint16_t			 mirrors_max;
-	/* Minimum guaranteed number of UPAR */
-	uint16_t			 upar_min;
-	/* Maximum non-guaranteed number of UPAR */
-	uint16_t			 upar_max;
-	/* Minimum guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_min;
-	/* Maximum non-guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_max;
-	/* Minimum guaranteed number of L2 Functions */
-	uint16_t			 l2_func_min;
-	/* Maximum non-guaranteed number of L2 Functions */
-	uint16_t			 l2_func_max;
-	/* Minimum guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_min;
-	/* Maximum non-guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_max;
-	/* Minimum guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_min;
-	/* Maximum non-guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_max;
-	/* Minimum guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_min;
-	/* Maximum non-guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_max;
-	/* Minimum guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_min;
-	/* Maximum non-guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_max;
-	/* Minimum guaranteed number of metadata */
-	uint16_t			 metadata_min;
-	/* Maximum non-guaranteed number of metadata */
-	uint16_t			 metadata_max;
-	/* Minimum guaranteed number of CT states */
-	uint16_t			 ct_state_min;
-	/* Maximum non-guaranteed number of CT states */
-	uint16_t			 ct_state_max;
-	/* Minimum guaranteed number of range profiles */
-	uint16_t			 range_prof_min;
-	/* Maximum non-guaranteed number range profiles */
-	uint16_t			 range_prof_max;
-	/* Minimum guaranteed number of range entries */
-	uint16_t			 range_entries_min;
-	/* Maximum non-guaranteed number of range entries */
-	uint16_t			 range_entries_max;
-	/* Minimum guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_min;
-	/* Maximum non-guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_max;
-} tf_session_hw_resc_qcaps_output_t, *ptf_session_hw_resc_qcaps_output_t;
-
-/* Input params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of L2 CTX TCAM entries to be allocated */
-	uint16_t			 num_l2_ctx_tcam_entries;
-	/* Number of profile functions to be allocated */
-	uint16_t			 num_prof_func_entries;
-	/* Number of profile TCAM entries to be allocated */
-	uint16_t			 num_prof_tcam_entries;
-	/* Number of EM profile ids to be allocated */
-	uint16_t			 num_em_prof_id;
-	/* Number of EM records entries to be allocated */
-	uint16_t			 num_em_record_entries;
-	/* Number of WC profiles ids to be allocated */
-	uint16_t			 num_wc_tcam_prof_id;
-	/* Number of WC TCAM entries to be allocated */
-	uint16_t			 num_wc_tcam_entries;
-	/* Number of meter profiles to be allocated */
-	uint16_t			 num_meter_profiles;
-	/* Number of meter instances to be allocated */
-	uint16_t			 num_meter_inst;
-	/* Number of mirrors to be allocated */
-	uint16_t			 num_mirrors;
-	/* Number of UPAR to be allocated */
-	uint16_t			 num_upar;
-	/* Number of SP TCAM entries to be allocated */
-	uint16_t			 num_sp_tcam_entries;
-	/* Number of L2 functions to be allocated */
-	uint16_t			 num_l2_func;
-	/* Number of flexible key templates to be allocated */
-	uint16_t			 num_flex_key_templ;
-	/* Number of table scopes to be allocated */
-	uint16_t			 num_tbl_scope;
-	/* Number of epoch0 entries to be allocated */
-	uint16_t			 num_epoch0_entries;
-	/* Number of epoch1 entries to be allocated */
-	uint16_t			 num_epoch1_entries;
-	/* Number of metadata to be allocated */
-	uint16_t			 num_metadata;
-	/* Number of CT states to be allocated */
-	uint16_t			 num_ct_state;
-	/* Number of range profiles to be allocated */
-	uint16_t			 num_range_prof;
-	/* Number of range Entries to be allocated */
-	uint16_t			 num_range_entries;
-	/* Number of LAG table entries to be allocated */
-	uint16_t			 num_lag_tbl_entries;
-} tf_session_hw_resc_alloc_input_t, *ptf_session_hw_resc_alloc_input_t;
-
-/* Output params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_output {
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_alloc_output_t, *ptf_session_hw_resc_alloc_output_t;
-
-/* Input params for session resource HW free */
-typedef struct tf_session_hw_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_free_input_t, *ptf_session_hw_resc_free_input_t;
-
-/* Input params for session resource HW flush */
-typedef struct tf_session_hw_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_flush_input_t, *ptf_session_hw_resc_flush_input_t;
-
-/* Input params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_sram_resc_qcaps_input_t, *ptf_session_sram_resc_qcaps_input_t;
-
-/* Output params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_output {
-	/* Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Minimum guaranteed number of Full Action */
-	uint16_t			 full_action_min;
-	/* Maximum non-guaranteed number of Full Action */
-	uint16_t			 full_action_max;
-	/* Minimum guaranteed number of MCG */
-	uint16_t			 mcg_min;
-	/* Maximum non-guaranteed number of MCG */
-	uint16_t			 mcg_max;
-	/* Minimum guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_min;
-	/* Maximum non-guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_max;
-	/* Minimum guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_min;
-	/* Maximum non-guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_max;
-	/* Minimum guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_min;
-	/* Maximum non-guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_max;
-	/* Minimum guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_min;
-	/* Maximum non-guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_max;
-	/* Minimum guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_max;
-	/* Minimum guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_max;
-	/* Minimum guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_min;
-	/* Maximum non-guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_max;
-	/* Minimum guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_min;
-	/* Maximum non-guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_max;
-	/* Minimum guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_min;
-	/* Maximum non-guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_max;
-	/* Minimum guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_min;
-	/* Maximum non-guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_max;
-	/* Minimum guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_min;
-	/* Maximum non-guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_max;
-} tf_session_sram_resc_qcaps_output_t, *ptf_session_sram_resc_qcaps_output_t;
-
-/* Input params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_input {
-	/* FW Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of full action SRAM entries to be allocated */
-	uint16_t			 num_full_action;
-	/* Number of multicast groups to be allocated */
-	uint16_t			 num_mcg;
-	/* Number of Encap 8B entries to be allocated */
-	uint16_t			 num_encap_8b;
-	/* Number of Encap 16B entries to be allocated */
-	uint16_t			 num_encap_16b;
-	/* Number of Encap 64B entries to be allocated */
-	uint16_t			 num_encap_64b;
-	/* Number of SP SMAC entries to be allocated */
-	uint16_t			 num_sp_smac;
-	/* Number of SP SMAC IPv4 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv4;
-	/* Number of SP SMAC IPv6 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv6;
-	/* Number of Counter 64B entries to be allocated */
-	uint16_t			 num_counter_64b;
-	/* Number of NAT source ports to be allocated */
-	uint16_t			 num_nat_sport;
-	/* Number of NAT destination ports to be allocated */
-	uint16_t			 num_nat_dport;
-	/* Number of NAT source iPV4 addresses to be allocated */
-	uint16_t			 num_nat_s_ipv4;
-	/* Number of NAT destination IPV4 addresses to be allocated */
-	uint16_t			 num_nat_d_ipv4;
-} tf_session_sram_resc_alloc_input_t, *ptf_session_sram_resc_alloc_input_t;
-
-/* Output params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_output {
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_alloc_output_t, *ptf_session_sram_resc_alloc_output_t;
-
-/* Input params for session resource SRAM free */
-typedef struct tf_session_sram_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_free_input_t, *ptf_session_sram_resc_free_input_t;
-
-/* Input params for session resource SRAM flush */
-typedef struct tf_session_sram_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
-
-/* Input params for table type set */
-typedef struct tf_tbl_type_set_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Size of the data to set in bytes */
-	uint16_t			 size;
-	/* Data to set */
-	uint8_t			  data[TF_BULK_SEND];
-	/* Index to set */
-	uint32_t			 index;
-} tf_tbl_type_set_input_t, *ptf_tbl_type_set_input_t;
-
-/* Input params for table type get */
-typedef struct tf_tbl_type_get_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Index to get */
-	uint32_t			 index;
-} tf_tbl_type_get_input_t, *ptf_tbl_type_get_input_t;
-
-/* Output params for table type get */
-typedef struct tf_tbl_type_get_output {
-	/* Size of the data read in bytes */
-	uint16_t			 size;
-	/* Data read */
-	uint8_t			  data[TF_BULK_RECV];
-} tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 3e23d0513..8b3e15c8a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -208,7 +208,15 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL &&
+		dev->ops->tf_dev_insert_ext_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL &&
+		dev->ops->tf_dev_insert_int_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM insert failed, rc:%s\n",
@@ -217,7 +225,7 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	return -EINVAL;
+	return 0;
 }
 
 /** Delete EM hash entry API
@@ -255,7 +263,13 @@ int tf_delete_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		rc = dev->ops->tf_dev_delete_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		rc = dev->ops->tf_dev_delete_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM delete failed, rc:%s\n",
@@ -806,3 +820,83 @@ tf_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl_scope != NULL) {
+		rc = dev->ops->tf_dev_alloc_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Alloc table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl_scope) {
+		rc = dev->ops->tf_dev_free_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Free table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 441d0c678..20b0c5948 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,6 +6,7 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
+#include "tf_em.h"
 
 struct tf;
 
@@ -42,10 +43,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_ident_cfg_parms ident_cfg;
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
-
-	dev_handle->type = TF_DEVICE_TYPE_WH;
-	/* Initial function initialization */
-	dev_handle->ops = &tf_dev_ops_p4_init;
+	struct tf_em_cfg_parms em_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -86,6 +84,36 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * EEM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_ext_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
+
+	rc = tf_em_ext_common_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EEM initialization failure\n");
+		goto fail;
+	}
+
+	/*
+	 * EM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_int_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = 0; /* Not used by EM */
+
+	rc = tf_em_int_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EM initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -144,6 +172,20 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_em_ext_common_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EEM\n");
+		fail = true;
+	}
+
+	rc = tf_em_int_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EM\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c8feac55d..2712d1039 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -15,12 +15,24 @@ struct tf;
 struct tf_session;
 
 /**
- *
+ * Device module types
  */
 enum tf_device_module_type {
+	/**
+	 * Identifier module
+	 */
 	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	/**
+	 * Table type module
+	 */
 	TF_DEVICE_MODULE_TYPE_TABLE,
+	/**
+	 * TCAM module
+	 */
 	TF_DEVICE_MODULE_TYPE_TCAM,
+	/**
+	 * EM module
+	 */
 	TF_DEVICE_MODULE_TYPE_EM,
 	TF_DEVICE_MODULE_TYPE_MAX
 };
@@ -395,8 +407,8 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_insert_em_entry)(struct tf *tfp,
-				      struct tf_insert_em_entry_parms *parms);
+	int (*tf_dev_insert_int_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
 
 	/**
 	 * Delete EM hash entry API
@@ -411,8 +423,72 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_delete_em_entry)(struct tf *tfp,
-				      struct tf_delete_em_entry_parms *parms);
+	int (*tf_dev_delete_int_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Insert EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_ext_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM delete parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_ext_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Allocate EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope alloc parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_alloc_tbl_scope)(struct tf *tfp,
+				      struct tf_alloc_tbl_scope_parms *parms);
+
+	/**
+	 * Free EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope free parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
+				     struct tf_free_tbl_scope_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9e332c594..127c655a6 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -93,6 +93,12 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = NULL,
 	.tf_dev_get_tcam = NULL,
+	.tf_dev_insert_int_em_entry = NULL,
+	.tf_dev_delete_int_em_entry = NULL,
+	.tf_dev_insert_ext_em_entry = NULL,
+	.tf_dev_delete_ext_em_entry = NULL,
+	.tf_dev_alloc_tbl_scope = NULL,
+	.tf_dev_free_tbl_scope = NULL,
 };
 
 /**
@@ -113,6 +119,10 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = NULL,
-	.tf_dev_insert_em_entry = tf_em_insert_entry,
-	.tf_dev_delete_em_entry = tf_em_delete_entry,
+	.tf_dev_insert_int_em_entry = tf_em_insert_int_entry,
+	.tf_dev_delete_int_em_entry = tf_em_delete_int_entry,
+	.tf_dev_insert_ext_em_entry = tf_em_insert_ext_entry,
+	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
+	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
+	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 411e21637..da6dd65a3 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -36,13 +36,12 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
@@ -77,4 +76,17 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
+
+struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
+	/* CFA_RESOURCE_TYPE_P4_EM_REC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+};
+
+struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_REC },
+	/* CFA_RESOURCE_TYPE_P4_TBL_SCOPE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
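
The two added arrays split exact match resource management by locality:
tf_em_int_p4 lets the resource manager track only on-chip EM records,
while tf_em_ext_p4 tracks only table scopes for external (EEM) memory.
A hypothetical device profile that wanted both elements RM-managed in a
single database would flip the NULL slots to HCAPI entries, e.g.:

    /* Hypothetical config, not part of this patch */
    struct tf_rm_element_cfg tf_em_ext_example[TF_EM_TBL_TYPE_MAX] = {
            { TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_REC },
            { TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
    };
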
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
deleted file mode 100644
index fcbbd7eca..000000000
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ /dev/null
@@ -1,360 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-#include <rte_common.h>
-#include <rte_errno.h>
-#include <rte_log.h>
-
-#include "tf_core.h"
-#include "tf_em.h"
-#include "tf_msg.h"
-#include "tfp.h"
-#include "lookup3.h"
-#include "tf_ext_flow_handle.h"
-
-#include "bnxt.h"
-
-
-static uint32_t tf_em_get_key_mask(int num_entries)
-{
-	uint32_t mask = num_entries - 1;
-
-	if (num_entries & 0x7FFF)
-		return 0;
-
-	if (num_entries > (128 * 1024 * 1024))
-		return 0;
-
-	return mask;
-}
-
-static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
-				   uint8_t	       *in_key,
-				   struct cfa_p4_eem_64b_entry *key_entry)
-{
-	key_entry->hdr.word1 = result->word1;
-
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
-
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
-#endif
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
-			       struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t	   mask;
-	uint32_t	   key0_hash;
-	uint32_t	   key1_hash;
-	uint32_t	   key0_index;
-	uint32_t	   key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t	   index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t	   gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/**
- * Insert EM internal entry API
- *
- *  returns:
- *     0 - Success
- */
-static int tf_insert_em_internal_entry(struct tf                       *tfp,
-				       struct tf_insert_em_entry_parms *parms)
-{
-	int       rc;
-	uint32_t  gfid;
-	uint16_t  rptr_index = 0;
-	uint8_t   rptr_entry = 0;
-	uint8_t   num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-	uint32_t index;
-
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
-		return rc;
-	}
-
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
-	rc = tf_msg_insert_em_internal_entry(tfp,
-					     parms,
-					     &rptr_index,
-					     &rptr_entry,
-					     &num_of_entries);
-	if (rc != 0)
-		return -1;
-
-	PMD_DRV_LOG(ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
-		   rptr_index,
-		   rptr_entry,
-		   num_of_entries);
-
-	TF_SET_GFID(gfid,
-		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
-		     rptr_entry),
-		    0); /* N/A for internal table */
-
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_INTERNAL,
-		       parms->dir);
-
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     num_of_entries,
-				     0,
-				     0,
-				     rptr_index,
-				     rptr_entry,
-				     0);
-	return 0;
-}
-
-/** Delete EM internal entry API
- *
- * returns:
- * 0
- * -EINVAL
- */
-static int tf_delete_em_internal_entry(struct tf                       *tfp,
-				       struct tf_delete_em_entry_parms *parms)
-{
-	int rc;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-
-	rc = tf_msg_delete_em_entry(tfp, parms);
-
-	/* Return resource to pool */
-	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
-
-	return rc;
-}
-
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
-							  TF_KEY0_TABLE :
-							  TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL)
-		/* External EEM */
-		return tf_insert_eem_entry
-			(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
-
-	return -EINVAL;
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		return tf_delete_em_internal_entry(tfp, parms);
-
-	return -EINVAL;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 2262ae7cc..cf799c200 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+/* Temporary hack: offset between RM-allocated indices and table scope IDs */
+#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
@@ -19,6 +20,9 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+#define TF_EM_MAX_MASK 0x7FFF
+#define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
+
 /*
  * Used to build GFID:
  *
@@ -44,6 +48,47 @@ struct tf_em_64b_entry {
 	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
+/**
+ * EEM memory type
+ */
+enum tf_mem_type {
+	TF_EEM_MEM_TYPE_INVALID,
+	TF_EEM_MEM_TYPE_HOST,
+	TF_EEM_MEM_TYPE_SYSTEM
+};
+
+/**
+ * tf_em_cfg_parms definition
+ */
+struct tf_em_cfg_parms {
+	/**
+	 * [in] Num entries in resource config
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Resource config
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+	/**
+	 * [in] Memory type.
+	 */
+	enum tf_mem_type mem_type;
+};
+
+/**
+ * @page table Table
+ *
+ * @ref tf_alloc_eem_tbl_scope
+ *
+ * @ref tf_free_eem_tbl_scope_cb
+ *
+ * @ref tbl_scope_cb_find
+ */
+
 /**
  * Allocates EEM Table scope
  *
@@ -78,29 +123,258 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 			     struct tf_free_tbl_scope_parms *parms);
 
 /**
- * Function to search for table scope control block structure
- * with specified table scope ID.
+ * Insert record in to internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_int_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_int_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
+
+/**
+ * Insert record in to external EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external EEM table
  *
- * [in] session
- *   Session to use for the search of the table scope control block
- * [in] tbl_scope_id
- *   Table scope ID to search for
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
  *
  * Returns:
- *  Pointer to the found table scope control block struct or NULL if
- *  table scope control block struct not found
+ *   0       - Success
+ *   -EINVAL - Parameter error
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+int tf_em_delete_ext_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
 
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum hcapi_cfa_em_table_type table_type);
+/**
+ * Insert record in to external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_sys_entry(struct tf *tfp,
+			       struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_ext_sys_entry(struct tf *tfp,
+			       struct tf_delete_em_entry_parms *parms);
 
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms);
+/**
+ * Bind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_bind(struct tf *tfp,
+		   struct tf_em_cfg_parms *parms);
 
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms);
+/**
+ * Unbind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_unbind(struct tf *tfp);
+
+/**
+ * Common bind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_bind(struct tf *tfp,
+			  struct tf_em_cfg_parms *parms);
+
+/**
+ * Common unbind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_unbind(struct tf *tfp);
+
+/**
+ * Alloc for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_alloc(struct tf *tfp,
+			 struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_free(struct tf *tfp,
+			struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Alloc for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_alloc(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_free(struct tf *tfp,
+			  struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common free for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_free(struct tf *tfp,
+			  struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common alloc for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_alloc(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
new file mode 100644
index 000000000..ba6aa7ac1
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -0,0 +1,281 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_device.h"
+#include "tf_ext_flow_handle.h"
+#include "cfa_resource_types.h"
+
+#include "bnxt.h"
+
+
+/**
+ * EM DBs.
+ */
+void *eem_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Host or system
+ */
+static enum tf_mem_type mem_type;
+
+/* API defined in tf_em.h */
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+	struct tf_rm_is_allocated_parms parms;
+	int allocated;
+
+	/* Check that id is valid */
+	parms.rm_db = eem_db[TF_DIR_RX];
+	parms.db_index = 1; /* element 1 of tf_em_ext_p4: table scope */
+	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.allocated = &allocated;
+
+	i = tf_rm_is_allocated(&parms);
+
+	if (i < 0 || !allocated)
+		return NULL;
+
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+int
+tf_create_tbl_pool_external(enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i;
+	int32_t j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the  malloced memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
+
+	/* Fill pool with indexes in reverse
+	 */
+	j = (num_entries - 1) * entry_sz_bytes;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+				    tf_dir_2_str(dir), strerror(-rc));
+			goto cleanup;
+		}
+
+		if (j < 0) {
+			TFP_DRV_LOG(ERR, "%s TBL: invalid offset (%d)\n",
+				    tf_dir_2_str(dir), j);
+			rc = -EINVAL;
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ */
+void
+tf_destroy_tbl_pool_external(enum tf_dir dir,
+			     struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_act_pool_mem =
+		tbl_scope_cb->ext_act_pool_mem[dir];
+
+	tfp_free(ext_act_pool_mem);
+}
+
+uint32_t
+tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
+	if (num_entries & TF_EM_MAX_MASK)
+		return 0;
+
+	if (num_entries > TF_EM_MAX_ENTRY)
+		return 0;
+
+	return mask;
+}
+
+void
+tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+		       uint8_t *in_key,
+		       struct cfa_p4_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	/* Internal and external action record pointers are stored the
+	 * same way, so no per-type handling is needed here.
+	 */
+	key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
+}
+
+int
+tf_em_ext_common_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+		db_cfg.rm_db = &eem_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	mem_type = parms->mem_type;
+	init = 1;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; unbind may be called
+	 * as part of creation cleanup.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = eem_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		eem_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_alloc(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_alloc(tfp, parms);
+	else
+		return tf_em_ext_system_alloc(tfp, parms);
+}
+
+int
+tf_em_ext_common_free(struct tf *tfp,
+		      struct tf_free_tbl_scope_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_free(tfp, parms);
+	else
+		return tf_em_ext_system_free(tfp, parms);
+}
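
To make the constraints in tf_em_get_key_mask() concrete (numbers are
illustrative): the KEY0/KEY1 table size is assumed to be a power of two
(the table scope validation code rounds it that way), must be a multiple
of 32K, and may not exceed 128M entries. For num_entries = 1 << 20 (1M)
the low 15 bits are clear and the size is under the cap, so the returned
mask is 0x000FFFFF; tf_insert_eem_entry() ANDs key0_hash and key1_hash
with this mask to pick the two candidate bucket indexes. A size such as
48K (0xC000) trips the TF_EM_MAX_MASK test, the function returns 0, and
the insert path rejects the request with -EINVAL.
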
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
new file mode 100644
index 000000000..45699a7c3
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_COMMON_H_
+#define _TF_EM_COMMON_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *   table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+/**
+ * Create and initialize a stack to use for action entries
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_id
+ *   Table scope ID
+ * [in] num_entries
+ *   Number of EEM entries
+ * [in] entry_sz_bytes
+ *   Size of the entry
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ *   -EINVAL - Failure
+ */
+int tf_create_tbl_pool_external(enum tf_dir dir,
+				struct tf_tbl_scope_cb *tbl_scope_cb,
+				uint32_t num_entries,
+				uint32_t entry_sz_bytes);
+
+/**
+ * Delete and cleanup action record allocation stack
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_id
+ *   Table scope ID
+ *
+ */
+void tf_destroy_tbl_pool_external(enum tf_dir dir,
+				  struct tf_tbl_scope_cb *tbl_scope_cb);
+
+/**
+ * Get hash mask for current EEM table size
+ *
+ * [in] num_entries
+ *   Number of EEM entries
+ */
+uint32_t tf_em_get_key_mask(int num_entries);
+
+/**
+ * Populate key_entry
+ *
+ * [in] result
+ *   Entry data
+ * [in] in_key
+ *   Key data
+ * [out] key_entry
+ *   Completed key record
+ */
+void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+			    uint8_t	       *in_key,
+			    struct cfa_p4_eem_64b_entry *key_entry);
+
+/**
+ * Find base page address for offset into specified table type
+ *
+ * [in] tbl_scope_cb
+ *   Table scope
+ * [in] dir
+ *   Direction
+ * [in] Offset
+ *   Offset in to table
+ * [in] table_type
+ *   Table type
+ *
+ * Returns:
+ *   NULL                              - Failure
+ *   Void pointer to page base address - Success
+ */
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum hcapi_cfa_em_table_type table_type);
+
+#endif /* _TF_EM_COMMON_H_ */
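
tf_create_tbl_pool_external() seeds ext_act_pool[dir] with byte offsets
into the TF_RECORD_TABLE memory, pushed in reverse so that offset 0 is
popped first. A consumer of the pool (the table module, outside this
hunk) would follow a pop/push pattern along the lines of this
hypothetical sketch:

    struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
    uint32_t offset;

    if (stack_pop(pool, &offset) == 0) {
            /* 'offset' addresses one external action record; write the
             * record there, then hand the offset back when it is freed.
             */
            stack_push(pool, offset);
    }
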
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
new file mode 100644
index 000000000..8be39afdd
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -0,0 +1,1146 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+#define PTU_PTE_VALID          0x1UL
+#define PTU_PTE_LAST           0x2UL
+#define PTU_PTE_NEXT_TO_LAST   0x4UL
+
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
+
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
+/**
+ * EM DBs.
+ */
+extern void *eem_db[TF_DIR_MAX];
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
+				    i,
+				    (uint64_t)(uintptr_t)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+		TFP_DRV_LOG(INFO,
+			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			   TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocation of page tables
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			TFP_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(void *)(uintptr_t)tp->pg_va_tbl[j],
+				(void *)(uintptr_t)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
+
+/**
+ * Setup a EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
+	bool set_pte_last = 0;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = 1;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+static int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int rc = 0;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+				    "%u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					(TF_KILOBYTE * TF_KILOBYTE)) /
+			(key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->rx_num_flows_in_k != 0 &&
+	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if (parms->tx_num_flows_in_k != 0 &&
+	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	return 0;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
+							  TF_KEY0_TABLE :
+							  TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (!rc)
+		return rc;
+
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb =
+	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
+			  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb =
+	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
+			  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_host_alloc(struct tf *tfp,
+		     struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *em_tables;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* Get Table Scope control block from the session pool */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1; /* element 1 of tf_em_ext_p4: table scope */
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
+	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
+				   parms->hw_flow_cache_flush_timer,
+				   dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(dir,
+					    tbl_scope_cb,
+					    em_tables[TF_RECORD_TABLE].num_entries,
+					    em_tables[TF_RECORD_TABLE].entry_size);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_host_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_host_free(struct tf *tfp,
+		    struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
+		return -EINVAL;
+	}
+
+	/* Free Table control block */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	/* Free per-direction table scope resources */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
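
For reference, the external delete above boils down to locating a key record
from the flow handle: the hash type selects KEY0 or KEY1 and the record index
is turned into an offset within a backing page. The standalone sketch below
illustrates that arithmetic with assumed record/page sizes and a hypothetical
handle layout; the real encoding lives in tf_ext_flow_handle.h.

#include <stdint.h>
#include <stdio.h>

#define EM_KEY_RECORD_SIZE 64u            /* assumed record size in bytes */
#define EM_PAGE_SIZE       (32u * 1024u)  /* assumed backing page size */

struct em_location {
	unsigned int table;  /* 0 = KEY0 table, 1 = KEY1 table */
	uint32_t page;       /* which backing page holds the record */
	uint32_t offset;     /* byte offset within that page */
};

/* Hypothetical handle layout: bit 63 selects the hash table, the low
 * 32 bits carry the record index.
 */
static struct em_location em_locate(uint64_t flow_handle)
{
	struct em_location loc;
	uint32_t index = (uint32_t)(flow_handle & 0xffffffffu);

	loc.table = (unsigned int)(flow_handle >> 63);
	loc.page = (index * EM_KEY_RECORD_SIZE) / EM_PAGE_SIZE;
	loc.offset = (index * EM_KEY_RECORD_SIZE) % EM_PAGE_SIZE;
	return loc;
}

int main(void)
{
	struct em_location loc = em_locate((1ULL << 63) | 1000);

	printf("table=%u page=%u offset=%u\n", loc.table, loc.page, loc.offset);
	return 0;
}
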
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
new file mode 100644
index 000000000..9be91ad5d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -0,0 +1,312 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/**
+ * EM DBs.
+ */
+static void *em_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   number of entries in the pool
+ *
+ * Return:
+ *  0       - Success
+ *  -ENOMEM - Failure, pool memory could not be allocated
+ *  -EINVAL - Failure, stack could not be initialized or filled
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	rc = tfp_calloc(&parms);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ *
+ * Return:
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	if (ptr != NULL)
+		tfp_free(ptr);
+}
+
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success, non-zero on failure
+ */
+int
+tf_em_insert_int_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	int rc;
+	uint32_t gfid;
+	uint16_t rptr_index = 0;
+	uint8_t rptr_entry = 0;
+	uint8_t num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc) {
+		PMD_DRV_LOG
+		  (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc)
+		return -1;
+
+	PMD_DRV_LOG
+		  (ERR,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     (uint32_t)num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0       - Success
+ * -EINVAL - Error
+ */
+int
+tf_em_delete_int_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
+
+int
+tf_em_int_bind(struct tf *tfp,
+	       struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "EM databases already initialized\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		tf_create_em_pool(session,
+				  i,
+				  TF_SESSION_EM_POOL_SIZE);
+	}
+
+	/*
+	 * I'm not sure that this code is needed.
+	 * leaving for now until resolved
+	 */
+	if (parms->num_elements) {
+		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+
+		for (i = 0; i < TF_DIR_MAX; i++) {
+			db_cfg.dir = i;
+			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+			db_cfg.rm_db = &em_db[i];
+			rc = tf_rm_create_db(tfp, &db_cfg);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: EM DB creation failed\n",
+					    tf_dir_2_str(i));
+
+				return rc;
+			}
+		}
+	}
+
+	init = 1;
+	return 0;
+}
+
+int
+tf_em_int_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; this allows callers
+	 * to clean up after a failed creation.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		tf_free_em_pool(session, i);
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = em_db[i];
+		if (em_db[i] != NULL) {
+			rc = tf_rm_free_db(tfp, &fparms);
+			if (rc)
+				return rc;
+		}
+
+		em_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
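
The internal EM path above manages record indexes with a pre-filled stack:
bind creates one pool per direction, insert pops an index and delete pushes
it back. A minimal standalone sketch of that pattern follows, using a
simplified stack in place of the tf_core stack API.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct idx_stack {
	uint32_t *items;
	uint32_t top;   /* number of valid entries */
	uint32_t max;
};

/* Pre-fill with indexes so the first pop returns 0, mirroring the
 * descending fill loop in tf_create_em_pool().
 */
static int idx_stack_init(struct idx_stack *s, uint32_t num_entries)
{
	uint32_t i;

	s->items = calloc(num_entries, sizeof(*s->items));
	if (s->items == NULL)
		return -1;
	s->max = num_entries;
	for (i = 0; i < num_entries; i++)
		s->items[i] = num_entries - 1 - i;
	s->top = num_entries;
	return 0;
}

static int idx_stack_pop(struct idx_stack *s, uint32_t *index)
{
	if (s->top == 0)
		return -1;  /* pool exhausted */
	*index = s->items[--s->top];
	return 0;
}

static void idx_stack_push(struct idx_stack *s, uint32_t index)
{
	if (s->top < s->max)
		s->items[s->top++] = index;
}

int main(void)
{
	struct idx_stack pool;
	uint32_t idx;

	if (idx_stack_init(&pool, 4) != 0)
		return 1;
	idx_stack_pop(&pool, &idx);  /* insert path: allocates index 0 */
	printf("allocated EM index %u\n", idx);
	idx_stack_push(&pool, idx);  /* delete path: index returns to the pool */
	free(pool.items);
	return 0;
}
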
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
new file mode 100644
index 000000000..ee18a0c70
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+/** Insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_insert_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** Delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_delete_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** Insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_sys_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry
+		(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_sys_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
+		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_em_ext_system_free(struct tf *tfp __rte_unused,
+		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
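
The tf_msg.c rework below converts a number of messages from the tunneled
form to direct HWRM messages: populate a fixed request structure, point the
send parameters at the request and response buffers, and post on the
mailbox. A standalone sketch of that pattern with hypothetical, simplified
types (not the driver's real structures) is shown here.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the driver's request/response and parameter
 * structures; names and fields are illustrative only.
 */
struct demo_req  { uint32_t fw_session_id; uint16_t flags; };
struct demo_resp { uint32_t status; };

struct demo_msg_parms {
	uint16_t type;       /* message type */
	void *req_data;
	uint16_t req_size;
	void *resp_data;
	uint16_t resp_size;
	int mailbox;
};

/* Stub transport; a real driver would post this on the HWRM channel. */
static int demo_send_msg_direct(struct demo_msg_parms *parms)
{
	printf("type 0x%x, req %u bytes, resp %u bytes, mailbox %d\n",
	       (unsigned int)parms->type, (unsigned int)parms->req_size,
	       (unsigned int)parms->resp_size, parms->mailbox);
	memset(parms->resp_data, 0, parms->resp_size);
	return 0;
}

int main(void)
{
	struct demo_req req = { 0 };
	struct demo_resp resp = { 0 };
	struct demo_msg_parms parms = { 0 };

	/* 1. Populate the fixed-size request. */
	req.fw_session_id = 0x1234;
	req.flags = 0;               /* e.g. RX direction */

	/* 2. Describe the request and response buffers. */
	parms.type = 0xc0de;         /* placeholder message type */
	parms.req_data = &req;
	parms.req_size = sizeof(req);
	parms.resp_data = &resp;
	parms.resp_size = sizeof(resp);
	parms.mailbox = 1;           /* stand-in for TF_KONG_MB */

	/* 3. Send and propagate the return code. */
	return demo_send_msg_direct(&parms);
}
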
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c015b0ce2..d8b80bc84 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,82 +18,6 @@
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
-/**
- * Endian converts min and max values from the HW response to the query
- */
-#define TF_HW_RESP_TO_QUERY(query, index, response, element) do {            \
-	(query)->hw_query[index].min =                                       \
-		tfp_le_to_cpu_16(response. element ## _min);                 \
-	(query)->hw_query[index].max =                                       \
-		tfp_le_to_cpu_16(response. element ## _max);                 \
-} while (0)
-
-/**
- * Endian converts the number of entries from the alloc to the request
- */
-#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element)                   \
-	(request. num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do {            \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(hw_entry[index].start);                     \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(hw_entry[index].stride);                    \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do {         \
-	hw_entry[index].start =                                              \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	hw_entry[index].stride =                                             \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
-/**
- * Endian converts min and max values from the SRAM response to the
- * query
- */
-#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do {          \
-	(query)->sram_query[index].min =                                     \
-		tfp_le_to_cpu_16(response.element ## _min);                  \
-	(query)->sram_query[index].max =                                     \
-		tfp_le_to_cpu_16(response.element ## _max);                  \
-} while (0)
-
-/**
- * Endian converts the number of entries from the action (alloc) to
- * the request
- */
-#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element)                \
-	(request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do {        \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(sram_entry[index].start);                   \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(sram_entry[index].stride);                  \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do {     \
-	sram_entry[index].start =                                            \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	sram_entry[index].stride =                                           \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -107,39 +31,6 @@ struct tf_msg_dma_buf {
 	uint64_t pa_addr;
 };
 
-static int
-tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
-		   uint32_t *hwrm_type)
-{
-	int rc = 0;
-
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_WC_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	return rc;
-}
-
 /**
  * Allocates a DMA buffer that can be used for message transfer.
  *
@@ -185,13 +76,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 	tfp_free(buf->va_addr);
 }
 
-/**
- * NEW HWRM direct messages
- */
+/* HWRM Direct messages */
 
-/**
- * Sends session open request to TF Firmware
- */
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
@@ -222,9 +108,6 @@ tf_msg_session_open(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends session attach request to TF Firmware
- */
 int
 tf_msg_session_attach(struct tf *tfp __rte_unused,
 		      char *ctrl_chan_name __rte_unused,
@@ -233,9 +116,6 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 	return -1;
 }
 
-/**
- * Sends session close request to TF Firmware
- */
 int
 tf_msg_session_close(struct tf *tfp)
 {
@@ -261,14 +141,11 @@ tf_msg_session_close(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session query config request to TF Firmware
- */
 int
 tf_msg_session_qcfg(struct tf *tfp)
 {
 	int rc;
-	struct hwrm_tf_session_qcfg_input  req = { 0 };
+	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
@@ -289,636 +166,6 @@ tf_msg_session_qcfg(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session HW resource query capability request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_query *query)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_qcaps_input req = { 0 };
-	struct tf_session_hw_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(query, 0, sizeof(*query));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST,
-			    resp, meter_inst);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_alloc(struct tf *tfp __rte_unused,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_alloc *hw_alloc __rte_unused,
-			     struct tf_rm_entry *hw_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_alloc_input req = { 0 };
-	struct tf_session_hw_resc_alloc_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			   l2_ctx_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			   prof_func_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			   prof_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			   em_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req,
-			   em_record_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			   wc_tcam_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req,
-			   wc_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req,
-			   meter_profiles);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req,
-			   meter_inst);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req,
-			   mirrors);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req,
-			   upar);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req,
-			   sp_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req,
-			   l2_func);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req,
-			   flex_key_templ);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			   tbl_scope);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req,
-			   epoch0_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req,
-			   epoch1_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req,
-			   metadata);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req,
-			   ct_state);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			   range_prof);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			   range_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			   lag_tbl_entries);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp,
-			    meter_inst);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_free(struct tf *tfp,
-			    enum tf_dir dir,
-			    struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_HW_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_flush(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_HW_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_qcaps(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_query *query __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_qcaps_input req = { 0 };
-	struct tf_session_sram_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_FULL_ACTION, resp,
-			      full_action);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, resp,
-			      sp_smac_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, resp,
-			      sp_smac_ipv6);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_alloc(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_alloc *sram_alloc __rte_unused,
-			       struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_alloc_input req = { 0 };
-	struct tf_session_sram_resc_alloc_output resp;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(&resp, 0, sizeof(resp));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			     full_action);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_MCG, req,
-			     mcg);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			     encap_8b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			     encap_16b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			     encap_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			     sp_smac);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			     req, sp_smac_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			     req, sp_smac_ipv6);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_COUNTER_64B,
-			     req, counter_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			     nat_sport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			     nat_dport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			     nat_s_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			     nat_d_ipv4);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION,
-			      resp, full_action);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			      resp, sp_smac_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			      resp, sp_smac_ipv6);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_free(struct tf *tfp __rte_unused,
-			      enum tf_dir dir,
-			      struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_SRAM_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_flush(struct tf *tfp,
-			       enum tf_dir dir,
-			       struct tf_rm_entry *sram_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_SRAM_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
 int
 tf_msg_session_resc_qcaps(struct tf *tfp,
 			  enum tf_dir dir,
@@ -973,7 +220,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -981,14 +228,14 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
 	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
-		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
@@ -1078,7 +325,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -1087,14 +334,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	}
 
 	printf("\nRESV\n");
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
-		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
-		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
-		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+		resv[i].type = tfp_le_to_cpu_32(resv_data[i].type);
+		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
+		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
@@ -1173,24 +420,112 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM mem register request to Firmware
- */
-int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id)
+int
+tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
 {
 	int rc;
-	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
-	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_insert_input req = { 0 };
+	struct hwrm_tf_em_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.strength = (em_result->hdr.word1 &
+			CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+			CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return 0;
+}
+
+int
+tf_msg_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_delete_input req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return 0;
+}
+
+int
+tf_msg_em_mem_rgtr(struct tf *tfp,
+		   int page_lvl,
+		   int page_size,
+		   uint64_t dma_addr,
+		   uint16_t *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
 
-	req.page_level = page_lvl;
-	req.page_size = page_size;
-	req.page_dir = tfp_cpu_to_le_64(dma_addr);
-
 	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
@@ -1208,11 +543,9 @@ int tf_msg_em_mem_rgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM mem unregister request to Firmware
- */
-int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t  *ctx_id)
+int
+tf_msg_em_mem_unrgtr(struct tf *tfp,
+		     uint16_t *ctx_id)
 {
 	int rc;
 	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
@@ -1233,12 +566,10 @@ int tf_msg_em_mem_unrgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM qcaps request to Firmware
- */
-int tf_msg_em_qcaps(struct tf *tfp,
-		    int dir,
-		    struct tf_em_caps *em_caps)
+int
+tf_msg_em_qcaps(struct tf *tfp,
+		int dir,
+		struct tf_em_caps *em_caps)
 {
 	int rc;
 	struct hwrm_tf_ext_em_qcaps_input  req = {0};
@@ -1273,17 +604,15 @@ int tf_msg_em_qcaps(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM config request to Firmware
- */
-int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t   num_entries,
-		  uint16_t   key0_ctx_id,
-		  uint16_t   key1_ctx_id,
-		  uint16_t   record_ctx_id,
-		  uint16_t   efc_ctx_id,
-		  uint8_t    flush_interval,
-		  int        dir)
+int
+tf_msg_em_cfg(struct tf *tfp,
+	      uint32_t num_entries,
+	      uint16_t key0_ctx_id,
+	      uint16_t key1_ctx_id,
+	      uint16_t record_ctx_id,
+	      uint16_t efc_ctx_id,
+	      uint8_t flush_interval,
+	      int dir)
 {
 	int rc;
 	struct hwrm_tf_ext_em_cfg_input  req = {0};
@@ -1317,42 +646,23 @@ int tf_msg_em_cfg(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM internal insert request to Firmware
- */
-int tf_msg_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *em_parms,
-				uint16_t *rptr_index,
-				uint8_t *rptr_entry,
-				uint8_t *num_of_entries)
+int
+tf_msg_em_op(struct tf *tfp,
+	     int dir,
+	     uint16_t op)
 {
-	int                         rc;
-	struct tfp_send_msg_parms        parms = { 0 };
-	struct hwrm_tf_em_insert_input   req = { 0 };
-	struct hwrm_tf_em_insert_output  resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_em_64b_entry *em_result =
-		(struct tf_em_64b_entry *)em_parms->em_record;
+	int rc;
+	struct hwrm_tf_ext_em_op_input req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	tfp_memcpy(req.em_key,
-		   em_parms->key,
-		   ((em_parms->key_sz_in_bits + 7) / 8));
-
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength =
-		(em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
-		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
-	req.em_key_bitlen = em_parms->key_sz_in_bits;
-	req.action_ptr = em_result->hdr.pointer;
-	req.em_record_idx = *rptr_index;
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
 
-	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1361,75 +671,86 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &parms);
-	if (rc)
-		return rc;
-
-	*rptr_entry = resp.rptr_entry;
-	*rptr_index = resp.rptr_index;
-	*num_of_entries = resp.num_of_entries;
-
-	return 0;
+	return rc;
 }
 
-/**
- * Sends EM delete insert request to Firmware
- */
-int tf_msg_delete_em_entry(struct tf *tfp,
-			   struct tf_delete_em_entry_parms *em_parms)
+int
+tf_msg_tcam_entry_set(struct tf *tfp,
+		      struct tf_tcam_set_parms *parms)
 {
-	int                             rc;
-	struct tfp_send_msg_parms       parms = { 0 };
-	struct hwrm_tf_em_delete_input  req = { 0 };
-	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	int rc;
+	struct tfp_send_msg_parms mparms = { 0 };
+	struct hwrm_tf_tcam_set_input req = { 0 };
+	struct hwrm_tf_tcam_set_output resp = { 0 };
+	struct tf_msg_dma_buf buf = { 0 };
+	uint8_t *data = NULL;
+	int data_size = 0;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.type = parms->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(parms->idx);
+	if (parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
+	/* Result follows after key and mask, thus multiply by 2 */
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
+	data_size = 2 * req.key_size + req.result_size;
 
-	parms.tf_type = HWRM_TF_EM_DELETE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
+	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
+		/* use pci buffer */
+		data = &req.dev_data[0];
+	} else {
+		/* use dma buffer */
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
+		data = buf.va_addr;
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
+	}
+
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
+
+	mparms.tf_type = HWRM_TF_TCAM_SET;
+	mparms.req_data = (uint32_t *)&req;
+	mparms.req_size = sizeof(req);
+	mparms.resp_data = (uint32_t *)&resp;
+	mparms.resp_size = sizeof(resp);
+	mparms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp,
-				 &parms);
+				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
-	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+cleanup:
+	tf_msg_free_dma_buf(&buf);
 
-	return 0;
+	return rc;
 }
 
-/**
- * Sends EM operation request to Firmware
- */
-int tf_msg_em_op(struct tf *tfp,
-		 int dir,
-		 uint16_t op)
+int
+tf_msg_tcam_entry_free(struct tf *tfp,
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input req = {0};
-	struct hwrm_tf_ext_em_op_output resp = {0};
-	uint32_t flags;
+	struct hwrm_tf_tcam_free_input req =  { 0 };
+	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
 
-	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_32(flags);
-	req.op = tfp_cpu_to_le_16(op);
+	req.type = in_parms->hcapi_type;
+	req.count = 1;
+	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
+	if (in_parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
 
-	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.tf_type = HWRM_TF_TCAM_FREE;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1444,21 +765,32 @@ int tf_msg_em_op(struct tf *tfp,
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_set_input req = { 0 };
+	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_set_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
 	req.index = tfp_cpu_to_le_32(index);
 
@@ -1466,13 +798,15 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 		   data,
 		   size);
 
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_TBL_TYPE_SET,
-			 req);
+	parms.tf_type = HWRM_TF_TBL_TYPE_SET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1482,32 +816,43 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 int
 tf_msg_get_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_get_input req = { 0 };
+	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_input req = { 0 };
-	struct tf_tbl_type_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_TBL_TYPE_GET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1522,6 +867,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/* HWRM Tunneled messages */
+
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  struct tf_bulk_get_tbl_entry_parms *params)
@@ -1562,96 +909,3 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
-
-int
-tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_tcam_set_parms *parms)
-{
-	int rc;
-	struct tfp_send_msg_parms mparms = { 0 };
-	struct hwrm_tf_tcam_set_input req = { 0 };
-	struct hwrm_tf_tcam_set_output resp = { 0 };
-	struct tf_msg_dma_buf buf = { 0 };
-	uint8_t *data = NULL;
-	int data_size = 0;
-
-	req.type = parms->type;
-
-	req.idx = tfp_cpu_to_le_16(parms->idx);
-	if (parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
-
-	req.key_size = parms->key_size;
-	req.mask_offset = parms->key_size;
-	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * parms->key_size;
-	req.result_size = parms->result_size;
-	data_size = 2 * req.key_size + req.result_size;
-
-	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
-		/* use pci buffer */
-		data = &req.dev_data[0];
-	} else {
-		/* use dma buffer */
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_alloc_dma_buf(&buf, data_size);
-		if (rc)
-			goto cleanup;
-		data = buf.va_addr;
-		tfp_memcpy(&req.dev_data[0],
-			   &buf.pa_addr,
-			   sizeof(buf.pa_addr));
-	}
-
-	tfp_memcpy(&data[0], parms->key, parms->key_size);
-	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
-	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
-
-	mparms.tf_type = HWRM_TF_TCAM_SET;
-	mparms.req_data = (uint32_t *)&req;
-	mparms.req_size = sizeof(req);
-	mparms.resp_data = (uint32_t *)&resp;
-	mparms.resp_size = sizeof(resp);
-	mparms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &mparms);
-	if (rc)
-		goto cleanup;
-
-cleanup:
-	tf_msg_free_dma_buf(&buf);
-
-	return rc;
-}
-
-int
-tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_tcam_free_parms *in_parms)
-{
-	int rc;
-	struct hwrm_tf_tcam_free_input req =  { 0 };
-	struct hwrm_tf_tcam_free_output resp = { 0 };
-	struct tfp_send_msg_parms parms = { 0 };
-
-	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
-	if (rc != 0)
-		return rc;
-
-	req.count = 1;
-	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
-	if (in_parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
-
-	parms.tf_type = HWRM_TF_TCAM_FREE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &parms);
-	return rc;
-}
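
For reference, the direct-message path that the table set/get hunks above
switch to reduces to the sketch below. It is illustrative only: it reuses
the request/response structures and tfp_send_msg_parms fields visible in
this patch, takes fw_session_id as a bare argument instead of looking up
the session, the copy of the response payload back to the caller is
omitted, and the function name is made up.

static int
tf_msg_tbl_type_get_sketch(struct tf *tfp,
			   uint32_t fw_session_id,
			   enum tf_dir dir,
			   uint16_t hcapi_type,
			   uint32_t index)
{
	struct hwrm_tf_tbl_type_get_input req = { 0 };
	struct hwrm_tf_tbl_type_get_output resp = { 0 };
	struct tfp_send_msg_parms parms = { 0 };
	int rc;

	/* Request carries the HCAPI type and index directly */
	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
	req.flags = tfp_cpu_to_le_16(dir);
	req.type = tfp_cpu_to_le_32(hcapi_type);
	req.index = tfp_cpu_to_le_32(index);

	/* Direct (non-tunneled) send through the Kong mailbox */
	parms.tf_type = HWRM_TF_TBL_TYPE_GET;
	parms.req_data = (uint32_t *)&req;
	parms.req_size = sizeof(req);
	parms.resp_data = (uint32_t *)&resp;
	parms.resp_size = sizeof(resp);
	parms.mailbox = TF_KONG_MB;

	rc = tfp_send_msg_direct(tfp, &parms);
	if (rc)
		return rc;

	return tfp_le_to_cpu_32(parms.tf_resp_code);
}
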
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1ff1044e8..8e276d4c0 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -16,6 +16,8 @@
 
 struct tf;
 
+/* HWRM Direct messages */
+
 /**
  * Sends session open request to Firmware
  *
@@ -29,7 +31,7 @@ struct tf;
  *   Pointer to the fw_session_id that is allocated on firmware side
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
@@ -46,7 +48,7 @@ int tf_msg_session_open(struct tf *tfp,
  *   time of session open
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_attach(struct tf *tfp,
 			  char *ctrl_channel_name,
@@ -59,73 +61,21 @@ int tf_msg_session_attach(struct tf *tfp,
  *   Pointer to session handle
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_close(struct tf *tfp);
 
 /**
  * Sends session query config request to TF Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_qcfg(struct tf *tfp);
 
-/**
- * Sends session HW resource query capability request to TF Firmware
- */
-int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_query *hw_query);
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int tf_msg_session_hw_resc_alloc(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_alloc *hw_alloc,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int tf_msg_session_hw_resc_free(struct tf *tfp,
-				enum tf_dir dir,
-				struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int tf_msg_session_hw_resc_flush(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int tf_msg_session_sram_resc_qcaps(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_query *sram_query);
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int tf_msg_session_sram_resc_alloc(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_alloc *sram_alloc,
-				   struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int tf_msg_session_sram_resc_free(struct tf *tfp,
-				  enum tf_dir dir,
-				  struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int tf_msg_session_sram_resc_flush(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_entry *sram_entry);
-
 /**
  * Sends session HW resource query capability request to TF Firmware
  *
@@ -183,6 +133,21 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 
 /**
  * Sends session resource flush request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the req and resv arrays
+ *
+ * [in] resv
+ *   Pointer to an array of reserved elements that need to be flushed
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_resc_flush(struct tf *tfp,
 			      enum tf_dir dir,
@@ -190,6 +155,24 @@ int tf_msg_session_resc_flush(struct tf *tfp,
 			      struct tf_rm_resc_entry *resv);
 /**
  * Sends EM internal insert request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to em insert parameter list
+ *
+ * [in] rptr_index
+ *   Record ptr index
+ *
+ * [in] rptr_entry
+ *   Record ptr entry
+ *
+ * [in] num_of_entries
+ *   Number of entries to insert
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    struct tf_insert_em_entry_parms *params,
@@ -198,26 +181,75 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    uint8_t *num_of_entries);
 /**
  * Sends EM internal delete request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] em_parms
+ *   Pointer to em delete parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms);
+
 /**
  * Sends EM mem register request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] page_lvl
+ *   Page level
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] dma_addr
+ *   DMA Address for the memory page
+ *
+ * [in] ctx_id
+ *   Context id
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id);
+		       int page_lvl,
+		       int page_size,
+		       uint64_t dma_addr,
+		       uint16_t *ctx_id);
 
 /**
  * Sends EM mem unregister request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctx_id
+ *   Context id
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t     *ctx_id);
+			 uint16_t *ctx_id);
 
 /**
  * Sends EM qcaps request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] em_caps
+ *   Pointer to EM capabilities
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_qcaps(struct tf *tfp,
 		    int dir,
@@ -225,22 +257,63 @@ int tf_msg_em_qcaps(struct tf *tfp,
 
 /**
  * Sends EM config request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] num_entries
+ *   EM Table, key 0, number of entries to configure
+ *
+ * [in] key0_ctx_id
+ *   EM Table, Key 0 context id
+ *
+ * [in] key1_ctx_id
+ *   EM Table, Key 1 context id
+ *
+ * [in] record_ctx_id
+ *   EM Table, Record context id
+ *
+ * [in] efc_ctx_id
+ *   EM Table, EFC Table context id
+ *
+ * [in] flush_interval
+ *   Flush pending HW cached flows every 1/10th of value set in
+ *   seconds, both idle and active flows are flushed from the HW
+ *   cache. If set to 0, this feature will be disabled.
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t      num_entries,
-		  uint16_t      key0_ctx_id,
-		  uint16_t      key1_ctx_id,
-		  uint16_t      record_ctx_id,
-		  uint16_t      efc_ctx_id,
-		  uint8_t       flush_interval,
-		  int           dir);
+		  uint32_t num_entries,
+		  uint16_t key0_ctx_id,
+		  uint16_t key1_ctx_id,
+		  uint16_t record_ctx_id,
+		  uint16_t efc_ctx_id,
+		  uint8_t flush_interval,
+		  int dir);
 
 /**
  * Sends EM operation request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] op
+ *   CFA Operator
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op);
+		 int dir,
+		 uint16_t op);
 
 /**
  * Sends tcam entry 'set' to the Firmware.
@@ -281,7 +354,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to set
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to set
  *
  * [in] size
@@ -298,7 +371,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  */
 int tf_msg_set_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
@@ -312,7 +385,7 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to get
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to get
  *
  * [in] size
@@ -329,11 +402,13 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  */
 int tf_msg_get_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
 
+/* HWRM Tunneled messages */
+
 /**
  * Sends bulk get message of a Table Type element to the firmware.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index b6fe2f1ad..e0a84e64d 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -1818,16 +1818,8 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_entries = tfs->resc.tx.hw_entry;
 
 	/* Query for Session HW Resources */
-	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
 
+	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1846,16 +1838,6 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
 
 	/* Allocate Session HW Resources */
-	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform HW allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -1906,17 +1888,7 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	else
 		sram_entries = tfs->resc.tx.sram_entry;
 
-	/* Query for Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
+	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1934,20 +1906,6 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
 		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
 
-	/* Allocate Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_alloc(tfp,
-					    dir,
-					    &sram_alloc,
-					    sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform SRAM allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -2798,17 +2756,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
-			rc = tf_msg_session_hw_resc_flush(tfp,
-							  i,
-							  hw_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
 		}
 
 		/* Check for any not previously freed SRAM resources
@@ -2828,38 +2775,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
-
-			rc = tf_msg_session_sram_resc_flush(tfp,
-							    i,
-							    sram_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
-		}
-
-		rc = tf_msg_session_hw_resc_free(tfp, i, hw_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, HW free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
-		}
-
-		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, SRAM free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
 		}
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index de8f11955..2d9be654a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -95,7 +95,9 @@ struct tf_rm_new_db {
  *   - EOPNOTSUPP - Operation not supported
  */
 static void
-tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
 			       uint16_t *reservations,
 			       uint16_t count,
 			       uint16_t *valid_count)
@@ -107,6 +109,26 @@ tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
 		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
 		    reservations[i] > 0)
 			cnt++;
+
+		/* Only log msg if a type is attempted reserved and
+		 * not supported. We ignore EM module as its using a
+		 * split configuration array thus it would fail for
+		 * this type of check.
+		 */
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
+			TFP_DRV_LOG(ERR,
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+		}
 	}
 
 	*valid_count = cnt;
@@ -405,7 +427,9 @@ tf_rm_create_db(struct tf *tfp,
 	 * the DB holds them all as to give a fast lookup. We can also
 	 * remove entries where there are no request for elements.
 	 */
-	tf_rm_count_hcapi_reservations(parms->cfg,
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
 				       parms->alloc_cnt,
 				       parms->num_elements,
 				       &hcapi_items);
@@ -507,6 +531,11 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
 			/* Create pool */
 			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
 				     sizeof(struct bitalloc));
@@ -548,11 +577,16 @@ tf_rm_create_db(struct tf *tfp,
 		}
 	}
 
-	rm_db->num_entries = i;
+	rm_db->num_entries = parms->num_elements;
 	rm_db->dir = parms->dir;
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index e594f0248..d7f5de4c4 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -26,741 +26,6 @@
 #include "stack.h"
 #include "tf_common.h"
 
-#define PTU_PTE_VALID          0x1UL
-#define PTU_PTE_LAST           0x2UL
-#define PTU_PTE_NEXT_TO_LAST   0x4UL
-
-/* Number of pointers per page_size */
-#define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
-
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define	TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
-/**
- * Function to free a page table
- *
- * [in] tp
- *   Pointer to the page table to free
- */
-static void
-tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
-{
-	uint32_t i;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		if (!tp->pg_va_tbl[i]) {
-			TFP_DRV_LOG(WARNING,
-				    "No mapping for page: %d table: %016" PRIu64 "\n",
-				    i,
-				    (uint64_t)(uintptr_t)tp);
-			continue;
-		}
-
-		tfp_free(tp->pg_va_tbl[i]);
-		tp->pg_va_tbl[i] = NULL;
-	}
-
-	tp->pg_count = 0;
-	tfp_free(tp->pg_va_tbl);
-	tp->pg_va_tbl = NULL;
-	tfp_free(tp->pg_pa_tbl);
-	tp->pg_pa_tbl = NULL;
-}
-
-/**
- * Function to free an EM table
- *
- * [in] tbl
- *   Pointer to the EM table to free
- */
-static void
-tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-		TFP_DRV_LOG(INFO,
-			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
-			   TF_EM_PAGE_SIZE,
-			    i,
-			    tp->pg_count);
-
-		tf_em_free_pg_tbl(tp);
-	}
-
-	tbl->l0_addr = NULL;
-	tbl->l0_dma_addr = 0;
-	tbl->num_lvl = 0;
-	tbl->num_data_pages = 0;
-}
-
-/**
- * Allocation of page tables
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] pg_count
- *   Page count to allocate
- *
- * [in] pg_size
- *   Size of each page
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
-		   uint32_t pg_count,
-		   uint32_t pg_size)
-{
-	uint32_t i;
-	struct tfp_calloc_parms parms;
-
-	parms.nitems = pg_count;
-	parms.size = sizeof(void *);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0)
-		return -ENOMEM;
-
-	tp->pg_va_tbl = parms.mem_va;
-
-	if (tfp_calloc(&parms) != 0) {
-		tfp_free(tp->pg_va_tbl);
-		return -ENOMEM;
-	}
-
-	tp->pg_pa_tbl = parms.mem_va;
-
-	tp->pg_count = 0;
-	tp->pg_size = pg_size;
-
-	for (i = 0; i < pg_count; i++) {
-		parms.nitems = 1;
-		parms.size = pg_size;
-		parms.alignment = TF_EM_PAGE_ALIGNMENT;
-
-		if (tfp_calloc(&parms) != 0)
-			goto cleanup;
-
-		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
-		tp->pg_va_tbl[i] = parms.mem_va;
-
-		memset(tp->pg_va_tbl[i], 0, pg_size);
-		tp->pg_count++;
-	}
-
-	return 0;
-
-cleanup:
-	tf_em_free_pg_tbl(tp);
-	return -ENOMEM;
-}
-
-/**
- * Allocates EM page tables
- *
- * [in] tbl
- *   Table to allocate pages for
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int rc = 0;
-	int i;
-	uint32_t j;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-
-		rc = tf_em_alloc_pg_tbl(tp,
-					tbl->page_cnt[i],
-					TF_EM_PAGE_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d, rc:%s\n",
-				i,
-				strerror(-rc));
-			goto cleanup;
-		}
-
-		for (j = 0; j < tp->pg_count; j++) {
-			TFP_DRV_LOG(INFO,
-				"EEM: Allocated page table: size %u lvl %d cnt"
-				" %u VA:%p PA:%p\n",
-				TF_EM_PAGE_SIZE,
-				i,
-				tp->pg_count,
-				(uint32_t *)tp->pg_va_tbl[j],
-				(uint32_t *)(uintptr_t)tp->pg_pa_tbl[j]);
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_free_page_table(tbl);
-	return rc;
-}
-
-/**
- * Links EM page tables
- *
- * [in] tp
- *   Pointer to page table
- *
- * [in] tp_next
- *   Pointer to the next page table
- *
- * [in] set_pte_last
- *   Flag controlling if the page table is last
- */
-static void
-tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-		      struct hcapi_cfa_em_page_tbl *tp_next,
-		      bool set_pte_last)
-{
-	uint64_t *pg_pa = tp_next->pg_pa_tbl;
-	uint64_t *pg_va;
-	uint64_t valid;
-	uint32_t k = 0;
-	uint32_t i;
-	uint32_t j;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			if (k == tp_next->pg_count - 2 && set_pte_last)
-				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
-			else if (k == tp_next->pg_count - 1 && set_pte_last)
-				valid = PTU_PTE_LAST | PTU_PTE_VALID;
-			else
-				valid = PTU_PTE_VALID;
-
-			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
-			if (++k >= tp_next->pg_count)
-				return;
-		}
-	}
-}
-
-/**
- * Setup a EM page table
- *
- * [in] tbl
- *   Pointer to EM page table
- */
-static void
-tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_page_tbl *tp;
-	bool set_pte_last = 0;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl - 1; i++) {
-		tp = &tbl->pg_tbl[i];
-		tp_next = &tbl->pg_tbl[i + 1];
-		if (i == tbl->num_lvl - 2)
-			set_pte_last = 1;
-		tf_em_link_page_table(tp, tp_next, set_pte_last);
-	}
-
-	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
-}
-
-/**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
-/**
- * Unregisters EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- */
-static void
-tf_em_ctx_unreg(struct tf *tfp,
-		struct tf_tbl_scope_cb *tbl_scope_cb,
-		int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp =
-		&tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
-			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
-			tf_em_free_page_table(tbl);
-		}
-	}
-}
-
-/**
- * Registers EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of Memory
- */
-static int
-tf_em_ctx_reg(struct tf *tfp,
-	      struct tf_tbl_scope_cb *tbl_scope_cb,
-	      int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp =
-		&tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int rc = 0;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
-
-			if (rc)
-				goto cleanup;
-
-			rc = tf_em_alloc_page_table(tbl);
-			if (rc)
-				goto cleanup;
-
-			tf_em_setup_page_table(tbl);
-			rc = tf_msg_em_mem_rgtr(tfp,
-						tbl->num_lvl - 1,
-						TF_EM_PAGE_SIZE_ENUM,
-						tbl->l0_dma_addr,
-						&tbl->ctx_id);
-			if (rc)
-				goto cleanup;
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	return rc;
-}
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->rx_num_flows_in_k != 0 &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if (parms->tx_num_flows_in_k != 0 &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries =
-		0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries =
-		0;
-
-	return 0;
-}
-
 /**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
@@ -883,289 +148,6 @@ tf_free_tbl_entry_shadow(struct tf_session *tfs,
 }
 #endif /* TF_SHADOW */
 
-/**
- * Create External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- * [in] num_entries
- *   number of entries to write
- * [in] entry_sz_bytes
- *   size of each entry
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_tbl_pool_external(enum tf_dir dir,
-			    struct tf_tbl_scope_cb *tbl_scope_cb,
-			    uint32_t num_entries,
-			    uint32_t entry_sz_bytes)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i;
-	int32_t j;
-	int rc = 0;
-	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
-			    tf_dir_2_str(dir), strerror(ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Save the  malloced memory address so that it can
-	 * be freed when the table scope is freed.
-	 */
-	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
-
-	/* Fill pool with indexes in reverse
-	 */
-	j = (num_entries - 1) * entry_sz_bytes;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-				    tf_dir_2_str(dir), strerror(-rc));
-			goto cleanup;
-		}
-
-		if (j < 0) {
-			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
-				    dir, j);
-			goto cleanup;
-		}
-		j -= entry_sz_bytes;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
-
-/**
- * Destroy External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- *
- */
-static void
-tf_destroy_tbl_pool_external(enum tf_dir dir,
-			     struct tf_tbl_scope_cb *tbl_scope_cb)
-{
-	uint32_t *ext_act_pool_mem =
-		tbl_scope_cb->ext_act_pool_mem[dir];
-
-	tfp_free(ext_act_pool_mem);
-}
-
-/* API defined in tf_em.h */
-struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
-{
-	int i;
-
-	/* Check that id is valid */
-	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
-	if (i < 0)
-		return NULL;
-
-	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
-	}
-
-	return NULL;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			 struct tf_free_tbl_scope_parms *parms)
-{
-	int rc = 0;
-	enum tf_dir  dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Table scope error\n");
-		return -EINVAL;
-	}
-
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-
-	/* free table scope locks */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/* Free associated external pools
-		 */
-		tf_destroy_tbl_pool_external(dir,
-					     tbl_scope_cb);
-		tf_msg_em_op(tfp,
-			     dir,
-			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
-
-		/* free table scope and all associated resources */
-		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	}
-
-	return rc;
-}
-
-/* API defined in tf_em.h */
-int
-tf_alloc_eem_tbl_scope(struct tf *tfp,
-		       struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-	enum tf_dir dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_table *em_tables;
-	int index;
-	struct tf_session *session;
-	struct tf_free_tbl_scope_parms free_parms;
-
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* Get Table Scope control block from the session pool */
-	index = ba_alloc(session->tbl_scope_pool_rx);
-	if (index == -1) {
-		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
-			    "Control Block\n");
-		return -ENOMEM;
-	}
-
-	tbl_scope_cb = &session->tbl_scopes[index];
-	tbl_scope_cb->index = index;
-	tbl_scope_cb->tbl_scope_id = index;
-	parms->tbl_scope_id = index;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_msg_em_qcaps(tfp,
-				     dir,
-				     &tbl_scope_cb->em_caps[dir]);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to query for EEM capability,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-	}
-
-	/*
-	 * Validate and setup table sizes
-	 */
-	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
-		goto cleanup;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/*
-		 * Allocate tables and signal configuration to FW
-		 */
-		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-
-		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
-		rc = tf_msg_em_cfg(tfp,
-				   em_tables[TF_KEY0_TABLE].num_entries,
-				   em_tables[TF_KEY0_TABLE].ctx_id,
-				   em_tables[TF_KEY1_TABLE].ctx_id,
-				   em_tables[TF_RECORD_TABLE].ctx_id,
-				   em_tables[TF_EFC_TABLE].ctx_id,
-				   parms->hw_flow_cache_flush_timer,
-				   dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "TBL: Unable to configure EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		rc = tf_msg_em_op(tfp,
-				  dir,
-				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
-
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		/* Allocate the pool of offsets of the external memory.
-		 * Initially, this is a single fixed size pool for all external
-		 * actions related to a single table scope.
-		 */
-		rc = tf_create_tbl_pool_external(dir,
-				    tbl_scope_cb,
-				    em_tables[TF_RECORD_TABLE].num_entries,
-				    em_tables[TF_RECORD_TABLE].entry_size);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s TBL: Unable to allocate idx pools %s\n",
-				    tf_dir_2_str(dir),
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-	}
-
-	return 0;
-
-cleanup_full:
-	free_parms.tbl_scope_id = index;
-	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
-	return -EINVAL;
-
-cleanup:
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-	return -EINVAL;
-}
 
  /* API defined in tf_core.h */
 int
@@ -1196,119 +178,3 @@ tf_bulk_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
-
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_scope(struct tf *tfp,
-		   struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	rc = tf_alloc_eem_tbl_scope(tfp, parms);
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_scope(struct tf *tfp,
-		  struct tf_free_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	/* free table scope and all associated resources */
-	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
-
-	return rc;
-}
-
-static void
-tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-			struct hcapi_cfa_em_page_tbl *tp_next)
-{
-	uint64_t *pg_va;
-	uint32_t i;
-	uint32_t j;
-	uint32_t k = 0;
-
-	printf("pg_count:%d pg_size:0x%x\n",
-	       tp->pg_count,
-	       tp->pg_size);
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-		printf("\t%p\n", (void *)pg_va);
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
-			if (((pg_va[j] & 0x7) ==
-			     tfp_cpu_to_le_64(PTU_PTE_LAST |
-					      PTU_PTE_VALID)))
-				return;
-
-			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
-				printf("** Invalid entry **\n");
-				return;
-			}
-
-			if (++k >= tp_next->pg_count) {
-				printf("** Shouldn't get here **\n");
-				return;
-			}
-		}
-	}
-}
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
-{
-	struct tf_session      *session;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_page_tbl *tp;
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-	int j;
-	int dir;
-
-	printf("called %s\n", __func__);
-
-	/* find session struct */
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* find control block for table scope */
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-
-		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
-			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
-			printf
-	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
-			       j,
-			       tbl->type,
-			       tbl->num_entries,
-			       tbl->entry_size,
-			       tbl->num_lvl);
-			if (tbl->pg_tbl[0].pg_va_tbl &&
-			    tbl->pg_tbl[0].pg_pa_tbl)
-				printf("%p %p\n",
-			       tbl->pg_tbl[0].pg_va_tbl[0],
-			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
-			for (i = 0; i < tbl->num_lvl - 1; i++) {
-				printf("Level:%d\n", i);
-				tp = &tbl->pg_tbl[i];
-				tp_next = &tbl->pg_tbl[i + 1];
-				tf_dump_link_page_table(tp, tp_next);
-			}
-			printf("\n");
-		}
-	}
-}
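
To put numbers on the page-table sizing helpers removed above: with the
4 KiB EM page size and 8-byte page pointers, MAX_PAGE_PTRS(4 KiB) is
4096 / 8 = 512, so one level of indirection covers 512 * 4 KiB = 2 MiB of
data and two levels cover 512 * 512 * 4 KiB = 1 GiB, which is how
tf_em_size_page_tbl_lvl() picks TF_PT_LVL_1 or TF_PT_LVL_2. The snippet
below is a standalone restatement of that arithmetic (assuming a 64-bit
build), not driver code.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t page_size = 1ULL << 12;                     /* TF_EM_PG_SZ_4K */
	uint64_t ptrs_per_page = page_size / sizeof(void *); /* MAX_PAGE_PTRS: 512 */
	uint64_t lvl1_cover = ptrs_per_page * page_size;     /* 2 MiB */
	uint64_t lvl2_cover = ptrs_per_page * lvl1_cover;    /* 1 GiB */

	/* Example table: 1M entries of 16 bytes = 16 MiB of data */
	uint64_t data_size = (1ULL << 20) * 16;

	/* Mirrors the level walk: 0, 1 or 2 levels of indirection */
	int max_lvl = data_size <= page_size ? 0 :
		      data_size <= lvl1_cover ? 1 :
		      data_size <= lvl2_cover ? 2 : -1;

	printf("lvl1 covers %" PRIu64 " bytes, lvl2 covers %" PRIu64 " bytes, "
	       "a 16 MiB table needs %d page-table levels\n",
	       lvl1_cover, lvl2_cover, max_lvl + 1);
	return 0;
}
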
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index bdf7d2089..2f5af6060 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -209,8 +209,10 @@ tf_tbl_set(struct tf *tfp,
 	   struct tf_tbl_set_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
 	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -240,9 +242,22 @@ tf_tbl_set(struct tf *tfp,
 	}
 
 	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	rc = tf_msg_set_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
@@ -262,8 +277,10 @@ tf_tbl_get(struct tf *tfp,
 	   struct tf_tbl_get_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
+	uint16_t hcapi_type;
 	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -292,10 +309,24 @@ tf_tbl_get(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Get the entry */
 	rc = tf_msg_get_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
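
The conversion step added above (and repeated for TCAM in the hunks that
follow) boils down to: resolve the TF table type to its HCAPI RM type via
the module's RM DB, then message firmware with the HCAPI type. A minimal
sketch, assuming only the fields shown in this patch and skipping the
tf_rm_is_allocated() check; the function name and the explicit db handle
argument are illustrative only.

static int
tf_tbl_get_sketch(struct tf *tfp,
		  void *tbl_rm_db,
		  struct tf_tbl_get_parms *parms)
{
	struct tf_rm_get_hcapi_parms hparms = { 0 };
	uint16_t hcapi_type;
	int rc;

	/* TF type -> HCAPI RM type lookup in the table module RM DB */
	hparms.rm_db = tbl_rm_db;
	hparms.db_index = parms->type;
	hparms.hcapi_type = &hcapi_type;
	rc = tf_rm_get_hcapi_type(&hparms);
	if (rc)
		return rc;

	/* Firmware is messaged with the HCAPI type, not the TF enum */
	return tf_msg_get_tbl_entry(tfp,
				    parms->dir,
				    hcapi_type,
				    parms->data_sz_in_bytes,
				    parms->data,
				    parms->idx);
}
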
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 260fb15a6..a1761ad56 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -53,7 +53,6 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	db_cfg.num_elements = parms->num_elements;
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -174,14 +173,15 @@ tf_tcam_alloc(struct tf *tfp,
 }
 
 int
-tf_tcam_free(struct tf *tfp __rte_unused,
-	     struct tf_tcam_free_parms *parms __rte_unused)
+tf_tcam_free(struct tf *tfp,
+	     struct tf_tcam_free_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -253,6 +253,15 @@ tf_tcam_free(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
@@ -281,6 +290,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -338,6 +348,15 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 5090dfd9f..ee5bacc09 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -76,6 +76,10 @@ struct tf_tcam_free_parms {
 	 * [in] Type of the allocation type
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
 	/**
 	 * [in] Index to free
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 16c43eb67..5472a9aac 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -152,9 +152,9 @@ tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
 	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
 		return tf_ident_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_TABLE:
-		return tf_tcam_tbl_2_str(mod_type);
-	case TF_DEVICE_MODULE_TYPE_TCAM:
 		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tcam_tbl_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_EM:
 		return tf_em_tbl_type_2_str(mod_type);
 	default:
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 23/51] net/bnxt: update table get to use new design
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (21 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
                     ` (28 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Move bulk table get implementation to new Tbl Module design.
- Update messages for bulk table get
- Retrieve specified table element using bulk mechanism
- Remove deprecated resource definitions
- Update device type configuration for P4.
- Update RM DB HCAPI count check and fix EM internal and host
  code such that EM DBs can be created correctly.
- Update error logging to info level on unbind in the different modules.
- Move RTE RSVD out of tf_resources.h

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h      |  250 ++
 drivers/net/bnxt/hcapi/hcapi_cfa.h        |    2 +
 drivers/net/bnxt/meson.build              |    3 +-
 drivers/net/bnxt/tf_core/Makefile         |    2 -
 drivers/net/bnxt/tf_core/tf_common.h      |   55 +-
 drivers/net/bnxt/tf_core/tf_core.c        |   86 +-
 drivers/net/bnxt/tf_core/tf_device.h      |   24 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c   |    4 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h   |    5 +-
 drivers/net/bnxt/tf_core/tf_em.h          |   88 +-
 drivers/net/bnxt/tf_core/tf_em_common.c   |   29 +-
 drivers/net/bnxt/tf_core/tf_em_internal.c |   59 +-
 drivers/net/bnxt/tf_core/tf_identifier.c  |   14 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |   31 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |    8 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  529 ---
 drivers/net/bnxt/tf_core/tf_rm.c          | 3695 ++++-----------------
 drivers/net/bnxt/tf_core/tf_rm.h          |  539 +--
 drivers/net/bnxt/tf_core/tf_rm_new.c      |  907 -----
 drivers/net/bnxt/tf_core/tf_rm_new.h      |  446 ---
 drivers/net/bnxt/tf_core/tf_session.h     |  214 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  478 ++-
 drivers/net/bnxt/tf_core/tf_tbl.h         |  436 ++-
 drivers/net/bnxt/tf_core/tf_tbl_type.c    |  342 --
 drivers/net/bnxt/tf_core/tf_tbl_type.h    |  318 --
 drivers/net/bnxt/tf_core/tf_tcam.c        |   15 +-
 26 files changed, 2337 insertions(+), 6242 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
new file mode 100644
index 000000000..c30e4f49c
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_tbl.h
+ *
+ * Description: header for SWE based on Truflow
+ *
+ * Date:  12/16/19 17:18:12
+ *
+ * Note:  This file was originally generated by tflib_decode.py.
+ *        Remainder is hand coded due to lack of availability of xml for
+ *        additional tables at this time (EEM Record and union fields)
+ *
+ **/
+#ifndef _CFA_P40_TBL_H_
+#define _CFA_P40_TBL_H_
+
+#include "cfa_p40_hw.h"
+
+#include "hcapi_cfa_defs.h"
+
+const struct hcapi_cfa_field cfa_p40_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_act_veb_tcam_layout[] = {
+	{CFA_P40_ACT_VEB_TCAM_VALID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_MAC_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_OVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_IVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_lkup_tcam_record_mem_layout[] = {
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_ctxt_remap_mem_layout[] = {
+	{CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS},
+	/* Fields below are not generated through automation */
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS},
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
+	{CFA_P40_EEM_KEY_TBL_VALID_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
+
+};
+#endif /* _CFA_P40_TBL_H_ */
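
As a rough illustration only: each layout table above is an ordered list of
{bit position, bit width} pairs, so a consumer can walk a layout to compute
the total width of a key or record. The sketch below is a hypothetical helper
and is not part of this patch; the member name `num_bits` is assumed from the
BITPOS/NUM_BITS define pairs and may differ from the actual
struct hcapi_cfa_field definition in hcapi_cfa_defs.h.

#include <stdint.h>
#include <stddef.h>
#include "hcapi_cfa_defs.h"	/* struct hcapi_cfa_field (members assumed) */

/* Hypothetical sketch: sum the field widths of one layout table. */
static uint16_t cfa_p40_layout_width(const struct hcapi_cfa_field *layout,
				     size_t num_fields)
{
	uint16_t width = 0;
	size_t i;

	/* Field order in the table mirrors the hardware layout. */
	for (i = 0; i < num_fields; i++)
		width += layout[i].num_bits;	/* assumed member name */

	return width;
}

/* Usage (sketch): width of the L2 context TCAM key layout.
 * cfa_p40_layout_width(cfa_p40_prof_l2_ctxt_tcam_layout,
 *			RTE_DIM(cfa_p40_prof_l2_ctxt_tcam_layout));
 */
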
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index f60af4e56..7a67493bd 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -243,6 +243,8 @@ int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
 			       struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
+			     struct hcapi_cfa_data *mirror);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 35038dc8b..7f3ec6204 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -41,10 +41,9 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
 	'tf_core/tf_shadow_tcam.c',
-	'tf_core/tf_tbl_type.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm_new.c',
+	'tf_core/tf_rm.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index f186741e4..9ba60e1c2 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -23,10 +23,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index ec3bca835..b982203db 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -6,52 +6,11 @@
 #ifndef _TF_COMMON_H_
 #define _TF_COMMON_H_
 
-/* Helper to check the parms */
-#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "%s: session error\n", \
-				    tf_dir_2_str((parms)->dir)); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_TFP_SESSION(tfp) do { \
-		if ((tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
+/* Helpers to perform parameter checks */
 
+/**
+ * Checks 1 parameter against NULL.
+ */
 #define TF_CHECK_PARMS1(parms) do {					\
 		if ((parms) == NULL) {					\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -59,6 +18,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 2 parameters against NULL.
+ */
 #define TF_CHECK_PARMS2(parms1, parms2) do {				\
 		if ((parms1) == NULL || (parms2) == NULL) {		\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -66,6 +28,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 3 parameters against NULL.
+ */
 #define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
 		if ((parms1) == NULL ||					\
 		    (parms2) == NULL ||					\
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8b3e15c8a..8727900c4 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -186,7 +186,7 @@ int tf_insert_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -241,7 +241,7 @@ int tf_delete_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -523,7 +523,7 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 	return -EOPNOTSUPP;
 }
 
@@ -821,7 +821,80 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
+int
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_bulk_parms bparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&bparms, 0, sizeof(struct tf_tbl_get_bulk_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+
+		return rc;
+	}
+
+	/* Internal table type processing */
+
+	if (dev->ops->tf_dev_get_bulk_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	bparms.dir = parms->dir;
+	bparms.type = parms->type;
+	bparms.starting_idx = parms->starting_idx;
+	bparms.num_entries = parms->num_entries;
+	bparms.entry_sz_in_bytes = parms->entry_sz_in_bytes;
+	bparms.physical_mem_addr = parms->physical_mem_addr;
+	rc = dev->ops->tf_dev_get_bulk_tbl(tfp, &bparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get bulk failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_tbl_scope(struct tf *tfp,
 		   struct tf_alloc_tbl_scope_parms *parms)
@@ -830,7 +903,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -861,7 +934,6 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
 int
 tf_free_tbl_scope(struct tf *tfp,
 		  struct tf_free_tbl_scope_parms *parms)
@@ -870,7 +942,7 @@ tf_free_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 2712d1039..93f3627d4 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -8,7 +8,7 @@
 
 #include "tf_core.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -293,7 +293,27 @@ struct tf_dev_ops {
 	 *   - (-EINVAL) on failure.
 	 */
 	int (*tf_dev_get_tbl)(struct tf *tfp,
-			       struct tf_tbl_get_parms *parms);
+			      struct tf_tbl_get_parms *parms);
+
+	/**
+	 * Retrieves the specified table type elements using the 'bulk'
+	 * mechanism.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get bulk parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_bulk_tbl)(struct tf *tfp,
+				   struct tf_tbl_get_bulk_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 127c655a6..e3526672f 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -8,7 +8,7 @@
 
 #include "tf_device.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
 
@@ -88,6 +88,7 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
+	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
 	.tf_dev_free_tcam = NULL,
 	.tf_dev_alloc_search_tcam = NULL,
@@ -114,6 +115,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
+	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = NULL,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index da6dd65a3..473e4eae5 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -9,7 +9,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_core.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -41,8 +41,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index cf799c200..7042f44e9 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -23,6 +23,56 @@
 #define TF_EM_MAX_MASK 0x7FFF
 #define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
 
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Other page sizes are rounded down to the next lower supported
+ * hardware page size.
+ */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define PAGE_SIZE TF_EM_PAGE_SIZE_2M
+
+#if (PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+
 /*
  * Used to build GFID:
  *
@@ -80,13 +130,43 @@ struct tf_em_cfg_parms {
 };
 
 /**
- * @page table Table
+ * @page em EM
  *
  * @ref tf_alloc_eem_tbl_scope
  *
  * @ref tf_free_eem_tbl_scope_cb
  *
- * @ref tbl_scope_cb_find
+ * @ref tf_em_insert_int_entry
+ *
+ * @ref tf_em_delete_int_entry
+ *
+ * @ref tf_em_insert_ext_entry
+ *
+ * @ref tf_em_delete_ext_entry
+ *
+ * @ref tf_em_insert_ext_sys_entry
+ *
+ * @ref tf_em_delete_ext_sys_entry
+ *
+ * @ref tf_em_int_bind
+ *
+ * @ref tf_em_int_unbind
+ *
+ * @ref tf_em_ext_common_bind
+ *
+ * @ref tf_em_ext_common_unbind
+ *
+ * @ref tf_em_ext_host_alloc
+ *
+ * @ref tf_em_ext_host_free
+ *
+ * @ref tf_em_ext_system_alloc
+ *
+ * @ref tf_em_ext_system_free
+ *
+ * @ref tf_em_ext_common_free
+ *
+ * @ref tf_em_ext_common_alloc
  */
 
 /**
@@ -328,7 +408,7 @@ int tf_em_ext_host_free(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+			   struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using system memory
@@ -344,7 +424,7 @@ int tf_em_ext_system_alloc(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
+			  struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
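
For clarity, a small sketch of the page-size arithmetic introduced above:
with the default PAGE_SIZE selection of TF_EM_PAGE_SIZE_2M, TF_EM_PAGE_SHIFT
is 21, so TF_EM_PAGE_SIZE and TF_EM_PAGE_ALIGNMENT both evaluate to 2 MiB.
The helper below is hypothetical and only mirrors those defines to show how
a caller could size an EEM allocation in pages.

#include <stdint.h>

/* Mirror of the 2M selection above (sketch only, not part of the patch). */
#define EX_EM_PAGE_SHIFT 21
#define EX_EM_PAGE_SIZE  (1UL << EX_EM_PAGE_SHIFT)	/* 2 MiB */

/* Hypothetical helper: pages needed to back a table of 'tbl_bytes'. */
static uint32_t ex_em_num_pages(uint64_t tbl_bytes)
{
	/* Round up to the next whole page. */
	return (uint32_t)((tbl_bytes + EX_EM_PAGE_SIZE - 1) >>
			  EX_EM_PAGE_SHIFT);
}

/* Example: a 128 MiB key table needs ex_em_num_pages(128 << 20) == 64 pages. */
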
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index ba6aa7ac1..d0d80daeb 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -194,12 +194,13 @@ tf_em_ext_common_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Ext DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -210,19 +211,29 @@ tf_em_ext_common_bind(struct tf *tfp,
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
 		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Check if we got any request to support EEM; if so,
+		 * build an EM Ext DB holding Table Scopes.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_TBL_SCOPE] == 0)
+			continue;
+
 		db_cfg.rm_db = &eem_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
-				    "%s: EM DB creation failed\n",
+				    "%s: EM Ext DB creation failed\n",
 				    tf_dir_2_str(i));
 
 			return rc;
 		}
+		db_exists = 1;
 	}
 
-	mem_type = parms->mem_type;
-	init = 1;
+	if (db_exists) {
+		mem_type = parms->mem_type;
+		init = 1;
+	}
 
 	return 0;
 }
@@ -236,13 +247,11 @@ tf_em_ext_common_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Ext DBs created\n");
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 9be91ad5d..1c514747d 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -225,12 +225,13 @@ tf_em_int_bind(struct tf *tfp,
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 	struct tf_session *session;
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Int DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -242,31 +243,35 @@ tf_em_int_bind(struct tf *tfp,
 				  TF_SESSION_EM_POOL_SIZE);
 	}
 
-	/*
-	 * I'm not sure that this code is needed.
-	 * leaving for now until resolved
-	 */
-	if (parms->num_elements) {
-		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
-
-		for (i = 0; i < TF_DIR_MAX; i++) {
-			db_cfg.dir = i;
-			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
-			db_cfg.rm_db = &em_db[i];
-			rc = tf_rm_create_db(tfp, &db_cfg);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: EM DB creation failed\n",
-					    tf_dir_2_str(i));
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
-				return rc;
-			}
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Check if we got any request for internal EM records;
+		 * if so, build an EM Int DB for them.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
+			continue;
+
+		db_cfg.rm_db = &em_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM Int DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
 		}
+		db_exists = 1;
 	}
 
-	init = 1;
+	if (db_exists)
+		init = 1;
+
 	return 0;
 }
 
@@ -280,13 +285,11 @@ tf_em_int_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Int DBs created\n");
+		return 0;
 	}
 
 	session = (struct tf_session *)tfp->session->core_data;
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index b197bb271..211371081 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -7,7 +7,7 @@
 
 #include "tf_identifier.h"
 #include "tf_common.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_util.h"
 #include "tfp.h"
 
@@ -35,7 +35,7 @@ tf_ident_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "Identifier DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -65,7 +65,7 @@ tf_ident_bind(struct tf *tfp,
 }
 
 int
-tf_ident_unbind(struct tf *tfp __rte_unused)
+tf_ident_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
@@ -73,13 +73,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
+		TFP_DRV_LOG(INFO,
 			    "No Identifier DBs created\n");
-		return -EINVAL;
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index d8b80bc84..02d8a4971 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -871,26 +871,41 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *params)
+			  enum tf_dir dir,
+			  uint16_t hcapi_type,
+			  uint32_t starting_idx,
+			  uint16_t num_entries,
+			  uint16_t entry_sz_in_bytes,
+			  uint64_t physical_mem_addr)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
 	int data_size = 0;
 
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(params->dir);
-	req.type = tfp_cpu_to_le_32(params->type);
-	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
-	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
+	req.start_index = tfp_cpu_to_le_32(starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
 
-	data_size = params->num_entries * params->entry_sz_in_bytes;
+	data_size = num_entries * entry_sz_in_bytes;
 
-	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+	req.host_addr = tfp_cpu_to_le_64(physical_mem_addr);
 
 	MSG_PREP(parms,
 		 TF_KONG_MB,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8e276d4c0..7432873d7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -11,7 +11,6 @@
 
 #include "tf_tbl.h"
 #include "tf_rm.h"
-#include "tf_rm_new.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -422,6 +421,11 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms);
+			      enum tf_dir dir,
+			      uint16_t hcapi_type,
+			      uint32_t starting_idx,
+			      uint16_t num_entries,
+			      uint16_t entry_sz_in_bytes,
+			      uint64_t physical_mem_addr);
 
 #endif  /* _TF_MSG_H_ */
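
Since tf_msg_bulk_get_tbl_entry() now takes the individual fields instead of
a parms structure, callers pass them explicitly. The fragment below is only a
sketch of that mapping: the wrapper name is hypothetical, and the hcapi_type
value is assumed to be resolved elsewhere (for example via the RM database),
which is not shown in this hunk.

#include <stdint.h>
#include "tf_tbl.h"	/* struct tf_tbl_get_bulk_parms */
#include "tf_msg.h"	/* tf_msg_bulk_get_tbl_entry() */

/* Hypothetical wrapper showing the flattened argument list. */
static int ex_tbl_bulk_get(struct tf *tfp,
			   struct tf_tbl_get_bulk_parms *bparms,
			   uint16_t hcapi_type)
{
	return tf_msg_bulk_get_tbl_entry(tfp,
					 bparms->dir,
					 hcapi_type,
					 bparms->starting_idx,
					 bparms->num_entries,
					 bparms->entry_sz_in_bytes,
					 bparms->physical_mem_addr);
}
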
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index b7b445102..4688514fc 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,535 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
-/* Common HW resources for all chip variants */
-#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
-					    * entries
-					    */
-#define TF_NUM_PROF_FUNC          128      /* < Number prof_func ID */
-#define TF_NUM_PROF_TCAM         1024      /* < Number entries in profile
-					    * TCAM
-					    */
-#define TF_NUM_EM_PROF_ID          64      /* < Number software EM Profile
-					    * IDs
-					    */
-#define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
-#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
-#define TF_NUM_METER             1024      /* < Number of meter instances */
-#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
-#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
-
-/* Wh+/SR specific HW resources */
-#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
-					    * entries
-					    */
-
-/* SR/SR2 specific HW resources */
-#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
-
-
-/* Thor, SR2 common HW resources */
-#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
-					    * templates
-					    */
-
-/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
-#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
-#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
-#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
-#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
-					    * States
-					    */
-#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
-#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
-#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
-
-/*
- * Common for the Reserved Resource defines below:
- *
- * - HW Resources
- *   For resources where a priority level plays a role, i.e. l2 ctx
- *   tcam entries, both a number of resources and a begin/end pair is
- *   required. The begin/end is used to assure TFLIB gets the correct
- *   priority setting for that resource.
- *
- *   For EM records there is no priority required thus a number of
- *   resources is sufficient.
- *
- *   Example, TCAM:
- *     64 L2 CTXT TCAM entries would in a max 1024 pool be entry
- *     0-63 as HW presents 0 as the highest priority entry.
- *
- * - SRAM Resources
- *   Handled as regular resources as there is no priority required.
- *
- * Common for these resources is that they are handled per direction,
- * rx/tx.
- */
-
-/* HW Resources */
-
-/* L2 CTX */
-#define TF_RSVD_L2_CTXT_TCAM_RX                   64
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_RX - 1)
-#define TF_RSVD_L2_CTXT_TCAM_TX                   960
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TX - 1)
-
-/* Profiler */
-#define TF_RSVD_PROF_FUNC_RX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
-#define TF_RSVD_PROF_FUNC_TX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
-
-#define TF_RSVD_PROF_TCAM_RX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
-#define TF_RSVD_PROF_TCAM_TX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
-
-/* EM Profiles IDs */
-#define TF_RSVD_EM_PROF_ID_RX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ then SR */
-#define TF_RSVD_EM_PROF_ID_TX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ then SR */
-
-/* EM Records */
-#define TF_RSVD_EM_REC_RX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
-#define TF_RSVD_EM_REC_TX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
-
-/* Wildcard */
-#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
-#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
-
-#define TF_RSVD_WC_TCAM_RX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_WC_TCAM_END_IDX_RX                63
-#define TF_RSVD_WC_TCAM_TX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_WC_TCAM_END_IDX_TX                63
-
-#define TF_RSVD_METER_PROF_RX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_PROF_END_IDX_RX             0
-#define TF_RSVD_METER_PROF_TX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_PROF_END_IDX_TX             0
-
-#define TF_RSVD_METER_INST_RX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_INST_END_IDX_RX             0
-#define TF_RSVD_METER_INST_TX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_INST_END_IDX_TX             0
-
-/* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
-#define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
-#define TF_RSVD_MIRROR_END_IDX_TX                 0
-
-/* UPAR */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_UPAR_RX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
-#define TF_RSVD_UPAR_END_IDX_RX                   0
-#define TF_RSVD_UPAR_TX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
-#define TF_RSVD_UPAR_END_IDX_TX                   0
-
-/* Source Properties */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SP_TCAM_RX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_SP_TCAM_END_IDX_RX                0
-#define TF_RSVD_SP_TCAM_TX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_SP_TCAM_END_IDX_TX                0
-
-/* L2 Func */
-#define TF_RSVD_L2_FUNC_RX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
-#define TF_RSVD_L2_FUNC_END_IDX_RX                0
-#define TF_RSVD_L2_FUNC_TX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
-#define TF_RSVD_L2_FUNC_END_IDX_TX                0
-
-/* FKB */
-#define TF_RSVD_FKB_RX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
-#define TF_RSVD_FKB_END_IDX_RX                    0
-#define TF_RSVD_FKB_TX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
-#define TF_RSVD_FKB_END_IDX_TX                    0
-
-/* TBL Scope */
-#define TF_RSVD_TBL_SCOPE_RX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
-#define TF_RSVD_TBL_SCOPE_TX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
-
-/* EPOCH0 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH0_RX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH0_END_IDX_RX                 0
-#define TF_RSVD_EPOCH0_TX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH0_END_IDX_TX                 0
-
-/* EPOCH1 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH1_RX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH1_END_IDX_RX                 0
-#define TF_RSVD_EPOCH1_TX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH1_END_IDX_TX                 0
-
-/* METADATA */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_METADATA_RX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
-#define TF_RSVD_METADATA_END_IDX_RX               0
-#define TF_RSVD_METADATA_TX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
-#define TF_RSVD_METADATA_END_IDX_TX               0
-
-/* CT_STATE */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_CT_STATE_RX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
-#define TF_RSVD_CT_STATE_END_IDX_RX               0
-#define TF_RSVD_CT_STATE_TX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
-#define TF_RSVD_CT_STATE_END_IDX_TX               0
-
-/* RANGE_PROF */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_PROF_RX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
-#define TF_RSVD_RANGE_PROF_TX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
-
-/* RANGE_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_ENTRY_RX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
-#define TF_RSVD_RANGE_ENTRY_TX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
-
-/* LAG_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_LAG_ENTRY_RX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
-#define TF_RSVD_LAG_ENTRY_TX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
-
-
-/* SRAM - Resources
- * Limited to the types that CFA provides.
- */
-#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
-
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SRAM_MCG_RX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
-/* Multicast Group on TX is not supported */
-#define TF_RSVD_SRAM_MCG_TX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
-
-/* First encap of 8B RX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
-/* First encap of 8B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
-
-#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
-/* First encap of 16B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
-
-/* Encap of 64B on RX is not supported */
-#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
-/* First encap of 64B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_SP_SMAC_RX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
-#define TF_RSVD_SRAM_SP_SMAC_TX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
-
-/* SRAM SP IPV4 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
-
-/* SRAM SP IPV6 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
-/* Not yet supported fully in infra */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
-
-#define TF_RSVD_SRAM_COUNTER_64B_RX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_COUNTER_64B_TX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
-
-#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
-
-#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
-
-/* HW Resource Pool names */
-
-#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
-#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
-#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
-
-#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
-#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
-#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
-
-#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
-#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
-#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
-
-#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
-#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
-#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
-
-#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
-
-#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
-#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
-#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
-
-#define TF_METER_PROF_POOL_NAME           meter_prof_pool
-#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
-#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
-
-#define TF_METER_INST_POOL_NAME           meter_inst_pool
-#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
-#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
-
-#define TF_MIRROR_POOL_NAME               mirror_pool
-#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
-#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
-
-#define TF_UPAR_POOL_NAME                 upar_pool
-#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
-#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
-
-#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
-#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
-#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
-
-#define TF_FKB_POOL_NAME                  fkb_pool
-#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
-#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
-
-#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
-#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
-#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
-
-#define TF_L2_FUNC_POOL_NAME              l2_func_pool
-#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
-#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
-
-#define TF_EPOCH0_POOL_NAME               epoch0_pool
-#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
-#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
-
-#define TF_EPOCH1_POOL_NAME               epoch1_pool
-#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
-#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
-
-#define TF_METADATA_POOL_NAME             metadata_pool
-#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
-#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
-
-#define TF_CT_STATE_POOL_NAME             ct_state_pool
-#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
-#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
-
-#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
-#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
-#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
-
-#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
-#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
-#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
-
-#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
-#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
-#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
-
-/* SRAM Resource Pool names */
-#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
-#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
-#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
-
-#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
-#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
-#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
-
-#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
-#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
-#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
-
-#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
-#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
-#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
-
-#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
-#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
-#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
-
-#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
-#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
-#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
-
-#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
-#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
-#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
-
-#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
-#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
-#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
-
-#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
-#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
-#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
-
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
-
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
-
-/* Sw Resource Pool Names */
-
-#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
-#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
-#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
-
-
-/** HW Resource types
- */
-enum tf_resource_type_hw {
-	/* Common HW resources for all chip variants */
-	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-	TF_RESC_TYPE_HW_PROF_FUNC,
-	TF_RESC_TYPE_HW_PROF_TCAM,
-	TF_RESC_TYPE_HW_EM_PROF_ID,
-	TF_RESC_TYPE_HW_EM_REC,
-	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-	TF_RESC_TYPE_HW_WC_TCAM,
-	TF_RESC_TYPE_HW_METER_PROF,
-	TF_RESC_TYPE_HW_METER_INST,
-	TF_RESC_TYPE_HW_MIRROR,
-	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/SR specific HW resources */
-	TF_RESC_TYPE_HW_SP_TCAM,
-	/* SR/SR2 specific HW resources */
-	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Thor, SR2 common HW resources */
-	TF_RESC_TYPE_HW_FKB,
-	/* SR2 specific HW resources */
-	TF_RESC_TYPE_HW_TBL_SCOPE,
-	TF_RESC_TYPE_HW_EPOCH0,
-	TF_RESC_TYPE_HW_EPOCH1,
-	TF_RESC_TYPE_HW_METADATA,
-	TF_RESC_TYPE_HW_CT_STATE,
-	TF_RESC_TYPE_HW_RANGE_PROF,
-	TF_RESC_TYPE_HW_RANGE_ENTRY,
-	TF_RESC_TYPE_HW_LAG_ENTRY,
-	TF_RESC_TYPE_HW_MAX
-};
-
-/** HW Resource types
- */
-enum tf_resource_type_sram {
-	TF_RESC_TYPE_SRAM_FULL_ACTION,
-	TF_RESC_TYPE_SRAM_MCG,
-	TF_RESC_TYPE_SRAM_ENCAP_8B,
-	TF_RESC_TYPE_SRAM_ENCAP_16B,
-	TF_RESC_TYPE_SRAM_ENCAP_64B,
-	TF_RESC_TYPE_SRAM_SP_SMAC,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-	TF_RESC_TYPE_SRAM_COUNTER_64B,
-	TF_RESC_TYPE_SRAM_NAT_SPORT,
-	TF_RESC_TYPE_SRAM_NAT_DPORT,
-	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-	TF_RESC_TYPE_SRAM_MAX
-};
 
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0a84e64d..e0469b653 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -7,3171 +7,916 @@
 
 #include <rte_common.h>
 
+#include <cfa_resource_types.h>
+
 #include "tf_rm.h"
-#include "tf_core.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
-#include "tf_resources.h"
-#include "tf_msg.h"
-#include "bnxt.h"
+#include "tf_device.h"
 #include "tfp.h"
+#include "tf_msg.h"
 
 /**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
- *   enum tf_dir               dir         - Direction to process
- *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
- *                                           in the hw query structure
- *   define                    def_value   - Define value to check against
- *   uint32_t                 *eflag       - Result of the check
- */
-#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
-	if ((dir) == TF_DIR_RX) {					      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	} else {							      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	}								      \
-} while (0)
-
-/**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
- *   enum tf_dir                dir         - Direction to process
- *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
- *                                            in the hw query structure
- *   define                     def_value   - Define value to check against
- *   uint32_t                  *eflag       - Result of the check
- */
-#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
-	if ((dir) == TF_DIR_RX) {					       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	} else {							       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	}								       \
-} while (0)
-
-/**
- * Internal macro to convert a reserved resource define name to be
- * direction specific.
- *
- * Parameters:
- *   enum tf_dir    dir         - Direction to process
- *   string         type        - Type name to append RX or TX to
- *   string         dtype       - Direction specific type
- *
- *
+ * Generic RM Element data type that an RM DB is built upon.
  */
-#define TF_RESC_RSVD(dir, type, dtype) do {	\
-		if ((dir) == TF_DIR_RX)		\
-			(dtype) = type ## _RX;	\
-		else				\
-			(dtype) = type ## _TX;	\
-	} while (0)
-
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
-{
-	switch (hw_type) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		return "L2 ctxt tcam";
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		return "Profile Func";
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		return "Profile tcam";
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		return "EM profile id";
-	case TF_RESC_TYPE_HW_EM_REC:
-		return "EM record";
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		return "WC tcam profile id";
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		return "WC tcam";
-	case TF_RESC_TYPE_HW_METER_PROF:
-		return "Meter profile";
-	case TF_RESC_TYPE_HW_METER_INST:
-		return "Meter instance";
-	case TF_RESC_TYPE_HW_MIRROR:
-		return "Mirror";
-	case TF_RESC_TYPE_HW_UPAR:
-		return "UPAR";
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		return "Source properties tcam";
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		return "L2 Function";
-	case TF_RESC_TYPE_HW_FKB:
-		return "FKB";
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		return "Table scope";
-	case TF_RESC_TYPE_HW_EPOCH0:
-		return "EPOCH0";
-	case TF_RESC_TYPE_HW_EPOCH1:
-		return "EPOCH1";
-	case TF_RESC_TYPE_HW_METADATA:
-		return "Metadata";
-	case TF_RESC_TYPE_HW_CT_STATE:
-		return "Connection tracking state";
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		return "Range profile";
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		return "Range entry";
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		return "LAG";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
-{
-	switch (sram_type) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		return "Full action";
-	case TF_RESC_TYPE_SRAM_MCG:
-		return "MCG";
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		return "Encap 8B";
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		return "Encap 16B";
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		return "Encap 64B";
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		return "Source properties SMAC";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		return "Source properties SMAC IPv4";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		return "Source properties IPv6";
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		return "Counter 64B";
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		return "NAT source port";
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		return "NAT destination port";
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		return "NAT source IPv4";
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		return "NAT destination IPv4";
-	default:
-		return "Invalid identifier";
-	}
-}
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
 
-/**
- * Helper function to perform a HW HCAPI resource type lookup against
- * the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
- */
-static int
-tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
-{
-	uint32_t value = -EOPNOTSUPP;
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
 
-	switch (index) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_REC:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_INST:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
-		break;
-	case TF_RESC_TYPE_HW_MIRROR:
-		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
-		break;
-	case TF_RESC_TYPE_HW_UPAR:
-		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
-		break;
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_FKB:
-		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
-		break;
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH0:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH1:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
-		break;
-	case TF_RESC_TYPE_HW_METADATA:
-		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
-		break;
-	case TF_RESC_TYPE_HW_CT_STATE:
-		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
-		break;
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
-		break;
-	default:
-		break;
-	}
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
 
-	return value;
-}
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
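
(Aside for reviewers, not part of the patch: the element layout above can be summarised with a minimal, self-contained C sketch. The enum values, struct names and the toy pool pointer below are illustrative stand-ins, not the driver's real tf_rm_elem_cfg_type, tf_rm_alloc_info or bitalloc definitions.)

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the patch's types; illustrative only. */
enum cfg_type { CFG_NULL, CFG_HCAPI, CFG_PRIVATE };
struct alloc_entry { uint16_t start; uint16_t stride; };

struct rm_element_sketch {
	enum cfg_type cfg_type;   /* CFG_NULL => not valid on this device */
	uint16_t hcapi_type;      /* ignored when cfg_type is CFG_PRIVATE */
	struct alloc_entry alloc; /* firmware-allocated range */
	void *pool;               /* stands in for struct bitalloc * */
};

int main(void)
{
	struct rm_element_sketch e = { CFG_HCAPI, 7, { 64, 32 }, NULL };

	if (e.cfg_type == CFG_NULL)
		printf("element not supported on this device\n");
	else
		printf("hcapi type %u, range [%u..%u]\n",
		       (unsigned)e.hcapi_type,
		       (unsigned)e.alloc.start,
		       (unsigned)(e.alloc.start + e.alloc.stride - 1));
	return 0;
}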
 
 /**
- * Helper function to perform a SRAM HCAPI resource type lookup
- * against the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
+ * TF RM DB definition
  */
-static int
-tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
-{
-	uint32_t value = -EOPNOTSUPP;
-
-	switch (index) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
-		break;
-	case TF_RESC_TYPE_SRAM_MCG:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
-		break;
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
-		break;
-	default:
-		break;
-	}
-
-	return value;
-}
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
 
-/**
- * Helper function to print all the HW resource qcaps errors reported
- * in the error_flag.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] error_flag
- *   Pointer to the hw error flags created at time of the query check
- */
-static void
-tf_rm_print_hw_qcaps_error(enum tf_dir dir,
-			   struct tf_rm_hw_query *hw_query,
-			   uint32_t *error_flag)
-{
-	int i;
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
 
-	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
 
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_hw_2_str(i),
-				    hw_query->hw_query[i].max,
-				    tf_rm_rsvd_hw_value(dir, i));
-	}
-}
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
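
(Aside: a DB is simply a per-direction array of such elements, one slot per module subtype. The helper below is a hypothetical sizing sketch, not the allocation path this patch adds.)

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical DB sketch: one DB per direction, one element slot per
 * module subtype. The element type is left opaque on purpose. */
struct rm_db_sketch {
	uint16_t num_entries;
	int dir;          /* 0 = RX, 1 = TX */
	void *elements;   /* would be an array of struct tf_rm_element */
};

static struct rm_db_sketch *
rm_db_create(int dir, uint16_t num_entries, size_t elem_size)
{
	struct rm_db_sketch *db = calloc(1, sizeof(*db));

	if (db == NULL)
		return NULL;
	db->elements = calloc(num_entries, elem_size);
	if (db->elements == NULL) {
		free(db);
		return NULL;
	}
	db->num_entries = num_entries;
	db->dir = dir;
	return db;
}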
 
 /**
- * Helper function to print all the SRAM resource qcaps errors
- * reported in the error_flag.
+ * Counts the number of HCAPI reservations for a module's elements.
  *
- * [in] dir
- *   Receive or transmit direction
+ * Walks the module's element configuration array and counts the
+ * HCAPI-managed subtypes that carry a non-zero reservation.
  *
- * [in] error_flag
- *   Pointer to the sram error flags created at time of the query check
- */
-static void
-tf_rm_print_sram_qcaps_error(enum tf_dir dir,
-			     struct tf_rm_sram_query *sram_query,
-			     uint32_t *error_flag)
-{
-	int i;
-
-	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_sram_2_str(i),
-				    sram_query->sram_query[i].max,
-				    tf_rm_rsvd_sram_value(dir, i));
-	}
-}
-
-/**
- * Performs a HW resource check between what firmware capability
- * reports and what the core expects is available.
+ * [in] cfg
+ *   Pointer to the DB configuration
  *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
  *
- * [in] query
- *   Pointer to HW Query data structure. Query holds what the firmware
- *   offers of the HW resources.
+ * [in] count
+ *   Number of DB configuration elements
  *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
  *
  * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
-			    enum tf_dir dir,
-			    uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-			     TF_RSVD_L2_CTXT_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_FUNC,
-			     TF_RSVD_PROF_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_TCAM,
-			     TF_RSVD_PROF_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_PROF_ID,
-			     TF_RSVD_EM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_REC,
-			     TF_RSVD_EM_REC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-			     TF_RSVD_WC_TCAM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM,
-			     TF_RSVD_WC_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_PROF,
-			     TF_RSVD_METER_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_INST,
-			     TF_RSVD_METER_INST,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_MIRROR,
-			     TF_RSVD_MIRROR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_UPAR,
-			     TF_RSVD_UPAR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_SP_TCAM,
-			     TF_RSVD_SP_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_FUNC,
-			     TF_RSVD_L2_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_FKB,
-			     TF_RSVD_FKB,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_TBL_SCOPE,
-			     TF_RSVD_TBL_SCOPE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH0,
-			     TF_RSVD_EPOCH0,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH1,
-			     TF_RSVD_EPOCH1,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METADATA,
-			     TF_RSVD_METADATA,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_CT_STATE,
-			     TF_RSVD_CT_STATE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_PROF,
-			     TF_RSVD_RANGE_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_ENTRY,
-			     TF_RSVD_RANGE_ENTRY,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_LAG_ENTRY,
-			     TF_RSVD_LAG_ENTRY,
-			     error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Performs a SRAM resource check between what firmware capability
- * reports and what the core expects is available.
- *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
- *
- * [in] query
- *   Pointer to SRAM Query data structure. Query holds what the
- *   firmware offers of the SRAM resources.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
- *
- * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
-			      enum tf_dir dir,
-			      uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_FULL_ACTION,
-			       TF_RSVD_SRAM_FULL_ACTION,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_MCG,
-			       TF_RSVD_SRAM_MCG,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_8B,
-			       TF_RSVD_SRAM_ENCAP_8B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_16B,
-			       TF_RSVD_SRAM_ENCAP_16B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_64B,
-			       TF_RSVD_SRAM_ENCAP_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC,
-			       TF_RSVD_SRAM_SP_SMAC,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			       TF_RSVD_SRAM_SP_SMAC_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			       TF_RSVD_SRAM_SP_SMAC_IPV6,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_COUNTER_64B,
-			       TF_RSVD_SRAM_COUNTER_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_SPORT,
-			       TF_RSVD_SRAM_NAT_SPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_DPORT,
-			       TF_RSVD_SRAM_NAT_DPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-			       TF_RSVD_SRAM_NAT_S_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-			       TF_RSVD_SRAM_NAT_D_IPV4,
-			       error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Internal function to mark pool entries used.
+ *   Nothing (void); the count is returned through valid_count.
  */
 static void
-tf_rm_reserve_range(uint32_t count,
-		    uint32_t rsv_begin,
-		    uint32_t rsv_end,
-		    uint32_t max,
-		    struct bitalloc *pool)
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
 {
-	uint32_t i;
+	int i;
+	uint16_t cnt = 0;
 
-	/* If no resources has been requested we mark everything
-	 * 'used'
-	 */
-	if (count == 0)	{
-		for (i = 0; i < max; i++)
-			ba_alloc_index(pool, i);
-	} else {
-		/* Support 2 main modes
-		 * Reserved range starts from bottom up (with
-		 * pre-reserved value or not)
-		 * - begin = 0 to end xx
-		 * - begin = 1 to end xx
-		 *
-		 * Reserved range starts from top down
-		 * - begin = yy to end max
-		 */
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
 
-		/* Bottom up check, start from 0 */
-		if (rsv_begin == 0) {
-			for (i = rsv_end + 1; i < max; i++)
-				ba_alloc_index(pool, i);
-		}
-
-		/* Bottom up check, start from 1 or higher OR
-		 * Top Down
+		/* Only log msg if a type is attempted reserved and
+		 * not supported. We ignore EM module as its using a
+		 * split configuration array thus it would fail for
+		 * this type of check.
 		 */
-		if (rsv_begin >= 1) {
-			/* Allocate from 0 until start */
-			for (i = 0; i < rsv_begin; i++)
-				ba_alloc_index(pool, i);
-
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
-}
-
-/**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
- */
-static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
-	uint32_t end = 0;
-
-	/* l2 ctxt rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
-
-	/* l2 ctxt tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the profile tcam and profile func
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
-	uint32_t end = 0;
-
-	/* profile func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
-
-	/* profile func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_PROF_TCAM;
-
-	/* profile tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
-
-	/* profile tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the em profile id allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_em_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
-	uint32_t end = 0;
-
-	/* em prof id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
-
-	/* em prof id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the wildcard tcam and profile id
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_wc(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
-	uint32_t end = 0;
-
-	/* wc profile id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
-
-	/* wc profile id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_WC_TCAM;
-
-	/* wc tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_RX);
-
-	/* wc tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the meter resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_meter(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
-	uint32_t end = 0;
-
-	/* meter profiles rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_RX);
-
-	/* meter profiles tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_METER_INST;
-
-	/* meter rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_RX);
-
-	/* meter tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the mirror resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_mirror(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
-	uint32_t end = 0;
-
-	/* mirror rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_RX);
-
-	/* mirror tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the upar resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_upar(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_UPAR;
-	uint32_t end = 0;
-
-	/* upar rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_RX);
-
-	/* upar tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp tcam resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
-	uint32_t end = 0;
-
-	/* sp tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_RX);
-
-	/* sp tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 func resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
-	uint32_t end = 0;
-
-	/* l2 func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
-
-	/* l2 func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the fkb resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_fkb(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_FKB;
-	uint32_t end = 0;
-
-	/* fkb rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_RX);
-
-	/* fkb tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the tbld scope resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
-	uint32_t end = 0;
-
-	/* tbl scope rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
-
-	/* tbl scope tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 epoch resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_epoch(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
-	uint32_t end = 0;
-
-	/* epoch0 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_RX);
-
-	/* epoch0 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_EPOCH1;
-
-	/* epoch1 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_RX);
-
-	/* epoch1 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the metadata resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_metadata(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METADATA;
-	uint32_t end = 0;
-
-	/* metadata rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_RX);
-
-	/* metadata tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the ct state resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_ct_state(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
-	uint32_t end = 0;
-
-	/* ct state rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_RX);
-
-	/* ct state tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the range resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_range(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
-	uint32_t end = 0;
-
-	/* range profile rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
-
-	/* range profile tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
-
-	/* range entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
-
-	/* range entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the lag resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_lag_entry(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
-	uint32_t end = 0;
-
-	/* lag entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
-
-	/* lag entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the full action resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
-	uint16_t end = 0;
-
-	/* full action rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_RX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
-
-	/* full action tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_TX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the multicast group resources
- * allocated that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
-	uint16_t end = 0;
-
-	/* multicast group rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_MCG_RX,
-			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
-
-	/* Multicast Group on TX is not supported */
-}
-
-/**
- * Internal function to mark all the encap resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_encap(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
-	uint16_t end = 0;
-
-	/* encap 8b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_RX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
-
-	/* encap 8b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_TX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
-
-	/* encap 16b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_RX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
-
-	/* encap 16b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_TX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
-
-	/* Encap 64B not supported on RX */
-
-	/* Encap 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_64B_TX,
-			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_sp(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
-	uint16_t end = 0;
-
-	/* sp smac rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_RX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
-
-	/* sp smac tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_TX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-
-	/* SP SMAC IPv4 not supported on RX */
-
-	/* sp smac ipv4 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-
-	/* SP SMAC IPv6 not supported on RX */
-
-	/* sp smac ipv6 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the stat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_stats(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
-	uint16_t end = 0;
-
-	/* counter 64b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_RX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
-
-	/* counter 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_TX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the nat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_nat(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
-	uint16_t end = 0;
-
-	/* nat source port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_RX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
-
-	/* nat source port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_TX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
-
-	/* nat destination port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_RX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
-
-	/* nat destination port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_TX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-
-	/* nat source port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
-
-	/* nat source ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-
-	/* nat destination port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
-
-	/* nat destination ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
-}
-
-/**
- * Internal function used to validate the HW allocated resources
- * against the requested values.
- */
-static int
-tf_rm_hw_alloc_validate(enum tf_dir dir,
-			struct tf_rm_hw_alloc *hw_alloc,
-			struct tf_rm_entry *hw_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				hw_alloc->hw_num[i],
-				hw_entry[i].stride);
-			error = -1;
-		}
-	}
-
-	return error;
-}
-
-/**
- * Internal function used to validate the SRAM allocated resources
- * against the requested values.
- */
-static int
-tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
-			  struct tf_rm_sram_alloc *sram_alloc,
-			  struct tf_rm_entry *sram_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				sram_alloc->sram_num[i],
-				sram_entry[i].stride);
-			error = -1;
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+
 		}
 	}
 
-	return error;
+	*valid_count = cnt;
 }
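
(Aside: the counting rule above reduces to "an element consumes a firmware reservation only if it is HCAPI-managed and its requested amount is non-zero". A standalone illustration with made-up names and values:)

#include <stdint.h>
#include <assert.h>

enum elem_cfg { ELEM_NULL, ELEM_HCAPI };

/* Count subtypes that are HCAPI-managed and actually reserved (> 0). */
static uint16_t
count_hcapi_reservations(const enum elem_cfg *cfg,
			 const uint16_t *rsv, uint16_t n)
{
	uint16_t i, cnt = 0;

	for (i = 0; i < n; i++)
		if (cfg[i] == ELEM_HCAPI && rsv[i] > 0)
			cnt++;
	return cnt;
}

int main(void)
{
	enum elem_cfg cfg[] = { ELEM_HCAPI, ELEM_NULL, ELEM_HCAPI };
	uint16_t rsv[] = { 8, 4, 0 };

	/* Only the first entry counts: HCAPI-managed and reserved. */
	assert(count_hcapi_reservations(cfg, rsv, 3) == 1);
	return 0;
}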
 
 /**
- * Internal function used to mark all the HW resources allocated that
- * Truflow does not own.
+ * Resource Manager base index adjustment definitions.
  */
-static void
-tf_rm_reserve_hw(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_l2_ctxt(tfs);
-	tf_rm_rsvd_prof(tfs);
-	tf_rm_rsvd_em_prof(tfs);
-	tf_rm_rsvd_wc(tfs);
-	tf_rm_rsvd_mirror(tfs);
-	tf_rm_rsvd_meter(tfs);
-	tf_rm_rsvd_upar(tfs);
-	tf_rm_rsvd_sp_tcam(tfs);
-	tf_rm_rsvd_l2_func(tfs);
-	tf_rm_rsvd_fkb(tfs);
-	tf_rm_rsvd_tbl_scope(tfs);
-	tf_rm_rsvd_epoch(tfs);
-	tf_rm_rsvd_metadata(tfs);
-	tf_rm_rsvd_ct_state(tfs);
-	tf_rm_rsvd_range(tfs);
-	tf_rm_rsvd_lag_entry(tfs);
-}
-
-/**
- * Internal function used to mark all the SRAM resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_reserve_sram(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_sram_full_action(tfs);
-	tf_rm_rsvd_sram_mcg(tfs);
-	tf_rm_rsvd_sram_encap(tfs);
-	tf_rm_rsvd_sram_sp(tfs);
-	tf_rm_rsvd_sram_stats(tfs);
-	tf_rm_rsvd_sram_nat(tfs);
-}
-
-/**
- * Internal function used to allocate and validate all HW resources.
- */
-static int
-tf_rm_allocate_validate_hw(struct tf *tfp,
-			   enum tf_dir dir)
-{
-	int rc;
-	int i;
-	struct tf_rm_hw_query hw_query;
-	struct tf_rm_hw_alloc hw_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *hw_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		hw_entries = tfs->resc.rx.hw_entry;
-	else
-		hw_entries = tfs->resc.tx.hw_entry;
-
-	/* Query for Session HW Resources */
-
-	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
-	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed,"
-			"error_flag:0x%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
-		goto cleanup;
-	}
-
-	/* Post process HW capability */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
-		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
-
-	/* Allocate Session HW Resources */
-	/* Perform HW allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-
- cleanup:
-
-	return -1;
-}
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
 
 /**
- * Internal function used to allocate and validate all SRAM resources.
+ * Adjust an index according to the allocation information.
  *
- * [in] tfp
- *   Pointer to TF handle
+ * All resources are controlled in a 0-based pool. Some resources, by
+ * design, are not 0-based (e.g. Full Action Records in SRAM), thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
  *
  * Returns:
- *   0  - Success
- *   -1 - Internal error
+ *    0           - Success
+ *   -EOPNOTSUPP  - Operation not supported
  */
 static int
-tf_rm_allocate_validate_sram(struct tf *tfp,
-			     enum tf_dir dir)
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
 {
-	int rc;
-	int i;
-	struct tf_rm_sram_query sram_query;
-	struct tf_rm_sram_alloc sram_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *sram_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
-
-	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed,"
-			"error_flag:%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
-	}
+	int rc = 0;
+	uint32_t base_index;
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	base_index = db[db_index].alloc.entry.start;
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed,"
-			    " rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
 	}
 
-	return 0;
-
- cleanup:
-
-	return -1;
+	return rc;
 }
 
 /**
- * Helper function used to prune a HW resource array to only hold
- * elements that needs to be flushed.
- *
- * [in] tfs
- *   Session handle
+ * Logs an array of found residual entries to the console.
  *
  * [in] dir
  *   Receive or transmit direction
  *
- * [in] hw_entries
- *   Master HW Resource database
+ * [in] type
+ *   Type of Device Module
  *
- * [in/out] flush_entries
- *   Pruned HW Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master HW
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in] count
+ *   Number of entries in the residual array
  *
- * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ * [in] residuals
+ *   Pointer to an array of residual entries. The array is indexed the
+ *   same as the DB in which this function is used. Each entry holds
+ *   the residual value for that entry.
  */
-static int
-tf_rm_hw_to_flush(struct tf_session *tfs,
-		  enum tf_dir dir,
-		  struct tf_rm_entry *hw_entries,
-		  struct tf_rm_entry *flush_entries)
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
 {
-	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
+	int i;
 
-	/* Check all the hw resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
+	/* Walk the residual array and log to the console the types that
+	 * were not cleaned up.
 	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_CTXT_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_INST_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_MIRROR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_UPAR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SP_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_FKB_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
-		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_TBL_SCOPE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
-	} else {
-		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
-			    tf_dir_2_str(dir),
-			    free_cnt,
-			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH0_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH1_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METADATA_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_CT_STATE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_LAG_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
 	}
-
-	return flush_rc;
 }
 
 /**
- * Helper function used to prune a SRAM resource array to only hold
- * elements that needs to be flushed.
+ * Performs a check of the passed-in DB for any lingering elements. If
+ * a resource type was found to not have been cleaned up by the caller
+ * then its residual values are recorded, logged and passed back in an
+ * allocated reservation array that the caller can pass to the FW for
+ * cleanup.
  *
- * [in] tfs
- *   Session handle
- *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
  *
- * [in] hw_entries
- *   Master SRAM Resource data base
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
  *
- * [in/out] flush_entries
- *   Pruned SRAM Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master SRAM
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in/out] resv
+ *   Pointer to a reservation array. The reservation array is
+ *   allocated after the residual scan and holds any found residual
+ *   entries. Thus it can be smaller than the DB that the check was
+ *   performed on. Array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating if residual was present in the
+ *   DB
  *
  * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_sram_to_flush(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    struct tf_rm_entry *sram_entries,
-		    struct tf_rm_entry *flush_entries)
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
 {
 	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
-
-	/* Check all the sram resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
-	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_FULL_ACTION_POOL_NAME,
-			rc);
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
-	} else {
-		flush_rc = 1;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
 	}
 
-	/* Only pools for RX direction */
-	if (dir == TF_DIR_RX) {
-		TF_RM_GET_POOLS_RX(tfs, &pool,
-				   TF_SRAM_MCG_POOL_NAME);
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
 		if (rc)
-			return rc;
+			goto cleanup_residuals;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
-		} else {
-			flush_rc = 1;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
 		}
-	} else {
-		/* Always prune TX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		*resv_size = found;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_8B_POOL_NAME,
-			rc);
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
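
For review purposes, a minimal sketch (not part of the patch) of how a caller
is expected to consume what tf_rm_check_residuals() hands back, mirroring
tf_rm_free_db() below; tfp, rm_db and dir are assumed to already exist in the
caller:

	/* Sketch only: consume the pruned reservation array */
	int rc;
	uint16_t resv_size = 0;
	struct tf_rm_resc_entry *resv = NULL;
	bool residuals_present = false;

	rc = tf_rm_check_residuals(rm_db, &resv_size, &resv,
				   &residuals_present);
	if (rc == 0 && residuals_present) {
		/* resv[0..resv_size - 1] holds type/start/stride for
		 * each lingering resource; hand it to the FW flush.
		 */
		rc = tf_msg_session_resc_flush(tfp, dir, resv_size, resv);
		tfp_free((void *)resv);
	}
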
+
+int
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
+{
+	int rc;
+	int i;
+	int j;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_16B_POOL_NAME,
-			rc);
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-	}
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_SP_SMAC_POOL_NAME,
-			rc);
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
-	}
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against the DB requirements. However, as
+	 * a DB can hold elements that are not HCAPI we can reduce the
+	 * req msg content by removing those from the request, while the
+	 * DB still holds them all so as to give a fast lookup. We can
+	 * also remove entries where no elements are requested.
+	 */
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
+
+	/* Handle the case where a DB create request ends up being
+	 * empty. This is unsupported, although it is possible that no
+	 * resources are necessary for a 'direction'.
+	 */
+	if (hcapi_items == 0) {
+		TFP_DRV_LOG(ERR,
+			"%s: DB create request for Zero elements, DB Type:%s\n",
+			tf_dir_2_str(parms->dir),
+			tf_device_module_type_2_str(parms->type));
+
+		parms->rm_db = NULL;
+		return -ENOMEM;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_STATS_64B_POOL_NAME,
-			rc);
+	/* Alloc request, alignment already set */
+	cparms.nitems = (size_t)hcapi_items;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_SPORT_POOL_NAME,
-			rc);
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
-	} else {
-		flush_rc = 1;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		/* Skip any non HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			/* Only perform reservation for entries that
+			 * have been requested
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_cnt[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		}
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_DPORT_POOL_NAME,
-			rc);
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       hcapi_items,
+				       req,
+				       resv);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_S_IPV4_POOL_NAME,
-			rc);
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db = (void *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_D_IPV4_POOL_NAME,
-			rc);
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
-	return flush_rc;
-}
+	db = rm_db->db;
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
 
-/**
- * Helper function used to generate an error log for the HW types that
- * needs to be flushed. The types should have been cleaned up ahead of
- * invoking tf_close_session.
- *
- * [in] hw_entries
- *   HW Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_hw_flush(enum tf_dir dir,
-		   struct tf_rm_entry *hw_entries)
-{
-	int i;
+		/* Skip any non HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
 
-	/* Walk the hw flush array and log the types that wasn't
-	 * cleaned up.
-	 */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entries[i].stride != 0)
+		/* If the element didn't request an allocation there is
+		 * no need to create a pool nor to verify the reservation.
+		 */
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element had requested an allocation and that
+		 * allocation was a success (full amount) then
+		 * allocate the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_hw_2_str(i));
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    db[i].cfg_type);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
+			goto fail;
+		}
 	}
+
+	rm_db->num_entries = parms->num_elements;
+	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
+	*parms->rm_db = (void *)rm_db;
+
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
+	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
 
-/**
- * Helper function used to generate an error log for the SRAM types
- * that needs to be flushed. The types should have been cleaned up
- * ahead of invoking tf_close_session.
- *
- * [in] sram_entries
- *   SRAM Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_sram_flush(enum tf_dir dir,
-		     struct tf_rm_entry *sram_entries)
+int
+tf_rm_free_db(struct tf *tfp,
+	      struct tf_rm_free_db_parms *parms)
 {
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will clean up each of
+	 * its support modules, e.g. Identifier, which is how we end
+	 * up here to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in this 'cleanup checking' the DB is checked for any
+	 * remaining elements, which are logged if found.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previously allocated elements to the RM and then close the
+	 * HCAPI RM registration. That saves several 'free' msgs
+	 * from being required.
+	 */
 
-	/* Walk the sram flush array and log the types that wasn't
-	 * cleaned up.
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
 	 */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entries[i].stride != 0)
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to cleanup so we can only
+		 * log that FW failed.
+		 */
+		if (rc)
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_sram_2_str(i));
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
 	}
-}
 
-void
-tf_rm_init(struct tf *tfp __rte_unused)
-{
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db[i].pool);
 
-	/* This version is host specific and should be checked against
-	 * when attaching as there is no guarantee that a secondary
-	 * would run from same image version.
-	 */
-	tfs->ver.major = TF_SESSION_VER_MAJOR;
-	tfs->ver.minor = TF_SESSION_VER_MINOR;
-	tfs->ver.update = TF_SESSION_VER_UPDATE;
-
-	tfs->session_id.id = 0;
-	tfs->ref_count = 0;
-
-	/* Initialization of Table Scopes */
-	/* ll_init(&tfs->tbl_scope_ll); */
-
-	/* Initialization of HW and SRAM resource DB */
-	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
-
-	/* Initialization of HW Resource Pools */
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records should not be controlled by way of a pool */
-
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Initialization of SRAM Resource Pools
-	 * These pools are set to the TFLIB defined MAX sizes not
-	 * AFM's HW max as to limit the memory consumption
-	 */
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		TF_RSVD_SRAM_FULL_ACTION_RX);
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		TF_RSVD_SRAM_FULL_ACTION_TX);
-	/* Only Multicast Group on RX is supported */
-	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
-		TF_RSVD_SRAM_MCG_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_8B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_8B_TX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_16B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_16B_TX);
-	/* Only Encap 64B on TX is supported */
-	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_64B_TX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		TF_RSVD_SRAM_SP_SMAC_RX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_TX);
-	/* Only SP SMAC IPv4 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-	/* Only SP SMAC IPv6 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
-		TF_RSVD_SRAM_COUNTER_64B_RX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_COUNTER_64B_TX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_SPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_SPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_DPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_DPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_S_IPV4_TX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/* Initialization of pools local to TF Core */
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
 
 int
-tf_rm_allocate_validate(struct tf *tfp)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc;
-	int i;
+	int id;
+	uint32_t index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		rc = tf_rm_allocate_validate_hw(tfp, i);
-		if (rc)
-			return rc;
-		rc = tf_rm_allocate_validate_sram(tfp, i);
-		if (rc)
-			return rc;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	/* With both HW and SRAM allocated and validated we can
-	 * 'scrub' the reservation on the pools.
+	/*
+	 * priority  0: allocate from the top of the tcam, i.e. the
+	 *              lowest available index
+	 * priority !0: allocate from the bottom of the tcam, i.e. the
+	 *              highest available index
 	 */
-	tf_rm_reserve_hw(tfp);
-	tf_rm_reserve_sram(tfp);
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		rc = -ENOMEM;
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				&index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -EINVAL;
+	}
+
+	*parms->index = index;
 
 	return rc;
 }
 
 int
-tf_rm_close(struct tf *tfp)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
 	int rc;
-	int rc_close = 0;
-	int i;
-	struct tf_rm_entry *hw_entries;
-	struct tf_rm_entry *hw_flush_entries;
-	struct tf_rm_entry *sram_entries;
-	struct tf_rm_entry *sram_flush_entries;
-	struct tf_session *tfs __rte_unused =
-		(struct tf_session *)(tfp->session->core_data);
-
-	struct tf_rm_db flush_resc = tfs->resc;
-
-	/* On close it is assumed that the session has already cleaned
-	 * up all its resources, individually, while destroying its
-	 * flows. No checking is performed thus the behavior is as
-	 * follows.
-	 *
-	 * Session RM will signal FW to release session resources. FW
-	 * will perform invalidation of all the allocated entries
-	 * (assures any outstanding resources has been cleared, then
-	 * free the FW RM instance.
-	 *
-	 * Session will then be freed by tf_close_session() thus there
-	 * is no need to clean each resource pool as the whole session
-	 * is going away.
-	 */
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		if (i == TF_DIR_RX) {
-			hw_entries = tfs->resc.rx.hw_entry;
-			hw_flush_entries = flush_resc.rx.hw_entry;
-			sram_entries = tfs->resc.rx.sram_entry;
-			sram_flush_entries = flush_resc.rx.sram_entry;
-		} else {
-			hw_entries = tfs->resc.tx.hw_entry;
-			hw_flush_entries = flush_resc.tx.hw_entry;
-			sram_entries = tfs->resc.tx.sram_entry;
-			sram_flush_entries = flush_resc.tx.sram_entry;
-		}
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-		/* Check for any not previously freed HW resources and
-		 * flush if required.
-		 */
-		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering HW resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-			/* Log the entries to be flushed */
-			tf_rm_log_hw_flush(i, hw_flush_entries);
-		}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-		/* Check for any not previously freed SRAM resources
-		 * and flush if required.
-		 */
-		rc = tf_rm_sram_to_flush(tfs,
-					 i,
-					 sram_entries,
-					 sram_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
 
-			/* Log the entries to be flushed */
-			tf_rm_log_sram_flush(i, sram_flush_entries);
-		}
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	return rc_close;
-}
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
 
-#if (TF_SHADOW == 1)
-int
-tf_rm_shadow_db_init(struct tf_session *tfs)
-{
-	rc = 1;
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; the direction needed for a useful message is not
+	 * available here.
+	 */
+	if (rc)
+		return rc;
 
 	return rc;
 }
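
For reference, a minimal allocate/free round trip (not part of the patch)
against a DB created as sketched above; rm_db and db_index 0 are assumptions
for illustration:

	/* Sketch only: allocate one element, then return it */
	int rc;
	uint32_t index;
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_free_parms fparms = { 0 };

	aparms.rm_db = rm_db;
	aparms.db_index = 0;		/* DB entry to allocate from */
	aparms.index = &index;		/* returned in normalized form */
	aparms.priority = 0;		/* lowest available index */
	rc = tf_rm_allocate(&aparms);

	fparms.rm_db = rm_db;
	fparms.db_index = 0;
	fparms.index = (uint16_t)index;	/* hand back the same index */
	rc = tf_rm_free(&fparms);
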
-#endif /* TF_SHADOW */
 
 int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	int rc;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_L2_CTXT_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_PROF_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_WC_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
 		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
 		return rc;
 	}
 
-	return 0;
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_FULL_ACTION_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_TX)
-			break;
-		TF_RM_GET_POOLS_RX(tfs, pool,
-				   TF_SRAM_MCG_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_8B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_16B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		/* No pools for RX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_SP_SMAC_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_STATS_64B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_SPORT_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_S_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_D_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_INST_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_MIRROR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_UPAR:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_UPAR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH0_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH1_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METADATA:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METADATA_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_CT_STATE_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_ENTRY_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_LAG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_LAG_ENTRY_POOL_NAME,
-				rc);
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-		break;
-	/* No bitalloc pools for these types */
-	case TF_TBL_TYPE_EXT:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	}
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
 	return 0;
 }
 
 int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
-		break;
-	case TF_TBL_TYPE_UPAR:
-		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
-		break;
-	case TF_TBL_TYPE_METADATA:
-		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
-		break;
-	case TF_TBL_TYPE_LAG:
-		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		*hcapi_type = -1;
-		rc = -EOPNOTSUPP;
-	}
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	return rc;
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return 0;
 }
 
 int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index)
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 {
-	int rc;
-	struct tf_rm_resc *resc;
-	uint32_t hcapi_type;
-	uint32_t base_index;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (dir == TF_DIR_RX)
-		resc = &tfs->resc.rx;
-	else if (dir == TF_DIR_TX)
-		resc = &tfs->resc.tx;
-	else
-		return -EOPNOTSUPP;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
-	if (rc)
-		return -1;
-
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-	case TF_TBL_TYPE_MCAST_GROUPS:
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-	case TF_TBL_TYPE_ACT_STATS_64:
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		base_index = resc->sram_entry[hcapi_type].start;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-	case TF_TBL_TYPE_METER_PROF:
-	case TF_TBL_TYPE_METER_INST:
-	case TF_TBL_TYPE_UPAR:
-	case TF_TBL_TYPE_EPOCH0:
-	case TF_TBL_TYPE_EPOCH1:
-	case TF_TBL_TYPE_METADATA:
-	case TF_TBL_TYPE_CT_STATE:
-	case TF_TBL_TYPE_RANGE_PROF:
-	case TF_TBL_TYPE_RANGE_ENTRY:
-	case TF_TBL_TYPE_LAG:
-		base_index = resc->hw_entry[hcapi_type].start;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		return -EOPNOTSUPP;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	switch (c_type) {
-	case TF_RM_CONVERT_RM_BASE:
-		*convert_index = index - base_index;
-		break;
-	case TF_RM_CONVERT_ADD_BASE:
-		*convert_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail silently (no logging); if the pool is not valid there
+	 * were no elements allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
 	}
 
-	return 0;
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
+	return rc;
+
 }
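
To show how the query helpers above fit together (the same pattern
tf_rm_check_residuals() uses), a minimal sketch that is not part of the patch;
rm_db and db_index 0 are assumptions:

	/* Sketch only: fetch allocation info and HCAPI type for one entry */
	int rc;
	uint16_t hcapi_type;
	struct tf_rm_alloc_info info;
	struct tf_rm_get_alloc_info_parms aparms = { 0 };
	struct tf_rm_get_hcapi_parms hparms = { 0 };

	aparms.rm_db = rm_db;
	aparms.db_index = 0;
	aparms.info = &info;
	rc = tf_rm_get_info(&aparms);

	hparms.rm_db = rm_db;
	hparms.db_index = 0;
	hparms.hcapi_type = &hcapi_type;
	rc = tf_rm_get_hcapi_type(&hparms);

	/* info.entry.start/stride plus hcapi_type is enough to build
	 * one tf_rm_resc_entry for a FW flush request.
	 */
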
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 1a09f13a7..5cb68892a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -3,301 +3,444 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
-#include "tf_resources.h"
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
-struct tf_session;
 
-/* Internal macro to determine appropriate allocation pools based on
- * DIRECTION parm, also performs error checking for DIRECTION parm. The
- * SESSION_POOL and SESSION pointers are set appropriately upon
- * successful return (the GLOBAL_POOL is used to globally manage
- * resource allocation and the SESSION_POOL is used to track the
- * resources that have been allocated to the session)
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exist within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
  *
- * parameters:
- *   struct tfp        *tfp
- *   enum tf_dir        direction
- *   struct bitalloc  **session_pool
- *   string             base_pool_name - used to form pointers to the
- *					 appropriate bit allocation
- *					 pools, both directions of the
- *					 session pools must have same
- *					 base name, for example if
- *					 POOL_NAME is feat_pool: - the
- *					 ptr's to the session pools
- *					 are feat_pool_rx feat_pool_tx
+ * The RM DB will work on its initially allocated sizes, so
+ * dynamically growing a particular resource is not possible. If this
+ * capability later becomes a requirement then the MAX pool size of
+ * the Chip needs to be added to the tf_rm_elem_info structure and
+ * several new APIs would need to be added to allow for growth of a
+ * single TF resource type.
  *
- *  int                  rc - return code
- *			      0 - Success
- *			     -1 - invalid DIRECTION parm
+ * The access functions do not check for NULL pointers as this is a
+ * support module, not called directly.
  */
-#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
-		(rc) = 0;						\
-		if ((direction) == TF_DIR_RX) {				\
-			*(session_pool) = (tfs)->pool_name ## _RX;	\
-		} else if ((direction) == TF_DIR_TX) {			\
-			*(session_pool) = (tfs)->pool_name ## _TX;	\
-		} else {						\
-			rc = -1;					\
-		}							\
-	} while (0)
 
-#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _RX)
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_new_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
 
-#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _TX)
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements the DB consists off, are to be
+ * configured at time of DB creation. The TF may present types to the
+ * ULP layer that is not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled', uses a Pool for internal storage */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
+	TF_RM_TYPE_MAX
+};
 
 /**
- * Resource query single entry
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
  */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
 };
 
 /**
- * Resource single entry
+ * RM Element configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI RM
+ * of the same type.
  */
-struct tf_rm_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
 };
 
 /**
- * Resource query array of HW entities
+ * Allocation information for a single element.
  */
-struct tf_rm_hw_query {
-	/** array of HW resource entries */
-	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to a linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_new_entry entry;
 };
 
 /**
- * Resource allocation array of HW entities
+ * Create RM DB parameters
  */
-struct tf_rm_hw_alloc {
-	/** array of HW resource entries */
-	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements.
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
+	 */
+	uint16_t *alloc_cnt;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void **rm_db;
 };
 
 /**
- * Resource query array of SRAM entities
+ * Free RM DB parameters
  */
-struct tf_rm_sram_query {
-	/** array of SRAM resource entries */
-	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
 };
 
 /**
- * Resource allocation array of SRAM entities
+ * Allocate RM parameters for a single element
  */
-struct tf_rm_sram_alloc {
-	/** array of SRAM resource entries */
-	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the allocated index in normalized
+	 * form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the priority of the entry
+	 * priority  0: allocate from top of the tcam (from index 0
+	 *              or lowest available index)
+	 * priority !0: allocate from bottom of the tcam (from highest
+	 *              available index)
+	 */
+	uint32_t priority;
 };
 
 /**
- * Resource Manager arrays for a single direction
+ * Free RM parameters for a single element
  */
-struct tf_rm_resc {
-	/** array of HW resource entries */
-	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
-	/** array of SRAM resource entries */
-	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t index;
 };
 
 /**
- * Resource Manager Database
+ * Is Allocated parameters for a single element
  */
-struct tf_rm_db {
-	struct tf_rm_resc rx;
-	struct tf_rm_resc tx;
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [out] Pointer to flag that indicates the state of the query
+	 */
+	int *allocated;
 };
 
 /**
- * Helper function used to convert HW HCAPI resource type to a string.
+ * Get Allocation information for a single element
  */
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
 
 /**
- * Helper function used to convert SRAM HCAPI resource type to a string.
+ * Get HCAPI type parameters for a single element
  */
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
 
 /**
- * Initializes the Resource Manager and the associated database
- * entries for HW and SRAM resources. Must be called before any other
- * Resource Manager functions.
+ * Get InUse count parameters for a single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
+/**
+ * @page rm Resource Manager
  *
- * [in] tfp
- *   Pointer to TF handle
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
-void tf_rm_init(struct tf *tfp);
 
 /**
- * Allocates and validates both HW and SRAM resources per the NVM
- * configuration. If any allocation fails all resources for the
- * session is deallocated.
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_rm_allocate_validate(struct tf *tfp);
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if the DB already exists
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
 
 /**
- * Closes the Resource Manager and frees all allocated resources per
- * the associated database.
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
- *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
-int tf_rm_close(struct tf *tfp);
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
 
-#if (TF_SHADOW == 1)
 /**
- * Initializes Shadow DB of configuration elements
+ * Allocates a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to allocate parameters
  *
- * Returns:
- *  0  - Success
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
-int tf_rm_shadow_db_init(struct tf_session *tfs);
-#endif /* TF_SHADOW */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Tcam table type.
- *
- * Function will print error msg if tcam type is unsupported or lookup
- * failed.
+ * Frees a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to free parameters
  *
- * [in] type
- *   Type of the object
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool);
+int tf_rm_free(struct tf_rm_free_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Table type.
- *
- * Function will print error msg if table type is unsupported or
- * lookup failed.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] type
- *   Type of the object
+ * Performs an allocation verification check on a specified element.
  *
- * [in] dir
- *    Receive or transmit direction
+ * [in] parms
+ *   Pointer to is allocated parameters
  *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool);
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
 
 /**
- * Converts the TF Table Type to internal HCAPI_TYPE
- *
- * [in] type
- *   Type to be converted
+ * Retrieves an element's allocation information from the Resource
+ * Manager (RM) DB.
  *
- * [in/out] hcapi_type
- *   Converted type
+ * [in] parms
+ *   Pointer to get info parameters
  *
- * Returns:
- *  0           - Success will set the *hcapi_type
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type);
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
- * TF RM Convert of index methods.
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-enum tf_rm_convert_type {
-	/** Adds the base of the Session Pool to the index */
-	TF_RM_CONVERT_ADD_BASE,
-	/** Removes the Session Pool base from the index */
-	TF_RM_CONVERT_RM_BASE
-};
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
 /**
- * Provides conversion of the Table Type index in relation to the
- * Session Pool base.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in] type
- *   Type of the object
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type inuse count.
  *
- * [in] c_type
- *   Type of conversion to perform
+ * [in] parms
+ *   Pointer to get inuse parameters
  *
- * [in] index
- *   Index to be converted
- *
- * [in/out]  convert_index
- *   Pointer to the converted index
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index);
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
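
For context, a minimal usage sketch of the RM DB API declared above
(illustration only, not part of the patch). It assumes an open session with a
bound device; the element configuration array, the allocation counts and the
choice of the TABLE module type are placeholders for the example.

#include "tf_rm.h"

/* Sketch: create a DB, allocate one element, free it, then free the DB */
static int rm_db_lifecycle_sketch(struct tf *tfp,
				  struct tf_rm_element_cfg *cfg,
				  uint16_t *alloc_cnt,
				  uint16_t num_elements)
{
	int rc;
	void *rm_db;
	uint32_t index;
	struct tf_rm_create_db_parms cparms = { 0 };
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_free_parms fparms = { 0 };
	struct tf_rm_free_db_parms dparms = { 0 };

	/* Create the RX DB from the session resource counts */
	cparms.type = TF_DEVICE_MODULE_TYPE_TABLE;
	cparms.dir = TF_DIR_RX;
	cparms.num_elements = num_elements;
	cparms.cfg = cfg;
	cparms.alloc_cnt = alloc_cnt;
	cparms.rm_db = &rm_db;
	rc = tf_rm_create_db(tfp, &cparms);
	if (rc)
		return rc;

	/* Allocate one element from DB index 0 at default priority */
	aparms.rm_db = rm_db;
	aparms.db_index = 0;
	aparms.index = &index;
	aparms.priority = 0;
	rc = tf_rm_allocate(&aparms);
	if (rc)
		goto done;

	/* Return the element to the pool */
	fparms.rm_db = rm_db;
	fparms.db_index = 0;
	fparms.index = (uint16_t)index;
	rc = tf_rm_free(&fparms);

done:
	dparms.dir = TF_DIR_RX;
	dparms.rm_db = rm_db;
	(void)tf_rm_free_db(tfp, &dparms);

	return rc;
}

The table module further down in this patch follows the same sequence: DB
creation on bind, allocate/free per entry, and DB free on unbind.
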
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
deleted file mode 100644
index 2d9be654a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ /dev/null
@@ -1,907 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-
-#include <rte_common.h>
-
-#include <cfa_resource_types.h>
-
-#include "tf_rm_new.h"
-#include "tf_common.h"
-#include "tf_util.h"
-#include "tf_session.h"
-#include "tf_device.h"
-#include "tfp.h"
-#include "tf_msg.h"
-
-/**
- * Generic RM Element data type that an RM DB is build upon.
- */
-struct tf_rm_element {
-	/**
-	 * RM Element configuration type. If Private then the
-	 * hcapi_type can be ignored. If Null then the element is not
-	 * valid for the device.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/**
-	 * HCAPI RM Type for the element.
-	 */
-	uint16_t hcapi_type;
-
-	/**
-	 * HCAPI RM allocated range information for the element.
-	 */
-	struct tf_rm_alloc_info alloc;
-
-	/**
-	 * Bit allocator pool for the element. Pool size is controlled
-	 * by the struct tf_session_resources at time of session creation.
-	 * Null indicates that the element is not used for the device.
-	 */
-	struct bitalloc *pool;
-};
-
-/**
- * TF RM DB definition
- */
-struct tf_rm_new_db {
-	/**
-	 * Number of elements in the DB
-	 */
-	uint16_t num_entries;
-
-	/**
-	 * Direction this DB controls.
-	 */
-	enum tf_dir dir;
-
-	/**
-	 * Module type, used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-
-	/**
-	 * The DB consists of an array of elements
-	 */
-	struct tf_rm_element *db;
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] cfg
- *   Pointer to the DB configuration
- *
- * [in] reservations
- *   Pointer to the allocation values associated with the module
- *
- * [in] count
- *   Number of DB configuration elements
- *
- * [out] valid_count
- *   Number of HCAPI entries with a reservation value greater than 0
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static void
-tf_rm_count_hcapi_reservations(enum tf_dir dir,
-			       enum tf_device_module_type type,
-			       struct tf_rm_element_cfg *cfg,
-			       uint16_t *reservations,
-			       uint16_t count,
-			       uint16_t *valid_count)
-{
-	int i;
-	uint16_t cnt = 0;
-
-	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
-		    reservations[i] > 0)
-			cnt++;
-
-		/* Only log msg if a type is attempted reserved and
-		 * not supported. We ignore EM module as its using a
-		 * split configuration array thus it would fail for
-		 * this type of check.
-		 */
-		if (type != TF_DEVICE_MODULE_TYPE_EM &&
-		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
-		    reservations[i] > 0) {
-			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-		}
-	}
-
-	*valid_count = cnt;
-}
-
-/**
- * Resource Manager Adjust of base index definitions.
- */
-enum tf_rm_adjust_type {
-	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
-	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [in] action
- *   Adjust action
- *
- * [in] db_index
- *   DB index for the element type
- *
- * [in] index
- *   Index to convert
- *
- * [out] adj_index
- *   Adjusted index
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_adjust_index(struct tf_rm_element *db,
-		   enum tf_rm_adjust_type action,
-		   uint32_t db_index,
-		   uint32_t index,
-		   uint32_t *adj_index)
-{
-	int rc = 0;
-	uint32_t base_index;
-
-	base_index = db[db_index].alloc.entry.start;
-
-	switch (action) {
-	case TF_RM_ADJUST_RM_BASE:
-		*adj_index = index - base_index;
-		break;
-	case TF_RM_ADJUST_ADD_BASE:
-		*adj_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	return rc;
-}
-
-/**
- * Logs an array of found residual entries to the console.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] type
- *   Type of Device Module
- *
- * [in] count
- *   Number of entries in the residual array
- *
- * [in] residuals
- *   Pointer to an array of residual entries. Array is index same as
- *   the DB in which this function is used. Each entry holds residual
- *   value for that entry.
- */
-static void
-tf_rm_log_residuals(enum tf_dir dir,
-		    enum tf_device_module_type type,
-		    uint16_t count,
-		    uint16_t *residuals)
-{
-	int i;
-
-	/* Walk the residual array and log the types that wasn't
-	 * cleaned up to the console.
-	 */
-	for (i = 0; i < count; i++) {
-		if (residuals[i] != 0)
-			TFP_DRV_LOG(ERR,
-				"%s, %s was not cleaned up, %d outstanding\n",
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i),
-				residuals[i]);
-	}
-}
-
-/**
- * Performs a check of the passed in DB for any lingering elements. If
- * a resource type was found to not have been cleaned up by the caller
- * then its residual values are recorded, logged and passed back in an
- * allocate reservation array that the caller can pass to the FW for
- * cleanup.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [out] resv_size
- *   Pointer to the reservation size of the generated reservation
- *   array.
- *
- * [in/out] resv
- *   Pointer Pointer to a reservation array. The reservation array is
- *   allocated after the residual scan and holds any found residual
- *   entries. Thus it can be smaller than the DB that the check was
- *   performed on. Array must be freed by the caller.
- *
- * [out] residuals_present
- *   Pointer to a bool flag indicating if residual was present in the
- *   DB
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
-		      uint16_t *resv_size,
-		      struct tf_rm_resc_entry **resv,
-		      bool *residuals_present)
-{
-	int rc;
-	int i;
-	int f;
-	uint16_t count;
-	uint16_t found;
-	uint16_t *residuals = NULL;
-	uint16_t hcapi_type;
-	struct tf_rm_get_inuse_count_parms iparms;
-	struct tf_rm_get_alloc_info_parms aparms;
-	struct tf_rm_get_hcapi_parms hparms;
-	struct tf_rm_alloc_info info;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_entry *local_resv = NULL;
-
-	/* Create array to hold the entries that have residuals */
-	cparms.nitems = rm_db->num_entries;
-	cparms.size = sizeof(uint16_t);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	residuals = (uint16_t *)cparms.mem_va;
-
-	/* Traverse the DB and collect any residual elements */
-	iparms.rm_db = rm_db;
-	iparms.count = &count;
-	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
-		iparms.db_index = i;
-		rc = tf_rm_get_inuse_count(&iparms);
-		/* Not a device supported entry, just skip */
-		if (rc == -ENOTSUP)
-			continue;
-		if (rc)
-			goto cleanup_residuals;
-
-		if (count) {
-			found++;
-			residuals[i] = count;
-			*residuals_present = true;
-		}
-	}
-
-	if (*residuals_present) {
-		/* Populate a reduced resv array with only the entries
-		 * that have residuals.
-		 */
-		cparms.nitems = found;
-		cparms.size = sizeof(struct tf_rm_resc_entry);
-		cparms.alignment = 0;
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-
-		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-		aparms.rm_db = rm_db;
-		hparms.rm_db = rm_db;
-		hparms.hcapi_type = &hcapi_type;
-		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
-			if (residuals[i] == 0)
-				continue;
-			aparms.db_index = i;
-			aparms.info = &info;
-			rc = tf_rm_get_info(&aparms);
-			if (rc)
-				goto cleanup_all;
-
-			hparms.db_index = i;
-			rc = tf_rm_get_hcapi_type(&hparms);
-			if (rc)
-				goto cleanup_all;
-
-			local_resv[f].type = hcapi_type;
-			local_resv[f].start = info.entry.start;
-			local_resv[f].stride = info.entry.stride;
-			f++;
-		}
-		*resv_size = found;
-	}
-
-	tf_rm_log_residuals(rm_db->dir,
-			    rm_db->type,
-			    rm_db->num_entries,
-			    residuals);
-
-	tfp_free((void *)residuals);
-	*resv = local_resv;
-
-	return 0;
-
- cleanup_all:
-	tfp_free((void *)local_resv);
-	*resv = NULL;
- cleanup_residuals:
-	tfp_free((void *)residuals);
-
-	return rc;
-}
-
-int
-tf_rm_create_db(struct tf *tfp,
-		struct tf_rm_create_db_parms *parms)
-{
-	int rc;
-	int i;
-	int j;
-	struct tf_session *tfs;
-	struct tf_dev_info *dev;
-	uint16_t max_types;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_req_entry *query;
-	enum tf_rm_resc_resv_strategy resv_strategy;
-	struct tf_rm_resc_req_entry *req;
-	struct tf_rm_resc_entry *resv;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_element *db;
-	uint32_t pool_size;
-	uint16_t hcapi_items;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
-	if (rc)
-		return rc;
-
-	/* Retrieve device information */
-	rc = tf_session_get_device(tfs, &dev);
-	if (rc)
-		return rc;
-
-	/* Need device max number of elements for the RM QCAPS */
-	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
-	if (rc)
-		return rc;
-
-	cparms.nitems = max_types;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Get Firmware Capabilities */
-	rc = tf_msg_session_resc_qcaps(tfp,
-				       parms->dir,
-				       max_types,
-				       query,
-				       &resv_strategy);
-	if (rc)
-		return rc;
-
-	/* Process capabilities against DB requirements. However, as a
-	 * DB can hold elements that are not HCAPI we can reduce the
-	 * req msg content by removing those out of the request yet
-	 * the DB holds them all as to give a fast lookup. We can also
-	 * remove entries where there are no request for elements.
-	 */
-	tf_rm_count_hcapi_reservations(parms->dir,
-				       parms->type,
-				       parms->cfg,
-				       parms->alloc_cnt,
-				       parms->num_elements,
-				       &hcapi_items);
-
-	/* Alloc request, alignment already set */
-	cparms.nitems = (size_t)hcapi_items;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Alloc reservation, alignment and nitems already set */
-	cparms.size = sizeof(struct tf_rm_resc_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-	/* Build the request */
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			/* Only perform reservation for entries that
-			 * has been requested
-			 */
-			if (parms->alloc_cnt[i] == 0)
-				continue;
-
-			/* Verify that we can get the full amount
-			 * allocated per the qcaps availability.
-			 */
-			if (parms->alloc_cnt[i] <=
-			    query[parms->cfg[i].hcapi_type].max) {
-				req[j].type = parms->cfg[i].hcapi_type;
-				req[j].min = parms->alloc_cnt[i];
-				req[j].max = parms->alloc_cnt[i];
-				j++;
-			} else {
-				TFP_DRV_LOG(ERR,
-					    "%s: Resource failure, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    parms->cfg[i].hcapi_type);
-				TFP_DRV_LOG(ERR,
-					"req:%d, avail:%d\n",
-					parms->alloc_cnt[i],
-					query[parms->cfg[i].hcapi_type].max);
-				return -EINVAL;
-			}
-		}
-	}
-
-	rc = tf_msg_session_resc_alloc(tfp,
-				       parms->dir,
-				       hcapi_items,
-				       req,
-				       resv);
-	if (rc)
-		return rc;
-
-	/* Build the RM DB per the request */
-	cparms.nitems = 1;
-	cparms.size = sizeof(struct tf_rm_new_db);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db = (void *)cparms.mem_va;
-
-	/* Build the DB within RM DB */
-	cparms.nitems = parms->num_elements;
-	cparms.size = sizeof(struct tf_rm_element);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
-
-	db = rm_db->db;
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-
-		/* Skip any non HCAPI types as we didn't include them
-		 * in the reservation request.
-		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
-			continue;
-
-		/* If the element didn't request an allocation no need
-		 * to create a pool nor verify if we got a reservation.
-		 */
-		if (parms->alloc_cnt[i] == 0)
-			continue;
-
-		/* If the element had requested an allocation and that
-		 * allocation was a success (full amount) then
-		 * allocate the pool.
-		 */
-		if (parms->alloc_cnt[i] == resv[j].stride) {
-			db[i].alloc.entry.start = resv[j].start;
-			db[i].alloc.entry.stride = resv[j].stride;
-
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			j++;
-		} else {
-			/* Bail out as we want what we requested for
-			 * all elements, not any less.
-			 */
-			TFP_DRV_LOG(ERR,
-				    "%s: Alloc failed, type:%d\n",
-				    tf_dir_2_str(parms->dir),
-				    db[i].cfg_type);
-			TFP_DRV_LOG(ERR,
-				    "req:%d, alloc:%d\n",
-				    parms->alloc_cnt[i],
-				    resv[j].stride);
-			goto fail;
-		}
-	}
-
-	rm_db->num_entries = parms->num_elements;
-	rm_db->dir = parms->dir;
-	rm_db->type = parms->type;
-	*parms->rm_db = (void *)rm_db;
-
-	printf("%s: type:%d num_entries:%d\n",
-	       tf_dir_2_str(parms->dir),
-	       parms->type,
-	       i);
-
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-
-	return 0;
-
- fail:
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-	tfp_free((void *)db->pool);
-	tfp_free((void *)db);
-	tfp_free((void *)rm_db);
-	parms->rm_db = NULL;
-
-	return -EINVAL;
-}
-
-int
-tf_rm_free_db(struct tf *tfp,
-	      struct tf_rm_free_db_parms *parms)
-{
-	int rc;
-	int i;
-	uint16_t resv_size = 0;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_resc_entry *resv;
-	bool residuals_found = false;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	/* Device unbind happens when the TF Session is closed and the
-	 * session ref count is 0. Device unbind will cleanup each of
-	 * its support modules, i.e. Identifier, thus we're ending up
-	 * here to close the DB.
-	 *
-	 * On TF Session close it is assumed that the session has already
-	 * cleaned up all its resources, individually, while
-	 * destroying its flows.
-	 *
-	 * To assist in the 'cleanup checking' the DB is checked for any
-	 * remaining elements and logged if found to be the case.
-	 *
-	 * Any such elements will need to be 'cleared' ahead of
-	 * returning the resources to the HCAPI RM.
-	 *
-	 * RM will signal FW to flush the DB resources. FW will
-	 * perform the invalidation. TF Session close will return the
-	 * previous allocated elements to the RM and then close the
-	 * HCAPI RM registration. That then saves several 'free' msgs
-	 * from being required.
-	 */
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-
-	/* Check for residuals that the client didn't clean up */
-	rc = tf_rm_check_residuals(rm_db,
-				   &resv_size,
-				   &resv,
-				   &residuals_found);
-	if (rc)
-		return rc;
-
-	/* Invalidate any residuals followed by a DB traversal for
-	 * pool cleanup.
-	 */
-	if (residuals_found) {
-		rc = tf_msg_session_resc_flush(tfp,
-					       parms->dir,
-					       resv_size,
-					       resv);
-		tfp_free((void *)resv);
-		/* On failure we still have to cleanup so we can only
-		 * log that FW failed.
-		 */
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s: Internal Flush error, module:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_device_module_type_2_str(rm_db->type));
-	}
-
-	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db[i].pool);
-
-	tfp_free((void *)parms->rm_db);
-
-	return rc;
-}
-
-int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority)
-		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
-	else
-		id = ba_alloc(rm_db->db[parms->db_index].pool);
-	if (id == BA_FAIL) {
-		rc = -ENOMEM;
-		TFP_DRV_LOG(ERR,
-			    "%s: Allocation failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_ADD_BASE,
-				parms->db_index,
-				id,
-				&index);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Alloc adjust of base index failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return -EINVAL;
-	}
-
-	*parms->index = index;
-
-	return rc;
-}
-
-int
-tf_rm_free(struct tf_rm_free_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
-	/* No logging direction matters and that is not available here */
-	if (rc)
-		return rc;
-
-	return rc;
-}
-
-int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
-				     adj_index);
-
-	return rc;
-}
-
-int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	memcpy(parms->info,
-	       &rm_db->db[parms->db_index].alloc,
-	       sizeof(struct tf_rm_alloc_info));
-
-	return 0;
-}
-
-int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
-
-	return 0;
-}
-
-int
-tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
-{
-	int rc = 0;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail silently (no logging), if the pool is not valid there
-	 * was no elements allocated for it.
-	 */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		*parms->count = 0;
-		return 0;
-	}
-
-	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
-
-	return rc;
-
-}
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
deleted file mode 100644
index 5cb68892a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ /dev/null
@@ -1,446 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_RM_NEW_H_
-#define TF_RM_NEW_H_
-
-#include "tf_core.h"
-#include "bitalloc.h"
-#include "tf_device.h"
-
-struct tf;
-
-/**
- * The Resource Manager (RM) module provides basic DB handling for
- * internal resources. These resources exists within the actual device
- * and are controlled by the HCAPI Resource Manager running on the
- * firmware.
- *
- * The RM DBs are all intended to be indexed using TF types there for
- * a lookup requires no additional conversion. The DB configuration
- * specifies the TF Type to HCAPI Type mapping and it becomes the
- * responsibility of the DB initialization to handle this static
- * mapping.
- *
- * Accessor functions are providing access to the DB, thus hiding the
- * implementation.
- *
- * The RM DB will work on its initial allocated sizes so the
- * capability of dynamically growing a particular resource is not
- * possible. If this capability later becomes a requirement then the
- * MAX pool size of the Chip œneeds to be added to the tf_rm_elem_info
- * structure and several new APIs would need to be added to allow for
- * growth of a single TF resource type.
- *
- * The access functions does not check for NULL pointers as it's a
- * support module, not called directly.
- */
-
-/**
- * Resource reservation single entry result. Used when accessing HCAPI
- * RM on the firmware.
- */
-struct tf_rm_new_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
-};
-
-/**
- * RM Element configuration enumeration. Used by the Device to
- * indicate how the RM elements the DB consists off, are to be
- * configured at time of DB creation. The TF may present types to the
- * ULP layer that is not controlled by HCAPI within the Firmware.
- */
-enum tf_rm_elem_cfg_type {
-	/** No configuration */
-	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
-	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
-	/**
-	 * Shared element thus it belongs to a shared FW Session and
-	 * is not controlled by the Host.
-	 */
-	TF_RM_ELEM_CFG_SHARED,
-	TF_RM_TYPE_MAX
-};
-
-/**
- * RM Reservation strategy enumeration. Type of strategy comes from
- * the HCAPI RM QCAPS handshake.
- */
-enum tf_rm_resc_resv_strategy {
-	TF_RM_RESC_RESV_STATIC_PARTITION,
-	TF_RM_RESC_RESV_STRATEGY_1,
-	TF_RM_RESC_RESV_STRATEGY_2,
-	TF_RM_RESC_RESV_STRATEGY_3,
-	TF_RM_RESC_RESV_MAX
-};
-
-/**
- * RM Element configuration structure, used by the Device to configure
- * how an individual TF type is configured in regard to the HCAPI RM
- * of same type.
- */
-struct tf_rm_element_cfg {
-	/**
-	 * RM Element config controls how the DB for that element is
-	 * processed.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/* If a HCAPI to TF type conversion is required then TF type
-	 * can be added here.
-	 */
-
-	/**
-	 * HCAPI RM Type for the element. Used for TF to HCAPI type
-	 * conversion.
-	 */
-	uint16_t hcapi_type;
-};
-
-/**
- * Allocation information for a single element.
- */
-struct tf_rm_alloc_info {
-	/**
-	 * HCAPI RM allocated range information.
-	 *
-	 * NOTE:
-	 * In case of dynamic allocation support this would have
-	 * to be changed to linked list of tf_rm_entry instead.
-	 */
-	struct tf_rm_new_entry entry;
-};
-
-/**
- * Create RM DB parameters
- */
-struct tf_rm_create_db_parms {
-	/**
-	 * [in] Device module type. Used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-	/**
-	 * [in] Receive or transmit direction.
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Number of elements.
-	 */
-	uint16_t num_elements;
-	/**
-	 * [in] Parameter structure array. Array size is num_elements.
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Resource allocation count array. This array content
-	 * originates from the tf_session_resources that is passed in
-	 * on session open.
-	 * Array size is num_elements.
-	 */
-	uint16_t *alloc_cnt;
-	/**
-	 * [out] RM DB Handle
-	 */
-	void **rm_db;
-};
-
-/**
- * Free RM DB parameters
- */
-struct tf_rm_free_db_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-};
-
-/**
- * Allocate RM parameters for a single element
- */
-struct tf_rm_allocate_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Pointer to the allocated index in normalized
-	 * form. Normalized means the index has been adjusted,
-	 * i.e. Full Action Record offsets.
-	 */
-	uint32_t *index;
-	/**
-	 * [in] Priority, indicates the prority of the entry
-	 * priority  0: allocate from top of the tcam (from index 0
-	 *              or lowest available index)
-	 * priority !0: allocate from bottom of the tcam (from highest
-	 *              available index)
-	 */
-	uint32_t priority;
-};
-
-/**
- * Free RM parameters for a single element
- */
-struct tf_rm_free_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint16_t index;
-};
-
-/**
- * Is Allocated parameters for a single element
- */
-struct tf_rm_is_allocated_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t index;
-	/**
-	 * [in] Pointer to flag that indicates the state of the query
-	 */
-	int *allocated;
-};
-
-/**
- * Get Allocation information for a single element
- */
-struct tf_rm_get_alloc_info_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the requested allocation information for
-	 * the specified db_index
-	 */
-	struct tf_rm_alloc_info *info;
-};
-
-/**
- * Get HCAPI type parameters for a single element
- */
-struct tf_rm_get_hcapi_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the hcapi type for the specified db_index
-	 */
-	uint16_t *hcapi_type;
-};
-
-/**
- * Get InUse count parameters for single element
- */
-struct tf_rm_get_inuse_count_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the inuse count for the specified db_index
-	 */
-	uint16_t *count;
-};
-
-/**
- * @page rm Resource Manager
- *
- * @ref tf_rm_create_db
- *
- * @ref tf_rm_free_db
- *
- * @ref tf_rm_allocate
- *
- * @ref tf_rm_free
- *
- * @ref tf_rm_is_allocated
- *
- * @ref tf_rm_get_info
- *
- * @ref tf_rm_get_hcapi_type
- *
- * @ref tf_rm_get_inuse_count
- */
-
-/**
- * Creates and fills a Resource Manager (RM) DB with requested
- * elements. The DB is indexed per the parms structure.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to create parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- * - Fail on parameter check
- * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
- * - Fail on DB creation if DB already exist
- *
- * - Allocs local DB
- * - Does hcapi qcaps
- * - Does hcapi reservation
- * - Populates the pool with allocated elements
- * - Returns handle to the created DB
- */
-int tf_rm_create_db(struct tf *tfp,
-		    struct tf_rm_create_db_parms *parms);
-
-/**
- * Closes the Resource Manager (RM) DB and frees all allocated
- * resources per the associated database.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free_db(struct tf *tfp,
-		  struct tf_rm_free_db_parms *parms);
-
-/**
- * Allocates a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to allocate parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- *   - (-ENOMEM) if pool is empty
- */
-int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
-
-/**
- * Free's a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free(struct tf_rm_free_parms *parms);
-
-/**
- * Performs an allocation verification check on a specified element.
- *
- * [in] parms
- *   Pointer to is allocated parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- *  - If pool is set to Chip MAX, then the query index must be checked
- *    against the allocated range and query index must be allocated as well.
- *  - If pool is allocated size only, then check if query index is allocated.
- */
-int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
-
-/**
- * Retrieves an elements allocation information from the Resource
- * Manager (RM) DB.
- *
- * [in] parms
- *   Pointer to get info parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI RM type.
- *
- * [in] parms
- *   Pointer to get hcapi parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI RM type inuse count.
- *
- * [in] parms
- *   Pointer to get inuse parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
-
-#endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 705bb0955..e4472ed7f 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -14,6 +14,7 @@
 #include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "tf_resources.h"
 #include "stack.h"
 
 /**
@@ -43,7 +44,8 @@
 #define TF_SESSION_EM_POOL_SIZE \
 	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
 
-/** Session
+/**
+ * Session
  *
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
@@ -99,216 +101,6 @@ struct tf_session {
 	/** Device handle */
 	struct tf_dev_info dev;
 
-	/** Session HW and SRAM resources */
-	struct tf_rm_db resc;
-
-	/* Session HW resource pools */
-
-	/** RX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** RX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	/** TX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-
-	/** RX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	/** TX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-
-	/** RX EM Profile ID Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	/** TX EM Key Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/** RX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	/** TX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records are not controlled by way of a pool */
-
-	/** RX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	/** TX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-
-	/** RX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	/** TX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-
-	/** RX Meter Instance Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	/** TX Meter Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-
-	/** RX Mirror Configuration Pool*/
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	/** RX Mirror Configuration Pool */
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-
-	/** RX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	/** TX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	/** RX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	/** TX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	/** RX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	/** TX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	/** RX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	/** TX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-
-	/** RX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	/** TX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-
-	/** RX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	/** TX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-
-	/** RX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	/** TX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-
-	/** RX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	/** TX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-
-	/** RX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	/** TX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-
-	/** RX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	/** TX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-
-	/** RX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	/** TX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Session SRAM pools */
-
-	/** RX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		      TF_RSVD_SRAM_FULL_ACTION_RX);
-	/** TX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		      TF_RSVD_SRAM_FULL_ACTION_TX);
-
-	/** RX Multicast Group Pool, only RX is supported */
-	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
-		      TF_RSVD_SRAM_MCG_RX);
-
-	/** RX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_8B_RX);
-	/** TX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_8B_TX);
-
-	/** RX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_16B_RX);
-	/** TX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_16B_TX);
-
-	/** TX Encap 64B Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_64B_TX);
-
-	/** RX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		      TF_RSVD_SRAM_SP_SMAC_RX);
-	/** TX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_TX);
-
-	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-
-	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-
-	/** RX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_COUNTER_64B_RX);
-	/** TX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_COUNTER_64B_TX);
-
-	/** RX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_SPORT_RX);
-	/** TX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_SPORT_TX);
-
-	/** RX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_DPORT_RX);
-	/** TX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_DPORT_TX);
-
-	/** RX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	/** TX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
-
-	/** RX NAT Destination IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	/** TX NAT IPv4 Destination Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/**
-	 * Pools not allocated from HCAPI RM
-	 */
-
-	/** RX L2 Ctx Remap ID  Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 Ctx Remap ID Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** CRC32 seed table */
-#define TF_LKUP_SEED_MEM_SIZE 512
-	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
-
-	/** Lookup3 init values */
-	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d7f5de4c4..05e866dc6 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -5,175 +5,413 @@
 
 /* Truflow Table APIs and supporting code */
 
-#include <stdio.h>
-#include <string.h>
-#include <stdbool.h>
-#include <math.h>
-#include <sys/param.h>
 #include <rte_common.h>
-#include <rte_errno.h>
-#include "hsi_struct_def_dpdk.h"
 
-#include "tf_core.h"
+#include "tf_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
 #include "tf_util.h"
-#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
-#include "hwrm_tf.h"
-#include "bnxt.h"
-#include "tf_resources.h"
-#include "tf_rm.h"
-#include "stack.h"
-#include "tf_common.h"
+
+
+struct tf;
+
+/**
+ * Table DBs.
+ */
+static void *tbl_db[TF_DIR_MAX];
+
+/**
+ * Table Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
 
 /**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
+ * Shadow init flag, set on bind and cleared on unbind
  */
-static int
-tf_bulk_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms)
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table DB already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
+	return 0;
+}
+
+int
+tf_tbl_unbind(struct tf *tfp)
 {
 	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
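
As a reference for callers, an illustrative sketch (not part of the patch) of
driving the bind/unbind pair above. The cfg array and counts are assumed to be
supplied by the device layer, and the resources argument is assumed to be a
struct tf_session_resources per the tbl_cnt usage in tf_tbl_bind().

/* Sketch: bind the table module, use it, then unbind */
static int tbl_bind_sketch(struct tf *tfp,
			   struct tf_rm_element_cfg *tbl_cfg,
			   uint16_t num_elements,
			   struct tf_session_resources *resources)
{
	int rc;
	struct tf_tbl_cfg_parms parms = { 0 };

	parms.num_elements = num_elements;
	parms.cfg = tbl_cfg;
	parms.resources = resources;

	rc = tf_tbl_bind(tfp, &parms);
	if (rc)
		return rc;

	/* ... table allocations, sets and gets happen here ... */

	return tf_tbl_unbind(tfp);
}
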
+
+int
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms)
+{
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
 	if (rc)
 		return rc;
 
-	index = parms->starting_idx;
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return -EINVAL;
+	}
 
-	/*
-	 * Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->starting_idx,
-			    &index);
+			    parms->idx);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
+{
+	int rc;
+	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
 		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   parms->type,
-		   index);
+		   parms->idx);
 		return -EINVAL;
 	}
 
-	/* Get the entry */
-	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
+	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
 }
 
-#if (TF_SHADOW == 1)
-/**
- * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is incremente and
- * returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count incremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
-			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+int
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Alloc with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	int rc;
+	uint16_t hcapi_type;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	return -EOPNOTSUPP;
-}
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
-/**
- * Free Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is decremente and
- * new ref_count returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_free_tbl_entry_shadow(struct tf_session *tfs,
-			 struct tf_free_tbl_entry_parms *parms)
-{
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Free with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
-	return -EOPNOTSUPP;
-}
-#endif /* TF_SHADOW */
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
 
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Look up the HCAPI type for the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
+}
 
- /* API defined in tf_core.h */
 int
-tf_bulk_get_tbl_entry(struct tf *tfp,
-		 struct tf_bulk_get_tbl_entry_parms *parms)
+tf_tbl_bulk_get(struct tf *tfp,
+		struct tf_tbl_get_bulk_parms *parms)
 {
-	int rc = 0;
+	int rc;
+	int i;
+	uint16_t hcapi_type;
+	uint32_t idx;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
+	if (!init) {
 		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
+			    "%s: No Table DBs created\n",
 			    tf_dir_2_str(parms->dir));
 
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
+		return -EINVAL;
+	}
+	/* Verify that the entries have been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.allocated = &allocated;
+	idx = parms->starting_idx;
+	for (i = 0; i < parms->num_entries; i++) {
+		aparms.index = idx;
+		rc = tf_rm_is_allocated(&aparms);
 		if (rc)
+			return rc;
+
+		if (!allocated) {
 			TFP_DRV_LOG(ERR,
-				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    strerror(-rc));
+				    idx);
+			return -EINVAL;
+		}
+		idx++;
+	}
+
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entries */
+	rc = tf_msg_bulk_get_tbl_entry(tfp,
+				       parms->dir,
+				       hcapi_type,
+				       parms->starting_idx,
+				       parms->num_entries,
+				       parms->entry_sz_in_bytes,
+				       parms->physical_mem_addr);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 2b7456faa..eb560ffa7 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -3,17 +3,21 @@
  * All rights reserved.
  */
 
-#ifndef _TF_TBL_H_
-#define _TF_TBL_H_
-
-#include <stdint.h>
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
 
 #include "tf_core.h"
 #include "stack.h"
 
-struct tf_session;
+struct tf;
+
+/**
+ * The Table module provides processing of Internal TF table types.
+ */
 
-/** table scope control block content */
+/**
+ * Table scope control block content
+ */
 struct tf_em_caps {
 	uint32_t flags;
 	uint32_t supported;
@@ -35,66 +39,364 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
-	struct tf_em_caps          em_caps[TF_DIR_MAX];
-	struct stack               ext_act_pool[TF_DIR_MAX];
-	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps em_caps[TF_DIR_MAX];
+	struct stack ext_act_pool[TF_DIR_MAX];
+	uint32_t *ext_act_pool_mem[TF_DIR_MAX];
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+};
+
+/**
+ * Table allocation parameters
+ */
+struct tf_tbl_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t *idx;
+};
+
+/**
+ * Table free parameters
+ */
+struct tf_tbl_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
- */
-#define TF_EM_PAGE_SIZE_4K 12
-#define TF_EM_PAGE_SIZE_8K 13
-#define TF_EM_PAGE_SIZE_64K 16
-#define TF_EM_PAGE_SIZE_256K 18
-#define TF_EM_PAGE_SIZE_1M 20
-#define TF_EM_PAGE_SIZE_2M 21
-#define TF_EM_PAGE_SIZE_4M 22
-#define TF_EM_PAGE_SIZE_1G 30
-
-/* Set page size */
-#define PAGE_SIZE TF_EM_PAGE_SIZE_2M
-
-#if (PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
-#else
-#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
-#endif
-
-#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
-#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
-
-/**
- * Initialize table pool structure to indicate
- * no table scope has been associated with the
- * external pool of indexes.
- *
- * [in] session
- */
-void
-tf_init_tbl_pool(struct tf_session *session);
-
-#endif /* _TF_TBL_H_ */
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table set parameters
+ */
+struct tf_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get parameters
+ */
+struct tf_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get bulk parameters
+ */
+struct tf_tbl_get_bulk_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [out] Host physical address, where the data
+	 * will be copied to by the firmware.
+	 * Use tfp_calloc() API and mem_pa
+	 * variable of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * @page tbl Table
+ *
+ * @ref tf_tbl_bind
+ *
+ * @ref tf_tbl_unbind
+ *
+ * @ref tf_tbl_alloc
+ *
+ * @ref tf_tbl_free
+ *
+ * @ref tf_tbl_alloc_search
+ *
+ * @ref tf_tbl_set
+ *
+ * @ref tf_tbl_get
+ *
+ * @ref tf_tbl_bulk_get
+ */
+
+/**
+ * Initializes the Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table allocation parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the
+ * shadow DB is enabled it is searched first and, if the element is
+ * found, its refcount is decremented. If the refcount reaches 0 the
+ * element is returned to the table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
+
+/**
+ * Retrieves bulk block of elements by sending a firmware request to
+ * get the elements.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get bulk parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bulk_get(struct tf *tfp,
+		    struct tf_tbl_get_bulk_parms *parms);
+
+#endif /* TF_TBL_TYPE_H */
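
To make the new API surface easier to follow, a minimal caller-side sketch
of the alloc/set/get/free round trip is shown here. It assumes the Table
module has already been bound (normally done when the session is opened),
that tfp is a valid TF handle, and it uses TF_TBL_TYPE_FULL_ACT_RECORD
purely as an example table type; error handling is collapsed for brevity.

/* Illustrative only: exercise one internal table entry end to end.
 * Builds in-tree with tf_tbl.h / tf_core.h included.
 */
static int
tbl_entry_roundtrip(struct tf *tfp, uint8_t *rec, uint16_t rec_sz)
{
	struct tf_tbl_alloc_parms ap = { 0 };
	struct tf_tbl_set_parms sp = { 0 };
	struct tf_tbl_get_parms gp = { 0 };
	struct tf_tbl_free_parms fp = { 0 };
	uint32_t idx;
	int rc;

	/* Reserve an index from the RM-backed DB */
	ap.dir = TF_DIR_RX;
	ap.type = TF_TBL_TYPE_FULL_ACT_RECORD;	/* assumed example type */
	ap.idx = &idx;
	rc = tf_tbl_alloc(tfp, &ap);
	if (rc)
		return rc;

	/* Write the record through firmware */
	sp.dir = TF_DIR_RX;
	sp.type = ap.type;
	sp.data = rec;
	sp.data_sz_in_bytes = rec_sz;
	sp.idx = idx;
	rc = tf_tbl_set(tfp, &sp);
	if (rc)
		goto free;

	/* Read it back */
	gp.dir = TF_DIR_RX;
	gp.type = ap.type;
	gp.data = rec;
	gp.data_sz_in_bytes = rec_sz;
	gp.idx = idx;
	rc = tf_tbl_get(tfp, &gp);

free:
	fp.dir = TF_DIR_RX;
	fp.type = ap.type;
	fp.idx = idx;
	tf_tbl_free(tfp, &fp);
	return rc;
}

Note that tf_tbl_set() and tf_tbl_get() both validate the index against
the RM DB via tf_rm_is_allocated(), so passing an index that was not
first returned by tf_tbl_alloc() fails with -EINVAL.
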
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
deleted file mode 100644
index 2f5af6060..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ /dev/null
@@ -1,342 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <rte_common.h>
-
-#include "tf_tbl_type.h"
-#include "tf_common.h"
-#include "tf_rm_new.h"
-#include "tf_util.h"
-#include "tf_msg.h"
-#include "tfp.h"
-
-struct tf;
-
-/**
- * Table DBs.
- */
-static void *tbl_db[TF_DIR_MAX];
-
-/**
- * Table Shadow DBs
- */
-/* static void *shadow_tbl_db[TF_DIR_MAX]; */
-
-/**
- * Init flag, set on bind and cleared on unbind
- */
-static uint8_t init;
-
-/**
- * Shadow init flag, set on bind and cleared on unbind
- */
-/* static uint8_t shadow_init; */
-
-int
-tf_tbl_bind(struct tf *tfp,
-	    struct tf_tbl_cfg_parms *parms)
-{
-	int rc;
-	int i;
-	struct tf_rm_create_db_parms db_cfg = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (init) {
-		TFP_DRV_LOG(ERR,
-			    "Table already initialized\n");
-		return -EINVAL;
-	}
-
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.cfg = parms->cfg;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		db_cfg.dir = i;
-		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
-		db_cfg.rm_db = &tbl_db[i];
-		rc = tf_rm_create_db(tfp, &db_cfg);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Table DB creation failed\n",
-				    tf_dir_2_str(i));
-
-			return rc;
-		}
-	}
-
-	init = 1;
-
-	printf("Table Type - initialized\n");
-
-	return 0;
-}
-
-int
-tf_tbl_unbind(struct tf *tfp __rte_unused)
-{
-	int rc;
-	int i;
-	struct tf_rm_free_db_parms fparms = { 0 };
-
-	TF_CHECK_PARMS1(tfp);
-
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No Table DBs created\n");
-		return -EINVAL;
-	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		fparms.dir = i;
-		fparms.rm_db = tbl_db[i];
-		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
-
-		tbl_db[i] = NULL;
-	}
-
-	init = 0;
-
-	return 0;
-}
-
-int
-tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms)
-{
-	int rc;
-	uint32_t idx;
-	struct tf_rm_allocate_parms aparms = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Allocate requested element */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = &idx;
-	rc = tf_rm_allocate(&aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed allocate, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return rc;
-	}
-
-	*parms->idx = idx;
-
-	return 0;
-}
-
-int
-tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms)
-{
-	int rc;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_free_parms fparms = { 0 };
-	int allocated = 0;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Check if element is in use */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Entry already free, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	/* Free requested element */
-	fparms.rm_db = tbl_db[parms->dir];
-	fparms.db_index = parms->type;
-	fparms.index = parms->idx;
-	rc = tf_rm_free(&fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Free failed, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_alloc_search(struct tf *tfp __rte_unused,
-		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
-{
-	return 0;
-}
-
-int
-tf_tbl_set(struct tf *tfp,
-	   struct tf_tbl_set_parms *parms)
-{
-	int rc;
-	int allocated = 0;
-	uint16_t hcapi_type;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_get(struct tf *tfp,
-	   struct tf_tbl_get_parms *parms)
-{
-	int rc;
-	uint16_t hcapi_type;
-	int allocated = 0;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
deleted file mode 100644
index 3474489a6..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ /dev/null
@@ -1,318 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_TBL_TYPE_H_
-#define TF_TBL_TYPE_H_
-
-#include "tf_core.h"
-
-struct tf;
-
-/**
- * The Table module provides processing of Internal TF table types.
- */
-
-/**
- * Table configuration parameters
- */
-struct tf_tbl_cfg_parms {
-	/**
-	 * Number of table types in each of the configuration arrays
-	 */
-	uint16_t num_elements;
-	/**
-	 * Table Type element configuration array
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Shadow table type configuration array
-	 */
-	struct tf_shadow_tbl_cfg *shadow_cfg;
-	/**
-	 * Boolean controlling the request shadow copy.
-	 */
-	bool shadow_copy;
-	/**
-	 * Session resource allocations
-	 */
-	struct tf_session_resources *resources;
-};
-
-/**
- * Table allocation parameters
- */
-struct tf_tbl_alloc_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t *idx;
-};
-
-/**
- * Table free parameters
- */
-struct tf_tbl_free_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation type
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t idx;
-	/**
-	 * [out] Reference count after free, only valid if session has been
-	 * created with shadow_copy.
-	 */
-	uint16_t ref_cnt;
-};
-
-/**
- * Table allocate search parameters
- */
-struct tf_tbl_alloc_search_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
-	 */
-	uint32_t tbl_scope_id;
-	/**
-	 * [in] Enable search for matching entry. If the table type is
-	 * internal the shadow copy will be searched before
-	 * alloc. Session must be configured with shadow copy enabled.
-	 */
-	uint8_t search_enable;
-	/**
-	 * [in] Result data to search for (if search_enable)
-	 */
-	uint8_t *result;
-	/**
-	 * [in] Result data size in bytes (if search_enable)
-	 */
-	uint16_t result_sz_in_bytes;
-	/**
-	 * [out] If search_enable, set if matching entry found
-	 */
-	uint8_t hit;
-	/**
-	 * [out] Current ref count after allocation (if search_enable)
-	 */
-	uint16_t ref_cnt;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table set parameters
- */
-struct tf_tbl_set_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to set
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [in] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to write to
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table get parameters
- */
-struct tf_tbl_get_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to get
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [out] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to read
-	 */
-	uint32_t idx;
-};
-
-/**
- * @page tbl Table
- *
- * @ref tf_tbl_bind
- *
- * @ref tf_tbl_unbind
- *
- * @ref tf_tbl_alloc
- *
- * @ref tf_tbl_free
- *
- * @ref tf_tbl_alloc_search
- *
- * @ref tf_tbl_set
- *
- * @ref tf_tbl_get
- */
-
-/**
- * Initializes the Table module with the requested DBs. Must be
- * invoked as the first thing before any of the access functions.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table configuration parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_bind(struct tf *tfp,
-		struct tf_tbl_cfg_parms *parms);
-
-/**
- * Cleans up the private DBs and releases all the data.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_unbind(struct tf *tfp);
-
-/**
- * Allocates the requested table type from the internal RM DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table allocation parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc(struct tf *tfp,
-		 struct tf_tbl_alloc_parms *parms);
-
-/**
- * Free's the requested table type and returns it to the DB. If shadow
- * DB is enabled its searched first and if found the element refcount
- * is decremented. If refcount goes to 0 then its returned to the
- * table type DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_free(struct tf *tfp,
-		struct tf_tbl_free_parms *parms);
-
-/**
- * Supported if Shadow DB is configured. Searches the Shadow DB for
- * any matching element. If found the refcount in the shadow DB is
- * updated accordingly. If not found a new element is allocated and
- * installed into the shadow DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc_search(struct tf *tfp,
-			struct tf_tbl_alloc_search_parms *parms);
-
-/**
- * Configures the requested element by sending a firmware request which
- * then installs it into the device internal structures.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_set(struct tf *tfp,
-	       struct tf_tbl_set_parms *parms);
-
-/**
- * Retrieves the requested element by sending a firmware request to get
- * the element.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table get parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_get(struct tf *tfp,
-	       struct tf_tbl_get_parms *parms);
-
-#endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index a1761ad56..fc047f8f8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -9,7 +9,7 @@
 #include "tf_tcam.h"
 #include "tf_common.h"
 #include "tf_util.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_device.h"
 #include "tfp.h"
 #include "tf_session.h"
@@ -49,7 +49,7 @@ tf_tcam_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "TCAM already initialized\n");
+			    "TCAM DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -86,11 +86,12 @@ tf_tcam_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init)
-		return -EINVAL;
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No TCAM DBs created\n");
+		return 0;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
-- 
2.21.1 (Apple Git-122.3)
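
The bulk-get path differs in that the firmware DMAs the entries straight
into caller-supplied memory. A rough sketch, assuming tfp_calloc() fills
in both a virtual (mem_va) and a physical/bus (mem_pa) address as the
tf_tbl_get_bulk_parms description earlier in this patch suggests, and
using TF_TBL_TYPE_ACT_STATS_64 with an 8-byte entry size purely as
example values:

/* Illustrative only: fetch cnt sequential entries starting at start. */
static int
bulk_read_stats(struct tf *tfp, uint32_t start, uint16_t cnt)
{
	struct tfp_calloc_parms cp = { 0 };
	struct tf_tbl_get_bulk_parms bp = { 0 };
	int rc;

	/* DMA-able buffer for the firmware to copy into */
	cp.nitems = cnt;
	cp.size = 8;			/* assumed per-entry size */
	cp.alignment = 0;
	rc = tfp_calloc(&cp);
	if (rc)
		return rc;

	bp.dir = TF_DIR_RX;
	bp.type = TF_TBL_TYPE_ACT_STATS_64;	/* assumed example type */
	bp.starting_idx = start;
	bp.num_entries = cnt;
	bp.entry_sz_in_bytes = 8;
	/* mem_pa assumed to hold the bus address of the buffer */
	bp.physical_mem_addr = (uint64_t)(uintptr_t)cp.mem_pa;
	rc = tf_tbl_bulk_get(tfp, &bp);

	/* On success the entries are now at cp.mem_va */
	tfp_free(cp.mem_va);
	return rc;
}

The starting index and count are checked entry by entry against the RM
DB before the single tf_msg_bulk_get_tbl_entry() request is issued.
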


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 24/51] net/bnxt: update RM to support HCAPI only
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (22 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 23/51] net/bnxt: update table get to use new design Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 25/51] net/bnxt: remove table scope from session Ajit Khaparde
                     ` (27 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- For the EM module, only the EM record allocation is done through HCAPI
  RM; storage control is kept outside of the RM DB.
- Add TF_RM_ELEM_CFG_HCAPI_BA.
- Return an error from tf_tcam_bind when the number of reserved WC TCAM
  entries is odd.
- Remove em_pool from the session.
- Use the RM-provided start offset and size for the EM pool.
- HCAPI returns an entry index instead of a row index for WC TCAM.
- Move resource type conversion to the HWRM set/free TCAM functions.

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   2 +
 drivers/net/bnxt/tf_core/tf_device_p4.h   |  54 ++++-----
 drivers/net/bnxt/tf_core/tf_em_internal.c | 131 ++++++++++++++--------
 drivers/net/bnxt/tf_core/tf_msg.c         |   6 +-
 drivers/net/bnxt/tf_core/tf_rm.c          |  81 ++++++-------
 drivers/net/bnxt/tf_core/tf_rm.h          |  14 ++-
 drivers/net/bnxt/tf_core/tf_session.h     |   5 -
 drivers/net/bnxt/tf_core/tf_tcam.c        |  21 ++++
 8 files changed, 190 insertions(+), 124 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index e3526672f..1eaf18212 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -68,6 +68,8 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
 		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
 			return -ENOTSUP;
+
+		*num_slices_per_row = 1;
 	} else { /* for other type of tcam */
 		*num_slices_per_row = 1;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 473e4eae5..8fae18012 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,19 +12,19 @@
 #include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
 	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_TCAM },
 	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
@@ -32,26 +32,26 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
 	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
 	/* CFA_RESOURCE_TYPE_P4_UPAR */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_EPOC */
@@ -79,7 +79,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EM_REC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
 };
 
 struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 1c514747d..3129fbe31 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -23,20 +23,28 @@
  */
 static void *em_db[TF_DIR_MAX];
 
+#define TF_EM_DB_EM_REC 0
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
 static uint8_t init;
 
+
+/**
+ * EM Pool
+ */
+static struct stack em_pool[TF_DIR_MAX];
+
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  * [in] num_entries
  *   number of entries to write
+ * [in] start
+ *   starting offset
  *
  * Return:
  *  0       - Success, entry allocated - no search support
@@ -44,54 +52,66 @@ static uint8_t init;
  *          - Failure, entry not allocated, out of resources
  */
 static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
+tf_create_em_pool(enum tf_dir dir,
+		  uint32_t num_entries,
+		  uint32_t start)
 {
 	struct tfp_calloc_parms parms;
 	uint32_t i, j;
 	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 
-	parms.nitems = num_entries;
+	/* Assumes that num_entries has been checked before we get here */
+	parms.nitems = num_entries / TF_SESSION_EM_ENTRY_SIZE;
 	parms.size = sizeof(uint32_t);
 	parms.alignment = 0;
 
 	rc = tfp_calloc(&parms);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool allocation failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+	rc = stack_init(num_entries / TF_SESSION_EM_ENTRY_SIZE,
+			(uint32_t *)parms.mem_va,
+			pool);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack init failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
 
 	/* Fill pool with indexes
 	 */
-	j = num_entries - 1;
+	j = start + num_entries - TF_SESSION_EM_ENTRY_SIZE;
 
-	for (i = 0; i < num_entries; i++) {
+	for (i = 0; i < (num_entries / TF_SESSION_EM_ENTRY_SIZE); i++) {
 		rc = stack_push(pool, j);
 		if (rc) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, EM pool stack push failure %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup;
 		}
-		j--;
+
+		j -= TF_SESSION_EM_ENTRY_SIZE;
 	}
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
@@ -105,18 +125,15 @@ tf_create_em_pool(struct tf_session *session,
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  *
  * Return:
  */
 static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
+tf_free_em_pool(enum tf_dir dir)
 {
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 	uint32_t *ptr;
 
 	ptr = stack_items(pool);
@@ -140,22 +157,19 @@ tf_em_insert_int_entry(struct tf *tfp,
 	uint16_t rptr_index = 0;
 	uint8_t rptr_entry = 0;
 	uint8_t num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 	uint32_t index;
 
 	rc = stack_pop(pool, &index);
 
 	if (rc) {
-		PMD_DRV_LOG
-		  (ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
+		PMD_DRV_LOG(ERR,
+			    "%s, EM entry index allocation failed\n",
+			    tf_dir_2_str(parms->dir));
 		return rc;
 	}
 
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rptr_index = index;
 	rc = tf_msg_insert_em_internal_entry(tfp,
 					     parms,
 					     &rptr_index,
@@ -166,8 +180,9 @@ tf_em_insert_int_entry(struct tf *tfp,
 
 	PMD_DRV_LOG
 		  (ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   "%s, Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   index,
 		   rptr_index,
 		   rptr_entry,
 		   num_of_entries);
@@ -204,15 +219,13 @@ tf_em_delete_int_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	int rc = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 
 	rc = tf_msg_delete_em_entry(tfp, parms);
 
 	/* Return resource to pool */
 	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+		stack_push(pool, parms->index);
 
 	return rc;
 }
@@ -224,8 +237,9 @@ tf_em_int_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
-	struct tf_session *session;
 	uint8_t db_exists = 0;
+	struct tf_rm_get_alloc_info_parms iparms;
+	struct tf_rm_alloc_info info;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -235,14 +249,6 @@ tf_em_int_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		tf_create_em_pool(session,
-				  i,
-				  TF_SESSION_EM_POOL_SIZE);
-	}
-
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -257,6 +263,18 @@ tf_em_int_bind(struct tf *tfp,
 		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
 			continue;
 
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] %
+		    TF_SESSION_EM_ENTRY_SIZE != 0) {
+			rc = -ENOMEM;
+			TFP_DRV_LOG(ERR,
+				    "%s, EM Allocation must be in blocks of %d, failure %s\n",
+				    tf_dir_2_str(i),
+				    TF_SESSION_EM_ENTRY_SIZE,
+				    strerror(-rc));
+
+			return rc;
+		}
+
 		db_cfg.rm_db = &em_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
@@ -272,6 +290,28 @@ tf_em_int_bind(struct tf *tfp,
 	if (db_exists)
 		init = 1;
 
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		iparms.rm_db = em_db[i];
+		iparms.db_index = TF_EM_DB_EM_REC;
+		iparms.info = &info;
+
+		rc = tf_rm_get_info(&iparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB get info failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+
+		rc = tf_create_em_pool(i,
+				       iparms.info->entry.stride,
+				       iparms.info->entry.start);
+		/* Logging handled in tf_create_em_pool */
+		if (rc)
+			return rc;
+	}
+
+
 	return 0;
 }
 
@@ -281,7 +321,6 @@ tf_em_int_unbind(struct tf *tfp)
 	int rc;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
-	struct tf_session *session;
 
 	TF_CHECK_PARMS1(tfp);
 
@@ -292,10 +331,8 @@ tf_em_int_unbind(struct tf *tfp)
 		return 0;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	for (i = 0; i < TF_DIR_MAX; i++)
-		tf_free_em_pool(session, i);
+		tf_free_em_pool(i);
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 02d8a4971..7fffb6baf 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -857,12 +857,12 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < size)
+	if (tfp_le_to_cpu_32(resp.size) != size)
 		return -EINVAL;
 
 	tfp_memcpy(data,
 		   &resp.data,
-		   resp.size);
+		   size);
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
@@ -919,7 +919,7 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < data_size)
+	if (tfp_le_to_cpu_32(resp.size) != data_size)
 		return -EINVAL;
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0469b653..e7af9eb84 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -106,7 +106,8 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 	uint16_t cnt = 0;
 
 	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		if ((cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		     cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) &&
 		    reservations[i] > 0)
 			cnt++;
 
@@ -467,7 +468,8 @@ tf_rm_create_db(struct tf *tfp,
 	/* Build the request */
 	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		    parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 			/* Only perform reservation for entries that
 			 * has been requested
 			 */
@@ -529,7 +531,8 @@ tf_rm_create_db(struct tf *tfp,
 		/* Skip any non HCAPI types as we didn't include them
 		 * in the reservation request.
 		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+		    parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 			continue;
 
 		/* If the element didn't request an allocation no need
@@ -551,29 +554,32 @@ tf_rm_create_db(struct tf *tfp,
 			       resv[j].start,
 			       resv[j].stride);
 
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
+			/* Only allocate BA pool if so requested */
+			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
+				/* Create pool */
+				pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+					     sizeof(struct bitalloc));
+				/* Alloc request, alignment already set */
+				cparms.nitems = pool_size;
+				cparms.size = sizeof(struct bitalloc);
+				rc = tfp_calloc(&cparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool alloc failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
+				db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+				rc = ba_init(db[i].pool, resv[j].stride);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool init failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
 			}
 			j++;
 		} else {
@@ -682,6 +688,9 @@ tf_rm_free_db(struct tf *tfp,
 				    tf_device_module_type_2_str(rm_db->type));
 	}
 
+	/* No need to check for configuration type, even if we do not
+	 * have a BA pool we just delete on a null ptr, no harm
+	 */
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -705,8 +714,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -770,8 +778,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -816,8 +823,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -857,9 +863,9 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	memcpy(parms->info,
@@ -880,9 +886,9 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
@@ -903,8 +909,7 @@ tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail silently (no logging), if the pool is not valid there
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5cb68892a..f44fcca70 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -56,12 +56,18 @@ struct tf_rm_new_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	/** No configuration */
+	/**
+	 * No configuration
+	 */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
+	/** HCAPI 'controlled', no RM storage thus the Device Module
+	 *  using the RM can chose to handle storage locally.
+	 */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
+	/** HCAPI 'controlled', uses a Bit Allocator Pool for internal
+	 *  storage in the RM.
+	 */
+	TF_RM_ELEM_CFG_HCAPI_BA,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
 	 * is not controlled by the Host.
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index e4472ed7f..ebee4db8c 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -103,11 +103,6 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
-
-	/**
-	 * EM Pools
-	 */
-	struct stack em_pool[TF_DIR_MAX];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index fc047f8f8..d5bb4eec1 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -43,6 +43,7 @@ tf_tcam_bind(struct tf *tfp,
 {
 	int rc;
 	int i;
+	struct tf_tcam_resources *tcam_cnt;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -53,6 +54,14 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	tcam_cnt = parms->resources->tcam_cnt;
+	if ((tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2) ||
+	    (tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2)) {
+		TFP_DRV_LOG(ERR,
+			    "Number of WC TCAM entries cannot be odd num\n");
+		return -EINVAL;
+	}
+
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -168,6 +177,18 @@ tf_tcam_alloc(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM &&
+	    (parms->idx % 2) != 0) {
+		rc = tf_rm_allocate(&aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Failed tcam, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type);
+			return rc;
+		}
+	}
+
 	parms->idx *= num_slice_per_row;
 
 	return 0;
-- 
2.21.1 (Apple Git-122.3)
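
The reworked tf_create_em_pool() above now derives the pool contents from
the RM reservation: it pushes one index per block of
TF_SESSION_EM_ENTRY_SIZE records, starting from the RM-provided offset
and working down from the top of the range. A small standalone sketch of
that index arithmetic, with made-up start/stride values (the real ones
come from iparms.info->entry.start and .stride in tf_em_int_bind):

#include <stdio.h>
#include <stdint.h>

/* Assumed values for illustration only */
#define ENTRY_SIZE   4		/* stands in for TF_SESSION_EM_ENTRY_SIZE */
#define POOL_START   1024	/* RM-provided start offset */
#define POOL_STRIDE  64		/* RM-provided size, multiple of ENTRY_SIZE */

int main(void)
{
	uint32_t i;
	uint32_t j = POOL_START + POOL_STRIDE - ENTRY_SIZE;

	/* Mirrors the fill loop in tf_create_em_pool(): one stack entry
	 * per ENTRY_SIZE-sized block, pushed from the top down.
	 */
	for (i = 0; i < POOL_STRIDE / ENTRY_SIZE; i++) {
		printf("push %u\n", (unsigned)j);
		j -= ENTRY_SIZE;
	}
	return 0;
}

tf_em_insert_int_entry() then uses the popped value directly as
rptr_index, since HCAPI now expects an entry index rather than a row
index.
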


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 25/51] net/bnxt: remove table scope from session
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (23 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
                     ` (26 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Randy Schacher, Venkat Duvvuru

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Remove table scope data from the session; it is now tracked in EEM.
- Complete the move of table scope base and range to RM.
- Fix some error message strings.
- Fix the TCAM logging message.

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c      |  2 +-
 drivers/net/bnxt/tf_core/tf_em.h        |  1 -
 drivers/net/bnxt/tf_core/tf_em_common.c | 16 +++++++----
 drivers/net/bnxt/tf_core/tf_em_common.h |  5 +---
 drivers/net/bnxt/tf_core/tf_em_host.c   | 38 ++++++++++---------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 12 +++-----
 drivers/net/bnxt/tf_core/tf_session.h   |  3 --
 drivers/net/bnxt/tf_core/tf_tcam.c      |  6 ++--
 8 files changed, 35 insertions(+), 48 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8727900c4..6410843f6 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -573,7 +573,7 @@ tf_free_tcam_entry(struct tf *tfp,
 	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: TCAM allocation failed, rc:%s\n",
+			    "%s: TCAM free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 7042f44e9..c3c712fb9 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,7 +9,6 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
-#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index d0d80daeb..e31a63b46 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,6 +29,8 @@
  */
 void *eem_db[TF_DIR_MAX];
 
+#define TF_EEM_DB_TBL_SCOPE 1
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -39,10 +41,12 @@ static uint8_t init;
  */
 static enum tf_mem_type mem_type;
 
+/** Table scope array */
+struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
+tbl_scope_cb_find(uint32_t tbl_scope_id)
 {
 	int i;
 	struct tf_rm_is_allocated_parms parms;
@@ -50,8 +54,8 @@ tbl_scope_cb_find(struct tf_session *session,
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
@@ -60,8 +64,8 @@ tbl_scope_cb_find(struct tf_session *session,
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
+		if (tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &tbl_scopes[i];
 	}
 
 	return NULL;
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index 45699a7c3..bf01df9b8 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -14,8 +14,6 @@
  * Function to search for table scope control block structure
  * with specified table scope ID.
  *
- * [in] session
- *   Session to use for the search of the table scope control block
  * [in] tbl_scope_id
  *   Table scope ID to search for
  *
@@ -23,8 +21,7 @@
  *  Pointer to the found table scope control block struct or NULL if
  *   table scope control block struct not found
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+struct tf_tbl_scope_cb *tbl_scope_cb_find(uint32_t tbl_scope_id);
 
 /**
  * Create and initialize a stack to use for action entries
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 8be39afdd..543edb54a 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,6 +48,9 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
+#define TF_EEM_DB_TBL_SCOPE 1
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
 /**
  * Function to free a page table
@@ -934,14 +937,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_entry(struct tf *tfp,
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb =
-	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
-			  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -957,14 +958,12 @@ tf_em_insert_ext_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_entry(struct tf *tfp,
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb =
-	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
-			  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -981,16 +980,13 @@ tf_em_ext_host_alloc(struct tf *tfp,
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct hcapi_cfa_em_table *em_tables;
-	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 	struct tf_rm_allocate_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -999,8 +995,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 		return rc;
 	}
 
-	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
-	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
 	tbl_scope_cb->index = parms->tbl_scope_id;
 	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
 
@@ -1092,8 +1087,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
 }
@@ -1105,13 +1100,9 @@ tf_em_ext_host_free(struct tf *tfp,
 	int rc = 0;
 	enum tf_dir  dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
 	struct tf_rm_free_parms aparms = { 0 };
 
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Table scope error\n");
@@ -1120,8 +1111,8 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1142,5 +1133,6 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index ee18a0c70..6dd115470 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -63,14 +63,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_sys_entry(struct tf *tfp,
+tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -87,14 +85,12 @@ tf_em_insert_ext_sys_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_sys_entry(struct tf *tfp,
+tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index ebee4db8c..a303fde51 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -100,9 +100,6 @@ struct tf_session {
 
 	/** Device handle */
 	struct tf_dev_info dev;
-
-	/** Table scope array */
-	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index d5bb4eec1..b67159a54 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -287,7 +287,8 @@ tf_tcam_free(struct tf *tfp,
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
@@ -382,7 +383,8 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d set failed, rc:%s",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 26/51] net/bnxt: add external action alloc and free
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (24 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 25/51] net/bnxt: remove table scope from session Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
                     ` (25 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Link external action alloc and free to the new HCAPI interface
  (see the sketch below).
- Add parameter range checking.
- Fix issues with the index allocation check.

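The sketch below is a reduced model of the dispatch this patch adds in
tf_alloc/free/set_tbl_entry(): TF_TBL_TYPE_EXT requests are routed
through the new external device ops while everything else keeps the
existing path, and a missing op is reported as unsupported. Names here
are stand-ins, not the driver's API.

#include <errno.h>
#include <stddef.h>
#include <stdint.h>

enum tbl_type { TBL_TYPE_INTERNAL, TBL_TYPE_EXT };

struct dev_ops {
	int (*alloc_tbl)(uint32_t *idx);	/* existing internal path */
	int (*alloc_ext_tbl)(uint32_t *idx);	/* new external path */
};

static int
alloc_tbl_entry(const struct dev_ops *ops, enum tbl_type type, uint32_t *idx)
{
	int (*fn)(uint32_t *) = (type == TBL_TYPE_EXT) ?
		ops->alloc_ext_tbl : ops->alloc_tbl;

	if (fn == NULL)
		return -EOPNOTSUPP;	/* device does not implement this op */

	return fn(idx);
}
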
Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c       | 163 ++++++++++++++++-------
 drivers/net/bnxt/tf_core/tf_core.h       |   4 -
 drivers/net/bnxt/tf_core/tf_device.h     |  58 ++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   6 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   2 -
 drivers/net/bnxt/tf_core/tf_em.h         |  95 +++++++++++++
 drivers/net/bnxt/tf_core/tf_em_common.c  | 120 ++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  80 ++++++++++-
 drivers/net/bnxt/tf_core/tf_em_system.c  |   6 +
 drivers/net/bnxt/tf_core/tf_identifier.c |   4 +-
 drivers/net/bnxt/tf_core/tf_rm.h         |   5 +
 drivers/net/bnxt/tf_core/tf_tbl.c        |  10 +-
 drivers/net/bnxt/tf_core/tf_tbl.h        |  12 ++
 drivers/net/bnxt/tf_core/tf_tcam.c       |   8 +-
 drivers/net/bnxt/tf_core/tf_util.c       |   4 -
 15 files changed, 499 insertions(+), 78 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6410843f6..45accb0ab 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -617,25 +617,48 @@ tf_alloc_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_alloc_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	aparms.dir = parms->dir;
 	aparms.type = parms->type;
 	aparms.idx = &idx;
-	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table allocation failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	aparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_alloc_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_ext_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: External table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+
+	} else {
+		if (dev->ops->tf_dev_alloc_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	parms->idx = idx;
@@ -677,25 +700,47 @@ tf_free_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_free_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	fparms.dir = parms->dir;
 	fparms.type = parms->type;
 	fparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table free failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	fparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_free_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_ext_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_free_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return 0;
@@ -735,27 +780,49 @@ tf_set_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_set_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	sparms.dir = parms->dir;
 	sparms.type = parms->type;
 	sparms.data = parms->data;
 	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
 	sparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table set failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	sparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_set_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_ext_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_set_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index a7a7bd38a..e898f19a0 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -211,10 +211,6 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
 	/** Wh+/SR Action _Modify L4 Dest Port */
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
 	/** Meter Profiles */
 	TF_TBL_TYPE_METER_PROF,
 	/** Meter Instance */
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 93f3627d4..58b7a4ab2 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -216,6 +216,26 @@ struct tf_dev_ops {
 	int (*tf_dev_alloc_tbl)(struct tf *tfp,
 				struct tf_tbl_alloc_parms *parms);
 
+	/**
+	 * Allocation of an external table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ext_tbl)(struct tf *tfp,
+				    struct tf_tbl_alloc_parms *parms);
+
 	/**
 	 * Free of a table type element.
 	 *
@@ -235,6 +255,25 @@ struct tf_dev_ops {
 	int (*tf_dev_free_tbl)(struct tf *tfp,
 			       struct tf_tbl_free_parms *parms);
 
+	/**
+	 * Free of an external table type element.
+	 *
+	 * This API frees a previously allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ext_tbl)(struct tf *tfp,
+				   struct tf_tbl_free_parms *parms);
+
 	/**
 	 * Searches for the specified table type element in a shadow DB.
 	 *
@@ -276,6 +315,25 @@ struct tf_dev_ops {
 	int (*tf_dev_set_tbl)(struct tf *tfp,
 			      struct tf_tbl_set_parms *parms);
 
+	/**
+	 * Sets the specified external table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_ext_tbl)(struct tf *tfp,
+				  struct tf_tbl_set_parms *parms);
+
 	/**
 	 * Retrieves the specified table type element.
 	 *
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 1eaf18212..9a3230787 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -85,10 +85,13 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = NULL,
 	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_ext_tbl = NULL,
 	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_ext_tbl = NULL,
 	.tf_dev_free_tbl = NULL,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
+	.tf_dev_set_ext_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
 	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
@@ -113,9 +116,12 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_alloc_ext_tbl = tf_tbl_ext_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 8fae18012..298e100f3 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -47,8 +47,6 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index c3c712fb9..1c2369c7b 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -456,4 +456,99 @@ int tf_em_ext_common_free(struct tf *tfp,
  */
 int tf_em_ext_common_alloc(struct tf *tfp,
 			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_system_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
+
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index e31a63b46..39a8412b3 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,8 +29,6 @@
  */
 void *eem_db[TF_DIR_MAX];
 
-#define TF_EEM_DB_TBL_SCOPE 1
-
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -54,13 +52,13 @@ tbl_scope_cb_find(uint32_t tbl_scope_id)
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
 
-	if (i < 0 || !allocated)
+	if (i < 0 || allocated != TF_RM_ALLOCATED_ENTRY_IN_USE)
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
@@ -158,6 +156,111 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type);
+		return rc;
+	}
+
+	*parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms)
+{
+	int rc = 0;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
 uint32_t
 tf_em_get_key_mask(int num_entries)
 {
@@ -273,6 +376,15 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_tbl_ext_host_set(tfp, parms);
+	else
+		return tf_tbl_ext_system_set(tfp, parms);
+}
+
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 543edb54a..d7c147a15 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,7 +48,6 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
-#define TF_EEM_DB_TBL_SCOPE 1
 
 extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
@@ -986,7 +985,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -1087,7 +1086,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
@@ -1111,7 +1110,7 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
@@ -1133,6 +1132,77 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
-	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
+	return rc;
+}
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 6dd115470..10768df03 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -112,3 +112,9 @@ tf_em_ext_system_free(struct tf *tfp __rte_unused,
 {
 	return 0;
 }
+
+int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
+			  struct tf_tbl_set_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 211371081..219839272 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -159,13 +159,13 @@ tf_ident_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->id);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index f44fcca70..fd044801f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -12,6 +12,11 @@
 
 struct tf;
 
+/** RM return codes */
+#define TF_RM_ALLOCATED_ENTRY_FREE        0
+#define TF_RM_ALLOCATED_ENTRY_IN_USE      1
+#define TF_RM_ALLOCATED_NO_ENTRY_FOUND   -1
+
 /**
  * The Resource Manager (RM) module provides basic DB handling for
  * internal resources. These resources exists within the actual device
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 05e866dc6..3a3277329 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -172,13 +172,13 @@ tf_tbl_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -233,7 +233,7 @@ tf_tbl_set(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -301,7 +301,7 @@ tf_tbl_get(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -374,7 +374,7 @@ tf_tbl_bulk_get(struct tf *tfp,
 		if (rc)
 			return rc;
 
-		if (!allocated) {
+		if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 			TFP_DRV_LOG(ERR,
 				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index eb560ffa7..2a10b47ce 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -83,6 +83,10 @@ struct tf_tbl_alloc_parms {
 	 * [in] Type of the allocation
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
@@ -101,6 +105,10 @@ struct tf_tbl_free_parms {
 	 * [in] Type of the allocation type
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [in] Index to free
 	 */
@@ -168,6 +176,10 @@ struct tf_tbl_set_parms {
 	 * [in] Type of object to set
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [in] Entry data
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b67159a54..b1092cd9d 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -252,13 +252,13 @@ tf_tcam_free(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -362,13 +362,13 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry is not allocated, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Convert TF type to HCAPI RM type */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 5472a9aac..85f6e25f4 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -92,10 +92,6 @@ tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 		return "NAT IPv4 Source";
 	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
 		return "NAT IPv4 Destination";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-		return "NAT IPv6 Source";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-		return "NAT IPv6 Destination";
 	case TF_TBL_TYPE_METER_PROF:
 		return "Meter Profile";
 	case TF_TBL_TYPE_METER_INST:
-- 
2.21.1 (Apple Git-122.3)
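
For reference, the new tf_tbl_ext_alloc()/tf_tbl_ext_free() pair in
tf_em_common.c is a thin wrapper around a per-direction stack of free
indices: pop to allocate, push to free, with a full stack on free
indicating an accounting error. A minimal stand-alone sketch of that
pattern follows; the array-backed stack here is a stand-in for the
driver's stack helper.

#include <errno.h>
#include <stdint.h>

#define POOL_DEPTH 8

/* Stand-in for the driver's stack helper: an array-backed LIFO of free
 * external action record indices.
 */
struct idx_stack {
	uint32_t items[POOL_DEPTH];
	int top;			/* number of free indices held */
};

static int pool_alloc(struct idx_stack *s, uint32_t *idx)
{
	if (s->top == 0)
		return -ENOMEM;		/* pool exhausted */
	*idx = s->items[--s->top];
	return 0;
}

static int pool_free(struct idx_stack *s, uint32_t idx)
{
	if (s->top == POOL_DEPTH)
		return -EINVAL;		/* stack full: likely a double free */
	s->items[s->top++] = idx;
	return 0;
}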

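Another detail in this patch worth noting: tf_rm_is_allocated() results
are now compared against the explicit TF_RM_ALLOCATED_ENTRY_IN_USE state
rather than treated as a boolean, and the free/set paths return -EINVAL
for anything else. A small sketch of that convention; the three values
mirror the defines added to tf_rm.h, while the surrounding function is
illustrative only.

#include <errno.h>

/* Values mirror the TF_RM_ALLOCATED_* defines added to tf_rm.h. */
#define RM_ALLOCATED_ENTRY_FREE        0
#define RM_ALLOCATED_ENTRY_IN_USE      1
#define RM_ALLOCATED_NO_ENTRY_FOUND   -1

/* Caller-side check: only an entry reported as in use may be freed. */
static int check_before_free(int allocated_state)
{
	if (allocated_state != RM_ALLOCATED_ENTRY_IN_USE)
		return -EINVAL;		/* already free or never allocated */
	return 0;
}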

^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 27/51] net/bnxt: align CFA resources with RM
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (25 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
                     ` (24 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher, Venkat Duvvuru

From: Randy Schacher <stuart.schacher@broadcom.com>

- Align HCAPI resource types with the Resource Manager.
- Clean up unnecessary debug messages (see the sketch below).

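The debug cleanup follows a simple compile-time gate: verbose resource
dumps are wrapped in a TF_RM_MSG_DEBUG/TF_RM_DEBUG check that defaults
to 0, so they disappear from normal builds. A minimal sketch of the
pattern is shown below; the function name is a stand-in.

#include <stdio.h>

/* Flip to 1 locally when debugging resource negotiation. */
#define TF_RM_MSG_DEBUG  0

static void dump_resc_entry(int type, int min, int max)
{
#if (TF_RM_MSG_DEBUG == 1)
	printf("type: %d(0x%x) %d %d\n", type, type, min, max);
#else
	(void)type;
	(void)min;
	(void)max;
#endif
}
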
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 250 +++++++++---------
 drivers/net/bnxt/tf_core/tf_identifier.c      |   3 +-
 drivers/net/bnxt/tf_core/tf_msg.c             |  37 ++-
 drivers/net/bnxt/tf_core/tf_rm.c              |  21 +-
 drivers/net/bnxt/tf_core/tf_tbl.c             |   3 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |  28 +-
 6 files changed, 197 insertions(+), 145 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 6e79facec..6d6651fde 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -48,232 +48,246 @@
 #define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
 #define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
-/* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
+/* EPOCH 0 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH0          0xfUL
+/* EPOCH 1 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH1          0x10UL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x11UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x12UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x13UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x14UL
 /* Link Aggrigation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x15UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x16UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P58_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6         0x6UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT           0x7UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT           0x8UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4          0x9UL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
-/* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
-/* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4          0xaUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER               0xbUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE          0xcUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION         0xdUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION     0xeUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P58_EXT_FORMAT_0_ACTION 0xfUL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_1_ACTION     0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION     0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION     0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION     0x13UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_5_ACTION     0x14UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_6_ACTION     0x15UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM        0x16UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP       0x17UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC           0x18UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM           0x19UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID     0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P58_EM_REC              0x1cUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM             0x1dUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF          0x1eUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR              0x1fUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM             0x20UL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB              0x21UL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB              0x22UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
-#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM            0x23UL
+#define CFA_RESOURCE_TYPE_P58_LAST               CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P45_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P45_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P45_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM             0x21UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM            0x22UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE           0x23UL
+#define CFA_RESOURCE_TYPE_P45_LAST               CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P4_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P4_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P4_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM             0x21UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE           0x22UL
+#define CFA_RESOURCE_TYPE_P4_LAST               CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 219839272..2cc43b40f 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -59,7 +59,8 @@ tf_ident_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Identifier - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Identifier - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 7fffb6baf..659065de3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,6 +18,9 @@
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
+/* Logging defines */
+#define TF_RM_MSG_DEBUG  0
+
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -215,7 +218,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -225,31 +228,39 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nQCAPS\n");
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("type: %d(0x%x) %d %d\n",
 		       query[i].type,
 		       query[i].type,
 		       query[i].min,
 		       query[i].max);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	}
 
 	*resv_strategy = resp.flags &
 	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
 
+cleanup:
 	tf_msg_free_dma_buf(&qcaps_buf);
 
 	return rc;
@@ -293,8 +304,10 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	dma_size = size * sizeof(struct tf_rm_resc_entry);
 	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
-	if (rc)
+	if (rc) {
+		tf_msg_free_dma_buf(&req_buf);
 		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
@@ -320,7 +333,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -330,11 +343,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nRESV\n");
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
@@ -343,14 +359,17 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
 		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
 		       resv[i].type,
 		       resv[i].type,
 		       resv[i].start,
 		       resv[i].stride);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	}
 
+cleanup:
 	tf_msg_free_dma_buf(&req_buf);
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -412,8 +431,6 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	parms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp, &parms);
-	if (rc)
-		return rc;
 
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -434,7 +451,7 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
-	uint32_t flags;
+	uint16_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
@@ -480,7 +497,7 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
+	uint16_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
@@ -726,8 +743,6 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
-	if (rc)
-		goto cleanup;
 
 cleanup:
 	tf_msg_free_dma_buf(&buf);
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e7af9eb84..30313e2ea 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -17,6 +17,9 @@
 #include "tfp.h"
 #include "tf_msg.h"
 
+/* Logging defines */
+#define TF_RM_DEBUG  0
+
 /**
  * Generic RM Element data type that an RM DB is build upon.
  */
@@ -120,16 +123,11 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
 		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
+				"%s, %s, %s allocation of %d not supported\n",
 				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-
+				tf_device_module_type_subtype_2_str(type, i),
+				reservations[i]);
 		}
 	}
 
@@ -549,11 +547,6 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
 			/* Only allocate BA pool if so requested */
 			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 				/* Create pool */
@@ -603,10 +596,12 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+#if (TF_RM_DEBUG == 1)
 	printf("%s: type:%d num_entries:%d\n",
 	       tf_dir_2_str(parms->dir),
 	       parms->type,
 	       i);
+#endif /* (TF_RM_DEBUG == 1) */
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 3a3277329..7d4daaf2d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -74,7 +74,8 @@ tf_tbl_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Table Type - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Table Type - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b1092cd9d..1c48b5363 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -81,7 +81,8 @@ tf_tcam_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("TCAM - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "TCAM - initialized\n");
 
 	return 0;
 }
@@ -275,6 +276,31 @@ tf_tcam_free(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		int i;
+
+		for (i = -1; i < 3; i += 3) {
+			aparms.index += i;
+			rc = tf_rm_is_allocated(&aparms);
+			if (rc)
+				return rc;
+
+			if (allocated == TF_RM_ALLOCATED_ENTRY_IN_USE) {
+				/* Free requested element */
+				fparms.index = aparms.index;
+				rc = tf_rm_free(&fparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+						    "%s: Free failed, type:%d, index:%d\n",
+						    tf_dir_2_str(parms->dir),
+						    parms->type,
+						    fparms.index);
+					return rc;
+				}
+			}
+		}
+	}
+
 	/* Convert TF type to HCAPI RM type */
 	hparms.rm_db = tcam_db[parms->dir];
 	hparms.db_index = parms->type;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 28/51] net/bnxt: implement IF tables set and get
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (26 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
                     ` (23 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Implement set/get for PROF_SPIF_CTXT, LKUP_PF_DFLT_ARP and
  PROF_PF_ERR_ARP using tunneled HWRM messages (a usage sketch of the
  new API follows below)
- Add an IF table for PROF_PARIF_DFLT_ARP
- Fix the page size offset calculation in the HCAPI code
- Fix the entry offset calculation
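
A minimal usage sketch (not part of the diff below), assuming a TruFlow
session handle already opened with tf_open_session(); the function name,
the index value 0 and the action record pointer argument are hypothetical,
while the parameter structures and APIs are the ones this patch adds to
tf_core.h:

/*
 * Usage sketch: program the RX default PARIF action record pointer
 * entry and read it back.
 */
#include "tf_core.h"

static int example_set_parif_dflt_arp(struct tf *tfp, uint32_t act_rec_ptr)
{
	struct tf_set_if_tbl_entry_parms sparms = { 0 };
	struct tf_get_if_tbl_entry_parms gparms = { 0 };
	uint32_t data[2] = { act_rec_ptr, 0 };
	uint32_t readback[2] = { 0, 0 };
	int rc;

	/* The set is carried by the tunneled HWRM_TFT_IF_TBL_SET message */
	sparms.dir = TF_DIR_RX;
	sparms.type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR;
	sparms.idx = 0;
	sparms.data = data;
	sparms.data_sz_in_bytes = sizeof(data);
	rc = tf_set_if_tbl_entry(tfp, &sparms);
	if (rc)
		return rc;

	/* The get is carried by the tunneled HWRM_TFT_IF_TBL_GET message */
	gparms.dir = TF_DIR_RX;
	gparms.type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR;
	gparms.idx = 0;
	gparms.data = readback;
	gparms.data_sz_in_bytes = sizeof(readback);
	return tf_get_if_tbl_entry(tfp, &gparms);
}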

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h     |  53 +++++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h  |  12 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c    |   8 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h    |  18 +-
 drivers/net/bnxt/meson.build             |   2 +-
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  63 +++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 116 +++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       | 104 ++++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  21 ++
 drivers/net/bnxt/tf_core/tf_device.h     |  39 ++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   5 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |  10 +
 drivers/net/bnxt/tf_core/tf_em_common.c  |   5 +-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  12 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |   3 +-
 drivers/net/bnxt/tf_core/tf_if_tbl.c     | 178 +++++++++++++++++
 drivers/net/bnxt/tf_core/tf_if_tbl.h     | 236 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 186 +++++++++++++++---
 drivers/net/bnxt/tf_core/tf_msg.h        |  30 +++
 drivers/net/bnxt/tf_core/tf_session.c    |  14 +-
 21 files changed, 1060 insertions(+), 57 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
index c30e4f49c..3243b3f2b 100644
--- a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -127,6 +127,11 @@ const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
 	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS},
 };
 
 const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
@@ -247,4 +252,52 @@ const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
 	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
 
 };
+
+const struct hcapi_cfa_field cfa_p40_mirror_tbl_layout[] = {
+	{CFA_P40_MIRROR_TBL_SP_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS,
+	 CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_COPY_BITPOS,
+	 CFA_P40_MIRROR_TBL_COPY_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_EN_BITPOS,
+	 CFA_P40_MIRROR_TBL_EN_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_AR_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS},
+};
+
+/* P45 Defines */
+
+const struct hcapi_cfa_field cfa_p45_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
 #endif /* _CFA_P40_TBL_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
index ea8d99d01..53a284887 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -35,10 +35,6 @@
 
 #define CFA_GLOBAL_CFG_DATA_SZ (100)
 
-#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
-#define SUPPORT_CFA_HW_ALL (1)
-#endif
-
 #include "hcapi_cfa_p4.h"
 #define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
 #define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
@@ -121,6 +117,8 @@ struct hcapi_cfa_layout {
 	const struct hcapi_cfa_field *field_array;
 	/** [out] number of HW field entries in the HW layout field array */
 	uint32_t array_sz;
+	/** [out] layout_id - layout id associated with the layout */
+	uint16_t layout_id;
 };
 
 /**
@@ -247,6 +245,8 @@ struct hcapi_cfa_key_tbl {
 	 *  applicable for newer chip
 	 */
 	uint8_t *base1;
+	/** [in] Page size for EEM tables */
+	uint32_t page_size;
 };
 
 /**
@@ -267,7 +267,7 @@ struct hcapi_cfa_key_obj {
 struct hcapi_cfa_key_data {
 	/** [in] For on-chip key table, it is the offset in unit of smallest
 	 *  key. For off-chip key table, it is the byte offset relative
-	 *  to the key record memory base.
+	 *  to the key record memory base and adjusted for page and entry size.
 	 */
 	uint32_t offset;
 	/** [in] HW key data buffer pointer */
@@ -668,5 +668,5 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 			struct hcapi_cfa_key_loc *key_loc);
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset);
+			      uint32_t page);
 #endif /* HCAPI_CFA_DEFS_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index 42b37da0f..a01bbdbbb 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -13,7 +13,6 @@
 #include "hcapi_cfa_defs.h"
 
 #define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
-#define TF_EM_PAGE_SIZE (1 << 21)
 uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
 uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
 bool hcapi_cfa_lkup_init;
@@ -199,10 +198,9 @@ static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
 
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset)
+			      uint32_t page)
 {
 	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
 	uint64_t addr;
 
 	if (mem == NULL)
@@ -362,7 +360,9 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 	op->hw.base_addr =
 		hcapi_get_table_page((struct hcapi_cfa_em_table *)
 				     key_tbl->base0,
-				     key_obj->offset);
+				     key_obj->offset / key_tbl->page_size);
+	/* Offset is adjusted to be the offset into the page */
+	key_obj->offset = key_obj->offset % key_tbl->page_size;
 
 	if (op->hw.base_addr == 0)
 		return -1;
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
index 0661d6363..c6113707f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -21,6 +21,10 @@ enum cfa_p4_tbl_id {
 	CFA_P4_TBL_WC_TCAM_REMAP,
 	CFA_P4_TBL_VEB_TCAM,
 	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT,
+	CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR,
+	CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR,
+	CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR,
 	CFA_P4_TBL_MAX
 };
 
@@ -333,17 +337,29 @@ enum cfa_p4_action_sram_entry_type {
 	 */
 
 	/** SRAM Action Record */
-	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FULL_ACTION,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_0_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_1_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_2_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_3_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_4_ACTION,
+
 	/** SRAM Action Encap 8 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
 	/** SRAM Action Encap 16 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
 	/** SRAM Action Encap 64 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_SRC,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_DEST,
+
 	/** SRAM Action Modify IPv4 Source */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
 	/** SRAM Action Modify IPv4 Destination */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+
 	/** SRAM Action Source Properties SMAC */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
 	/** SRAM Action Source Properties SMAC IPv4 */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 7f3ec6204..f25a9448d 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,7 +43,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_shadow_tcam.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm.c',
+	'tf_core/tf_if_tbl.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 9ba60e1c2..1924bef02 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -25,6 +25,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
@@ -33,3 +34,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 26836e488..32f152314 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -16,7 +16,9 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
+	HWRM_TFT_IF_TBL_SET = 827,
+	HWRM_TFT_IF_TBL_GET = 828,
+	TF_SUBTYPE_LAST = HWRM_TFT_IF_TBL_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -46,7 +48,17 @@ typedef enum tf_subtype {
 /* WC DMA Address Type */
 #define TF_DEV_DATA_TYPE_TF_WC_DMA_ADDR			0x30d0UL
 /* WC Entry */
-#define TF_DEV_DATA_TYPE_TF_WC_ENTRY			0x30d1UL
+#define TF_DEV_DATA_TYPE_TF_WC_ENTRY				0x30d1UL
+/* SPIF DFLT L2 CTXT Entry */
+#define TF_DEV_DATA_TYPE_SPIF_DFLT_L2_CTXT		  0x3131UL
+/* PARIF DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_DFLT_ACT_REC		0x3132UL
+/* PARIF ERR DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_ERR_DFLT_ACT_REC	 0x3133UL
+/* ILT Entry */
+#define TF_DEV_DATA_TYPE_ILT				0x3134UL
+/* VNIC SVIF entry */
+#define TF_DEV_DATA_TYPE_VNIC_SVIF			0x3135UL
 /* Action Data */
 #define TF_DEV_DATA_TYPE_TF_ACTION_DATA			0x3170UL
 #define TF_DEV_DATA_TYPE_LAST   TF_DEV_DATA_TYPE_TF_ACTION_DATA
@@ -56,6 +68,9 @@ typedef enum tf_subtype {
 
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
+struct tf_if_tbl_set_input;
+struct tf_if_tbl_get_input;
+struct tf_if_tbl_get_output;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
@@ -85,4 +100,48 @@ typedef struct tf_tbl_type_bulk_get_output {
 	uint16_t			 size;
 } tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
+/* Input params for if tbl set */
+typedef struct tf_if_tbl_set_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* index of table entry */
+	uint16_t			 idx;
+	/* size of the data write to table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* data to write into table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_set_input_t, *ptf_if_tbl_set_input_t;
+
+/* Input params for if tbl get */
+typedef struct tf_if_tbl_get_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* size of the data get from table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* index of table entry */
+	uint16_t			 idx;
+} tf_if_tbl_get_input_t, *ptf_if_tbl_get_input_t;
+
+/* output params for if tbl get */
+typedef struct tf_if_tbl_get_output {
+	/* Value read from table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_get_output_t, *ptf_if_tbl_get_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 45accb0ab..a980a2056 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -1039,3 +1039,119 @@ tf_free_tbl_scope(struct tf *tfp,
 
 	return rc;
 }
+
+int
+tf_set_if_tbl_entry(struct tf *tfp,
+		    struct tf_set_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_set_parms sparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.idx = parms->idx;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_set_if_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_get_if_tbl_entry(struct tf *tfp,
+		    struct tf_get_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_get_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.idx = parms->idx;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_get_if_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e898f19a0..e3d46bd45 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1556,4 +1556,108 @@ int tf_delete_em_entry(struct tf *tfp,
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
+/**
+ * @page if_tbl Interface Table Access
+ *
+ * @ref tf_set_if_tbl_entry
+ *
+ * @ref tf_get_if_tbl_entry
+ *
+ * @ref tf_restore_if_tbl_entry
+ */
+/**
+ * Enumeration of TruFlow interface table types.
+ */
+enum tf_if_tbl_type {
+	/** Default Profile L2 Context Entry */
+	TF_IF_TBL_TYPE_PROF_SPIF_DFLT_L2_CTXT,
+	/** Default Profile TCAM/Lookup Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	/** Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	/** Default Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_LKUP_PARIF_DFLT_ACT_REC_PTR,
+	/** SR2 Ingress lookup table */
+	TF_IF_TBL_TYPE_ILT,
+	/** SR2 VNIC/SVIF Table */
+	TF_IF_TBL_TYPE_VNIC_SVIF,
+	TF_IF_TBL_TYPE_MAX
+};
+
+/**
+ * tf_set_if_tbl_entry parameter definition
+ */
+struct tf_set_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Interface to write
+	 */
+	uint32_t idx;
+};
+
+/**
+ * set interface table entry
+ *
+ * Used to set an interface table entry. This API manages tables indexed
+ * by SVIF/SPIF/PARIF interfaces. In the current implementation only the
+ * value is set.
+ * Returns success or failure code.
+ */
+int tf_set_if_tbl_entry(struct tf *tfp,
+			struct tf_set_if_tbl_entry_parms *parms);
+
+/**
+ * tf_get_if_tbl_entry parameter definition
+ */
+struct tf_get_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of table to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * get interface table entry
+ *
+ * Used to retrieve an interface table entry.
+ *
+ * Reads the interface table entry value
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_if_tbl_entry(struct tf *tfp,
+			struct tf_get_if_tbl_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 20b0c5948..a3073c826 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -44,6 +44,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
+	struct tf_if_tbl_cfg_parms if_tbl_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -114,6 +115,19 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * IF_TBL
+	 */
+	if_tbl_cfg.num_elements = TF_IF_TBL_TYPE_MAX;
+	if_tbl_cfg.cfg = tf_if_tbl_p4;
+	if_tbl_cfg.shadow_copy = shadow_copy;
+	rc = tf_if_tbl_bind(tfp, &if_tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "IF Table initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -186,6 +200,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_if_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, IF Table Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 58b7a4ab2..5a0943ad7 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl.h"
 #include "tf_tcam.h"
+#include "tf_if_tbl.h"
 
 struct tf;
 struct tf_session;
@@ -567,6 +568,44 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
 				     struct tf_free_tbl_scope_parms *parms);
+
+	/**
+	 * Sets the specified interface table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to interface table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_set_parms *parms);
+
+	/**
+	 * Retrieves the specified interface table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_get_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9a3230787..2dc34b853 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
+#include "tf_if_tbl.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -105,6 +106,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_delete_ext_em_entry = NULL,
 	.tf_dev_alloc_tbl_scope = NULL,
 	.tf_dev_free_tbl_scope = NULL,
+	.tf_dev_set_if_tbl = NULL,
+	.tf_dev_get_if_tbl = NULL,
 };
 
 /**
@@ -135,4 +138,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
 	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
+	.tf_dev_set_if_tbl = tf_if_tbl_set,
+	.tf_dev_get_if_tbl = tf_if_tbl_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 298e100f3..3b03a7c4e 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -10,6 +10,7 @@
 
 #include "tf_core.h"
 #include "tf_rm.h"
+#include "tf_if_tbl.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -86,4 +87,13 @@ struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 };
 
+struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 39a8412b3..23a7fc9c2 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -337,11 +337,10 @@ tf_em_ext_common_bind(struct tf *tfp,
 		db_exists = 1;
 	}
 
-	if (db_exists) {
-		mem_type = parms->mem_type;
+	if (db_exists)
 		init = 1;
-	}
 
+	mem_type = parms->mem_type;
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index d7c147a15..2626a59fe 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -831,7 +831,8 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	op.opcode = HCAPI_CFA_HWOPS_ADD;
 	key_tbl.base0 = (uint8_t *)
 		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = (uint8_t *)&key_entry;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -847,8 +848,7 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 		key_tbl.base0 = (uint8_t *)
 		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 
 		rc = hcapi_cfa_key_hw_op(&op,
 					 &key_tbl,
@@ -914,7 +914,8 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
 							  TF_KEY0_TABLE :
 							  TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = NULL;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -1195,7 +1196,8 @@ int tf_tbl_ext_host_set(struct tf *tfp,
 	op.opcode = HCAPI_CFA_HWOPS_PUT;
 	key_tbl.base0 =
 		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
 	key_obj.data = parms->data;
 	key_obj.size = parms->data_sz_in_bytes;
 
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 2cc43b40f..90aeaa468 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -68,7 +68,7 @@ tf_ident_bind(struct tf *tfp,
 int
 tf_ident_unbind(struct tf *tfp)
 {
-	int rc;
+	int rc = 0;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
 
@@ -89,7 +89,6 @@ tf_ident_unbind(struct tf *tfp)
 			TFP_DRV_LOG(ERR,
 				    "rm free failed on unbind\n");
 		}
-
 		ident_db[i] = NULL;
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.c b/drivers/net/bnxt/tf_core/tf_if_tbl.c
new file mode 100644
index 000000000..dc73ba2d0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.c
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_if_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+
+/**
+ * IF Table DBs.
+ */
+static void *if_tbl_db[TF_DIR_MAX];
+
+/**
+ * IF Table Shadow DBs
+ */
+/* static void *shadow_if_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+/**
+ * Convert if_tbl_type to hwrm type.
+ *
+ * [in] if_tbl_type
+ *   Interface table type
+ *
+ * [out] hwrm_type
+ *   HWRM device data type
+ *
+ * Returns:
+ *    0          - Success
+ *   -EOPNOTSUPP - Type not supported
+ */
+static int
+tf_if_tbl_get_hcapi_type(struct tf_if_tbl_get_hcapi_parms *parms)
+{
+	struct tf_if_tbl_cfg *tbl_cfg;
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	tbl_cfg = (struct tf_if_tbl_cfg *)parms->tbl_db;
+	cfg_type = tbl_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_IF_TBL_CFG)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = tbl_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_if_tbl_bind(struct tf *tfp __rte_unused,
+	       struct tf_if_tbl_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "IF TBL DB already initialized\n");
+		return -EINVAL;
+	}
+
+	if_tbl_db[TF_DIR_RX] = parms->cfg;
+	if_tbl_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "Table Type - initialized\n");
+
+	return 0;
+}
+
+int
+tf_if_tbl_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	if_tbl_db[TF_DIR_RX] = NULL;
+	if_tbl_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_if_tbl_set(struct tf *tfp,
+	      struct tf_if_tbl_set_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	rc = tf_msg_set_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
+}
+
+int
+tf_if_tbl_get(struct tf *tfp,
+	      struct tf_if_tbl_get_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	/* Get the entry */
+	rc = tf_msg_get_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
new file mode 100644
index 000000000..54d4c37f5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_IF_TBL_TYPE_H_
+#define TF_IF_TBL_TYPE_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/*
+ * This is the constant used to define invalid CFA
+ * types across all devices.
+ */
+#define CFA_IF_TBL_TYPE_INVALID 65535
+
+struct tf;
+
+/**
+ * The IF Table module provides processing of Internal TF interface table types.
+ */
+
+/**
+ * IF table configuration enumeration.
+ */
+enum tf_if_tbl_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_IF_TBL_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_IF_TBL_CFG,
+};
+
+/**
+ * IF table configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI type.
+ */
+struct tf_if_tbl_cfg {
+	/**
+	 * IF table config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_if_tbl_get_hcapi_parms {
+	/**
+	 * [in] IF Tbl DB Handle
+	 */
+	void *tbl_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_if_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_if_tbl_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_if_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+};
+
+/**
+ * IF Table set parameters
+ */
+struct tf_if_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * IF Table get parameters
+ */
+struct tf_if_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page if_tbl Interface Table
+ *
+ * @ref tf_if_tbl_bind
+ *
+ * @ref tf_if_tbl_unbind
+ *
+ * @ref tf_tbl_set
+ *
+ * @ref tf_tbl_get
+ *
+ * @ref tf_tbl_restore
+ */
+/**
+ * Initializes the Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_bind(struct tf *tfp,
+		   struct tf_if_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_unbind(struct tf *tfp);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Interface Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_set(struct tf *tfp,
+		  struct tf_if_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_get(struct tf *tfp,
+		  struct tf_if_tbl_get_parms *parms);
+
+#endif /* TF_IF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 659065de3..6600a14c8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -125,12 +125,19 @@ tf_msg_session_close(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_close_input req = { 0 };
 	struct hwrm_tf_session_close_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_CLOSE;
 	parms.req_data = (uint32_t *)&req;
@@ -150,12 +157,19 @@ tf_msg_session_qcfg(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_QCFG,
 	parms.req_data = (uint32_t *)&req;
@@ -448,13 +462,22 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_insert_input req = { 0 };
 	struct hwrm_tf_em_insert_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
 	uint16_t flags;
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	tfp_memcpy(req.em_key,
 		   em_parms->key,
 		   ((em_parms->key_sz_in_bits + 7) / 8));
@@ -498,11 +521,19 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
 	uint16_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
@@ -789,21 +820,19 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_set_input req = { 0 };
 	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
@@ -840,21 +869,19 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_get_input req = { 0 };
 	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
@@ -897,22 +924,20 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.start_index = tfp_cpu_to_le_32(starting_idx);
@@ -939,3 +964,102 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+int
+tf_msg_get_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_get_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_get_input_t req = { 0 };
+	tf_if_tbl_get_output_t resp;
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_16(params->data_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_IF_TBL_GET,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	if (parms.tf_resp_code != 0)
+		return tfp_le_to_cpu_32(parms.tf_resp_code);
+
+	tfp_memcpy(&params->data[0], resp.data, req.data_sz_in_bytes);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_set_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_set_input_t req = { 0 };
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_32(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_32(params->data_sz_in_bytes);
+	tfp_memcpy(&req.data[0], params->data, params->data_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_IF_TBL_SET,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 7432873d7..37f291016 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -428,4 +428,34 @@ int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			      uint16_t entry_sz_in_bytes,
 			      uint64_t physical_mem_addr);
 
+/**
+ * Sends a set message of an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table set parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_set_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_set_parms *params);
+
+/**
+ * Sends a get message of an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table get parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_get_parms *params);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index ab9e05f2d..6c07e4745 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -70,14 +70,24 @@ tf_session_open_session(struct tf *tfp,
 		goto cleanup;
 	}
 	tfp->session->core_data = cparms.mem_va;
+	session_id = &parms->open_cfg->session_id;
+
+	/* Update Session Info, which is what is visible to the caller */
+	tfp->session->ver.major = 0;
+	tfp->session->ver.minor = 0;
+	tfp->session->ver.update = 0;
 
-	/* Initialize Session and Device */
+	tfp->session->session_id.internal.domain = session_id->internal.domain;
+	tfp->session->session_id.internal.bus = session_id->internal.bus;
+	tfp->session->session_id.internal.device = session_id->internal.device;
+	tfp->session->session_id.internal.fw_session_id = fw_session_id;
+
+	/* Initialize Session and Device, which is private */
 	session = (struct tf_session *)tfp->session->core_data;
 	session->ver.major = 0;
 	session->ver.minor = 0;
 	session->ver.update = 0;
 
-	session_id = &parms->open_cfg->session_id;
 	session->session_id.internal.domain = session_id->internal.domain;
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 29/51] net/bnxt: add TF register and unregister
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (27 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
                     ` (22 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TF register/unregister support. The session now maintains a list
  of session clients to keep track of the ctrl-channels/functions that
  use it (a usage sketch of the new linked-list helper follows below).
- Add supporting code to the tfp layer
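
A minimal sketch of how the new generic linked list can track session
clients; the client structure and its fields are assumptions made for
illustration and are not the driver's actual session client layout:

/*
 * Usage sketch: register and unregister one client on a session's
 * client list using the ll.h helper added by this patch.
 */
#include <string.h>

#include "ll.h"

struct example_client {
	struct ll_entry entry;		/* links the record into the list */
	char ctrl_chan_name[64];	/* hypothetical client identity */
};

static void example_register_unregister(void)
{
	struct ll clients;
	struct example_client c;

	memset(&c, 0, sizeof(c));
	strncpy(c.ctrl_chan_name, "example-ctrl-chan",
		sizeof(c.ctrl_chan_name) - 1);

	ll_init(&clients);		/* start with an empty client list */
	ll_insert(&clients, &c.entry);	/* register the client */
	/* ... client issues requests against the shared session ... */
	ll_delete(&clients, &c.entry);	/* unregister the client */
}

The list only links embedded ll_entry nodes, so the session code can
chain any record type without the helper knowing its layout.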

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_core/Makefile     |   1 +
 drivers/net/bnxt/tf_core/ll.c         |  52 +++
 drivers/net/bnxt/tf_core/ll.h         |  46 +++
 drivers/net/bnxt/tf_core/tf_core.c    |  26 +-
 drivers/net/bnxt/tf_core/tf_core.h    | 105 +++--
 drivers/net/bnxt/tf_core/tf_msg.c     |  84 +++-
 drivers/net/bnxt/tf_core/tf_msg.h     |  42 +-
 drivers/net/bnxt/tf_core/tf_rm.c      |   2 +-
 drivers/net/bnxt/tf_core/tf_session.c | 569 ++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.h | 201 ++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.c     |   2 +
 drivers/net/bnxt/tf_core/tf_tcam.c    |   8 +-
 drivers/net/bnxt/tf_core/tfp.c        |  17 +
 drivers/net/bnxt/tf_core/tfp.h        |  15 +
 15 files changed, 1075 insertions(+), 96 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index f25a9448d..54564e02e 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -44,6 +44,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
+	'tf_core/ll.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 1924bef02..6210bc70e 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -8,6 +8,7 @@
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/ll.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/ll.c b/drivers/net/bnxt/tf_core/ll.c
new file mode 100644
index 000000000..6f58662f5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.c
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Functions */
+
+#include <stdio.h>
+#include "ll.h"
+
+/* init linked list */
+void ll_init(struct ll *ll)
+{
+	ll->head = NULL;
+	ll->tail = NULL;
+}
+
+/* insert entry in linked list */
+void ll_insert(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == NULL) {
+		ll->head = entry;
+		ll->tail = entry;
+		entry->next = NULL;
+		entry->prev = NULL;
+	} else {
+		entry->next = ll->head;
+		entry->prev = NULL;
+		entry->next->prev = entry;
+		ll->head = entry->next->prev;
+	}
+}
+
+/* delete entry from linked list */
+void ll_delete(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == entry && ll->tail == entry) {
+		ll->head = NULL;
+		ll->tail = NULL;
+	} else if (ll->head == entry) {
+		ll->head = entry->next;
+		ll->head->prev = NULL;
+	} else if (ll->tail == entry) {
+		ll->tail = entry->prev;
+		ll->tail->next = NULL;
+	} else {
+		entry->prev->next = entry->next;
+		entry->next->prev = entry->prev;
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/ll.h b/drivers/net/bnxt/tf_core/ll.h
new file mode 100644
index 000000000..d70917850
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Header File */
+
+#ifndef _LL_H_
+#define _LL_H_
+
+/* linked list entry */
+struct ll_entry {
+	struct ll_entry *prev;
+	struct ll_entry *next;
+};
+
+/* linked list */
+struct ll {
+	struct ll_entry *head;
+	struct ll_entry *tail;
+};
+
+/**
+ * Linked list initialization.
+ *
+ * [in] ll, linked list to be initialized
+ */
+void ll_init(struct ll *ll);
+
+/**
+ * Linked list insert
+ *
+ * [in] ll, linked list where element is inserted
+ * [in] entry, entry to be added
+ */
+void ll_insert(struct ll *ll, struct ll_entry *entry);
+
+/**
+ * Linked list delete
+ *
+ * [in] ll, linked list where element is removed
+ * [in] entry, entry to be deleted
+ */
+void ll_delete(struct ll *ll, struct ll_entry *entry);
+
+#endif /* _LL_H_ */
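
A minimal usage sketch of the list API (illustration only; struct my_elem
and ll_sum() are hypothetical). The list stores bare struct ll_entry
nodes, so a user embeds the entry as the first member of its own
structure and casts a node back to its container while iterating,
exactly as the session client code later in this patch does:

    #include "ll.h"

    struct my_elem {
            struct ll_entry entry;  /* must be first so the cast below is valid */
            int value;
    };

    static int ll_sum(struct ll *list)
    {
            struct ll_entry *node;
            int total = 0;

            for (node = list->head; node != NULL; node = node->next)
                    total += ((struct my_elem *)node)->value;

            return total;
    }
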
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index a980a2056..489c461d1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -58,21 +58,20 @@ tf_open_session(struct tf *tfp,
 	parms->session_id.internal.device = device;
 	oparms.open_cfg = parms;
 
+	/* Session vs session client is decided in
+	 * tf_session_open_session()
+	 */
+	printf("TF_OPEN, %s\n", parms->ctrl_chan_name);
 	rc = tf_session_open_session(tfp, &oparms);
 	/* Logging handled by tf_session_open_session */
 	if (rc)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
+		    parms->session_id.internal.device);
 
 	return 0;
 }
@@ -152,6 +151,9 @@ tf_close_session(struct tf *tfp)
 
 	cparms.ref_count = &ref_count;
 	cparms.session_id = &session_id;
+	/* Session vs session client is decided in
+	 * tf_session_close_session()
+	 */
 	rc = tf_session_close_session(tfp,
 				      &cparms);
 	/* Logging handled by tf_session_close_session */
@@ -159,16 +161,10 @@ tf_close_session(struct tf *tfp)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Closed session, session_id:%d, ref_count:%d\n",
-		    cparms.session_id->id,
-		    *cparms.ref_count);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    cparms.session_id->internal.domain,
 		    cparms.session_id->internal.bus,
-		    cparms.session_id->internal.device,
-		    cparms.session_id->internal.fw_session_id);
+		    cparms.session_id->internal.device);
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e3d46bd45..fea222bee 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -72,7 +72,6 @@ enum tf_mem {
  * @ref tf_close_session
  */
 
-
 /**
  * Session Version defines
  *
@@ -113,6 +112,21 @@ union tf_session_id {
 	} internal;
 };
 
+/**
+ * Session Client Identifier
+ *
+ * Unique identifier for a client within a session. Session Client ID
+ * is constructed from the passed in session and a firmware allocated
+ * fw_session_client_id. Done by TruFlow on tf_open_session().
+ */
+union tf_session_client_id {
+	uint16_t id;
+	struct {
+		uint8_t fw_session_id;
+		uint8_t fw_session_client_id;
+	} internal;
+};
+
 /**
  * Session Version
  *
@@ -368,8 +382,8 @@ struct tf_session_info {
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
- * session info at that time. Additional 'opens' can be done using
- * same session_info by using tf_attach_session().
+ * session info at that time. A TruFlow Session can be used by more
+ * than one PF/VF by calling tf_open_session() on each function.
  *
  * It is expected that ULP allocates this memory as shared memory.
  *
@@ -506,36 +520,62 @@ struct tf_open_session_parms {
 	 * The session_id allows a session to be shared between devices.
 	 */
 	union tf_session_id session_id;
+	/**
+	 * [in/out] session_client_id
+	 *
+	 * Session_client_id is unique per client.
+	 *
+	 * Session_client_id is composed of the fw_session_id and the
+	 * fw_session_client_id. The construction is done by parsing
+	 * the ctrl_chan_name together with the allocation of a
+	 * fw_session_client_id during tf_open_session().
+	 *
+	 * A reference count will be incremented in the session on
+	 * which a client is created.
+	 *
+	 * A session can only be closed once a single Session Client
+	 * is left. Session Clients should be closed using
+	 * tf_close_session().
+	 */
+	union tf_session_client_id session_client_id;
 	/**
 	 * [in] device type
 	 *
-	 * Device type is passed, one of Wh+, SR, Thor, SR2
+	 * Device type for the session.
 	 */
 	enum tf_device_type device_type;
-	/** [in] resources
+	/**
+	 * [in] resources
 	 *
-	 * Resource allocation
+	 * Resource allocation for the session.
 	 */
 	struct tf_session_resources resources;
 };
 
 /**
- * Opens a new TruFlow management session.
+ * Opens a new TruFlow Session or session client.
+ *
+ * What gets created depends on the passed in tfp content. If the tfp
+ * does not have prior session data a new session and its associated
+ * session client are created. If the tfp already has a session only a
+ * session client will be created. In both cases the session client is
+ * created using the provided ctrl_chan_name.
  *
- * TruFlow will allocate session specific memory, shared memory, to
- * hold its session data. This data is private to TruFlow.
+ * In case of session creation TruFlow will allocate session specific
+ * memory, shared memory, to hold its session data. This data is
+ * private to TruFlow.
  *
- * Multiple PFs can share the same session. An association, refcount,
- * between session and PFs is maintained within TruFlow. Thus, a PF
- * can attach to an existing session, see tf_attach_session().
+ * No other TruFlow APIs will succeed unless this API is first called
+ * and succeeds.
  *
- * No other TruFlow APIs will succeed unless this API is first called and
- * succeeds.
+ * tf_open_session() returns a session id and session client id that
+ * is used on all other TF APIs.
  *
- * tf_open_session() returns a session id that can be used on attach.
+ * A Session or session client can be closed using tf_close_session().
  *
  * [in] tfp
  *   Pointer to TF handle
+ *
  * [in] parms
  *   Pointer to open parameters
  *
@@ -546,6 +586,11 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+/**
+ * Experimental
+ *
+ * tf_attach_session parameters definition.
+ */
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -595,15 +640,18 @@ struct tf_attach_session_parms {
 };
 
 /**
- * Attaches to an existing session. Used when more than one PF wants
- * to share a single session. In that case all TruFlow management
- * traffic will be sent to the TruFlow firmware using the 'PF' that
- * did the attach not the session ctrl channel.
+ * Experimental
+ *
+ * Allows a 2nd application instance to attach to an existing
+ * session. Used when a session is to be shared between two processes.
  *
  * Attach will increment a ref count as to manage the shared session data.
  *
- * [in] tfp, pointer to TF handle
- * [in] parms, pointer to attach parameters
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to attach parameters
  *
  * Returns
  *   - (0) if successful.
@@ -613,9 +661,15 @@ int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
 
 /**
- * Closes an existing session. Cleans up all hardware and firmware
- * state associated with the TruFlow application session when the last
- * PF associated with the session results in refcount to be zero.
+ * Closes an existing session client or the session itself. The
+ * session client is closed by default; if the session reference count
+ * then reaches 0 the session is closed as well.
+ *
+ * On session close all hardware and firmware state associated with
+ * the TruFlow application is cleaned up.
+ *
+ * The session client is extracted from the tfp. Thus tf_close_session()
+ * cannot close a session client on behalf of another function.
  *
  * Returns success or failure code.
  */
@@ -1056,9 +1110,10 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_set_tbl_entry
  *
  * @ref tf_get_tbl_entry
+ *
+ * @ref tf_bulk_get_tbl_entry
  */
 
-
 /**
  * tf_alloc_tbl_entry parameter definition
  */
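
To illustrate the reworked open semantics (sketch only, not part of the
patch): a second function joins an already opened session by calling
tf_open_session() again with its own control channel name; whether a
session or only a session client gets created is decided from the tfp
contents, as described above. The ctrl_chan_name array, the PCI-style
name and join_shared_session() itself are assumptions for illustration:

    #include <stdio.h>
    #include <string.h>
    #include "tf_core.h"

    /* Register this function as an additional client on a session that
     * another function has already opened (tfp->session != NULL).
     */
    static int join_shared_session(struct tf *tfp)
    {
            struct tf_open_session_parms oparms = { 0 };
            int rc;

            strcpy(oparms.ctrl_chan_name, "0000:02:00.1"); /* this client's ctrl channel */
            oparms.device_type = TF_DEVICE_TYPE_WH;

            rc = tf_open_session(tfp, &oparms);
            if (rc == 0)
                    printf("client %d registered on session %d\n",
                           oparms.session_client_id.id, oparms.session_id.id);
            return rc;
    }
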
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 6600a14c8..8c2dff8ad 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -84,7 +84,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
-		    uint8_t *fw_session_id)
+		    uint8_t *fw_session_id,
+		    uint8_t *fw_session_client_id)
 {
 	int rc;
 	struct hwrm_tf_session_open_input req = { 0 };
@@ -106,7 +107,8 @@ tf_msg_session_open(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	*fw_session_id = resp.fw_session_id;
+	*fw_session_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
+	*fw_session_client_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
 
 	return rc;
 }
@@ -119,6 +121,84 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 	return -1;
 }
 
+int
+tf_msg_session_client_register(struct tf *tfp,
+			       char *ctrl_channel_name,
+			       uint8_t *fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_register_input req = { 0 };
+	struct hwrm_tf_session_register_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	tfp_memcpy(&req.session_client_name,
+		   ctrl_channel_name,
+		   TF_SESSION_NAME_MAX);
+
+	parms.tf_type = HWRM_TF_SESSION_REGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*fw_session_client_id =
+		(uint8_t)tfp_le_to_cpu_32(resp.fw_session_client_id);
+
+	return rc;
+}
+
+int
+tf_msg_session_client_unregister(struct tf *tfp,
+				 uint8_t fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_unregister_input req = { 0 };
+	struct hwrm_tf_session_unregister_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.fw_session_client_id = tfp_cpu_to_le_32(fw_session_client_id);
+
+	parms.tf_type = HWRM_TF_SESSION_UNREGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+
+	return rc;
+}
+
 int
 tf_msg_session_close(struct tf *tfp)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 37f291016..c02a5203c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -34,7 +34,8 @@ struct tf;
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
-			uint8_t *fw_session_id);
+			uint8_t *fw_session_id,
+			uint8_t *fw_session_client_id);
 
 /**
  * Sends session close request to Firmware
@@ -42,6 +43,9 @@ int tf_msg_session_open(struct tf *tfp,
  * [in] session
  *   Pointer to session handle
  *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
  * [in] fw_session_id
  *   Pointer to the fw_session_id that is assigned to the session at
  *   time of session open
@@ -53,6 +57,42 @@ int tf_msg_session_attach(struct tf *tfp,
 			  char *ctrl_channel_name,
 			  uint8_t tf_fw_session_id);
 
+/**
+ * Sends session client register request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
+ * [in/out] fw_session_client_id
+ *   Pointer to the fw_session_client_id that is allocated on firmware
+ *   side
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_register(struct tf *tfp,
+				   char *ctrl_channel_name,
+				   uint8_t *fw_session_client_id);
+
+/**
+ * Sends session client unregister request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in/out] fw_session_client_id
+ *   Pointer to the fw_session_client_id that is allocated on firmware
+ *   side
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_unregister(struct tf *tfp,
+				     uint8_t fw_session_client_id);
+
 /**
  * Sends session close request to Firmware
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 30313e2ea..fdb87ecb8 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -389,7 +389,7 @@ tf_rm_create_db(struct tf *tfp,
 	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 6c07e4745..fa580a6a0 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -12,14 +12,49 @@
 #include "tf_msg.h"
 #include "tfp.h"
 
-int
-tf_session_open_session(struct tf *tfp,
-			struct tf_session_open_session_parms *parms)
+struct tf_session_client_create_parms {
+	/**
+	 * [in] Pointer to the control channel name string
+	 */
+	char *ctrl_chan_name;
+
+	/**
+	 * [out] Firmware Session Client ID
+	 */
+	union tf_session_client_id *session_client_id;
+};
+
+struct tf_session_client_destroy_parms {
+	/**
+	 * FW Session Client Identifier
+	 */
+	union tf_session_client_id session_client_id;
+};
+
+/**
+ * Creates a Session and the associated client.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if the maximum number of session clients has been reached.
+ */
+static int
+tf_session_create(struct tf *tfp,
+		  struct tf_session_open_session_parms *parms)
 {
 	int rc;
 	struct tf_session *session;
+	struct tf_session_client *client;
 	struct tfp_calloc_parms cparms;
 	uint8_t fw_session_id;
+	uint8_t fw_session_client_id;
 	union tf_session_id *session_id;
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -27,7 +62,8 @@ tf_session_open_session(struct tf *tfp,
 	/* Open FW session and get a new session_id */
 	rc = tf_msg_session_open(tfp,
 				 parms->open_cfg->ctrl_chan_name,
-				 &fw_session_id);
+				 &fw_session_id,
+				 &fw_session_client_id);
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
@@ -92,15 +128,46 @@ tf_session_open_session(struct tf *tfp,
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
 	session->session_id.internal.fw_session_id = fw_session_id;
-	/* Return the allocated fw session id */
-	session_id->internal.fw_session_id = fw_session_id;
+	/* Return the allocated session id */
+	session_id->id = session->session_id.id;
 
 	session->shadow_copy = parms->open_cfg->shadow_copy;
 
-	tfp_memcpy(session->ctrl_chan_name,
+	/* Init session client list */
+	ll_init(&session->client_ll);
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	client->session_client_id.internal.fw_session_id = fw_session_id;
+	client->session_client_id.internal.fw_session_client_id =
+		fw_session_client_id;
+
+	tfp_memcpy(client->ctrl_chan_name,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
+	ll_insert(&session->client_ll, &client->ll_entry);
+	session->ref_count++;
+
 	rc = tf_dev_bind(tfp,
 			 parms->open_cfg->device_type,
 			 session->shadow_copy,
@@ -110,7 +177,7 @@ tf_session_open_session(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	session->ref_count++;
+	session->dev_init = true;
 
 	return 0;
 
@@ -121,6 +188,235 @@ tf_session_open_session(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Creates a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session client create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if the maximum number of session clients has been reached.
+ */
+static int
+tf_session_client_create(struct tf *tfp,
+			 struct tf_session_client_create_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tf_session_client *client;
+	struct tfp_calloc_parms cparms;
+	union tf_session_client_id session_client_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &session);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	client = tf_session_find_session_client_by_name(session,
+							parms->ctrl_chan_name);
+	if (client) {
+		TFP_DRV_LOG(ERR,
+			    "Client %s, already registered with this session\n",
+			    parms->ctrl_chan_name);
+		return -EOPNOTSUPP;
+	}
+
+	rc = tf_msg_session_client_register
+		    (tfp,
+		    parms->ctrl_chan_name,
+		    &session_client_id.internal.fw_session_client_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to create client on session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	/* Build the Session Client ID by adding the fw_session_id */
+	rc = tf_session_get_fw_session_id
+			(tfp,
+			&session_client_id.internal.fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session Firmware id lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tfp_memcpy(client->ctrl_chan_name,
+		   parms->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	client->session_client_id.id = session_client_id.id;
+
+	ll_insert(&session->client_ll, &client->ll_entry);
+
+	session->ref_count++;
+
+	/* Build the return value */
+	parms->session_client_id->id = session_client_id.id;
+
+ cleanup:
+	/* TBD - Add code to unregister the newly created client from fw */
+
+	return rc;
+}
+
+
+/**
+ * Destroys a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session client destroy parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOTFOUND) error, client not owned by the session.
+ *   - (-ENOTSUPP) error, unable to destroy client as it is the last
+ *                 client. Please use tf_session_close() instead.
+ */
+static int
+tf_session_client_destroy(struct tf *tfp,
+			  struct tf_session_client_destroy_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_session_client *client;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Check session owns this client and that we're not the last client */
+	client = tf_session_get_session_client(tfs,
+					       parms->session_client_id);
+	if (client == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "Client %d, not found within this session\n",
+			    parms->session_client_id.id);
+		return -EINVAL;
+	}
+
+	/* If this is the last client the request is rejected and
+	 * cleanup should be done by session close.
+	 */
+	if (tfs->ref_count == 1)
+		return -EOPNOTSUPP;
+
+	rc = tf_msg_session_client_unregister
+			(tfp,
+			parms->session_client_id.internal.fw_session_client_id);
+
+	/* Log error, but continue. If FW fails we do not really have
+	 * a way to fix this but the client would no longer be valid
+	 * thus we remove from the session.
+	 */
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Client destroy on FW Failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+
+	/* Decrement the session ref_count */
+	tfs->ref_count--;
+
+	tfp_free(client);
+
+	return rc;
+}
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session_client_create_parms scparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Decide if we're creating a new session or session client */
+	if (tfp->session == NULL) {
+		rc = tf_session_create(tfp, parms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to create session, ctrl_chan_name:%s, rc:%s\n",
+				    parms->open_cfg->ctrl_chan_name,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+		       "Session created, session_client_id:%d, session_id:%d\n",
+		       parms->open_cfg->session_client_id.id,
+		       parms->open_cfg->session_id.id);
+	} else {
+		scparms.ctrl_chan_name = parms->open_cfg->ctrl_chan_name;
+		scparms.session_client_id = &parms->open_cfg->session_client_id;
+
+		/* Create the new client and get it associated with
+		 * the session.
+		 */
+		rc = tf_session_client_create(tfp, &scparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+			      "Failed to create client on session %d, rc:%s\n",
+			      parms->open_cfg->session_id.id,
+			      strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Session Client:%d created on session:%d\n",
+			    parms->open_cfg->session_client_id.id,
+			    parms->open_cfg->session_id.id);
+	}
+
+	return 0;
+}
+
 int
 tf_session_attach_session(struct tf *tfp __rte_unused,
 			  struct tf_session_attach_session_parms *parms __rte_unused)
@@ -141,7 +437,10 @@ tf_session_close_session(struct tf *tfp,
 {
 	int rc;
 	struct tf_session *tfs;
+	struct tf_session_client *client;
 	struct tf_dev_info *tfd;
+	struct tf_session_client_destroy_parms scdparms;
+	uint16_t fid;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -161,7 +460,49 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	tfs->ref_count--;
+	/* Get the client, we need it independently of the closure
+	 * type (client or session closure).
+	 *
+	 * We find the client by way of the fid. Thus one cannot close
+	 * a client on behalf of someone else.
+	 */
+	rc = tfp_get_fid(tfp, &fid);
+	if (rc)
+		return rc;
+
+	client = tf_session_find_session_client_by_fid(tfs,
+						       fid);
+	/* In case of multiple clients we choose to close those first */
+	if (tfs->ref_count > 1) {
+		/* Linaro gcc can't static init this structure */
+		memset(&scdparms,
+		       0,
+		       sizeof(struct tf_session_client_destroy_parms));
+
+		scdparms.session_client_id = client->session_client_id;
+		/* Destroy the requested client so it is no longer
+		 * registered with this session.
+		 */
+		rc = tf_session_client_destroy(tfp, &scdparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to unregister Client %d, rc:%s\n",
+				    client->session_client_id.id,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Closed session client, session_client_id:%d\n",
+			    client->session_client_id.id);
+
+		TFP_DRV_LOG(INFO,
+			    "session_id:%d, ref_count:%d\n",
+			    tfs->session_id.id,
+			    tfs->ref_count);
+
+		return 0;
+	}
 
 	/* Record the session we're closing so the caller knows the
 	 * details.
@@ -176,23 +517,6 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	if (tfs->ref_count > 0) {
-		/* In case we're attached only the session client gets
-		 * closed.
-		 */
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "FW Session close failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		return 0;
-	}
-
-	/* Final cleanup as we're last user of the session */
-
 	/* Unbind the device */
 	rc = tf_dev_unbind(tfp, tfd);
 	if (rc) {
@@ -202,7 +526,6 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
 		/* Log error */
@@ -211,6 +534,21 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
+	/* Final cleanup as we're the last user of the session, thus we
+	 * also delete the last client.
+	 */
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+	tfp_free(client);
+
+	tfs->ref_count--;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    tfs->session_id.id,
+		    tfs->ref_count);
+
+	tfs->dev_init = false;
+
 	tfp_free(tfp->session->core_data);
 	tfp_free(tfp->session);
 	tfp->session = NULL;
@@ -218,12 +556,31 @@ tf_session_close_session(struct tf *tfp,
 	return 0;
 }
 
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return true;
+	}
+
+	return false;
+}
+
 int
-tf_session_get_session(struct tf *tfp,
-		       struct tf_session **tfs)
+tf_session_get_session_internal(struct tf *tfp,
+				struct tf_session **tfs)
 {
-	int rc;
+	int rc = 0;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -234,7 +591,113 @@ tf_session_get_session(struct tf *tfp,
 
 	*tfs = (struct tf_session *)(tfp->session->core_data);
 
-	return 0;
+	return rc;
+}
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session **tfs)
+{
+	int rc;
+	uint16_t fw_fid;
+	bool supported = false;
+
+	rc = tf_session_get_session_internal(tfp,
+					     tfs);
+	/* Logging done by tf_session_get_session_internal */
+	if (rc)
+		return rc;
+
+	/* As session sharing among functions aka 'individual clients'
+	 * is supported we have to ensure that the client is indeed
+	 * registered before we get deep in the TruFlow api stack.
+	 */
+	rc = tfp_get_fid(tfp, &fw_fid);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Internal FID lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	supported = tf_session_is_fid_supported(*tfs, fw_fid);
+	if (!supported) {
+		TFP_DRV_LOG
+			(ERR,
+			"Ctrl channel not registered with session, rc:%s\n",
+			strerror(EINVAL));
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->session_client_id.id == session_client_id.id)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL || ctrl_chan_name == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (strncmp(client->ctrl_chan_name,
+			    ctrl_chan_name,
+			    TF_SESSION_NAME_MAX) == 0)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return client;
+	}
+
+	return NULL;
 }
 
 int
@@ -253,6 +716,7 @@ tf_session_get_fw_session_id(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -261,7 +725,15 @@ tf_session_get_fw_session_id(struct tf *tfp,
 		return rc;
 	}
 
-	rc = tf_session_get_session(tfp, &tfs);
+	if (fw_session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -269,3 +741,36 @@ tf_session_get_fw_session_id(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_session_get_session_id(struct tf *tfp,
+			  union tf_session_id *session_id)
+{
+	int rc;
+	struct tf_session *tfs;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*session_id = tfs->session_id;
+
+	return 0;
+}
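
To summarize the intended life cycle (illustrative walkthrough; tfp_a,
tfp_b and the parms variables are placeholders): ref_count now counts
registered session clients, and only the last close tears down the
device bindings and the firmware session.

    /* First opener: no session on the tfp yet, so the session and its
     * first client are created; ref_count becomes 1.
     */
    tf_open_session(tfp_a, &parms_a);

    /* Second opener sharing the same session info: only a session
     * client is registered; ref_count becomes 2.
     */
    tf_open_session(tfp_b, &parms_b);

    /* ref_count > 1: only client B is unregistered; ref_count drops to 1. */
    tf_close_session(tfp_b);

    /* Last client: device unbound, FW session closed, session memory
     * freed and the session pointer cleared.
     */
    tf_close_session(tfp_a);
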
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index a303fde51..aa7a27877 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -16,6 +16,7 @@
 #include "tf_tbl.h"
 #include "tf_resources.h"
 #include "stack.h"
+#include "ll.h"
 
 /**
  * The Session module provides session control support. A session is
@@ -29,7 +30,6 @@
 
 /** Session defines
  */
-#define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
 /**
@@ -50,7 +50,7 @@
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
  * allocations and (if so configured) any shadow copy of flow
- * information.
+ * information. It also holds info about Session Clients.
  *
  * Memory is assigned to the Truflow instance by way of
  * tf_open_session. Memory is allocated and owned by i.e. ULP.
@@ -65,17 +65,10 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Session ID, allocated by FW on tf_open_session() */
-	union tf_session_id session_id;
-
 	/**
-	 * String containing name of control channel interface to be
-	 * used for this session to communicate with firmware.
-	 *
-	 * ctrl_chan_name will be used as part of a name for any
-	 * shared memory allocation.
+	 * Session ID, allocated by FW on tf_open_session()
 	 */
-	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+	union tf_session_id session_id;
 
 	/**
 	 * Boolean controlling the use and availability of shadow
@@ -92,14 +85,67 @@ struct tf_session {
 
 	/**
 	 * Session Reference Count. To keep track of functions per
-	 * session the ref_count is incremented. There is also a
+	 * session the ref_count is updated. There is also a
 	 * parallel TruFlow Firmware ref_count in case the TruFlow
 	 * Core goes away without informing the Firmware.
 	 */
 	uint8_t ref_count;
 
-	/** Device handle */
+	/**
+	 * Session Reference Count for attached sessions. To keep
+	 * track of application sharing of a session the
+	 * ref_count_attach is updated.
+	 */
+	uint8_t ref_count_attach;
+
+	/**
+	 * Device handle
+	 */
 	struct tf_dev_info dev;
+	/**
+	 * Device init flag. False if Device is not fully initialized,
+	 * else true.
+	 */
+	bool dev_init;
+
+	/**
+	 * Linked list of clients registered for this session
+	 */
+	struct ll client_ll;
+};
+
+/**
+ * Session Client
+ *
+ * Shared memory for each of the Session Clients. A session can have
+ * one or more clients.
+ */
+struct tf_session_client {
+	/**
+	 * Linked list of clients
+	 */
+	struct ll_entry ll_entry; /* For inserting in the linked list, must be
+				   * first field of struct.
+				   */
+
+	/**
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/**
+	 * Firmware FID, learned at time of Session Client create.
+	 */
+	uint16_t fw_fid;
+
+	/**
+	 * Session Client ID, allocated by FW at session client registration
+	 */
+	union tf_session_client_id session_client_id;
 };
 
 /**
@@ -126,7 +172,13 @@ struct tf_session_attach_session_parms {
  * Session close parameter definition
  */
 struct tf_session_close_session_parms {
+	/**
+	 * [out] Pointer to the ref count of the closed session
+	 */
 	uint8_t *ref_count;
+	/**
+	 * [out] Pointer to the session id of the closed session
+	 */
 	union tf_session_id *session_id;
 };
 
@@ -139,11 +191,23 @@ struct tf_session_close_session_parms {
  *
  * @ref tf_session_close_session
  *
+ * @ref tf_session_is_fid_supported
+ *
+ * @ref tf_session_get_session_internal
+ *
  * @ref tf_session_get_session
  *
+ * @ref tf_session_get_session_client
+ *
+ * @ref tf_session_find_session_client_by_name
+ *
+ * @ref tf_session_find_session_client_by_fid
+ *
  * @ref tf_session_get_device
  *
  * @ref tf_session_get_fw_session_id
+ *
+ * @ref tf_session_get_session_id
  */
 
 /**
@@ -179,7 +243,8 @@ int tf_session_attach_session(struct tf *tfp,
 			      struct tf_session_attach_session_parms *parms);
 
 /**
- * Closes a previous created session.
+ * Closes a previously created session. Only possible if previously
+ * registered Clients have been unregistered first.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -189,13 +254,53 @@ int tf_session_attach_session(struct tf *tfp,
  *
  * Returns
  *   - (0) if successful.
+ *   - (-EUSERS) if clients are still registered with the session.
  *   - (-EINVAL) on failure.
  */
 int tf_session_close_session(struct tf *tfp,
 			     struct tf_session_close_session_parms *parms);
 
 /**
- * Looks up the private session information from the TF session info.
+ * Verifies that the fid is supported by the session. Used to ensure
+ * that a function i.e. client/control channel is registered with the
+ * session.
+ *
+ * [in] tfs
+ *   Pointer to TF Session handle
+ *
+ * [in] fid
+ *   FID value to check
+ *
+ * Returns
+ *   - (true) if the fid is registered with the session
+ *   - (false) otherwise
+ */
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Does not perform a fid check against the registered
+ * clients. Should be used if tf_session_get_session() was used
+ * previously i.e. at the TF API boundary.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer to a pointer to the session
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_internal(struct tf *tfp,
+				    struct tf_session **tfs);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Performs a fid check against the clients on the session.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -210,6 +315,53 @@ int tf_session_close_session(struct tf *tfp,
 int tf_session_get_session(struct tf *tfp,
 			   struct tf_session **tfs);
 
+/**
+ * Looks up client within the session.
+ *
+ * [in] tfs
+ *   Pointer to a pointer to the session
+ *
+ * [in] session_client_id
+ *   Client id to look for within the session
+ *
+ * Returns
+ *   client if successful.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id);
+
+/**
+ * Looks up client using name within the session.
+ *
+ * [in] tfs, pointer to the session
+ *
+ * [in] ctrl_chan_name, name of the client to look up in the session
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name);
+
+/**
+ * Looks up client using the fid.
+ *
+ * [in] tfs, pointer to the session
+ *
+ * [in] fid, fid of the client to find
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid);
+
 /**
  * Looks up the device information from the TF Session.
  *
@@ -227,8 +379,7 @@ int tf_session_get_device(struct tf_session *tfs,
 			  struct tf_dev_info **tfd);
 
 /**
- * Looks up the FW session id of the firmware connection for the
- * requested TF handle.
+ * Looks up the FW Session id of the requested TF handle.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -243,4 +394,20 @@ int tf_session_get_device(struct tf_session *tfs,
 int tf_session_get_fw_session_id(struct tf *tfp,
 				 uint8_t *fw_session_id);
 
+/**
+ * Looks up the Session id of the requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] session_id
+ *   Pointer to the session_id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_id(struct tf *tfp,
+			      union tf_session_id *session_id);
+
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 7d4daaf2d..2b4a7c561 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -269,6 +269,7 @@ tf_tbl_set(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -338,6 +339,7 @@ tf_tbl_get(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 1c48b5363..cbfaa94ee 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -138,7 +138,7 @@ tf_tcam_alloc(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -218,7 +218,7 @@ tf_tcam_free(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -319,6 +319,7 @@ tf_tcam_free(struct tf *tfp,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -353,7 +354,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -415,6 +416,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 69d1c9a1f..426a182a9 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -161,3 +161,20 @@ tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
 {
 	rte_spinlock_unlock(&parms->slock);
 }
+
+int
+tfp_get_fid(struct tf *tfp, uint16_t *fw_fid)
+{
+	struct bnxt *bp = NULL;
+
+	if (tfp == NULL || fw_fid == NULL)
+		return -EINVAL;
+
+	bp = container_of(tfp, struct bnxt, tfp);
+	if (bp == NULL)
+		return -EINVAL;
+
+	*fw_fid = bp->fw_fid;
+
+	return 0;
+}
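
tfp_get_fid() relies on the TF handle being embedded in the bnxt
per-port structure as the member named tfp, which is what the
container_of() call implies. Roughly, the lookup reduces to the
following (sketch only):

    /* Recover the enclosing bnxt by subtracting the offset of the
     * embedded tf member from the member pointer, then read its fid.
     */
    struct bnxt *bp =
            (struct bnxt *)((char *)tfp - offsetof(struct bnxt, tfp));

    *fw_fid = bp->fw_fid;
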
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index fe49b6304..8789eba1f 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -238,4 +238,19 @@ int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
 #define tfp_bswap_32(val) rte_bswap32(val)
 #define tfp_bswap_64(val) rte_bswap64(val)
 
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
 #endif /* _TFP_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 30/51] net/bnxt: add global config set and get APIs
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (28 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
                     ` (21 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Add support to update global configuration for ACT_TECT
  and ACT_ABCR.
- Add support to allow Tunnel and Action global configuration.
- Remove the register read and write operations.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/hcapi_cfa.h       |   3 +
 drivers/net/bnxt/meson.build             |   1 +
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  54 +++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 137 ++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       |  77 +++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  20 +++
 drivers/net/bnxt/tf_core/tf_device.h     |  33 ++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   4 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   5 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c | 199 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_global_cfg.h | 170 +++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 109 ++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |  31 ++++
 14 files changed, 840 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h

diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index 7a67493bd..3d895f088 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -245,6 +245,9 @@ int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
 			     struct hcapi_cfa_data *mirror);
+int hcapi_cfa_p4_global_cfg_hwop(struct hcapi_cfa_hwop *op,
+				 uint32_t type,
+				 struct hcapi_cfa_data *config);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 54564e02e..ace7353be 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -45,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
+	'tf_core/tf_global_cfg.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 6210bc70e..202db4150 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -27,6 +27,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_global_cfg.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
@@ -36,3 +37,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_global_cfg.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 32f152314..7ade9927a 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -13,8 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_REG_GET = 821,
-	HWRM_TFT_REG_SET = 822,
+	HWRM_TFT_GET_GLOBAL_CFG = 821,
+	HWRM_TFT_SET_GLOBAL_CFG = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	HWRM_TFT_IF_TBL_SET = 827,
 	HWRM_TFT_IF_TBL_GET = 828,
@@ -66,18 +66,66 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
+struct tf_set_global_cfg_input;
+struct tf_get_global_cfg_input;
+struct tf_get_global_cfg_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
 struct tf_if_tbl_set_input;
 struct tf_if_tbl_get_input;
 struct tf_if_tbl_get_output;
+/* Input params for global config set */
+typedef struct tf_set_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config type */
+	uint32_t			 type;
+	/* Offset of the type */
+	uint32_t			 offset;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+	/* Data to set */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_set_global_cfg_input_t, *ptf_set_global_cfg_input_t;
+
+/* Input params for global config get */
+typedef struct tf_get_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config to retrieve */
+	uint32_t			 type;
+	/* Offset to retrieve */
+	uint32_t			 offset;
+	/* Size of the data to get in bytes */
+	uint16_t			 size;
+} tf_get_global_cfg_input_t, *ptf_get_global_cfg_input_t;
+
+/* Output params for global config */
+typedef struct tf_get_global_cfg_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+	/* Data to get */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_get_global_cfg_output_t, *ptf_get_global_cfg_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
-	uint16_t			 flags;
+	uint32_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
 #define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 489c461d1..0f119b45f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_em.h"
 #include "tf_rm.h"
+#include "tf_global_cfg.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "bitalloc.h"
@@ -277,6 +278,142 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
+/** Get global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_get_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_get_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+/** Set global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_set_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_set_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_identifier(struct tf *tfp,
 		    struct tf_alloc_identifier_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index fea222bee..3f54ab16b 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1611,6 +1611,83 @@ int tf_delete_em_entry(struct tf *tfp,
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
+/**
+ * @page global Global Configuration
+ *
+ * @ref tf_set_global_cfg
+ *
+ * @ref tf_get_global_cfg
+ */
+/**
+ * Tunnel Encapsulation Offsets
+ */
+enum tf_tunnel_encap_offsets {
+	TF_TUNNEL_ENCAP_L2,
+	TF_TUNNEL_ENCAP_NAT,
+	TF_TUNNEL_ENCAP_MPLS,
+	TF_TUNNEL_ENCAP_VXLAN,
+	TF_TUNNEL_ENCAP_GENEVE,
+	TF_TUNNEL_ENCAP_NVGRE,
+	TF_TUNNEL_ENCAP_GRE,
+	TF_TUNNEL_ENCAP_FULL_GENERIC
+};
+/**
+ * Global Configuration Table Types
+ */
+enum tf_global_config_type {
+	TF_TUNNEL_ENCAP,  /**< Tunnel Encap Config(TECT) */
+	TF_ACTION_BLOCK,  /**< Action Block Config(ABCR) */
+	TF_GLOBAL_CFG_TYPE_MAX
+};
+
+/**
+ * tf_global_cfg parameter definition
+ */
+struct tf_global_cfg_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Offset within the type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Value of the configuration
+	 * set - Read, Modify and Write
+	 * get - Read the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] Size of the configuration data in bytes
+	 */
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * Get global configuration
+ *
+ * Retrieve the configuration
+ *
+ * Returns success or failure code.
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
+/**
+ * Update the global configuration table
+ *
+ * Performs a read-modify-write of the value.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
 /**
  * @page if_tbl Interface Table Access
  *
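
For illustration (not part of the patch), a caller updating a tunnel
encap global configuration fills tf_global_cfg_parms, reads the current
value, adjusts it and writes it back. The 8-byte size, the modified bit
and the use of the tunnel encap offsets enum for 'offset' are
assumptions here:

    uint8_t cfg[8] = { 0 };
    struct tf_global_cfg_parms gparms = { 0 };
    int rc;

    gparms.dir = TF_DIR_TX;
    gparms.type = TF_TUNNEL_ENCAP;
    gparms.offset = TF_TUNNEL_ENCAP_VXLAN;
    gparms.config = cfg;
    gparms.config_sz_in_bytes = sizeof(cfg);

    rc = tf_get_global_cfg(tfp, &gparms);    /* read the current value */
    if (rc == 0) {
            cfg[0] |= 0x1;                   /* modify (placeholder bit) */
            rc = tf_set_global_cfg(tfp, &gparms);
    }
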
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index a3073c826..ead958418 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -45,6 +45,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
 	struct tf_if_tbl_cfg_parms if_tbl_cfg;
+	struct tf_global_cfg_cfg_parms global_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -128,6 +129,18 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * GLOBAL_CFG
+	 */
+	global_cfg.num_elements = TF_GLOBAL_CFG_TYPE_MAX;
+	global_cfg.cfg = tf_global_cfg_p4;
+	rc = tf_global_cfg_bind(tfp, &global_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -207,6 +220,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_global_cfg_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Global Cfg Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 5a0943ad7..1740a271f 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 struct tf_session;
@@ -606,6 +607,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_if_tbl)(struct tf *tfp,
 				 struct tf_if_tbl_get_parms *parms);
+
+	/**
+	 * Update global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_set_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
+
+	/**
+	 * Get global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_get_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 2dc34b853..652608264 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -108,6 +108,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_free_tbl_scope = NULL,
 	.tf_dev_set_if_tbl = NULL,
 	.tf_dev_get_if_tbl = NULL,
+	.tf_dev_set_global_cfg = NULL,
+	.tf_dev_get_global_cfg = NULL,
 };
 
 /**
@@ -140,4 +142,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 	.tf_dev_set_if_tbl = tf_if_tbl_set,
 	.tf_dev_get_if_tbl = tf_if_tbl_get,
+	.tf_dev_set_global_cfg = tf_global_cfg_set,
+	.tf_dev_get_global_cfg = tf_global_cfg_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 3b03a7c4e..7fabb4ba8 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -11,6 +11,7 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -96,4 +97,8 @@ struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
 	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
 };
 
+struct tf_global_cfg_cfg tf_global_cfg_p4[TF_GLOBAL_CFG_TYPE_MAX] = {
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_TUNNEL_ENCAP },
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_ACTION_BLOCK },
+};
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.c b/drivers/net/bnxt/tf_core/tf_global_cfg.c
new file mode 100644
index 000000000..4ed4039db
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.c
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_global_cfg.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+/**
+ * Global Cfg DBs.
+ */
+static void *global_cfg_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_global_cfg_get_hcapi_parms {
+	/**
+	 * [in] Global Cfg DB Handle
+	 */
+	void *global_cfg_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Check the global cfg type and return the corresponding HCAPI type.
+ *
+ * [in/out] parms
+ *   Pointer to lookup parameters; hcapi_type is written on success
+ *
+ * Returns:
+ *    0        - Success
+ *   -ENOTSUP  - Type not supported
+ */
+static int
+tf_global_cfg_get_hcapi_type(struct tf_global_cfg_get_hcapi_parms *parms)
+{
+	struct tf_global_cfg_cfg *global_cfg;
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	global_cfg = (struct tf_global_cfg_cfg *)parms->global_cfg_db;
+	cfg_type = global_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_GLOBAL_CFG_CFG_HCAPI)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = global_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_global_cfg_bind(struct tf *tfp __rte_unused,
+		   struct tf_global_cfg_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg DB already initialized\n");
+		return -EINVAL;
+	}
+
+	global_cfg_db[TF_DIR_RX] = parms->cfg;
+	global_cfg_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "Global Cfg - initialized\n");
+
+	return 0;
+}
+
+int
+tf_global_cfg_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Global Cfg DBs created\n");
+		return 0;
+	}
+
+	global_cfg_db[TF_DIR_RX] = NULL;
+	global_cfg_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_global_cfg_set(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_msg_set_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
+int
+tf_global_cfg_get(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.h b/drivers/net/bnxt/tf_core/tf_global_cfg.h
new file mode 100644
index 000000000..5c73bb115
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_GLOBAL_CFG_H_
+#define TF_GLOBAL_CFG_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/**
+ * The global cfg module provides processing of global cfg types.
+ */
+
+struct tf;
+
+/**
+ * Global cfg configuration enumeration.
+ */
+enum tf_global_cfg_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_GLOBAL_CFG_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_GLOBAL_CFG_CFG_HCAPI,
+};
+
+/**
+ * Global cfg configuration structure, used by the device layer to
+ * describe how an individual global cfg type maps to its HCAPI type.
+ */
+struct tf_global_cfg_cfg {
+	/**
+	 * Global cfg config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Global Cfg configuration parameters
+ */
+struct tf_global_cfg_cfg_parms {
+	/**
+	 * Number of table types in the configuration array
+	 * Number of global cfg types in the configuration array
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 * Global cfg type element configuration array
+	struct tf_global_cfg_cfg *cfg;
+};
+
+/**
+ * global cfg parameters
+ */
+struct tf_dev_global_cfg_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Byte offset into the configuration block for this type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Pointer to the configuration data
+	 * set - value is read, modified and written back
+	 * get - buffer that receives the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] Size of the configuration data, in bytes
+	 */
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * @page global_cfg Global Configuration
+ *
+ * @ref tf_global_cfg_bind
+ *
+ * @ref tf_global_cfg_unbind
+ *
+ * @ref tf_global_cfg_set
+ *
+ * @ref tf_global_cfg_get
+ *
+ */
+/**
+ * Initializes the Global Cfg module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to Global Cfg configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_bind(struct tf *tfp,
+		   struct tf_global_cfg_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_unbind(struct tf *tfp);
+
+/**
+ * Updates the global configuration table
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_set(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+/**
+ * Get global configuration
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_get(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+#endif /* TF_GLOBAL_CFG_H */
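
A condensed sketch of the intended lifecycle (illustrative; it assumes the
P4 mapping table tf_global_cfg_p4 from this patch and that TF_ACTION_BLOCK
is a valid tf_global_config_type): the device bind registers the
device-specific type table once, after which the device ops funnel set/get
requests through this module.

/* Sketch: bind once per device, then set/get via the module. */
static int example_global_cfg_lifecycle(struct tf *tfp)
{
	struct tf_global_cfg_cfg_parms cfg_parms = { 0 };
	struct tf_dev_global_cfg_parms parms = { 0 };
	uint8_t data[4] = { 0 };
	int rc;

	cfg_parms.num_elements = TF_GLOBAL_CFG_TYPE_MAX;
	cfg_parms.cfg = tf_global_cfg_p4;     /* device-specific mapping */
	rc = tf_global_cfg_bind(tfp, &cfg_parms);
	if (rc)
		return rc;

	parms.dir = TF_DIR_TX;
	parms.type = TF_ACTION_BLOCK;         /* assumed global cfg type */
	parms.config = data;
	parms.config_sz_in_bytes = sizeof(data);
	rc = tf_global_cfg_set(tfp, &parms);

	tf_global_cfg_unbind(tfp);
	return rc;
}
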
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 8c2dff8ad..035c0948d 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -991,6 +991,111 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 /* HWRM Tunneled messages */
 
+int
+tf_msg_get_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_get_global_cfg_input_t req = { 0 };
+	tf_get_global_cfg_output_t resp = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+	uint16_t resp_size = 0;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_GET_GLOBAL_CFG,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	resp_size = tfp_le_to_cpu_16(resp.size);
+	if (resp_size < params->config_sz_in_bytes)
+		return -EINVAL;
+
+	if (params->config)
+		tfp_memcpy(params->config,
+			   resp.data,
+			   params->config_sz_in_bytes);
+	else
+		return -EFAULT;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_set_global_cfg_input_t req = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	tfp_memcpy(req.data, params->config,
+		   params->config_sz_in_bytes);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SET_GLOBAL_CFG,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  enum tf_dir dir,
@@ -1066,8 +1171,8 @@ tf_msg_get_if_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
-		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX);
 
 	/* Populate the request */
 	req.fw_session_id =
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index c02a5203c..195710eb8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_tcam.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 
@@ -448,6 +449,36 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 
 /* HWRM Tunneled messages */
 
+/**
+ * Sends global cfg read request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to read parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_get_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
+/**
+ * Sends global cfg update request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to write parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_set_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
 /**
  * Sends bulk get message of a Table Type element to the firmware.
  *
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 31/51] net/bnxt: add support for EEM System memory
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (29 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
                     ` (20 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Randy Schacher, Venkat Duvvuru

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Select EEM Host or System memory via config parameter
- Add EEM system memory support for kernel memory
- Dependent on DPDK changes that add support for the HWRM_OEM_CMD.

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 config/common_base                      |   1 +
 drivers/net/bnxt/Makefile               |   3 +
 drivers/net/bnxt/bnxt.h                 |   8 +
 drivers/net/bnxt/bnxt_hwrm.c            |  27 +
 drivers/net/bnxt/bnxt_hwrm.h            |   1 +
 drivers/net/bnxt/meson.build            |   2 +-
 drivers/net/bnxt/tf_core/Makefile       |   5 +-
 drivers/net/bnxt/tf_core/tf_core.c      |  13 +-
 drivers/net/bnxt/tf_core/tf_core.h      |   4 +-
 drivers/net/bnxt/tf_core/tf_device.c    |   5 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |   2 +-
 drivers/net/bnxt/tf_core/tf_em.h        | 113 +---
 drivers/net/bnxt/tf_core/tf_em_common.c | 683 ++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_common.h |  30 ++
 drivers/net/bnxt/tf_core/tf_em_host.c   | 689 +-----------------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 541 ++++++++++++++++---
 drivers/net/bnxt/tf_core/tf_if_tbl.h    |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c       |  24 +
 drivers/net/bnxt/tf_core/tf_tbl.h       |   7 +
 drivers/net/bnxt/tf_core/tfp.c          |  12 +
 drivers/net/bnxt/tf_core/tfp.h          |  15 +
 21 files changed, 1319 insertions(+), 870 deletions(-)
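
As a quick orientation before the diffs, a minimal sketch of the
compile-time selection this option drives (CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM=y
makes the Makefile add -DTF_USE_SYSTEM_MEM; names follow the hunks below):

#include <stdio.h>

/* Sketch only: -DTF_USE_SYSTEM_MEM steers the EEM binding in
 * tf_dev_bind_p4() toward system (kernel) memory instead of host memory.
 */
static const char *eem_mem_backing(void)
{
#ifdef TF_USE_SYSTEM_MEM
	return "system (kernel) memory";
#else
	return "host memory";
#endif
}

int main(void)
{
	printf("EEM tables backed by %s\n", eem_mem_backing());
	return 0;
}
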

diff --git a/config/common_base b/config/common_base
index fe30c515e..370a48f02 100644
--- a/config/common_base
+++ b/config/common_base
@@ -220,6 +220,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
 # Compile burst-oriented Broadcom BNXT PMD driver
 #
 CONFIG_RTE_LIBRTE_BNXT_PMD=y
+CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM=n
 
 #
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 349b09c36..6b9544b5d 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -50,6 +50,9 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
 include $(SRCDIR)/hcapi/Makefile
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), y)
+CFLAGS += -DTF_USE_SYSTEM_MEM
+endif
 endif
 
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 65862abdc..43e5e7162 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -563,6 +563,13 @@ struct bnxt_rep_info {
 				     DEV_RX_OFFLOAD_SCATTER | \
 				     DEV_RX_OFFLOAD_RSS_HASH)
 
+#define  MAX_TABLE_SUPPORT 4
+#define  MAX_DIR_SUPPORT   2
+struct bnxt_dmabuf_info {
+	uint32_t entry_num;
+	int      fd[MAX_DIR_SUPPORT][MAX_TABLE_SUPPORT];
+};
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -780,6 +787,7 @@ struct bnxt {
 	uint16_t		port_svif;
 
 	struct tf		tfp;
+	struct bnxt_dmabuf_info dmabuf;
 	struct bnxt_ulp_context	*ulp_ctx;
 	struct bnxt_flow_stat_info *flow_stat;
 	uint8_t			flow_xstat;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index e6a28d07c..2605ef039 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -5506,3 +5506,30 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 
 	return 0;
 }
+
+#ifdef RTE_LIBRTE_BNXT_PMD_SYSTEM
+int
+bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num)
+{
+	struct hwrm_oem_cmd_input req = {0};
+	struct hwrm_oem_cmd_output *resp = bp->hwrm_cmd_resp_addr;
+	struct bnxt_dmabuf_info oem_data;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_OEM_CMD, BNXT_USE_CHIMP_MB);
+	req.IANA = 0x14e4;
+
+	memset(&oem_data, 0, sizeof(struct bnxt_dmabuf_info));
+	oem_data.entry_num = (entry_num);
+	memcpy(&req.oem_data[0], &oem_data, sizeof(struct bnxt_dmabuf_info));
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+	HWRM_CHECK_RESULT();
+
+	bp->dmabuf.entry_num = entry_num;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+#endif /* RTE_LIBRTE_BNXT_PMD_SYSTEM */
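
A hedged sketch of how the new command might be called when a system-memory
table scope is registered; the wrapper below is hypothetical, only
bnxt_hwrm_oem_cmd itself is introduced here.

#ifdef RTE_LIBRTE_BNXT_PMD_SYSTEM
/* Hypothetical caller, for illustration only (assumes the bnxt driver
 * headers); passes the requested EEM entry count to firmware through
 * the Broadcom (IANA 0x14e4) OEM command added above.
 */
static int example_register_system_mem(struct bnxt *bp, uint32_t entry_num)
{
	return bnxt_hwrm_oem_cmd(bp, entry_num);
}
#endif /* RTE_LIBRTE_BNXT_PMD_SYSTEM */
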
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 87cd40779..9e0b79904 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -276,4 +276,5 @@ int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
+int bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num);
 #endif
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index ace7353be..8f6ed419e 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -31,7 +31,6 @@ sources = files('bnxt_cpr.c',
         'tf_core/tf_em_common.c',
         'tf_core/tf_em_host.c',
         'tf_core/tf_em_internal.c',
-        'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
@@ -46,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
+	'tf_core/tf_em_host.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 202db4150..750c25c5e 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -16,8 +16,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), n)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
+else
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM) += tf_core/tf_em_system.c
+endif
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 0f119b45f..00b2775ed 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -540,10 +540,12 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_alloc_parms aparms = { 0 };
+	struct tf_tcam_alloc_parms aparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&aparms, 0, sizeof(struct tf_tcam_alloc_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -598,10 +600,13 @@ tf_set_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_set_parms sparms = { 0 };
+	struct tf_tcam_set_parms sparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&sparms, 0, sizeof(struct tf_tcam_set_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -667,10 +672,12 @@ tf_free_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_free_parms fparms = { 0 };
+	struct tf_tcam_free_parms fparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&fparms, 0, sizeof(struct tf_tcam_free_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 3f54ab16b..9e8042606 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1731,7 +1731,7 @@ struct tf_set_if_tbl_entry_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -1768,7 +1768,7 @@ struct tf_get_if_tbl_entry_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index ead958418..f08f7eba7 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -92,8 +92,11 @@ tf_dev_bind_p4(struct tf *tfp,
 	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
 	em_cfg.cfg = tf_em_ext_p4;
 	em_cfg.resources = resources;
+#ifdef TF_USE_SYSTEM_MEM
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_SYSTEM;
+#else
 	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
-
+#endif
 	rc = tf_em_ext_common_bind(tfp, &em_cfg);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 652608264..dfe626c8a 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -126,7 +126,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
-	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_common_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 1c2369c7b..45c0e1168 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -16,6 +16,9 @@
 
 #include "hcapi/hcapi_cfa_defs.h"
 
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -69,8 +72,16 @@
 #error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
+/*
+ * System memory always uses 4K pages
+ */
+#ifdef TF_USE_SYSTEM_MEM
+#define TF_EM_PAGE_SIZE (1 << TF_EM_PAGE_SIZE_4K)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SIZE_4K)
+#else
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
 #define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+#endif
 
 /*
  * Used to build GFID:
@@ -168,39 +179,6 @@ struct tf_em_cfg_parms {
  * @ref tf_em_ext_common_alloc
  */
 
-/**
- * Allocates EEM Table scope
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- *   -ENOMEM - Out of memory
- */
-int tf_alloc_eem_tbl_scope(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free's EEM Table scope control block
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			     struct tf_free_tbl_scope_parms *parms);
-
 /**
  * Insert record in to internal EM table
  *
@@ -374,8 +352,8 @@ int tf_em_ext_common_unbind(struct tf *tfp);
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+int tf_em_ext_alloc(struct tf *tfp,
+		    struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using host memory
@@ -390,40 +368,8 @@ int tf_em_ext_host_alloc(struct tf *tfp,
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
-
-/**
- * Alloc for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_alloc(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_free(struct tf *tfp,
-			  struct tf_free_tbl_scope_parms *parms);
+int tf_em_ext_free(struct tf *tfp,
+		   struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
@@ -510,8 +456,8 @@ tf_tbl_ext_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
 
 /**
  * Sets the specified external table type element.
@@ -529,26 +475,11 @@ int tf_tbl_ext_set(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
 
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data by invoking the
- * firmware.
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_system_set(struct tf *tfp,
-			  struct tf_tbl_set_parms *parms);
+int
+tf_em_ext_system_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms);
 
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 23a7fc9c2..8b02b8ba3 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -23,6 +23,8 @@
 
 #include "bnxt.h"
 
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
 /**
  * EM DBs.
@@ -281,19 +283,602 @@ tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
 		       struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
+	key_entry->hdr.pointer = result->pointer;
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
 
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
 
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * [in] page_size
+ *   EM page size to size the table against
+ *
+ * Returns:
+ *   0        - Success
+ *   -EINVAL  - Parameter error
+ *   -ENOMEM  - Out of memory
+ */
+int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		 uint32_t page_size)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(page_size,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    page_size);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, page_size,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 \
+		    " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * page_size,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Rx requested: "
+				    "%u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					(TF_KILOBYTE * TF_KILOBYTE)) /
+			(key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->rx_num_flows_in_k != 0 &&
+	    parms->rx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if (parms->tx_num_flows_in_k != 0 &&
+	    parms->tx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	return 0;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
 
 #ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
 #endif
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 =
+			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables
+			[(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
 }
 
+
 int
 tf_em_ext_common_bind(struct tf *tfp,
 		      struct tf_em_cfg_parms *parms)
@@ -341,6 +926,7 @@ tf_em_ext_common_bind(struct tf *tfp,
 		init = 1;
 
 	mem_type = parms->mem_type;
+
 	return 0;
 }
 
@@ -375,31 +961,88 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms)
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_tbl_ext_host_set(tfp, parms);
-	else
-		return tf_tbl_ext_system_set(tfp, parms);
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	return rc;
 }
 
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_alloc(tfp, parms);
-	else
-		return tf_em_ext_system_alloc(tfp, parms);
+	return tf_em_ext_alloc(tfp, parms);
 }
 
 int
 tf_em_ext_common_free(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_free(tfp, parms);
-	else
-		return tf_em_ext_system_free(tfp, parms);
+	return tf_em_ext_free(tfp, parms);
 }
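
A worked numeric example of the page-table sizing above (illustrative
figures, assuming 4 KB pages and 8-byte pointers, i.e. a 64-bit host):
1M key entries of 64 B occupy 16384 data pages, which need 32 level-1
pointer pages and a single level-0 page. A self-contained check of that
arithmetic:

#include <assert.h>
#include <stdint.h>

#define PAGE_SZ 4096u
#define PTRS    (PAGE_SZ / sizeof(void *))   /* 512 pointers per 4K page */

int main(void)
{
	uint64_t data_bytes = 1048576ull * 64;               /* 1M entries x 64 B */
	uint64_t data_pages = data_bytes / PAGE_SZ;          /* 16384 data pages  */
	uint64_t l1_pages = (data_pages + PTRS - 1) / PTRS;  /* 32 level-1 pages  */
	uint64_t l0_pages = (l1_pages + PTRS - 1) / PTRS;    /* 1 level-0 page    */

	assert(data_pages == 16384 && l1_pages == 32 && l0_pages == 1);
	return 0;
}
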
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index bf01df9b8..fa313c458 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -101,4 +101,34 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   uint32_t offset,
 			   enum hcapi_cfa_em_table_type table_type);
 
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * [in] page_size
+ *   EM page size to size the table against
+ *
+ * Returns:
+ *   0        - Success
+ *   -EINVAL  - Parameter error
+ *   -ENOMEM  - Out of memory
+ */
+int tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		     uint32_t page_size);
+
 #endif /* _TF_EM_COMMON_H_ */
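
The flow-count path of tf_em_validate_num_entries() requires an exact
power of two between 32K and 128M entries. A small, simplified
illustration of that rule (the real code also checks the per-direction
device capability limits):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define KILO  1024u
#define MIN_E (1u << 15)   /* 32K  */
#define MAX_E (1u << 27)   /* 128M */

/* Simplified mirror of the exact power-of-two check. */
static bool flows_in_k_valid(uint32_t flows_in_k)
{
	uint32_t want = flows_in_k * KILO;
	uint32_t cnt = MIN_E;

	if (want < MIN_E || want > MAX_E)
		return false;

	while (want != cnt && cnt <= MAX_E)
		cnt *= 2;

	return cnt <= MAX_E;
}

int main(void)
{
	printf("64K flows: %d\n", flows_in_k_valid(64));  /* 1: power of two */
	printf("96K flows: %d\n", flows_in_k_valid(96));  /* 0: not 2^n      */
	return 0;
}
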
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 2626a59fe..8cc92c438 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -22,7 +22,6 @@
 
 #include "bnxt.h"
 
-
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
 #define PTU_PTE_NEXT_TO_LAST   0x4UL
@@ -30,20 +29,6 @@
 /* Number of pointers per page_size */
 #define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
 /**
  * EM DBs.
  */
@@ -294,203 +279,6 @@ tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
 }
 
-/**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
 /**
  * Unregisters EM Ctx in Firmware
  *
@@ -552,7 +340,7 @@ tf_em_ctx_reg(struct tf *tfp,
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
+			rc = tf_em_size_table(tbl, TF_EM_PAGE_SIZE);
 
 			if (rc)
 				goto cleanup;
@@ -578,403 +366,8 @@ tf_em_ctx_reg(struct tf *tfp,
 	return rc;
 }
 
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->rx_num_flows_in_k != 0 &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if (parms->tx_num_flows_in_k != 0 &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
-
-	return 0;
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t mask;
-	uint32_t key0_hash;
-	uint32_t key1_hash;
-	uint32_t key0_index;
-	uint32_t key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
-							  TF_KEY0_TABLE :
-							  TF_KEY1_TABLE)];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_insert_eem_entry(tbl_scope_cb, parms);
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
 int
-tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
-}
-
-int
-tf_em_ext_host_alloc(struct tf *tfp,
-		     struct tf_alloc_tbl_scope_parms *parms)
+tf_em_ext_alloc(struct tf *tfp, struct tf_alloc_tbl_scope_parms *parms)
 {
 	int rc;
 	enum tf_dir dir;
@@ -1081,7 +474,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 cleanup_full:
 	free_parms.tbl_scope_id = parms->tbl_scope_id;
-	tf_em_ext_host_free(tfp, &free_parms);
+	tf_em_ext_free(tfp, &free_parms);
 	return -EINVAL;
 
 cleanup:
@@ -1094,8 +487,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 }
 
 int
-tf_em_ext_host_free(struct tf *tfp,
-		    struct tf_free_tbl_scope_parms *parms)
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
 {
 	int rc = 0;
 	enum tf_dir  dir;
@@ -1136,75 +529,3 @@ tf_em_ext_host_free(struct tf *tfp,
 	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
 	return rc;
 }
-
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	uint32_t tbl_scope_id;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	tbl_scope_id = parms->tbl_scope_id;
-
-	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Get the table scope control block associated with the
-	 * external pool
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	op.opcode = HCAPI_CFA_HWOPS_PUT;
-	key_tbl.base0 =
-		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = parms->idx;
-	key_obj.data = parms->data;
-	key_obj.size = parms->data_sz_in_bytes;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	return rc;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 10768df03..c47c8b93f 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -4,11 +4,24 @@
  */
 
 #include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <stdbool.h>
+#include <math.h>
+#include <sys/param.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+#include <unistd.h>
+#include <string.h>
+
 #include <rte_common.h>
 #include <rte_errno.h>
 #include <rte_log.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
 #include "tf_em.h"
 #include "tf_em_common.h"
 #include "tf_msg.h"
@@ -18,103 +31,503 @@
 
 #include "bnxt.h"
 
+enum tf_em_req_type {
+	TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ = 5,
+};
 
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
+struct tf_em_bnxt_lfc_req_hdr {
+	uint32_t ver;
+	uint32_t bus;
+	uint32_t devfn;
+	enum tf_em_req_type req_type;
+};
+
+struct tf_em_bnxt_lfc_cfa_eem_std_hdr {
+	uint16_t version;
+	uint16_t size;
+	uint32_t flags;
+	#define TF_EM_BNXT_LFC_EEM_CFG_PRIMARY_FUNC     (1 << 0)
+};
+
+struct tf_em_bnxt_lfc_dmabuf_fd {
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+};
+
+#ifndef __user
+#define __user
+#endif
+
+struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req {
+	struct tf_em_bnxt_lfc_cfa_eem_std_hdr std;
+	uint8_t dir;
+	uint32_t flags;
+	void __user *dma_fd;
+};
+
+struct tf_em_bnxt_lfc_req {
+	struct tf_em_bnxt_lfc_req_hdr hdr;
+	union {
+		struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req
+		       eem_dmabuf_export_req;
+		uint64_t hreq;
+	} req;
+};
+
+#define TF_EEM_BNXT_LFC_IOCTL_MAGIC     0x98
+#define BNXT_LFC_REQ    \
+	_IOW(TF_EEM_BNXT_LFC_IOCTL_MAGIC, 1, struct tf_em_bnxt_lfc_req)
+
+/**
+ * EM DBs.
  */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_insert_em_entry_parms *parms __rte_unused)
+extern void *eem_db[TF_DIR_MAX];
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+static void
+tf_em_dmabuf_mem_unmap(struct hcapi_cfa_em_table *tbl)
 {
-	return 0;
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no, pg_count;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			if (tp->pg_va_tbl != NULL &&
+			    tp->pg_va_tbl[page_no] != NULL &&
+			    tp->pg_size != 0) {
+				(void)munmap(tp->pg_va_tbl[page_no],
+					     tp->pg_size);
+			}
+		}
+
+		tfp_free((void *)tp->pg_va_tbl);
+		tfp_free((void *)tp->pg_pa_tbl);
+	}
 }
 
-/** delete EEM hash entry API
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
  *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
  *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
+ * [in] dir
+ *   Receive or transmit direction
  */
+static void
+tf_em_ctx_unreg(struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+			tf_em_dmabuf_mem_unmap(tbl);
+	}
+}
+
+static int tf_export_tbl_scope(int lfc_fd,
+			       int *fd,
+			       int bus,
+			       int devfn)
+{
+	struct tf_em_bnxt_lfc_req tf_lfc_req;
+	struct tf_em_bnxt_lfc_dmabuf_fd *dma_fd;
+	struct tfp_calloc_parms  mparms;
+	int rc;
+
+	memset(&tf_lfc_req, 0, sizeof(struct tf_em_bnxt_lfc_req));
+	tf_lfc_req.hdr.ver = 1;
+	tf_lfc_req.hdr.bus = bus;
+	tf_lfc_req.hdr.devfn = devfn;
+	tf_lfc_req.hdr.req_type = TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ;
+	tf_lfc_req.req.eem_dmabuf_export_req.flags = O_ACCMODE;
+	tf_lfc_req.req.eem_dmabuf_export_req.std.version = 1;
+
+	mparms.nitems = 1;
+	mparms.size = sizeof(struct tf_em_bnxt_lfc_dmabuf_fd);
+	mparms.alignment = 0;
+	tfp_calloc(&mparms);
+	dma_fd = (struct tf_em_bnxt_lfc_dmabuf_fd *)mparms.mem_va;
+	tf_lfc_req.req.eem_dmabuf_export_req.dma_fd = dma_fd;
+
+	rc = ioctl(lfc_fd, BNXT_LFC_REQ, &tf_lfc_req);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EXT EEM export channel_fd %d, rc=%d\n",
+			    lfc_fd,
+			    rc);
+		tfp_free(dma_fd);
+		return rc;
+	}
+
+	memcpy(fd, dma_fd->fd, sizeof(dma_fd->fd));
+	tfp_free(dma_fd);
+
+	return rc;
+}
+
 static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_delete_em_entry_parms *parms __rte_unused)
+tf_em_dmabuf_mem_map(struct hcapi_cfa_em_table *tbl,
+		     int dmabuf_fd)
 {
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no;
+	uint32_t pg_count;
+	uint32_t offset;
+	struct tfp_calloc_parms parms;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		offset = 0;
+
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0)
+			return -ENOMEM;
+
+		tp->pg_va_tbl = parms.mem_va;
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0) {
+			tfp_free((void *)tp->pg_va_tbl);
+			return -ENOMEM;
+		}
+
+		tp->pg_pa_tbl = parms.mem_va;
+		tp->pg_count = 0;
+		tp->pg_size =  TF_EM_PAGE_SIZE;
+
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			tp->pg_va_tbl[page_no] = mmap(NULL,
+						      TF_EM_PAGE_SIZE,
+						      PROT_READ | PROT_WRITE,
+						      MAP_SHARED,
+						      dmabuf_fd,
+						      offset);
+			if (tp->pg_va_tbl[page_no] == (void *)-1) {
+				TFP_DRV_LOG(ERR,
+		"MMap memory error. level:%d page:%d pg_count:%d - %s\n",
+					    level,
+				     page_no,
+					    pg_count,
+					    strerror(errno));
+				return -ENOMEM;
+			}
+			offset += tp->pg_size;
+			tp->pg_count++;
+		}
+	}
+
 	return 0;
 }
 
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_insert_em_entry_parms *parms)
+static int tf_mmap_tbl_scope(struct tf_tbl_scope_cb *tbl_scope_cb,
+			     enum tf_dir dir,
+			     int tbl_type,
+			     int dmabuf_fd)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *tbl;
+
+	if (tbl_type == TF_EFC_TABLE)
+		return 0;
+
+	tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[tbl_type];
+	return tf_em_dmabuf_mem_map(tbl, dmabuf_fd);
+}
+
+#define TF_LFC_DEVICE "/dev/bnxt_lfc"
+
+static int
+tf_prepare_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	int lfc_fd;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	lfc_fd = open(TF_LFC_DEVICE, O_RDWR);
+	if (lfc_fd < 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: open %s device error\n",
+			    TF_LFC_DEVICE);
+		return -ENOENT;
 	}
 
-	return tf_insert_eem_entry
-		(tbl_scope_cb, parms);
+	tbl_scope_cb->lfc_fd = lfc_fd;
+
+	return 0;
 }
 
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_delete_em_entry_parms *parms)
+static int
+offload_system_mmap(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int rc;
+	int dmabuf_fd;
+	enum tf_dir dir;
+	enum hcapi_cfa_em_table_type tbl_type;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	rc = tf_prepare_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EEM: Prepare bnxt_lfc channel failed\n");
+		return rc;
 	}
 
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
+	rc = tf_export_tbl_scope(tbl_scope_cb->lfc_fd,
+				 (int *)tbl_scope_cb->fd,
+				 tbl_scope_cb->bus,
+				 tbl_scope_cb->devfn);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "export dmabuf fd failed\n");
+		return rc;
+	}
+
+	tbl_scope_cb->valid = true;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (tbl_type = TF_KEY0_TABLE; tbl_type <
+			     TF_MAX_TABLE; tbl_type++) {
+			if (tbl_type == TF_EFC_TABLE)
+				continue;
+
+			dmabuf_fd = tbl_scope_cb->fd[(dir ? 0 : 1)][tbl_type];
+			rc = tf_mmap_tbl_scope(tbl_scope_cb,
+					       dir,
+					       tbl_type,
+					       dmabuf_fd);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "dir:%d tbl:%d mmap failed rc %d\n",
+					    dir,
+					    tbl_type,
+					    rc);
+				break;
+			}
+		}
+	}
+	return 0;
 }
 
-int
-tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
-		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+static int
+tf_destroy_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	close(tbl_scope_cb->lfc_fd);
+
 	return 0;
 }
 
-int
-tf_em_ext_system_free(struct tf *tfp __rte_unused,
-		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+static int
+tf_dmabuf_alloc(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp,
+		tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries);
+	if (rc)
+		PMD_DRV_LOG(ERR, "EEM: Failed to prepare system memory rc:%d\n",
+			    rc);
+
 	return 0;
 }
 
-int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
-			  struct tf_tbl_set_parms *parms __rte_unused)
+static int
+tf_dmabuf_free(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp, 0);
+	if (rc)
+		TFP_DRV_LOG(ERR, "EEM: Failed to cleanup system memory\n");
+
+	tf_destroy_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+
 	return 0;
 }
+
+int
+tf_em_ext_alloc(struct tf *tfp,
+		struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_free_parms fparms = { 0 };
+	int dir;
+	int i;
+	struct hcapi_cfa_em_table *em_tables;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+	tbl_scope_cb->bus = tfs->session_id.internal.bus;
+	tbl_scope_cb->devfn = tfs->session_id.internal.device;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	rc = tf_dmabuf_alloc(tfp, tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System DMA buff alloc failed\n");
+		return -EIO;
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+			if (i == TF_EFC_TABLE)
+				continue;
+
+			em_tables =
+				&tbl_scope_cb->em_ctx_info[dir].em_tables[i];
+
+			rc = tf_em_size_table(em_tables, TF_EM_PAGE_SIZE);
+			if (rc) {
+				TFP_DRV_LOG(ERR, "Size table failed\n");
+				goto cleanup;
+			}
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_create_tbl_pool_external(dir,
+					tbl_scope_cb,
+					em_tables[TF_RECORD_TABLE].num_entries,
+					em_tables[TF_RECORD_TABLE].entry_size);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	rc = offload_system_mmap(tbl_scope_cb);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System alloc mmap failed\n");
+		goto cleanup_full;
+	}
+
+	return rc;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int dir;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+
+		/* Free Table control block */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+
+		/* Unmap memory */
+		tf_em_ctx_unreg(tbl_scope_cb, dir);
+
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+	}
+
+	tf_dmabuf_free(tfp, tbl_scope_cb);
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
index 54d4c37f5..7eb72bd42 100644
--- a/drivers/net/bnxt/tf_core/tf_if_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -113,7 +113,7 @@ struct tf_if_tbl_set_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -143,7 +143,7 @@ struct tf_if_tbl_get_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [out] Entry size
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 035c0948d..ed506defa 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -813,7 +813,19 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = parms->hcapi_type;
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
@@ -869,7 +881,19 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct hwrm_tf_tcam_free_input req =  { 0 };
 	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(in_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = in_parms->hcapi_type;
 	req.count = 1;
 	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 2a10b47ce..f20e8d729 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -38,6 +38,13 @@ struct tf_em_caps {
  */
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
+#ifdef TF_USE_SYSTEM_MEM
+	int lfc_fd;
+	uint32_t bus;
+	uint32_t devfn;
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+	bool valid;
+#endif
 	int index;
 	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps em_caps[TF_DIR_MAX];
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 426a182a9..3eade3127 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -87,6 +87,18 @@ tfp_send_msg_tunneled(struct tf *tfp,
 	return rc;
 }
 
+#ifdef TF_USE_SYSTEM_MEM
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows)
+{
+	return bnxt_hwrm_oem_cmd(container_of(tfp,
+					      struct bnxt,
+					      tfp),
+				 max_flows);
+}
+#endif /* TF_USE_SYSTEM_MEM */
+
 /**
  * Allocates zero'ed memory from the heap.
  *
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8789eba1f..421a7d9f7 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -170,6 +170,21 @@ int
 tfp_msg_hwrm_oem_cmd(struct tf *tfp,
 		     uint32_t max_flows);
 
+/**
+ * Sends OEM command message to Chimp
+ *
+ * [in] session, pointer to session handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
 /**
  * Allocates zero'ed memory from the heap.
  *
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 32/51] net/bnxt: integrate with the latest tf core changes
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (30 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
                     ` (19 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Somnath Kotur, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

ULP changes to integrate with the latest session open
interface in tf_core
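
For context, a condensed sketch of the caller-side pattern this change
introduces (structure, field and enum names are taken from the diff
below; the counts shown are the RX values used there):

	struct tf_open_session_parms params;
	struct tf_session_resources *resources;
	int rc;

	memset(&params, 0, sizeof(params));
	params.shadow_copy = false;
	params.device_type = TF_DEVICE_TYPE_WH;

	/* Reserve per-direction resources before opening the session */
	resources = &params.resources;
	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 720;
	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 416;
	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;

	rc = tf_open_session(&bp->tfp, &params);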

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c | 46 ++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index c7281ab9a..a9ed5d92a 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -68,6 +68,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 	struct rte_eth_dev		*ethdev = bp->eth_dev;
 	int32_t				rc = 0;
 	struct tf_open_session_parms	params;
+	struct tf_session_resources	*resources;
 
 	memset(&params, 0, sizeof(params));
 
@@ -79,6 +80,51 @@ ulp_ctx_session_open(struct bnxt *bp,
 		return rc;
 	}
 
+	params.shadow_copy = false;
+	params.device_type = TF_DEVICE_TYPE_WH;
+	resources = &params.resources;
+	/** RX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 720;
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 720;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 16;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 416;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;
+
+	/** TX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_L2_CTXT] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 8;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 8;
+
+	/* EEM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+
 	rc = tf_open_session(&bp->tfp, &params);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 33/51] net/bnxt: add support for internal encap records
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (31 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 34/51] net/bnxt: add support for if table processing Ajit Khaparde
                     ` (18 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Somnath Kotur, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

Modifications to allow internal encap records to be supported (see
the sketch after this list):
- Modified the mapper index table processing to handle encap without an
  action record
- Modified the session open code to reserve some 64 Byte internal encap
  records on tx
- Modified the blob encap swap to support encap without action record
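
A minimal sketch of the revised swap handling in the index table
processing (taken from the hunks below; field processing abbreviated):

	for (i = 0; i < (num_flds + encap_flds); i++) {
		/* Set the swap index once the encap fields begin */
		if (parms->device_params->encap_byte_swap && encap_flds &&
		    (i == num_flds))
			ulp_blob_encap_swap_idx_set(&data);

		/* ... process result field i into the blob ... */
	}

	/* Perform the byte swap once, after all fields are written */
	if (parms->device_params->encap_byte_swap && encap_flds)
		ulp_blob_perform_encap_swap(&data);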

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c   |  3 +++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c | 29 +++++++++++++---------------
 drivers/net/bnxt/tf_ulp/ulp_utils.c  |  2 +-
 3 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index a9ed5d92a..4c1a1c44c 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -113,6 +113,9 @@ ulp_ctx_session_open(struct bnxt *bp,
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
 
+	/* ENCAP */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 734db7c6c..a9a625f9f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1473,7 +1473,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
 
-	if (!flds || !num_flds) {
+	if (!flds || (!num_flds && !encap_flds)) {
 		BNXT_TF_DBG(ERR, "template undefined for the index table\n");
 		return -EINVAL;
 	}
@@ -1482,7 +1482,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	for (i = 0; i < (num_flds + encap_flds); i++) {
 		/* set the swap index if encap swap bit is enabled */
 		if (parms->device_params->encap_byte_swap && encap_flds &&
-		    ((i + 1) == num_flds))
+		    (i == num_flds))
 			ulp_blob_encap_swap_idx_set(&data);
 
 		/* Process the result fields */
@@ -1495,18 +1495,15 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			BNXT_TF_DBG(ERR, "data field failed\n");
 			return rc;
 		}
+	}
 
-		/* if encap bit swap is enabled perform the bit swap */
-		if (parms->device_params->encap_byte_swap && encap_flds) {
-			if ((i + 1) == (num_flds + encap_flds))
-				ulp_blob_perform_encap_swap(&data);
+	/* if encap bit swap is enabled perform the bit swap */
+	if (parms->device_params->encap_byte_swap && encap_flds) {
+		ulp_blob_perform_encap_swap(&data);
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-			if ((i + 1) == (num_flds + encap_flds)) {
-				BNXT_TF_DBG(INFO, "Dump fter encap swap\n");
-				ulp_mapper_blob_dump(&data);
-			}
+		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
+		ulp_mapper_blob_dump(&data);
 #endif
-		}
 	}
 
 	/* Perform the tf table allocation by filling the alloc params */
@@ -1817,6 +1814,11 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
+			if (rc) {
+				BNXT_TF_DBG(ERR, "Resource type %d failed\n",
+					    tbl->resource_func);
+				return rc;
+			}
 			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected action resource %d\n",
@@ -1824,11 +1826,6 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 			return -EINVAL;
 		}
 	}
-	if (rc) {
-		BNXT_TF_DBG(ERR, "Resource type %d failed\n",
-			    tbl->resource_func);
-		return rc;
-	}
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
index 3a4157f22..3afaac647 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_utils.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -478,7 +478,7 @@ ulp_blob_perform_encap_swap(struct ulp_blob *blob)
 		BNXT_TF_DBG(ERR, "invalid argument\n");
 		return; /* failure */
 	}
-	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx + 1);
+	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx);
 	end_idx = ULP_BITS_2_BYTE(blob->write_idx);
 
 	while (idx <= end_idx) {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 34/51] net/bnxt: add support for if table processing
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (32 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
                     ` (17 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for if table processing in the ulp mapper
layer. This enables support for the default partition action
record pointer interface table.
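
For reference, the core of the new interface table path (condensed
from the ulp_mapper_if_tbl_process() hunk below; error handling
omitted):

	struct tf_set_if_tbl_entry_parms iftbl_params = { 0 };
	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);

	/* The result blob is built from the template result fields and
	 * the table index comes from a computed field.
	 */
	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);

	iftbl_params.dir = tbl->direction;
	iftbl_params.type = tbl->resource_type;
	iftbl_params.data = ulp_blob_data_get(&data, &tmplen);
	iftbl_params.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
	iftbl_params.idx = idx;

	rc = tf_set_if_tbl_entry(tfp, &iftbl_params);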

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |   1 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   2 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 141 +++++++++++++++---
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   1 +
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 117 ++++++++-------
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   8 +-
 6 files changed, 187 insertions(+), 83 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4c1a1c44c..4835b951e 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -115,6 +115,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 
 	/* ENCAP */
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_16B] = 16;
 
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 22996e50e..384dc5b2c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -933,7 +933,7 @@ ulp_default_flow_db_cfa_action_get(struct bnxt_ulp_context *ulp_ctx,
 				   uint32_t flow_id,
 				   uint32_t *cfa_action)
 {
-	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX;
+	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION;
 	uint64_t hndl;
 	int32_t rc;
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index a9a625f9f..42bb98557 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -184,7 +184,8 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
 	return &ulp_act_tbl_list[idx];
 }
 
-/** Get a list of classifier tables that implement the flow
+/*
+ * Get a list of classifier tables that implement the flow
  * Gets a device dependent list of tables that implement the class template id
  *
  * dev_id [in] The device id of the forwarding element
@@ -193,13 +194,16 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
  *
  * num_tbls [out] The number of classifier tables in the returned array
  *
+ * fdb_tbl_idx [out] The flow database index Regular or default
+ *
  * returns An array of classifier tables to implement the flow, or NULL on
  * error
  */
 static struct bnxt_ulp_mapper_tbl_info *
 ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 			      uint32_t tid,
-			      uint32_t *num_tbls)
+			      uint32_t *num_tbls,
+			      uint32_t *fdb_tbl_idx)
 {
 	uint32_t idx;
 	uint32_t tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
@@ -212,7 +216,7 @@ ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 	 */
 	idx		= ulp_class_tmpl_list[tidx].start_tbl_idx;
 	*num_tbls	= ulp_class_tmpl_list[tidx].num_tbls;
-
+	*fdb_tbl_idx = ulp_class_tmpl_list[tidx].flow_db_table_type;
 	return &ulp_class_tbl_list[idx];
 }
 
@@ -256,7 +260,8 @@ ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
  */
 static struct bnxt_ulp_mapper_result_field_info *
 ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
-			     uint32_t *num_flds)
+			     uint32_t *num_flds,
+			     uint32_t *num_encap_flds)
 {
 	uint32_t idx;
 
@@ -265,6 +270,7 @@ ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
 
 	idx		= tbl->result_start_idx;
 	*num_flds	= tbl->result_num_fields;
+	*num_encap_flds = tbl->encap_num_fields;
 
 	/* NOTE: Need template to provide range checking define */
 	return &ulp_class_result_field_list[idx];
@@ -1146,6 +1152,7 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		struct bnxt_ulp_mapper_result_field_info *dflds;
 		struct bnxt_ulp_mapper_ident_info *idents;
 		uint32_t num_dflds, num_idents;
+		uint32_t encap_flds = 0;
 
 		/*
 		 * Since the cache entry is responsible for allocating
@@ -1166,8 +1173,9 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 
 		/* Create the result data blob */
-		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-		if (!dflds || !num_dflds) {
+		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds,
+						     &encap_flds);
+		if (!dflds || !num_dflds || encap_flds) {
 			BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 			rc = -EINVAL;
 			goto error;
@@ -1293,6 +1301,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	int32_t	trc;
 	enum bnxt_ulp_flow_mem_type mtype = parms->device_params->flow_mem_type;
 	int32_t rc = 0;
+	uint32_t encap_flds = 0;
 
 	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
 	if (!kflds || !num_kflds) {
@@ -1327,8 +1336,8 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	 */
 
 	/* Create the result data blob */
-	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-	if (!dflds || !num_dflds) {
+	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds, &encap_flds);
+	if (!dflds || !num_dflds || encap_flds) {
 		BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 		return -EINVAL;
 	}
@@ -1468,7 +1477,8 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Get the result fields list */
 	if (is_class_tbl)
-		flds = ulp_mapper_result_fields_get(tbl, &num_flds);
+		flds = ulp_mapper_result_fields_get(tbl, &num_flds,
+						    &encap_flds);
 	else
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
@@ -1761,6 +1771,76 @@ ulp_mapper_cache_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	return rc;
 }
 
+static int32_t
+ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			  struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_result_field_info *flds;
+	struct ulp_blob	data;
+	uint64_t idx;
+	uint16_t tmplen;
+	uint32_t i, num_flds;
+	int32_t rc = 0;
+	struct tf_set_if_tbl_entry_parms iftbl_params = { 0 };
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	uint32_t encap_flds;
+
+	/* Initialize the blob data */
+	if (!ulp_blob_init(&data, tbl->result_bit_size,
+			   parms->device_params->byte_order)) {
+		BNXT_TF_DBG(ERR, "Failed initial index table blob\n");
+		return -EINVAL;
+	}
+
+	/* Get the result fields list */
+	flds = ulp_mapper_result_fields_get(tbl, &num_flds, &encap_flds);
+
+	if (!flds || !num_flds || encap_flds) {
+		BNXT_TF_DBG(ERR, "template undefined for the IF table\n");
+		return -EINVAL;
+	}
+
+	/* process the result fields, loop through them */
+	for (i = 0; i < num_flds; i++) {
+		/* Process the result fields */
+		rc = ulp_mapper_result_field_process(parms,
+						     tbl->direction,
+						     &flds[i],
+						     &data,
+						     "IFtable Result");
+		if (rc) {
+			BNXT_TF_DBG(ERR, "data field failed\n");
+			return rc;
+		}
+	}
+
+	/* Get the index details from computed field */
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+
+	/* Perform the tf table set by filling the set params */
+	iftbl_params.dir = tbl->direction;
+	iftbl_params.type = tbl->resource_type;
+	iftbl_params.data = ulp_blob_data_get(&data, &tmplen);
+	iftbl_params.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+	iftbl_params.idx = idx;
+
+	rc = tf_set_if_tbl_entry(tfp, &iftbl_params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Set table[%d][%s][%d] failed rc=%d\n",
+			    iftbl_params.type,
+			    (iftbl_params.dir == TF_DIR_RX) ? "RX" : "TX",
+			    iftbl_params.idx,
+			    rc);
+		return rc;
+	}
+
+	/*
+	 * TBD: Need to look at the need to store idx in flow db for restore
+	 * the table to its original state on deletion of this entry.
+	 */
+	return rc;
+}
+
 static int32_t
 ulp_mapper_glb_resource_info_init(struct tf *tfp,
 				  struct bnxt_ulp_mapper_data *mapper_data)
@@ -1862,6 +1942,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE:
 			rc = ulp_mapper_cache_tbl_process(parms, tbl);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_IF_TABLE:
+			rc = ulp_mapper_if_tbl_process(parms, tbl);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected class resource %d\n",
 				    tbl->resource_func);
@@ -2064,20 +2147,29 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Get the action table entry from device id and act context id */
 	parms.act_tid = cparms->act_tid;
-	parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
-						     parms.act_tid,
-						     &parms.num_atbls);
-	if (!parms.atbls || !parms.num_atbls) {
-		BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		return -EINVAL;
+
+	/*
+	 * Perform the action table get only if act template is not zero
+	 * for act template zero like for default rules ignore the action
+	 * table processing.
+	 */
+	if (parms.act_tid) {
+		parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
+							     parms.act_tid,
+							     &parms.num_atbls);
+		if (!parms.atbls || !parms.num_atbls) {
+			BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			return -EINVAL;
+		}
 	}
 
 	/* Get the class table entry from device id and act context id */
 	parms.class_tid = cparms->class_tid;
 	parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
 						    parms.class_tid,
-						    &parms.num_ctbls);
+						    &parms.num_ctbls,
+						    &parms.tbl_idx);
 	if (!parms.ctbls || !parms.num_ctbls) {
 		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
 			    parms.dev_id, parms.class_tid);
@@ -2111,7 +2203,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	 * free each of them.
 	 */
 	rc = ulp_flow_db_fid_alloc(ulp_ctx,
-				   BNXT_ULP_REGULAR_FLOW_TABLE,
+				   parms.tbl_idx,
 				   cparms->func_id,
 				   &parms.fid);
 	if (rc) {
@@ -2120,11 +2212,14 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	}
 
 	/* Process the action template list from the selected action table*/
-	rc = ulp_mapper_action_tbls_process(&parms);
-	if (rc) {
-		BNXT_TF_DBG(ERR, "action tables failed creation for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		goto flow_error;
+	if (parms.act_tid) {
+		rc = ulp_mapper_action_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "action tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			goto flow_error;
+		}
 	}
 
 	/* All good. Now process the class template */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 89c08ab25..517422338 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -256,6 +256,7 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 			BNXT_TF_DBG(ERR, "Mark index greater than allocated\n");
 			return -EINVAL;
 		}
+		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
 	}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index ac84f88e9..66343b918 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -88,35 +88,36 @@ enum bnxt_ulp_byte_order {
 };
 
 enum bnxt_ulp_cf_idx {
-	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 0,
-	BNXT_ULP_CF_IDX_O_VTAG_NUM = 1,
-	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 2,
-	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 3,
-	BNXT_ULP_CF_IDX_I_VTAG_NUM = 4,
-	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 5,
-	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 6,
-	BNXT_ULP_CF_IDX_INCOMING_IF = 7,
-	BNXT_ULP_CF_IDX_DIRECTION = 8,
-	BNXT_ULP_CF_IDX_SVIF_FLAG = 9,
-	BNXT_ULP_CF_IDX_O_L3 = 10,
-	BNXT_ULP_CF_IDX_I_L3 = 11,
-	BNXT_ULP_CF_IDX_O_L4 = 12,
-	BNXT_ULP_CF_IDX_I_L4 = 13,
-	BNXT_ULP_CF_IDX_DEV_PORT_ID = 14,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 15,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 16,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 17,
-	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 18,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 19,
-	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 20,
-	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 21,
-	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 22,
-	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 23,
-	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 24,
-	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 25,
-	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 26,
-	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 27,
-	BNXT_ULP_CF_IDX_LAST = 28
+	BNXT_ULP_CF_IDX_NOT_USED = 0,
+	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 1,
+	BNXT_ULP_CF_IDX_O_VTAG_NUM = 2,
+	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 3,
+	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 4,
+	BNXT_ULP_CF_IDX_I_VTAG_NUM = 5,
+	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 6,
+	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 7,
+	BNXT_ULP_CF_IDX_INCOMING_IF = 8,
+	BNXT_ULP_CF_IDX_DIRECTION = 9,
+	BNXT_ULP_CF_IDX_SVIF_FLAG = 10,
+	BNXT_ULP_CF_IDX_O_L3 = 11,
+	BNXT_ULP_CF_IDX_I_L3 = 12,
+	BNXT_ULP_CF_IDX_O_L4 = 13,
+	BNXT_ULP_CF_IDX_I_L4 = 14,
+	BNXT_ULP_CF_IDX_DEV_PORT_ID = 15,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 16,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 17,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 18,
+	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 19,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 20,
+	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 21,
+	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 22,
+	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 23,
+	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 24,
+	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 25,
+	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
+	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
+	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
+	BNXT_ULP_CF_IDX_LAST = 29
 };
 
 enum bnxt_ulp_critical_resource {
@@ -133,11 +134,6 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
-enum bnxt_ulp_df_param_type {
-	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
-	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
-};
-
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
@@ -154,7 +150,8 @@ enum bnxt_ulp_flow_mem_type {
 enum bnxt_ulp_glb_regfile_index {
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 2
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
 };
 
 enum bnxt_ulp_hdr_type {
@@ -204,22 +201,22 @@ enum bnxt_ulp_priority {
 };
 
 enum bnxt_ulp_regfile_index {
-	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 0,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 1,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 2,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 3,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 4,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 5,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 6,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 7,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 8,
-	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 9,
-	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 10,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 11,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 12,
-	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 13,
-	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 14,
-	BNXT_ULP_REGFILE_INDEX_NOT_USED = 15,
+	BNXT_ULP_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 1,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 2,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 3,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 4,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 5,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 6,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 7,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 8,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 9,
+	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 10,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 11,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 12,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
+	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
+	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
 	BNXT_ULP_REGFILE_INDEX_LAST = 16
 };
 
@@ -265,10 +262,10 @@ enum bnxt_ulp_resource_func {
 enum bnxt_ulp_resource_sub_type {
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT_INDEX = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT_INDEX = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX = 1,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
 	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
 };
 
@@ -282,7 +279,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
 	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_BIG_ENDIAN = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
@@ -398,7 +394,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
 	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
 	BNXT_ULP_SYM_NO = 0,
@@ -489,6 +484,11 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_YES = 1
 };
 
+enum bnxt_ulp_wh_plus {
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+};
+
 enum bnxt_ulp_act_prop_sz {
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ = 4,
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ = 4,
@@ -588,4 +588,9 @@ enum bnxt_ulp_act_hid {
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
 	BNXT_ULP_ACT_HID_0040 = 0x0040
 };
+
+enum bnxt_ulp_df_tpl {
+	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5c4335847..1188223aa 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -150,9 +150,10 @@ struct bnxt_ulp_device_params {
 
 /* Flow Mapper */
 struct bnxt_ulp_mapper_tbl_list_info {
-	uint32_t	device_name;
-	uint32_t	start_tbl_idx;
-	uint32_t	num_tbls;
+	uint32_t		device_name;
+	uint32_t		start_tbl_idx;
+	uint32_t		num_tbls;
+	enum bnxt_ulp_fdb_type	flow_db_table_type;
 };
 
 struct bnxt_ulp_mapper_tbl_info {
@@ -183,6 +184,7 @@ struct bnxt_ulp_mapper_tbl_info {
 
 	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
+	uint32_t			comp_field_idx;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 35/51] net/bnxt: disable Tx vector mode if truflow is enabled
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (33 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 34/51] net/bnxt: add support for if table processing Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
                     ` (16 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Somnath Kotur, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Vector mode in the Tx handler is disabled when truflow is
enabled, since truflow now requires BD action record support.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 697cd6651..355025741 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1116,12 +1116,15 @@ bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev)
 {
 #ifdef RTE_ARCH_X86
 #ifndef RTE_LIBRTE_IEEE1588
+	struct bnxt *bp = eth_dev->data->dev_private;
+
 	/*
 	 * Vector mode transmit can be enabled only if not using scatter rx
 	 * or tx offloads.
 	 */
 	if (!eth_dev->data->scattered_rx &&
-	    !eth_dev->data->dev_conf.txmode.offloads) {
+	    !eth_dev->data->dev_conf.txmode.offloads &&
+	    !BNXT_TRUFLOW_EN(bp)) {
 		PMD_DRV_LOG(INFO, "Using vector mode transmit for port %d\n",
 			    eth_dev->data->port_id);
 		return bnxt_xmit_pkts_vec;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 36/51] net/bnxt: add index opcode and operand to mapper table
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (34 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
                     ` (15 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Extended the regfile and computed field operations into a common
index opcode operation, and made the global resource operations
part of the same index opcode handling.
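
To illustrate the change, here is a minimal standalone sketch (not
the driver code) of how a single opcode/operand pair can drive the
index handling; the enum values mirror bnxt_ulp_index_opcode from
this patch, while the struct, helper and arrays are invented for the
example.

/*
 * Standalone sketch: dispatch on the new index opcode/operand pair.
 * The enum values mirror bnxt_ulp_index_opcode; the struct, helper and
 * regfile arrays below are invented for illustration only.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

enum sketch_index_opcode {
	SKETCH_INDEX_OPCODE_NOT_USED = 0,
	SKETCH_INDEX_OPCODE_ALLOCATE = 1,   /* alloc, store idx in regfile */
	SKETCH_INDEX_OPCODE_GLOBAL = 2,     /* idx from global regfile */
	SKETCH_INDEX_OPCODE_COMP_FIELD = 3  /* idx from computed field */
};

struct sketch_tbl_info {
	enum sketch_index_opcode index_opcode;
	uint32_t index_operand;
};

static uint64_t glb_regfile[8];  /* filled at init, e.g. loopback AREC */
static uint64_t regfile[16];     /* per-flow register file */
static uint32_t comp_fld[32];    /* computed fields, e.g. interface ids */
static uint64_t next_idx = 100;  /* fake table-entry allocator */

static uint64_t sketch_index_resolve(const struct sketch_tbl_info *tbl)
{
	switch (tbl->index_opcode) {
	case SKETCH_INDEX_OPCODE_ALLOCATE:
		/* remember the new index for later tables in the flow */
		regfile[tbl->index_operand] = ++next_idx;
		return next_idx;
	case SKETCH_INDEX_OPCODE_GLOBAL:
		return glb_regfile[tbl->index_operand];
	case SKETCH_INDEX_OPCODE_COMP_FIELD:
		return comp_fld[tbl->index_operand];
	default:
		return 0;
	}
}

int main(void)
{
	struct sketch_tbl_info tbl = {
		.index_opcode = SKETCH_INDEX_OPCODE_ALLOCATE,
		.index_operand = 3,  /* regfile slot for the action ptr */
	};

	printf("resolved index %" PRIu64 "\n", sketch_index_resolve(&tbl));
	return 0;
}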

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 56 ++++++++++++++++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  9 ++-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 45 +++++----------
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  8 +++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  4 +-
 5 files changed, 80 insertions(+), 42 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 42bb98557..7b3b3d698 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1443,7 +1443,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	struct bnxt_ulp_mapper_result_field_info *flds;
 	struct ulp_flow_db_res_params	fid_parms;
 	struct ulp_blob	data;
-	uint64_t idx;
+	uint64_t idx = 0;
 	uint16_t tmplen;
 	uint32_t i, num_flds;
 	int32_t rc = 0, trc = 0;
@@ -1516,6 +1516,42 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 	}
 
+	/*
+	 * Check for index opcode, if it is Global then
+	 * no need to allocate the table, just set the table
+	 * and exit since it is not maintained in the flow db.
+	 */
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_GLOBAL) {
+		/* get the index from index operand */
+		if (tbl->index_operand < BNXT_ULP_GLB_REGFILE_INDEX_LAST &&
+		    ulp_mapper_glb_resource_read(parms->mapper_data,
+						 tbl->direction,
+						 tbl->index_operand,
+						 &idx)) {
+			BNXT_TF_DBG(ERR, "Glbl regfile[%d] read failed.\n",
+				    tbl->index_operand);
+			return -EINVAL;
+		}
+		/* set the Tf index table */
+		sparms.dir		= tbl->direction;
+		sparms.type		= tbl->resource_type;
+		sparms.data		= ulp_blob_data_get(&data, &tmplen);
+		sparms.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+		sparms.idx		= tfp_be_to_cpu_64(idx);
+		sparms.tbl_scope_id	= tbl_scope_id;
+
+		rc = tf_set_tbl_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "Glbl Set table[%d][%s][%d] failed rc=%d\n",
+				    sparms.type,
+				    (sparms.dir == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx,
+				    rc);
+			return rc;
+		}
+		return 0; /* success */
+	}
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1546,11 +1582,13 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Always storing values in Regfile in BE */
 	idx = tfp_cpu_to_be_64(idx);
-	rc = ulp_regfile_write(parms->regfile, tbl->regfile_idx, idx);
-	if (!rc) {
-		BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
-			    tbl->regfile_idx);
-		goto error;
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_ALLOCATE) {
+		rc = ulp_regfile_write(parms->regfile, tbl->index_operand, idx);
+		if (!rc) {
+			BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
+				    tbl->index_operand);
+			goto error;
+		}
 	}
 
 	/* Perform the tf table set by filling the set params */
@@ -1815,7 +1853,11 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	}
 
 	/* Get the index details from computed field */
-	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+	if (tbl->index_opcode != BNXT_ULP_INDEX_OPCODE_COMP_FIELD) {
+		BNXT_TF_DBG(ERR, "Invalid tbl index opcode\n");
+		return -EINVAL;
+	}
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
 
 	/* Perform the tf table set by filling the set params */
 	iftbl_params.dir = tbl->direction;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 8af23eff1..9b14fa0bd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -76,7 +76,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -90,7 +91,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -104,7 +106,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	}
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index e773afd60..d4c7bfa4d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -113,8 +113,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 0,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -135,8 +134,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -157,8 +155,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -179,8 +176,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -201,8 +197,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -223,8 +218,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -245,8 +239,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -267,8 +260,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -289,8 +281,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -311,8 +302,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -333,8 +323,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -355,8 +344,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -377,8 +365,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -399,8 +386,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -421,8 +407,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 66343b918..0215a5dde 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -161,6 +161,14 @@ enum bnxt_ulp_hdr_type {
 	BNXT_ULP_HDR_TYPE_LAST = 3
 };
 
+enum bnxt_ulp_index_opcode {
+	BNXT_ULP_INDEX_OPCODE_NOT_USED = 0,
+	BNXT_ULP_INDEX_OPCODE_ALLOCATE = 1,
+	BNXT_ULP_INDEX_OPCODE_GLOBAL = 2,
+	BNXT_ULP_INDEX_OPCODE_COMP_FIELD = 3,
+	BNXT_ULP_INDEX_OPCODE_LAST = 4
+};
+
 enum bnxt_ulp_mapper_opc {
 	BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 1188223aa..a3ddd33fd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -182,9 +182,9 @@ struct bnxt_ulp_mapper_tbl_info {
 	uint32_t	ident_start_idx;
 	uint16_t	ident_nums;
 
-	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
-	uint32_t			comp_field_idx;
+	enum bnxt_ulp_index_opcode	index_opcode;
+	uint32_t			index_operand;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 37/51] net/bnxt: add support for global resource templates
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (35 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
                     ` (14 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for global resource templates, which are processed
at startup so that the resources they create can be reused by the
regular templates.
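
A minimal standalone sketch of the startup walk this introduces,
under the assumption of a hypothetical template id;
ulp_glb_template_tbl itself is still empty in this patch, and
process_class_tid() merely stands in for
ulp_mapper_class_tbls_process().

/*
 * Standalone sketch of the startup walk over the global template list.
 * The table entry and process_class_tid() helper are invented for the
 * example.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t sketch_glb_template_tbl[] = {
	7,	/* hypothetical class template id shared by other templates */
};

#define SKETCH_GLB_TEMPLATE_TBL_MAX_SZ \
	(sizeof(sketch_glb_template_tbl) / sizeof(sketch_glb_template_tbl[0]))

static int process_class_tid(uint32_t class_tid)
{
	/* resources created here are written to the global regfile and
	 * reused by the regular flow templates later on
	 */
	printf("processing global class template %u at init\n",
	       (unsigned int)class_tid);
	return 0;
}

int main(void)
{
	uint32_t i;

	for (i = 0; i < SKETCH_GLB_TEMPLATE_TBL_MAX_SZ; i++)
		if (process_class_tid(sketch_glb_template_tbl[i]))
			return -1;
	return 0;
}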

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 178 +++++++++++++++++-
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |   1 +
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   3 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   6 +
 4 files changed, 181 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 7b3b3d698..6fd55b2a2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -80,15 +80,20 @@ ulp_mapper_glb_resource_write(struct bnxt_ulp_mapper_data *data,
  * returns 0 on success
  */
 static int32_t
-ulp_mapper_resource_ident_allocate(struct tf *tfp,
+ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 				   struct bnxt_ulp_mapper_data *mapper_data,
 				   struct bnxt_ulp_glb_resource_info *glb_res)
 {
 	struct tf_alloc_identifier_parms iparms = { 0 };
 	struct tf_free_identifier_parms fparms;
 	uint64_t regval;
+	struct tf *tfp;
 	int32_t rc = 0;
 
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
 	iparms.ident_type = glb_res->resource_type;
 	iparms.dir = glb_res->direction;
 
@@ -115,13 +120,76 @@ ulp_mapper_resource_ident_allocate(struct tf *tfp,
 		return rc;
 	}
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res[%s][%d][%d] = 0x%04x\n",
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
 		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
 		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
 #endif
 	return rc;
 }
 
+/*
+ * Internal function to allocate index tbl resource and store it in mapper data.
+ *
+ * returns 0 on success
+ */
+static int32_t
+ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
+				    struct bnxt_ulp_mapper_data *mapper_data,
+				    struct bnxt_ulp_glb_resource_info *glb_res)
+{
+	struct tf_alloc_tbl_entry_parms	aparms = { 0 };
+	struct tf_free_tbl_entry_parms	free_parms = { 0 };
+	uint64_t regval;
+	struct tf *tfp;
+	uint32_t tbl_scope_id;
+	int32_t rc = 0;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
+	/* Get the scope id */
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope rc=%d\n", rc);
+		return rc;
+	}
+
+	aparms.type = glb_res->resource_type;
+	aparms.dir = glb_res->direction;
+	aparms.search_enable = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO;
+	aparms.tbl_scope_id = tbl_scope_id;
+
+	/* Allocate the index tbl using tf api */
+	rc = tf_alloc_tbl_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to alloc identifier [%s][%d]\n",
+			    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+			    aparms.type);
+		return rc;
+	}
+
+	/* entries are stored as big-endian format */
+	regval = tfp_cpu_to_be_64((uint64_t)aparms.idx);
+	/* write to the mapper global resource */
+	rc = ulp_mapper_glb_resource_write(mapper_data, glb_res, regval);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to write to global resource id\n");
+		/* Free the identifer when update failed */
+		free_parms.dir = aparms.dir;
+		free_parms.type = aparms.type;
+		free_parms.idx = aparms.idx;
+		tf_free_tbl_entry(tfp, &free_parms);
+		return rc;
+	}
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
+		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
+#endif
+	return rc;
+}
+
 /* Retrieve the cache initialization parameters for the tbl_idx */
 static struct bnxt_ulp_cache_tbl_params *
 ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
@@ -132,6 +200,16 @@ ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
 	return &ulp_cache_tbl_params[tbl_idx];
 }
 
+/* Retrieve the global template table */
+static uint32_t *
+ulp_mapper_glb_template_table_get(uint32_t *num_entries)
+{
+	if (!num_entries)
+		return NULL;
+	*num_entries = BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ;
+	return ulp_glb_template_tbl;
+}
+
 /*
  * Get the size of the action property for a given index.
  *
@@ -659,7 +737,10 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 			return -EINVAL;
 		}
 		act_bit = tfp_be_to_cpu_64(act_bit);
-		act_val = ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit);
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit))
+			act_val = 1;
+		else
+			act_val = 0;
 		if (fld->field_bit_size > ULP_BYTE_2_BITS(sizeof(act_val))) {
 			BNXT_TF_DBG(ERR, "%s field size is incorrect\n", name);
 			return -EINVAL;
@@ -1552,6 +1633,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 		return 0; /* success */
 	}
+
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1616,6 +1698,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	fid_parms.direction	= tbl->direction;
 	fid_parms.resource_func	= tbl->resource_func;
 	fid_parms.resource_type	= tbl->resource_type;
+	fid_parms.resource_sub_type = tbl->resource_sub_type;
 	fid_parms.resource_hndl	= aparms.idx;
 	fid_parms.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO;
 
@@ -1884,7 +1967,7 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 }
 
 static int32_t
-ulp_mapper_glb_resource_info_init(struct tf *tfp,
+ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 				  struct bnxt_ulp_mapper_data *mapper_data)
 {
 	struct bnxt_ulp_glb_resource_info *glb_res;
@@ -1901,15 +1984,23 @@ ulp_mapper_glb_resource_info_init(struct tf *tfp,
 	for (idx = 0; idx < num_glb_res_ids; idx++) {
 		switch (glb_res[idx].resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_IDENTIFIER:
-			rc = ulp_mapper_resource_ident_allocate(tfp,
+			rc = ulp_mapper_resource_ident_allocate(ulp_ctx,
 								mapper_data,
 								&glb_res[idx]);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+			rc = ulp_mapper_resource_index_tbl_alloc(ulp_ctx,
+								 mapper_data,
+								 &glb_res[idx]);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Global resource %x not supported\n",
 				    glb_res[idx].resource_func);
+			rc = -EINVAL;
 			break;
 		}
+		if (rc)
+			return rc;
 	}
 	return rc;
 }
@@ -2125,7 +2216,9 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 			res.resource_func = ent->resource_func;
 			res.direction = dir;
 			res.resource_type = ent->resource_type;
-			res.resource_hndl = ent->resource_hndl;
+			/*convert it from BE to cpu */
+			res.resource_hndl =
+				tfp_be_to_cpu_64(ent->resource_hndl);
 			ulp_mapper_resource_free(ulp_ctx, &res);
 		}
 	}
@@ -2144,6 +2237,71 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
 					 BNXT_ULP_REGULAR_FLOW_TABLE);
 }
 
+/* Function to handle the default global templates that are allocated during
+ * the startup and reused later.
+ */
+static int32_t
+ulp_mapper_glb_template_table_init(struct bnxt_ulp_context *ulp_ctx)
+{
+	uint32_t *glbl_tmpl_list;
+	uint32_t num_glb_tmpls, idx, dev_id;
+	struct bnxt_ulp_mapper_parms parms;
+	struct bnxt_ulp_mapper_data *mapper_data;
+	int32_t rc = 0;
+
+	glbl_tmpl_list = ulp_mapper_glb_template_table_get(&num_glb_tmpls);
+	if (!glbl_tmpl_list || !num_glb_tmpls)
+		return rc; /* No global templates to process */
+
+	/* Get the device id from the ulp context */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	mapper_data = bnxt_ulp_cntxt_ptr2_mapper_data_get(ulp_ctx);
+	if (!mapper_data) {
+		BNXT_TF_DBG(ERR, "Failed to get the ulp mapper data\n");
+		return -EINVAL;
+	}
+
+	/* Iterate the global resources and process each one */
+	for (idx = 0; idx < num_glb_tmpls; idx++) {
+		/* Initialize the parms structure */
+		memset(&parms, 0, sizeof(parms));
+		parms.tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+		parms.ulp_ctx = ulp_ctx;
+		parms.dev_id = dev_id;
+		parms.mapper_data = mapper_data;
+
+		/* Get the class table entry from dev id and class id */
+		parms.class_tid = glbl_tmpl_list[idx];
+		parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
+							    parms.class_tid,
+							    &parms.num_ctbls,
+							    &parms.tbl_idx);
+		if (!parms.ctbls || !parms.num_ctbls) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		parms.device_params = bnxt_ulp_device_params_get(parms.dev_id);
+		if (!parms.device_params) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		rc = ulp_mapper_class_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "class tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return rc;
+		}
+	}
+	return rc;
+}
+
 /* Function to handle the mapping of the Flow to be compatible
  * with the underlying hardware.
  */
@@ -2316,7 +2474,7 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 	}
 
 	/* Allocate the global resource ids */
-	rc = ulp_mapper_glb_resource_info_init(tfp, data);
+	rc = ulp_mapper_glb_resource_info_init(ulp_ctx, data);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to initialize global resource ids\n");
 		goto error;
@@ -2344,6 +2502,12 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 		}
 	}
 
+	rc = ulp_mapper_glb_template_table_init(ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize global templates\n");
+		goto error;
+	}
+
 	return 0;
 error:
 	/* Ignore the return code in favor of returning the original error. */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 0215a5dde..7c0dc5ee4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -26,6 +26,7 @@
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
 #define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 2efd11447..beca3baa7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -546,3 +546,6 @@ uint32_t bnxt_ulp_encap_vtag_map[] = {
 	[1] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
 	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
+
+uint32_t ulp_glb_template_tbl[] = {
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index a3ddd33fd..4bcd02ba2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -299,4 +299,10 @@ extern struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[];
  */
 extern struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[];
 
+/*
+ * The ulp_global template table is used to initialize default entries
+ * that could be reused by other templates.
+ */
+extern uint32_t ulp_glb_template_tbl[];
+
 #endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 38/51] net/bnxt: add support for internal exact match entries
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (36 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
@ 2020-07-01  6:51   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 39/51] net/bnxt: add support for conditional execution of mapper tables Ajit Khaparde
                     ` (13 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:51 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for the internal exact match entries.
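
Besides the EM table split, this patch also adds byte/packet counter
mask and shift parameters to the device params. A minimal sketch,
assuming those values describe how a combined 64-bit counter word is
split into byte and packet counts (the raw value is made up):

/*
 * Standalone sketch: splitting a combined 64-bit counter with the new
 * byte/packet mask and shift device parameters added in this patch.
 * The raw counter value is made up for the example.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint64_t byte_count_mask = 0x00000003ffffffffULL;
	const uint64_t packet_count_mask = 0xfffffffc00000000ULL;
	const uint32_t byte_count_shift = 0;
	const uint32_t packet_count_shift = 36;
	uint64_t raw = 0x0000002300001000ULL;	/* fake hw counter word */
	uint64_t bytes, pkts;

	bytes = (raw & byte_count_mask) >> byte_count_shift;
	pkts = (raw & packet_count_mask) >> packet_count_shift;

	printf("bytes=%" PRIu64 " packets=%" PRIu64 "\n", bytes, pkts);
	return 0;
}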

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            | 38 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 13 +++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 21 ++++++----
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |  4 ++
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   |  6 +--
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 13 ++++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |  7 +++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  5 +++
 8 files changed, 85 insertions(+), 22 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4835b951e..1b52861d4 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -213,8 +213,27 @@ static int32_t
 ulp_eem_tbl_scope_init(struct bnxt *bp)
 {
 	struct tf_alloc_tbl_scope_parms params = {0};
+	uint32_t dev_id;
+	struct bnxt_ulp_device_params *dparms;
 	int rc;
 
+	/* Get the dev specific number of flows that needed to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope alloc is not required\n");
+		return 0;
+	}
+
 	bnxt_init_tbl_scope_parms(bp, &params);
 
 	rc = tf_alloc_tbl_scope(&bp->tfp, &params);
@@ -240,6 +259,8 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 	struct tf_free_tbl_scope_parms	params = {0};
 	struct tf			*tfp;
 	int32_t				rc = 0;
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id;
 
 	if (!ulp_ctx || !ulp_ctx->cfg_data)
 		return -EINVAL;
@@ -254,6 +275,23 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 		return -EINVAL;
 	}
 
+	/* Get the dev specific number of flows that needed to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope free is not required\n");
+		return 0;
+	}
+
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 384dc5b2c..7696de2a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -114,7 +114,8 @@ ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info *resource_info,
 	}
 
 	/* Store the handle as 64bit only for EM table entries */
-	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE &&
+	    params->resource_func != BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		resource_info->resource_hndl = (uint32_t)params->resource_hndl;
 		resource_info->resource_type = params->resource_type;
 		resource_info->resource_sub_type = params->resource_sub_type;
@@ -145,7 +146,8 @@ ulp_flow_db_res_info_to_params(struct ulp_fdb_resource_info *resource_info,
 	/* use the helper function to get the resource func */
 	params->resource_func = ulp_flow_db_resource_func_get(resource_info);
 
-	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    params->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		params->resource_hndl = resource_info->resource_em_handle;
 	} else if (params->resource_func & ULP_FLOW_DB_RES_FUNC_NEED_LOWER) {
 		params->resource_hndl = resource_info->resource_hndl;
@@ -908,7 +910,9 @@ ulp_flow_db_resource_hndl_get(struct bnxt_ulp_context *ulp_ctx,
 				}
 
 			} else if (resource_func ==
-				   BNXT_ULP_RESOURCE_FUNC_EM_TABLE){
+				   BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+				   resource_func ==
+				   BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 				*res_hndl = fid_res->resource_em_handle;
 				return 0;
 			}
@@ -966,7 +970,8 @@ static void ulp_flow_db_res_dump(struct ulp_fdb_resource_info	*r,
 
 	BNXT_TF_DBG(DEBUG, "Resource func = %x, nxt_resource_idx = %x\n",
 		    res_func, (ULP_FLOW_DB_RES_NXT_MASK & r->nxt_resource_idx));
-	if (res_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE)
+	if (res_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    res_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		BNXT_TF_DBG(DEBUG, "EM Handle = 0x%016" PRIX64 "\n",
 			    r->resource_em_handle);
 	else
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 6fd55b2a2..e2b771c9f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -556,15 +556,18 @@ ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp,
 }
 
 static inline int32_t
-ulp_mapper_eem_entry_free(struct bnxt_ulp_context *ulp,
-			  struct tf *tfp,
-			  struct ulp_flow_db_res_params *res)
+ulp_mapper_em_entry_free(struct bnxt_ulp_context *ulp,
+			 struct tf *tfp,
+			 struct ulp_flow_db_res_params *res)
 {
 	struct tf_delete_em_entry_parms fparms = { 0 };
 	int32_t rc;
 
 	fparms.dir		= res->direction;
-	fparms.mem		= TF_MEM_EXTERNAL;
+	if (res->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE)
+		fparms.mem = TF_MEM_EXTERNAL;
+	else
+		fparms.mem = TF_MEM_INTERNAL;
 	fparms.flow_handle	= res->resource_hndl;
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
@@ -1443,7 +1446,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 
 	/* do the transpose for the internal EM keys */
-	if (tbl->resource_type == TF_MEM_INTERNAL)
+	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		ulp_blob_perform_byte_reverse(&key);
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
@@ -2066,7 +2069,8 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
 			break;
-		case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
 			rc = ulp_mapper_em_tbl_process(parms, tbl);
 			break;
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
@@ -2119,8 +2123,9 @@ ulp_mapper_resource_free(struct bnxt_ulp_context *ulp,
 	case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 		rc = ulp_mapper_tcam_entry_free(ulp, tfp, res);
 		break;
-	case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
-		rc = ulp_mapper_eem_entry_free(ulp, tfp, res);
+	case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+	case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
+		rc = ulp_mapper_em_entry_free(ulp, tfp, res);
 		break;
 	case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 		rc = ulp_mapper_index_entry_free(ulp, tfp, res);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 517422338..b3527eccb 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -87,6 +87,9 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	/* Need to allocate 2 * Num flows to account for hash type bit */
 	mark_tbl->gfid_num_entries = dparms->mark_db_gfid_entries;
+	if (!mark_tbl->gfid_num_entries)
+		goto gfid_not_required;
+
 	mark_tbl->gfid_tbl = rte_zmalloc("ulp_rx_eem_flow_mark_table",
 					 mark_tbl->gfid_num_entries *
 					 sizeof(struct bnxt_gfid_mark_info),
@@ -109,6 +112,7 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 		    mark_tbl->gfid_num_entries - 1,
 		    mark_tbl->gfid_mask);
 
+gfid_not_required:
 	/* Add the mark tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
 	return 0;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index d4c7bfa4d..8eb559050 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -179,7 +179,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -284,7 +284,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -389,7 +389,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 7c0dc5ee4..3168d29a9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -149,10 +149,11 @@ enum bnxt_ulp_flow_mem_type {
 };
 
 enum bnxt_ulp_glb_regfile_index {
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
+	BNXT_ULP_GLB_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 1,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR = 3,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 4
 };
 
 enum bnxt_ulp_hdr_type {
@@ -257,8 +258,8 @@ enum bnxt_ulp_match_type_bitmask {
 
 enum bnxt_ulp_resource_func {
 	BNXT_ULP_RESOURCE_FUNC_INVALID = 0x00,
-	BNXT_ULP_RESOURCE_FUNC_EM_TABLE = 0x20,
-	BNXT_ULP_RESOURCE_FUNC_RSVD1 = 0x40,
+	BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE = 0x20,
+	BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE = 0x40,
 	BNXT_ULP_RESOURCE_FUNC_RSVD2 = 0x60,
 	BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE = 0x80,
 	BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE = 0x81,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index beca3baa7..7c440e3a4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -321,7 +321,12 @@ struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	.mark_db_gfid_entries   = 65536,
 	.flow_count_db_entries  = 16384,
 	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2
+	.num_phy_ports          = 2,
+	.ext_cntr_table_type    = 0,
+	.byte_count_mask        = 0x00000003ffffffff,
+	.packet_count_mask      = 0xfffffffc00000000,
+	.byte_count_shift       = 0,
+	.packet_count_shift     = 36
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 4bcd02ba2..5a7a7b910 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -146,6 +146,11 @@ struct bnxt_ulp_device_params {
 	uint64_t			flow_db_num_entries;
 	uint32_t			flow_count_db_entries;
 	uint32_t			num_resources_per_flow;
+	uint32_t			ext_cntr_table_type;
+	uint64_t			byte_count_mask;
+	uint64_t			packet_count_mask;
+	uint32_t			byte_count_shift;
+	uint32_t			packet_count_shift;
 };
 
 /* Flow Mapper */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 39/51] net/bnxt: add support for conditional execution of mapper tables
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (37 preceding siblings ...)
  2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 40/51] net/bnxt: enable port MAC qcfg command for trusted VF Ajit Khaparde
                     ` (12 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for conditional execution of mapper tables so that,
for example, the table for the count action is processed only when
a count action is configured on the flow.
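
A small standalone sketch of the skip/continue idea, using a
count-style table as the example; everything named sketch_* below is
a simplified stand-in rather than the driver's actual definitions.

/*
 * Standalone sketch of the skip/continue contract for conditional
 * tables. The action-bit value, struct and skip_table() helper are
 * simplified stand-ins for the driver's action bitmap,
 * bnxt_ulp_mapper_tbl_info and ulp_mapper_tbl_cond_opcode_process().
 */
#include <stdint.h>
#include <stdio.h>

#define SKETCH_ACTION_BIT_COUNT	0x0000000000000020ULL	/* illustrative */

enum sketch_cond_opcode {
	SKETCH_COND_OPCODE_NOP = 0,
	SKETCH_COND_OPCODE_ACTION_BIT = 2
};

struct sketch_tbl {
	enum sketch_cond_opcode cond_opcode;
	uint64_t cond_operand;
};

/* returns non-zero when the table should be skipped */
static int skip_table(const struct sketch_tbl *tbl, uint64_t act_bits)
{
	if (tbl->cond_opcode == SKETCH_COND_OPCODE_NOP)
		return 0;
	if (tbl->cond_opcode == SKETCH_COND_OPCODE_ACTION_BIT)
		return !(act_bits & tbl->cond_operand);
	return 1;
}

int main(void)
{
	struct sketch_tbl counter_tbl = {
		.cond_opcode = SKETCH_COND_OPCODE_ACTION_BIT,
		.cond_operand = SKETCH_ACTION_BIT_COUNT,
	};

	printf("skip without count action: %d\n",
	       skip_table(&counter_tbl, 0));
	printf("skip with count action: %d\n",
	       skip_table(&counter_tbl, SKETCH_ACTION_BIT_COUNT));
	return 0;
}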

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 45 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  1 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |  8 ++++
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 12 ++++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  2 +
 5 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index e2b771c9f..d0931d411 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2008,6 +2008,44 @@ ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 	return rc;
 }
 
+/*
+ * Function to process the conditional opcode of the mapper table.
+ * returns 1 to skip the table.
+ * return 0 to continue processing the table.
+ */
+static int32_t
+ulp_mapper_tbl_cond_opcode_process(struct bnxt_ulp_mapper_parms *parms,
+				   struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	int32_t rc = 1;
+
+	switch (tbl->cond_opcode) {
+	case BNXT_ULP_COND_OPCODE_NOP:
+		rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_COMP_FIELD:
+		if (tbl->cond_operand < BNXT_ULP_CF_IDX_LAST &&
+		    ULP_COMP_FLD_IDX_RD(parms, tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_ACTION_BIT:
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_HDR_BIT:
+		if (ULP_BITMAP_ISSET(parms->hdr_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	default:
+		BNXT_TF_DBG(ERR,
+			    "Invalid arg in mapper tbl for cond opcode\n");
+		break;
+	}
+	return rc;
+}
+
 /*
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
@@ -2027,6 +2065,9 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	for (i = 0; i < parms->num_atbls; i++) {
 		tbl = &parms->atbls[i];
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
@@ -2065,6 +2106,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 	for (i = 0; i < parms->num_ctbls; i++) {
 		struct bnxt_ulp_mapper_tbl_info *tbl = &parms->ctbls[i];
 
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
@@ -2326,6 +2370,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	memset(&parms, 0, sizeof(parms));
 	parms.act_prop = cparms->act_prop;
 	parms.act_bitmap = cparms->act;
+	parms.hdr_bitmap = cparms->hdr_bitmap;
 	parms.regfile = &regfile;
 	parms.hdr_field = cparms->hdr_field;
 	parms.comp_fld = cparms->comp_fld;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index b159081b1..19134830a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -62,6 +62,7 @@ struct bnxt_ulp_mapper_parms {
 	uint32_t				num_ctbls;
 	struct ulp_rte_act_prop			*act_prop;
 	struct ulp_rte_act_bitmap		*act_bitmap;
+	struct ulp_rte_hdr_bitmap		*hdr_bitmap;
 	struct ulp_rte_hdr_field		*hdr_field;
 	uint32_t				*comp_fld;
 	struct ulp_regfile			*regfile;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 41ac77c6f..8fffaecce 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1128,6 +1128,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* update the computed field to notify it is ipv4 header */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
@@ -1148,6 +1152,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* update the computed field to notify it is ipv6 header */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 3168d29a9..27628a510 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -118,7 +118,17 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
 	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
 	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
-	BNXT_ULP_CF_IDX_LAST = 29
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG = 29,
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 30,
+	BNXT_ULP_CF_IDX_LAST = 31
+};
+
+enum bnxt_ulp_cond_opcode {
+	BNXT_ULP_COND_OPCODE_NOP = 0,
+	BNXT_ULP_COND_OPCODE_COMP_FIELD = 1,
+	BNXT_ULP_COND_OPCODE_ACTION_BIT = 2,
+	BNXT_ULP_COND_OPCODE_HDR_BIT = 3,
+	BNXT_ULP_COND_OPCODE_LAST = 4
 };
 
 enum bnxt_ulp_critical_resource {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5a7a7b910..df999b18c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -165,6 +165,8 @@ struct bnxt_ulp_mapper_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	uint32_t			resource_type; /* TF_ enum type */
 	enum bnxt_ulp_resource_sub_type	resource_sub_type;
+	enum bnxt_ulp_cond_opcode	cond_opcode;
+	uint32_t			cond_operand;
 	uint8_t		direction;
 	uint32_t	priority;
 	uint8_t		srch_b4_alloc;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 40/51] net/bnxt: enable port MAC qcfg command for trusted VF
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (38 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 39/51] net/bnxt: add support for conditional execution of mapper tables Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 41/51] net/bnxt: enhancements for port db Ajit Khaparde
                     ` (11 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue the HWRM_PORT_MAC_QCFG command on a trusted VF to fetch the port count.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 2605ef039..6ade32d1b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3194,14 +3194,14 @@ int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 
 	bp->port_svif = BNXT_SVIF_INVALID;
 
-	if (!BNXT_PF(bp))
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
 		return 0;
 
 	HWRM_PREP(&req, HWRM_PORT_MAC_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
-	HWRM_CHECK_RESULT();
+	HWRM_CHECK_RESULT_SILENT();
 
 	port_svif_info = rte_le_to_cpu_16(resp->port_svif_info);
 	if (port_svif_info &
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 41/51] net/bnxt: enhancements for port db
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (39 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 40/51] net/bnxt: enable port MAC qcfg command for trusted VF Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
                     ` (10 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

1. Add "enum bnxt_ulp_intf_type” as the second parameter for the
   port & func helper functions
2. Return vfrep related port & func information in the helper functions
3. Allocate phy_port_list dynamically based on port count
4. Introduce ulp_func_id_tbl array for book keeping func related
   information indexed by func_id
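
For items 1 and 2, a minimal standalone sketch of the lookup pattern
the helpers now share (types, fields and values are simplified
stand-ins for the driver structures):

/*
 * Standalone sketch of the pattern the helpers now follow: the caller
 * passes an interface type so the same function can answer either for
 * the VF representor itself or for its parent port.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum sketch_intf_type {
	SKETCH_INTF_TYPE_INVALID = 0,
	SKETCH_INTF_TYPE_PF,
	SKETCH_INTF_TYPE_TRUSTED_VF,
	SKETCH_INTF_TYPE_VF,
	SKETCH_INTF_TYPE_VF_REP
};

struct sketch_port {
	bool is_representor;
	uint16_t own_svif;		/* representor's own svif */
	uint16_t parent_func_svif;	/* parent PF/VF function svif */
};

static uint16_t sketch_get_svif(const struct sketch_port *p,
				enum sketch_intf_type type)
{
	if (p->is_representor && type == SKETCH_INTF_TYPE_VF_REP)
		return p->own_svif;
	/* otherwise fall through to the parent device's value */
	return p->parent_func_svif;
}

int main(void)
{
	struct sketch_port rep = {
		.is_representor = true,
		.own_svif = 0x12,
		.parent_func_svif = 0x34,
	};

	printf("vf-rep svif 0x%x, parent svif 0x%x\n",
	       sketch_get_svif(&rep, SKETCH_INTF_TYPE_VF_REP),
	       sketch_get_svif(&rep, SKETCH_INTF_TYPE_INVALID));
	return 0;
}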

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c           |  64 ++++++++--
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h |   6 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c       |   2 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c  |   9 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.c    | 143 +++++++++++++++++------
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    |  56 +++++++--
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  22 +++-
 8 files changed, 250 insertions(+), 62 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 43e5e7162..32acced60 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -23,6 +23,7 @@
 
 #include "tf_core.h"
 #include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
 
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
@@ -879,10 +880,11 @@ extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops;
 int32_t bnxt_ulp_init(struct bnxt *bp);
 void bnxt_ulp_deinit(struct bnxt *bp);
 
-uint16_t bnxt_get_vnic_id(uint16_t port);
-uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
-uint16_t bnxt_get_fw_func_id(uint16_t port);
-uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif,
+		       enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_parif(uint16_t port, enum bnxt_ulp_intf_type type);
 uint16_t bnxt_get_phy_port_id(uint16_t port);
 uint16_t bnxt_get_vport(uint16_t port);
 enum bnxt_ulp_intf_type
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 355025741..332644d77 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5067,25 +5067,48 @@ static void bnxt_config_vf_req_fwd(struct bnxt *bp)
 }
 
 uint16_t
-bnxt_get_svif(uint16_t port_id, bool func_svif)
+bnxt_get_svif(uint16_t port_id, bool func_svif,
+	      enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->svif;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return func_svif ? bp->func_svif : bp->port_svif;
 }
 
 uint16_t
-bnxt_get_vnic_id(uint16_t port)
+bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt_vnic_info *vnic;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->dflt_vnic_id;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
@@ -5094,12 +5117,23 @@ bnxt_get_vnic_id(uint16_t port)
 }
 
 uint16_t
-bnxt_get_fw_func_id(uint16_t port)
+bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return bp->fw_fid;
@@ -5116,8 +5150,14 @@ bnxt_get_interface_type(uint16_t port)
 		return BNXT_ULP_INTF_TYPE_VF_REP;
 
 	bp = eth_dev->data->dev_private;
-	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
-			   : BNXT_ULP_INTF_TYPE_VF;
+	if (BNXT_PF(bp))
+		return BNXT_ULP_INTF_TYPE_PF;
+	else if (BNXT_VF_IS_TRUSTED(bp))
+		return BNXT_ULP_INTF_TYPE_TRUSTED_VF;
+	else if (BNXT_VF(bp))
+		return BNXT_ULP_INTF_TYPE_VF;
+
+	return BNXT_ULP_INTF_TYPE_INVALID;
 }
 
 uint16_t
@@ -5130,6 +5170,9 @@ bnxt_get_phy_port_id(uint16_t port_id)
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
 		vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
 		eth_dev = vfr->parent_dev;
 	}
 
@@ -5139,15 +5182,20 @@ bnxt_get_phy_port_id(uint16_t port_id)
 }
 
 uint16_t
-bnxt_get_parif(uint16_t port_id)
+bnxt_get_parif(uint16_t port_id, enum bnxt_ulp_intf_type type)
 {
-	struct bnxt_vf_representor *vfr;
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
-		vfr = eth_dev->data->dev_private;
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid - 1;
+
 		eth_dev = vfr->parent_dev;
 	}
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f772d4919..ebb71405b 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -6,6 +6,11 @@
 #ifndef _BNXT_TF_COMMON_H_
 #define _BNXT_TF_COMMON_H_
 
+#include <inttypes.h>
+
+#include "bnxt_ulp.h"
+#include "ulp_template_db_enum.h"
+
 #define BNXT_TF_DBG(lvl, fmt, args...)	PMD_DRV_LOG(lvl, fmt, ## args)
 
 #define BNXT_ULP_EM_FLOWS			8192
@@ -48,6 +53,7 @@ enum ulp_direction_type {
 enum bnxt_ulp_intf_type {
 	BNXT_ULP_INTF_TYPE_INVALID = 0,
 	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_TRUSTED_VF,
 	BNXT_ULP_INTF_TYPE_VF,
 	BNXT_ULP_INTF_TYPE_PF_REP,
 	BNXT_ULP_INTF_TYPE_VF_REP,
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 1b52861d4..e5e7e5f43 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -658,7 +658,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	rc = ulp_dparms_init(bp, bp->ulp_ctx);
 
 	/* create the port database */
-	rc = ulp_port_db_init(bp->ulp_ctx);
+	rc = ulp_port_db_init(bp->ulp_ctx, bp->port_cnt);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to create the port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 6eb2d6146..138b0b73d 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -128,7 +128,8 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	mapper_cparms.act_prop = &params.act_prop;
 	mapper_cparms.class_tid = class_id;
 	mapper_cparms.act_tid = act_tmpl;
-	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id,
+						    BNXT_ULP_INTF_TYPE_INVALID);
 	mapper_cparms.dir = params.dir;
 
 	/* Call the ulp mapper to create the flow in the hardware. */
@@ -226,7 +227,8 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	}
 
 	flow_id = (uint32_t)(uintptr_t)flow;
-	func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	func_id = bnxt_get_fw_func_id(dev->data->port_id,
+				      BNXT_ULP_INTF_TYPE_INVALID);
 
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
@@ -270,7 +272,8 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	if (ulp_ctx_deinit_allowed(bp)) {
 		ret = ulp_flow_db_session_flow_flush(ulp_ctx);
 	} else if (bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx)) {
-		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id);
+		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id,
+					      BNXT_ULP_INTF_TYPE_INVALID);
 		ret = ulp_flow_db_function_flow_flush(ulp_ctx, func_id);
 	}
 	if (ret)
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index ea27ef41f..659cefa07 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -33,7 +33,7 @@ ulp_port_db_allocate_ifindex(struct bnxt_ulp_port_db *port_db)
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt)
 {
 	struct bnxt_ulp_port_db *port_db;
 
@@ -60,6 +60,18 @@ int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
 			    "Failed to allocate mem for port interface list\n");
 		goto error_free;
 	}
+
+	/* Allocate the phy port list */
+	port_db->phy_port_list = rte_zmalloc("bnxt_ulp_phy_port_list",
+					     port_cnt *
+					     sizeof(struct ulp_phy_port_info),
+					     0);
+	if (!port_db->phy_port_list) {
+		BNXT_TF_DBG(ERR,
+			    "Failed to allocate mem for phy port list\n");
+		goto error_free;
+	}
+
 	return 0;
 
 error_free:
@@ -89,6 +101,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 	bnxt_ulp_cntxt_ptr2_port_db_set(ulp_ctxt, NULL);
 
 	/* Free up all the memory. */
+	rte_free(port_db->phy_port_list);
 	rte_free(port_db->ulp_intf_list);
 	rte_free(port_db);
 	return 0;
@@ -110,6 +123,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	struct ulp_phy_port_info *port_data;
 	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	struct ulp_func_if_info *func;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -134,20 +148,48 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	intf = &port_db->ulp_intf_list[ifindex];
 
 	intf->type = bnxt_get_interface_type(port_id);
+	intf->drv_func_id = bnxt_get_fw_func_id(port_id,
+						BNXT_ULP_INTF_TYPE_INVALID);
+
+	func = &port_db->ulp_func_id_tbl[intf->drv_func_id];
+	if (!func->func_valid) {
+		func->func_svif = bnxt_get_svif(port_id, true,
+						BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_spif = bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+		func->func_valid = true;
+	}
 
-	intf->func_id = bnxt_get_fw_func_id(port_id);
-	intf->func_svif = bnxt_get_svif(port_id, 1);
-	intf->func_spif = bnxt_get_phy_port_id(port_id);
-	intf->func_parif = bnxt_get_parif(port_id);
-	intf->default_vnic = bnxt_get_vnic_id(port_id);
-	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+	if (intf->type == BNXT_ULP_INTF_TYPE_VF_REP) {
+		intf->vf_func_id =
+			bnxt_get_fw_func_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+
+		func = &port_db->ulp_func_id_tbl[intf->vf_func_id];
+		func->func_svif =
+			bnxt_get_svif(port_id, true, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->func_spif =
+			bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+	}
 
-	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
-		port_data = &port_db->phy_port_list[intf->phy_port_id];
-		port_data->port_svif = bnxt_get_svif(port_id, 0);
+	port_data = &port_db->phy_port_list[func->phy_port_id];
+	if (!port_data->port_valid) {
+		port_data->port_svif =
+			bnxt_get_svif(port_id, false,
+				      BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_spif = bnxt_get_phy_port_id(port_id);
-		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_vport = bnxt_get_vport(port_id);
+		port_data->port_valid = true;
 	}
 
 	return 0;
@@ -194,6 +236,7 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 			    uint32_t ifindex,
+			    uint32_t fid_type,
 			    uint16_t *func_id)
 {
 	struct bnxt_ulp_port_db *port_db;
@@ -203,7 +246,12 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*func_id =  port_db->ulp_intf_list[ifindex].func_id;
+
+	if (fid_type == BNXT_ULP_DRV_FUNC_FID)
+		*func_id =  port_db->ulp_intf_list[ifindex].drv_func_id;
+	else
+		*func_id =  port_db->ulp_intf_list[ifindex].vf_func_id;
+
 	return 0;
 }
 
@@ -212,7 +260,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * svif_type [in] the svif type of the given ifindex.
  * svif [out] the svif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -220,21 +268,27 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t svif_type,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*svif = port_db->ulp_intf_list[ifindex].func_svif;
+
+	if (svif_type == BNXT_ULP_DRV_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
+	} else if (svif_type == BNXT_ULP_VF_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*svif = port_db->phy_port_list[phy_port_id].port_svif;
 	}
 
@@ -246,7 +300,7 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * spif_type [in] the spif type of the given ifindex.
  * spif [out] the spif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -254,21 +308,27 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t spif_type,
 		     uint16_t *spif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+
+	if (spif_type == BNXT_ULP_DRV_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
+	} else if (spif_type == BNXT_ULP_VF_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*spif = port_db->phy_port_list[phy_port_id].port_spif;
 	}
 
@@ -280,7 +340,7 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * parif_type [in] the parif type of the given ifindex.
  * parif [out] the parif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -288,21 +348,26 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t parif_type,
 		     uint16_t *parif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	if (parif_type == BNXT_ULP_DRV_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
+	} else if (parif_type == BNXT_ULP_VF_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*parif = port_db->phy_port_list[phy_port_id].port_parif;
 	}
 
@@ -321,16 +386,26 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 			     uint32_t ifindex,
+			     uint32_t vnic_type,
 			     uint16_t *vnic)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	} else {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	}
+
 	return 0;
 }
 
@@ -348,14 +423,16 @@ ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 		      uint32_t ifindex, uint16_t *vport)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+
+	func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+	phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 	*vport = port_db->phy_port_list[phy_port_id].port_vport;
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 87de3bcbc..b1419a34c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -9,19 +9,54 @@
 #include "bnxt_ulp.h"
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
+#define BNXT_PORT_DB_MAX_FUNC			2048
 
-/* Structure for the Port database resource information. */
-struct ulp_interface_info {
-	enum bnxt_ulp_intf_type	type;
-	uint16_t		func_id;
+enum bnxt_ulp_svif_type {
+	BNXT_ULP_DRV_FUNC_SVIF = 0,
+	BNXT_ULP_VF_FUNC_SVIF,
+	BNXT_ULP_PHY_PORT_SVIF
+};
+
+enum bnxt_ulp_spif_type {
+	BNXT_ULP_DRV_FUNC_SPIF = 0,
+	BNXT_ULP_VF_FUNC_SPIF,
+	BNXT_ULP_PHY_PORT_SPIF
+};
+
+enum bnxt_ulp_parif_type {
+	BNXT_ULP_DRV_FUNC_PARIF = 0,
+	BNXT_ULP_VF_FUNC_PARIF,
+	BNXT_ULP_PHY_PORT_PARIF
+};
+
+enum bnxt_ulp_vnic_type {
+	BNXT_ULP_DRV_FUNC_VNIC = 0,
+	BNXT_ULP_VF_FUNC_VNIC
+};
+
+enum bnxt_ulp_fid_type {
+	BNXT_ULP_DRV_FUNC_FID,
+	BNXT_ULP_VF_FUNC_FID
+};
+
+struct ulp_func_if_info {
+	uint16_t		func_valid;
 	uint16_t		func_svif;
 	uint16_t		func_spif;
 	uint16_t		func_parif;
-	uint16_t		default_vnic;
+	uint16_t		func_vnic;
 	uint16_t		phy_port_id;
 };
 
+/* Structure for the Port database resource information. */
+struct ulp_interface_info {
+	enum bnxt_ulp_intf_type	type;
+	uint16_t		drv_func_id;
+	uint16_t		vf_func_id;
+};
+
 struct ulp_phy_port_info {
+	uint16_t	port_valid;
 	uint16_t	port_svif;
 	uint16_t	port_spif;
 	uint16_t	port_parif;
@@ -35,7 +70,8 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
-	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	*phy_port_list;
+	struct ulp_func_if_info		ulp_func_id_tbl[BNXT_PORT_DB_MAX_FUNC];
 };
 
 /*
@@ -46,7 +82,7 @@ struct bnxt_ulp_port_db {
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt);
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt);
 
 /*
  * Deinitialize the port database. Memory is deallocated in
@@ -94,7 +130,8 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex, uint16_t *func_id);
+			    uint32_t ifindex, uint32_t fid_type,
+			    uint16_t *func_id);
 
 /*
  * Api to get the svif for a given ulp ifindex.
@@ -150,7 +187,8 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex, uint16_t *vnic);
+			     uint32_t ifindex, uint32_t vnic_type,
+			     uint16_t *vnic);
 
 /*
  * Api to get the vport id for a given ulp ifindex.
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 8fffaecce..073b3537f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -166,6 +166,8 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 	uint16_t port_id = svif;
 	uint32_t dir = 0;
 	struct ulp_rte_hdr_field *hdr_field;
+	enum bnxt_ulp_svif_type svif_type;
+	enum bnxt_ulp_intf_type if_type;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -187,7 +189,18 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 				    "Invalid port id\n");
 			return BNXT_TF_RC_ERROR;
 		}
-		ulp_port_db_svif_get(params->ulp_ctx, ifindex, dir, &svif);
+
+		if (dir == ULP_DIR_INGRESS) {
+			svif_type = BNXT_ULP_PHY_PORT_SVIF;
+		} else {
+			if_type = bnxt_get_interface_type(port_id);
+			if (if_type == BNXT_ULP_INTF_TYPE_VF_REP)
+				svif_type = BNXT_ULP_VF_FUNC_SVIF;
+			else
+				svif_type = BNXT_ULP_DRV_FUNC_SVIF;
+		}
+		ulp_port_db_svif_get(params->ulp_ctx, ifindex, svif_type,
+				     &svif);
 		svif = rte_cpu_to_be_16(svif);
 	}
 	hdr_field = &params->hdr_field[BNXT_ULP_PROTO_HDR_FIELD_SVIF_IDX];
@@ -1256,7 +1269,7 @@ ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
 
 	/* copy the PF of the current device into VNIC Property */
 	svif = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
-	svif = bnxt_get_vnic_id(svif);
+	svif = bnxt_get_vnic_id(svif, BNXT_ULP_INTF_TYPE_INVALID);
 	svif = rte_cpu_to_be_32(svif);
 	memcpy(&params->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 	       &svif, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1280,7 +1293,8 @@ ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using VF conversion */
-		pid = bnxt_get_vnic_id(vf_action->id);
+		pid = bnxt_get_vnic_id(vf_action->id,
+				       BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1307,7 +1321,7 @@ ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using port conversion */
-		pid = bnxt_get_vnic_id(port_id->id);
+		pid = bnxt_get_vnic_id(port_id->id, BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 42/51] net/bnxt: manage VF to VFR conduit
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (40 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 41/51] net/bnxt: enhancements for port db Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 43/51] net/bnxt: parse representor along with other dev-args Ajit Khaparde
                     ` (9 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

When a VF-VFR conduit is created, a mark is added to the mark database.
mark_flag indicates whether the mark is valid and carries VF representor
information (the VFR_ID bit in mark_flag). The Rx path checks for this
VFR_ID bit; however, the bit was not being set in mark_flag while adding
the mark to the mark database. Set the VFR_ID bit when the mark is added
for a VF-VFR conduit.

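For illustration, a minimal sketch of a caller adding such a mark for a
VF-representor conduit is shown below; the parameter order of
ulp_mark_db_mark_add() after the context pointer, and the 'ctxt', 'fid'
and 'mark' values, are assumptions rather than part of this patch:

	/* Hypothetical caller: request that the VFR_ID bit be latched
	 * into the mark entry's flags when the mark is added.
	 */
	uint32_t mark_flag = BNXT_ULP_MARK_VFR_ID;
	int32_t rc;

	rc = ulp_mark_db_mark_add(ctxt, mark_flag, fid, mark);
	if (rc)
		BNXT_TF_DBG(ERR, "Failed to add mark for VFR conduit\n");
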
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index b3527eccb..b2c8c349c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -18,6 +18,8 @@
 						BNXT_ULP_MARK_VALID)
 #define ULP_MARK_DB_ENTRY_IS_INVALID(mark_info) (!((mark_info)->flags &\
 						   BNXT_ULP_MARK_VALID))
+#define ULP_MARK_DB_ENTRY_SET_VFR_ID(mark_info) ((mark_info)->flags |=\
+						 BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_VFR_ID(mark_info) ((mark_info)->flags &\
 						BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_GLOBAL_HW_FID(mark_info) ((mark_info)->flags &\
@@ -263,6 +265,9 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
+
+		if (mark_flag & BNXT_ULP_MARK_VFR_ID)
+			ULP_MARK_DB_ENTRY_SET_VFR_ID(&mtbl->lfid_tbl[fid]);
 	}
 
 	return 0;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 43/51] net/bnxt: parse representor along with other dev-args
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (41 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 44/51] net/bnxt: fill mapper parameters with default rules info Ajit Khaparde
                     ` (8 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Representor dev-args need to be parsed during PCI probe, since they also
determine whether VF representor ports are subsequently probed.

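For example, with the dev-arg now accepted by the bnxt PMD, a representor
range can be requested at probe time in the usual way (the PCI address and
representor range below are illustrative only):

	testpmd -w 0000:06:02.0,representor=[0-2] -- -i
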
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 332644d77..0b38c84e3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -98,8 +98,10 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+#define BNXT_DEVARG_REPRESENTOR	"representor"
 
 static const char *const bnxt_dev_args[] = {
+	BNXT_DEVARG_REPRESENTOR,
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
 	BNXT_DEVARG_MAX_NUM_KFLOWS,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 44/51] net/bnxt: fill mapper parameters with default rules info
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (42 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 43/51] net/bnxt: parse representor along with other dev-args Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
                     ` (7 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Default rules are needed for packets to be punted between the following
entities in the non-offloaded path:
1. Device PORT to DPDK App
2. DPDK App to Device PORT
3. VF Representor to VF
4. VF to VF Representor

This patch fills all the relevant information in the computed fields
and the act_prop fields so that the flow mapper can create the necessary
tables in the hardware to enable the default rules.

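For illustration, a minimal sketch of how the default-flow API added by
this patch could be invoked is shown below; the class template id, the way
the port id is packed into value[], and the surrounding variables are
assumptions rather than part of this patch:

	struct ulp_tlv_param params[2];
	uint32_t flow_id = 0;
	int32_t rc;

	/* One TLV carrying the DPDK port id, terminated by TYPE_LAST. */
	memset(params, 0, sizeof(params));
	params[0].type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID;
	params[0].length = 2;
	params[0].value[0] = (port_id >> 8) & 0xff; /* packing assumed */
	params[0].value[1] = port_id & 0xff;
	params[1].type = BNXT_ULP_DF_PARAM_TYPE_LAST;

	rc = ulp_default_flow_create(eth_dev, params, class_tid, &flow_id);
	if (rc)
		BNXT_TF_DBG(ERR, "Failed to create default flow\n");
	/* flow_id is later given to ulp_default_flow_destroy() */
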
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c                |   6 +-
 drivers/net/bnxt/meson.build                  |   1 +
 drivers/net/bnxt/tf_ulp/Makefile              |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |  24 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |  30 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c       | 385 ++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  10 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |   3 +-
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |   5 +
 9 files changed, 444 insertions(+), 21 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 0b38c84e3..de8e11a6e 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1275,9 +1275,6 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
-	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_deinit(bp);
-
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
@@ -1333,6 +1330,9 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
+	if (BNXT_TRUFLOW_EN(bp))
+		bnxt_ulp_deinit(bp);
+
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 8f6ed419e..2939857ca 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -61,6 +61,7 @@ sources = files('bnxt_cpr.c',
 	'tf_ulp/ulp_rte_parser.c',
 	'tf_ulp/bnxt_ulp_flow.c',
 	'tf_ulp/ulp_port_db.c',
+	'tf_ulp/ulp_def_rules.c',
 
 	'rte_pmd_bnxt.c')
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 57341f876..3f1b43bae 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -16,3 +16,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index eecc09cea..3563f63fa 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -12,6 +12,8 @@
 
 #include "rte_ethdev.h"
 
+#include "ulp_template_db_enum.h"
+
 struct bnxt_ulp_data {
 	uint32_t			tbl_scope_id;
 	struct bnxt_ulp_mark_tbl	*mark_tbl;
@@ -49,6 +51,12 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+struct ulp_tlv_param {
+	enum bnxt_ulp_df_param_type type;
+	uint32_t length;
+	uint8_t value[16];
+};
+
 /*
  * Allow the deletion of context only for the bnxt device that
  * created the session
@@ -127,4 +135,20 @@ bnxt_ulp_cntxt_ptr2_port_db_set(struct bnxt_ulp_context	*ulp_ctx,
 struct bnxt_ulp_port_db *
 bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx);
 
+/* Function to create default flows. */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id);
+
+/* Function to destroy default flows. */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev,
+			 uint32_t flow_id);
+
+int
+bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		      struct rte_flow_error *error);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 138b0b73d..7ef306e58 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -207,7 +207,7 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev,
 }
 
 /* Function to destroy the rte flow. */
-static int
+int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 		      struct rte_flow *flow,
 		      struct rte_flow_error *error)
@@ -220,9 +220,10 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
@@ -233,17 +234,22 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
 		BNXT_TF_DBG(ERR, "Incorrect device params\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
-	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id);
-	if (ret)
-		rte_flow_error_set(error, -ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret) {
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+		if (error)
+			rte_flow_error_set(error, -ret,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
+	}
 
 	return ret;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
new file mode 100644
index 000000000..46b558f31
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt_tf_common.h"
+#include "ulp_template_struct.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_db_field.h"
+#include "ulp_utils.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+
+struct bnxt_ulp_def_param_handler {
+	int32_t (*vfr_func)(struct bnxt_ulp_context *ulp_ctx,
+			    struct ulp_tlv_param *param,
+			    struct bnxt_ulp_mapper_create_parms *mapper_params);
+};
+
+static int32_t
+ulp_set_svif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t svif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t svif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_svif_get(ulp_ctx, ifindex, svif_type, &svif);
+	if (rc)
+		return rc;
+
+	if (svif_type == BNXT_ULP_PHY_PORT_SVIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SVIF;
+	else if (svif_type == BNXT_ULP_DRV_FUNC_SVIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SVIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SVIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, svif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_spif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t spif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t spif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_spif_get(ulp_ctx, ifindex, spif_type, &spif);
+	if (rc)
+		return rc;
+
+	if (spif_type == BNXT_ULP_PHY_PORT_SPIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SPIF;
+	else if (spif_type == BNXT_ULP_DRV_FUNC_SPIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SPIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SPIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, spif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_parif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t  ifindex, uint8_t parif_type,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t parif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_parif_get(ulp_ctx, ifindex, parif_type, &parif);
+	if (rc)
+		return rc;
+
+	if (parif_type == BNXT_ULP_PHY_PORT_PARIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_PARIF;
+	else if (parif_type == BNXT_ULP_DRV_FUNC_PARIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_PARIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_PARIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, parif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vport_in_comp_fld(struct bnxt_ulp_context *ulp_ctx, uint32_t ifindex,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vport;
+	int rc;
+
+	rc = ulp_port_db_vport_get(ulp_ctx, ifindex, &vport);
+	if (rc)
+		return rc;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_PHY_PORT_VPORT,
+			    vport);
+	return 0;
+}
+
+static int32_t
+ulp_set_vnic_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t vnic_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vnic;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_default_vnic_get(ulp_ctx, ifindex, vnic_type, &vnic);
+	if (rc)
+		return rc;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_VNIC;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_VNIC;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, vnic);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vlan_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	struct ulp_rte_act_prop *act_prop = mapper_params->act_prop;
+
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_SET_VLAN_VID)) {
+		BNXT_TF_DBG(ERR,
+			    "VLAN already set, multiple VLANs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	port_id = rte_cpu_to_be_16(port_id);
+
+	ULP_BITMAP_SET(mapper_params->act->bits,
+		       BNXT_ULP_ACTION_BIT_SET_VLAN_VID);
+
+	memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG],
+	       &port_id, sizeof(port_id));
+
+	return 0;
+}
+
+static int32_t
+ulp_set_mark_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_MARK)) {
+		BNXT_TF_DBG(ERR,
+			    "MARK already set, multiple MARKs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_DEV_PORT_ID,
+			    port_id);
+
+	return 0;
+}
+
+static int32_t
+ulp_df_dev_port_handler(struct bnxt_ulp_context *ulp_ctx,
+			struct ulp_tlv_param *param,
+			struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t port_id;
+	uint32_t ifindex;
+	int rc;
+
+	port_id = param->value[0] | param->value[1];
+
+	rc = ulp_port_db_dev_port_to_ulp_index(ulp_ctx, port_id, &ifindex);
+	if (rc) {
+		BNXT_TF_DBG(ERR,
+				"Invalid port id\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* Set port SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_PHY_PORT_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_DRV_FUNC_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_PARIF,
+				       mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set uplink VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, true, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, false, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VPORT */
+	rc = ulp_set_vport_in_comp_fld(ulp_ctx, ifindex, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VLAN */
+	rc = ulp_set_vlan_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set MARK */
+	rc = ulp_set_mark_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+struct bnxt_ulp_def_param_handler ulp_def_handler_tbl[] = {
+	[BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID] = {
+			.vfr_func = ulp_df_dev_port_handler }
+};
+
+/*
+ * Function to create default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * param_list [in] Ptr to a list of parameters (Currently, only DPDK port_id).
+ * ulp_class_tid [in] Class template ID number.
+ * flow_id [out] Ptr to flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id)
+{
+	struct ulp_rte_hdr_field	hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	uint32_t			comp_fld[BNXT_ULP_CF_IDX_LAST];
+	struct bnxt_ulp_mapper_create_parms mapper_params = { 0 };
+	struct ulp_rte_act_prop		act_prop;
+	struct ulp_rte_act_bitmap	act = { 0 };
+	struct bnxt_ulp_context		*ulp_ctx;
+	uint32_t type;
+	int rc;
+
+	memset(&mapper_params, 0, sizeof(mapper_params));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(comp_fld, 0, sizeof(comp_fld));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	mapper_params.hdr_field = hdr_field;
+	mapper_params.act = &act;
+	mapper_params.act_prop = &act_prop;
+	mapper_params.comp_fld = comp_fld;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized. "
+				 "Failed to create default flow.\n");
+		return -EINVAL;
+	}
+
+	type = param_list->type;
+	while (type != BNXT_ULP_DF_PARAM_TYPE_LAST) {
+		if (ulp_def_handler_tbl[type].vfr_func) {
+			rc = ulp_def_handler_tbl[type].vfr_func(ulp_ctx,
+								param_list,
+								&mapper_params);
+			if (rc) {
+				BNXT_TF_DBG(ERR,
+					    "Failed to create default flow.\n");
+				return rc;
+			}
+		}
+
+		param_list++;
+		type = param_list->type;
+	}
+
+	mapper_params.class_tid = ulp_class_tid;
+
+	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params, flow_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create default flow.\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/*
+ * Function to destroy default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * flow_id [in] Flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev, uint32_t flow_id)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int rc;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		return -EINVAL;
+	}
+
+	rc = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				     BNXT_ULP_DEFAULT_FLOW_TABLE);
+	if (rc)
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index d0931d411..e39398a1b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2274,16 +2274,15 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 }
 
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type)
 {
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
 		return -EINVAL;
 	}
 
-	return ulp_mapper_resources_free(ulp_ctx,
-					 fid,
-					 BNXT_ULP_REGULAR_FLOW_TABLE);
+	return ulp_mapper_resources_free(ulp_ctx, fid, flow_tbl_type);
 }
 
 /* Function to handle the default global templates that are allocated during
@@ -2486,7 +2485,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 flow_error:
 	/* Free all resources that were allocated during flow creation */
-	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
+	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
 	if (trc)
 		BNXT_TF_DBG(ERR, "Failed to free all resources rc=%d\n", trc);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 19134830a..b35065449 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -109,7 +109,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context	*ulp_ctx,
 
 /* Function that frees all resources associated with the flow. */
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type);
 
 /*
  * Function that frees all resources and can be called on default or regular
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 27628a510..2346797db 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -145,6 +145,11 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
+enum bnxt_ulp_df_param_type {
+	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
+	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
+};
+
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 45/51] net/bnxt: add VF-rep and stat templates
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (43 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 44/51] net/bnxt: fill mapper parameters with default rules info Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 46/51] net/bnxt: create default flow rules for the VF-rep conduit Ajit Khaparde
                     ` (6 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Support for VF representors and flow counters is added to the ULP
templates.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |   21 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    2 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  424 +-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5198 +++++++++++++----
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  409 +-
 .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   87 +-
 7 files changed, 4948 insertions(+), 1656 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index e39398a1b..3f175fb51 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -22,7 +22,7 @@ ulp_mapper_glb_resource_info_list_get(uint32_t *num_entries)
 {
 	if (!num_entries)
 		return NULL;
-	*num_entries = BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+	*num_entries = BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 	return ulp_glb_resource_tbl;
 }
 
@@ -119,11 +119,6 @@ ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_identifier(tfp, &fparms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
-		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
-#endif
 	return rc;
 }
 
@@ -182,11 +177,6 @@ ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_tbl_entry(tfp, &free_parms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
-		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
-#endif
 	return rc;
 }
 
@@ -1441,9 +1431,6 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			return rc;
 		}
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	ulp_mapper_result_dump("EEM Result", tbl, &data);
-#endif
 
 	/* do the transpose for the internal EM keys */
 	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
@@ -1594,10 +1581,6 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	/* if encap bit swap is enabled perform the bit swap */
 	if (parms->device_params->encap_byte_swap && encap_flds) {
 		ulp_blob_perform_encap_swap(&data);
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
-		ulp_mapper_blob_dump(&data);
-#endif
 	}
 
 	/*
@@ -2255,7 +2238,7 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Iterate the global resources and process each one */
 	for (dir = TF_DIR_RX; dir < TF_DIR_MAX; dir++) {
-		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 		      idx++) {
 			ent = &mapper_data->glb_res_tbl[dir][idx];
 			if (ent->resource_func ==
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index b35065449..f6d55449b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -46,7 +46,7 @@ struct bnxt_ulp_mapper_glb_resource_entry {
 
 struct bnxt_ulp_mapper_data {
 	struct bnxt_ulp_mapper_glb_resource_entry
-		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ];
+		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ];
 	struct bnxt_ulp_mapper_cache_entry
 		*cache_tbl[BNXT_ULP_CACHE_TBL_MAX_SZ];
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 9b14fa0bd..3d6507399 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -9,62 +9,293 @@
 #include "ulp_rte_parser.h"
 
 uint16_t ulp_act_sig_tbl[BNXT_ULP_ACT_SIG_TBL_MAX_SZ] = {
-	[BNXT_ULP_ACT_HID_00a1] = 1,
-	[BNXT_ULP_ACT_HID_0029] = 2,
-	[BNXT_ULP_ACT_HID_0040] = 3
+	[BNXT_ULP_ACT_HID_0002] = 1,
+	[BNXT_ULP_ACT_HID_0022] = 2,
+	[BNXT_ULP_ACT_HID_0026] = 3,
+	[BNXT_ULP_ACT_HID_0006] = 4,
+	[BNXT_ULP_ACT_HID_0009] = 5,
+	[BNXT_ULP_ACT_HID_0029] = 6,
+	[BNXT_ULP_ACT_HID_002d] = 7,
+	[BNXT_ULP_ACT_HID_004b] = 8,
+	[BNXT_ULP_ACT_HID_004a] = 9,
+	[BNXT_ULP_ACT_HID_004f] = 10,
+	[BNXT_ULP_ACT_HID_004e] = 11,
+	[BNXT_ULP_ACT_HID_006c] = 12,
+	[BNXT_ULP_ACT_HID_0070] = 13,
+	[BNXT_ULP_ACT_HID_0021] = 14,
+	[BNXT_ULP_ACT_HID_0025] = 15,
+	[BNXT_ULP_ACT_HID_0043] = 16,
+	[BNXT_ULP_ACT_HID_0042] = 17,
+	[BNXT_ULP_ACT_HID_0047] = 18,
+	[BNXT_ULP_ACT_HID_0046] = 19,
+	[BNXT_ULP_ACT_HID_0064] = 20,
+	[BNXT_ULP_ACT_HID_0068] = 21,
+	[BNXT_ULP_ACT_HID_00a1] = 22,
+	[BNXT_ULP_ACT_HID_00df] = 23
 };
 
 struct bnxt_ulp_act_match_info ulp_act_match_list[] = {
 	[1] = {
-	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_hid = BNXT_ULP_ACT_HID_0002,
 	.act_sig = { .bits =
-		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
-		BNXT_ULP_ACTION_BIT_MARK |
-		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DROP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
-	.act_tid = 0
+	.act_tid = 1
 	},
 	[2] = {
+	.act_hid = BNXT_ULP_ACT_HID_0022,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[3] = {
+	.act_hid = BNXT_ULP_ACT_HID_0026,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[4] = {
+	.act_hid = BNXT_ULP_ACT_HID_0006,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[5] = {
+	.act_hid = BNXT_ULP_ACT_HID_0009,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[6] = {
 	.act_hid = BNXT_ULP_ACT_HID_0029,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
 		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[7] = {
+	.act_hid = BNXT_ULP_ACT_HID_002d,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
 		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.act_tid = 1
 	},
-	[3] = {
-	.act_hid = BNXT_ULP_ACT_HID_0040,
+	[8] = {
+	.act_hid = BNXT_ULP_ACT_HID_004b,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[9] = {
+	.act_hid = BNXT_ULP_ACT_HID_004a,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[10] = {
+	.act_hid = BNXT_ULP_ACT_HID_004f,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[11] = {
+	.act_hid = BNXT_ULP_ACT_HID_004e,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[12] = {
+	.act_hid = BNXT_ULP_ACT_HID_006c,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[13] = {
+	.act_hid = BNXT_ULP_ACT_HID_0070,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[14] = {
+	.act_hid = BNXT_ULP_ACT_HID_0021,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[15] = {
+	.act_hid = BNXT_ULP_ACT_HID_0025,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[16] = {
+	.act_hid = BNXT_ULP_ACT_HID_0043,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[17] = {
+	.act_hid = BNXT_ULP_ACT_HID_0042,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[18] = {
+	.act_hid = BNXT_ULP_ACT_HID_0047,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[19] = {
+	.act_hid = BNXT_ULP_ACT_HID_0046,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[20] = {
+	.act_hid = BNXT_ULP_ACT_HID_0064,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[21] = {
+	.act_hid = BNXT_ULP_ACT_HID_0068,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[22] = {
+	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 2
+	},
+	[23] = {
+	.act_hid = BNXT_ULP_ACT_HID_00df,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_VXLAN_ENCAP |
 		BNXT_ULP_ACTION_BIT_VPORT |
 		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
-	.act_tid = 2
+	.act_tid = 3
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 0
+	.num_tbls = 2,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 1,
-	.start_tbl_idx = 1
+	.start_tbl_idx = 2,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 2
+	.num_tbls = 3,
+	.start_tbl_idx = 3,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_STATS_64,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_ACTION_BIT,
+	.cond_operand = BNXT_ULP_ACTION_BIT_COUNT,
+	.direction = TF_DIR_RX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 0,
+	.result_bit_size = 64,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
@@ -72,7 +303,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 0,
+	.result_start_idx = 1,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -87,7 +318,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 26,
+	.result_start_idx = 27,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -97,12 +328,46 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 53,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 56,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_TX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 52,
+	.result_start_idx = 59,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
@@ -114,10 +379,19 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
-	.field_bit_size = 14,
+	.field_bit_size = 64,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
 	.field_bit_size = 1,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
@@ -131,7 +405,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_COUNT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -187,7 +471,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -195,11 +489,7 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.result_operand = {
-		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
@@ -212,7 +502,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -224,7 +524,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DROP & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 14,
@@ -308,7 +618,11 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 12,
@@ -336,6 +650,50 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 128,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 14,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index 8eb559050..feac30af2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -10,8 +10,8 @@
 
 uint16_t ulp_class_sig_tbl[BNXT_ULP_CLASS_SIG_TBL_MAX_SZ] = {
 	[BNXT_ULP_CLASS_HID_0080] = 1,
-	[BNXT_ULP_CLASS_HID_0000] = 2,
-	[BNXT_ULP_CLASS_HID_0087] = 3
+	[BNXT_ULP_CLASS_HID_0087] = 2,
+	[BNXT_ULP_CLASS_HID_0000] = 3
 };
 
 struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
@@ -23,1871 +23,4722 @@ struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
 		BNXT_ULP_HDR_BIT_O_UDP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 0,
+	.class_tid = 8,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[2] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0000,
+	.class_hid = BNXT_ULP_CLASS_HID_0087,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
+		BNXT_ULP_HDR_BIT_T_VXLAN |
+		BNXT_ULP_HDR_BIT_I_ETH |
+		BNXT_ULP_HDR_BIT_I_IPV4 |
+		BNXT_ULP_HDR_BIT_I_UDP |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT |
+		BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 1,
+	.class_tid = 9,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[3] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0087,
+	.class_hid = BNXT_ULP_CLASS_HID_0000,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_HDR_BIT_T_VXLAN |
-		BNXT_ULP_HDR_BIT_I_ETH |
-		BNXT_ULP_HDR_BIT_I_IPV4 |
-		BNXT_ULP_HDR_BIT_I_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
 	.field_sig = { .bits =
-		BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT |
-		BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 2,
+	.class_tid = 10,
 	.act_vnic = 0,
 	.wc_pri = 0
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 4,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 2,
+	.start_tbl_idx = 4,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 6,
+	.start_tbl_idx = 6,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((4 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 0
+	.start_tbl_idx = 12,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((5 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 17,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((6 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 20,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((7 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 1,
+	.start_tbl_idx = 23,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((8 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 5
+	.start_tbl_idx = 24,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((9 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 5,
+	.start_tbl_idx = 29,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
+	},
+	[((10 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 10
+	.start_tbl_idx = 34,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 0,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
 	.result_start_idx = 0,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 0,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 2,
+	.key_start_idx = 0,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 1,
+	.result_start_idx = 26,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 15,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 14,
-	.result_bit_size = 10,
+	.result_start_idx = 39,
+	.result_bit_size = 32,
 	.result_num_fields = 1,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
-	.ident_nums = 1,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 40,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 41,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 18,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 15,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 13,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 67,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 60,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 23,
-	.result_bit_size = 64,
-	.result_num_fields = 9,
-	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 80,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 71,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 32,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 92,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 73,
+	.key_start_idx = 26,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 33,
+	.result_start_idx = 118,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 131,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 86,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 46,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.key_start_idx = 39,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 157,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
-	.ident_nums = 1,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_TX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 89,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 47,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 52,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 170,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 131,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 55,
+	.key_start_idx = 65,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 183,
 	.result_bit_size = 64,
-	.result_num_fields = 9,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 196,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 197,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 142,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 64,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 198,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
-	.ident_nums = 1,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_VFR_FLAG,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 144,
+	.key_start_idx = 78,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 65,
+	.result_start_idx = 224,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 157,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 78,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 237,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 249,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 160,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 79,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 91,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 275,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 202,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 87,
-	.result_bit_size = 64,
+	.result_start_idx = 288,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 104,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 314,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 117,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 327,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 340,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_GLOBAL,
+	.index_operand = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 130,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 366,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 132,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 367,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 145,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 380,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 148,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 381,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 190,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 389,
+	.result_bit_size = 64,
 	.result_num_fields = 9,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
-	}
-};
-
-struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 201,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 398,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 203,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 399,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 216,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 412,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 219,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 413,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 261,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 421,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 272,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 430,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 274,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 431,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 287,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 444,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 290,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 445,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 332,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 453,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	}
+};
+
+struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
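+/*
+ * Result field descriptors for the ULP class templates. Each entry gives
+ * the width of one result field in bits, the opcode the mapper uses to
+ * fill it and, where needed, a 16-byte operand. Register-file, global
+ * register-file and computed-field indices are encoded big-endian in the
+ * first two operand bytes, i.e. (INDEX >> 8) & 0xff followed by
+ * INDEX & 0xff; the remaining operand bytes are zero.
+ */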
+struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_PARIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_PARIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_DST_PORT & 0xff,
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT & 0xff,
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x02, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_DST_PORT & 0xff,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_DST_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
-	}
-};
-
-struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
 	.field_bit_size = 10,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
@@ -2309,7 +5160,12 @@ struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 16,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 2346797db..695546437 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -6,7 +6,7 @@
 #ifndef ULP_TEMPLATE_DB_H_
 #define ULP_TEMPLATE_DB_H_
 
-#define BNXT_ULP_REGFILE_MAX_SZ 16
+#define BNXT_ULP_REGFILE_MAX_SZ 17
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_CACHE_TBL_MAX_SZ 4
@@ -18,15 +18,15 @@
 #define BNXT_ULP_CLASS_HID_SHFTL 23
 #define BNXT_ULP_CLASS_HID_MASK 255
 #define BNXT_ULP_ACT_SIG_TBL_MAX_SZ 256
-#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 4
+#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 24
 #define BNXT_ULP_ACT_HID_LOW_PRIME 7919
 #define BNXT_ULP_ACT_HID_HIGH_PRIME 7919
-#define BNXT_ULP_ACT_HID_SHFTR 0
+#define BNXT_ULP_ACT_HID_SHFTR 23
 #define BNXT_ULP_ACT_HID_SHFTL 23
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
-#define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
-#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
+#define BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ 5
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -242,7 +242,8 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
 	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
 	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
-	BNXT_ULP_REGFILE_INDEX_LAST = 16
+	BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR = 16,
+	BNXT_ULP_REGFILE_INDEX_LAST = 17
 };
 
 enum bnxt_ulp_search_before_alloc {
@@ -252,18 +253,18 @@ enum bnxt_ulp_search_before_alloc {
 };
 
 enum bnxt_ulp_fdb_resource_flags {
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01,
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00,
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01
 };
 
 enum bnxt_ulp_fdb_type {
-	BNXT_ULP_FDB_TYPE_DEFAULT = 1,
-	BNXT_ULP_FDB_TYPE_REGULAR = 0
+	BNXT_ULP_FDB_TYPE_REGULAR = 0,
+	BNXT_ULP_FDB_TYPE_DEFAULT = 1
 };
 
 enum bnxt_ulp_flow_dir_bitmask {
-	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000,
-	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000
+	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000,
+	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000
 };
 
 enum bnxt_ulp_match_type_bitmask {
@@ -285,190 +286,66 @@ enum bnxt_ulp_resource_func {
 };
 
 enum bnxt_ulp_resource_sub_type {
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1
 };
 
 enum bnxt_ulp_sym {
-	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
-	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
-	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
-	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
-	BNXT_ULP_SYM_ECV_VALID_NO = 0,
-	BNXT_ULP_SYM_ECV_VALID_YES = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
-	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
-	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
-	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
-	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
-	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
-	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
-	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
-	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
-	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
-	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
-	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
-	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
-	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
-	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
-	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
-	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
-	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
-	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
-	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
-	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_PKT_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_PKT_TYPE_L2 = 0,
-	BNXT_ULP_SYM_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_POP_VLAN_YES = 1,
 	BNXT_ULP_SYM_RECYCLE_CNT_IGNORE = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
 	BNXT_ULP_SYM_RECYCLE_CNT_ONE = 1,
-	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
 	BNXT_ULP_SYM_RECYCLE_CNT_TWO = 2,
-	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
+	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
+	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
 	BNXT_ULP_SYM_RESERVED_IGNORE = 0,
-	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
-	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
+	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
 	BNXT_ULP_SYM_TL2_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_NO = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV4 = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_NO = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_YES = 1,
@@ -478,40 +355,164 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_TL4_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_TCP = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GENEVE = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GRE = 3,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV4 = 4,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_PPPOE = 6,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR1 = 8,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR2 = 9,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
-	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
+	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
+	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
+	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
+	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
+	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
+	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
+	BNXT_ULP_SYM_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
+	BNXT_ULP_SYM_ECV_VALID_NO = 0,
+	BNXT_ULP_SYM_ECV_VALID_YES = 1,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
+	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
+	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
 	BNXT_ULP_SYM_WH_PLUS_INT_ACT_REC = 1,
-	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
-	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
 	BNXT_ULP_SYM_WH_PLUS_UC_ACT_REC = 0,
+	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
+	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
+	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
+	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
+	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
+	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
+	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
+	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
+	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_YES = 1
 };
 
 enum bnxt_ulp_wh_plus {
-	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448
 };
 
 enum bnxt_ulp_act_prop_sz {
@@ -604,18 +605,44 @@ enum bnxt_ulp_act_prop_idx {
 
 enum bnxt_ulp_class_hid {
 	BNXT_ULP_CLASS_HID_0080 = 0x0080,
-	BNXT_ULP_CLASS_HID_0000 = 0x0000,
-	BNXT_ULP_CLASS_HID_0087 = 0x0087
+	BNXT_ULP_CLASS_HID_0087 = 0x0087,
+	BNXT_ULP_CLASS_HID_0000 = 0x0000
 };
 
 enum bnxt_ulp_act_hid {
-	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_0002 = 0x0002,
+	BNXT_ULP_ACT_HID_0022 = 0x0022,
+	BNXT_ULP_ACT_HID_0026 = 0x0026,
+	BNXT_ULP_ACT_HID_0006 = 0x0006,
+	BNXT_ULP_ACT_HID_0009 = 0x0009,
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
-	BNXT_ULP_ACT_HID_0040 = 0x0040
+	BNXT_ULP_ACT_HID_002d = 0x002d,
+	BNXT_ULP_ACT_HID_004b = 0x004b,
+	BNXT_ULP_ACT_HID_004a = 0x004a,
+	BNXT_ULP_ACT_HID_004f = 0x004f,
+	BNXT_ULP_ACT_HID_004e = 0x004e,
+	BNXT_ULP_ACT_HID_006c = 0x006c,
+	BNXT_ULP_ACT_HID_0070 = 0x0070,
+	BNXT_ULP_ACT_HID_0021 = 0x0021,
+	BNXT_ULP_ACT_HID_0025 = 0x0025,
+	BNXT_ULP_ACT_HID_0043 = 0x0043,
+	BNXT_ULP_ACT_HID_0042 = 0x0042,
+	BNXT_ULP_ACT_HID_0047 = 0x0047,
+	BNXT_ULP_ACT_HID_0046 = 0x0046,
+	BNXT_ULP_ACT_HID_0064 = 0x0064,
+	BNXT_ULP_ACT_HID_0068 = 0x0068,
+	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_00df = 0x00df
 };
 
 enum bnxt_ulp_df_tpl {
 	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
-	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2,
+	BNXT_ULP_DF_TPL_VFREP_TO_VF = 3,
+	BNXT_ULP_DF_TPL_VF_TO_VFREP = 4,
+	BNXT_ULP_DF_TPL_DRV_FUNC_SVIF_PUSH_VLAN = 5,
+	BNXT_ULP_DF_TPL_PORT_SVIF_VID_VNIC_POP_VLAN = 6,
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC = 7
 };
+
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
index 84b952304..769542042 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
@@ -6,220 +6,275 @@
 #ifndef ULP_HDR_FIELD_ENUMS_H_
 #define ULP_HDR_FIELD_ENUMS_H_
 
-enum bnxt_ulp_hf0 {
-	BNXT_ULP_HF0_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF0_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF0_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF0_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF0_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF0_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF0_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF0_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF0_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF0_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF0_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF0_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF0_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF0_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF0_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF0_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF0_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF0_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF0_IDX_O_UDP_CSUM              = 23
-};
-
 enum bnxt_ulp_hf1 {
-	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF1_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF1_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF1_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF1_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF1_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF1_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF1_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF1_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF1_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF1_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF1_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF1_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF1_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF1_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF1_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF1_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF1_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF1_IDX_O_UDP_CSUM              = 23
+	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0
 };
 
 enum bnxt_ulp_hf2 {
-	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF2_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF2_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF2_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF2_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF2_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF2_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF2_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF2_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF2_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF2_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF2_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF2_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF2_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF2_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF2_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF2_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF2_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF2_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF2_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF2_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF2_IDX_O_UDP_CSUM              = 23,
-	BNXT_ULP_HF2_IDX_T_VXLAN_FLAGS           = 24,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD0           = 25,
-	BNXT_ULP_HF2_IDX_T_VXLAN_VNI             = 26,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD1           = 27,
-	BNXT_ULP_HF2_IDX_I_ETH_DMAC              = 28,
-	BNXT_ULP_HF2_IDX_I_ETH_SMAC              = 29,
-	BNXT_ULP_HF2_IDX_I_ETH_TYPE              = 30,
-	BNXT_ULP_HF2_IDX_IO_VLAN_CFI_PRI         = 31,
-	BNXT_ULP_HF2_IDX_IO_VLAN_VID             = 32,
-	BNXT_ULP_HF2_IDX_IO_VLAN_TYPE            = 33,
-	BNXT_ULP_HF2_IDX_II_VLAN_CFI_PRI         = 34,
-	BNXT_ULP_HF2_IDX_II_VLAN_VID             = 35,
-	BNXT_ULP_HF2_IDX_II_VLAN_TYPE            = 36,
-	BNXT_ULP_HF2_IDX_I_IPV4_VER              = 37,
-	BNXT_ULP_HF2_IDX_I_IPV4_TOS              = 38,
-	BNXT_ULP_HF2_IDX_I_IPV4_LEN              = 39,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_ID          = 40,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_OFF         = 41,
-	BNXT_ULP_HF2_IDX_I_IPV4_TTL              = 42,
-	BNXT_ULP_HF2_IDX_I_IPV4_NEXT_PID         = 43,
-	BNXT_ULP_HF2_IDX_I_IPV4_CSUM             = 44,
-	BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR         = 45,
-	BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR         = 46,
-	BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT          = 47,
-	BNXT_ULP_HF2_IDX_I_UDP_DST_PORT          = 48,
-	BNXT_ULP_HF2_IDX_I_UDP_LENGTH            = 49,
-	BNXT_ULP_HF2_IDX_I_UDP_CSUM              = 50
-};
-
-enum bnxt_ulp_hf_bitmask0 {
-	BNXT_ULP_HF0_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf3 {
+	BNXT_ULP_HF3_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf4 {
+	BNXT_ULP_HF4_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf5 {
+	BNXT_ULP_HF5_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf6 {
+	BNXT_ULP_HF6_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf7 {
+	BNXT_ULP_HF7_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf8 {
+	BNXT_ULP_HF8_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF8_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF8_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF8_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF8_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF8_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF8_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF8_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF8_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF8_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF8_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF8_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF8_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF8_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF8_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF8_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF8_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF8_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF8_IDX_O_UDP_CSUM              = 23
+};
+
+enum bnxt_ulp_hf9 {
+	BNXT_ULP_HF9_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF9_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF9_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF9_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF9_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF9_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF9_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF9_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF9_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF9_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF9_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF9_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF9_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF9_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF9_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF9_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF9_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF9_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF9_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF9_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF9_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF9_IDX_O_UDP_CSUM              = 23,
+	BNXT_ULP_HF9_IDX_T_VXLAN_FLAGS           = 24,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD0           = 25,
+	BNXT_ULP_HF9_IDX_T_VXLAN_VNI             = 26,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD1           = 27,
+	BNXT_ULP_HF9_IDX_I_ETH_DMAC              = 28,
+	BNXT_ULP_HF9_IDX_I_ETH_SMAC              = 29,
+	BNXT_ULP_HF9_IDX_I_ETH_TYPE              = 30,
+	BNXT_ULP_HF9_IDX_IO_VLAN_CFI_PRI         = 31,
+	BNXT_ULP_HF9_IDX_IO_VLAN_VID             = 32,
+	BNXT_ULP_HF9_IDX_IO_VLAN_TYPE            = 33,
+	BNXT_ULP_HF9_IDX_II_VLAN_CFI_PRI         = 34,
+	BNXT_ULP_HF9_IDX_II_VLAN_VID             = 35,
+	BNXT_ULP_HF9_IDX_II_VLAN_TYPE            = 36,
+	BNXT_ULP_HF9_IDX_I_IPV4_VER              = 37,
+	BNXT_ULP_HF9_IDX_I_IPV4_TOS              = 38,
+	BNXT_ULP_HF9_IDX_I_IPV4_LEN              = 39,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_ID          = 40,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_OFF         = 41,
+	BNXT_ULP_HF9_IDX_I_IPV4_TTL              = 42,
+	BNXT_ULP_HF9_IDX_I_IPV4_PROTO_ID         = 43,
+	BNXT_ULP_HF9_IDX_I_IPV4_CSUM             = 44,
+	BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR         = 45,
+	BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR         = 46,
+	BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT          = 47,
+	BNXT_ULP_HF9_IDX_I_UDP_DST_PORT          = 48,
+	BNXT_ULP_HF9_IDX_I_UDP_LENGTH            = 49,
+	BNXT_ULP_HF9_IDX_I_UDP_CSUM              = 50
+};
+
+enum bnxt_ulp_hf10 {
+	BNXT_ULP_HF10_IDX_SVIF_INDEX             = 0,
+	BNXT_ULP_HF10_IDX_O_ETH_DMAC             = 1,
+	BNXT_ULP_HF10_IDX_O_ETH_SMAC             = 2,
+	BNXT_ULP_HF10_IDX_O_ETH_TYPE             = 3,
+	BNXT_ULP_HF10_IDX_OO_VLAN_CFI_PRI        = 4,
+	BNXT_ULP_HF10_IDX_OO_VLAN_VID            = 5,
+	BNXT_ULP_HF10_IDX_OO_VLAN_TYPE           = 6,
+	BNXT_ULP_HF10_IDX_OI_VLAN_CFI_PRI        = 7,
+	BNXT_ULP_HF10_IDX_OI_VLAN_VID            = 8,
+	BNXT_ULP_HF10_IDX_OI_VLAN_TYPE           = 9,
+	BNXT_ULP_HF10_IDX_O_IPV4_VER             = 10,
+	BNXT_ULP_HF10_IDX_O_IPV4_TOS             = 11,
+	BNXT_ULP_HF10_IDX_O_IPV4_LEN             = 12,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_ID         = 13,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_OFF        = 14,
+	BNXT_ULP_HF10_IDX_O_IPV4_TTL             = 15,
+	BNXT_ULP_HF10_IDX_O_IPV4_PROTO_ID        = 16,
+	BNXT_ULP_HF10_IDX_O_IPV4_CSUM            = 17,
+	BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR        = 18,
+	BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR        = 19,
+	BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT         = 20,
+	BNXT_ULP_HF10_IDX_O_UDP_DST_PORT         = 21,
+	BNXT_ULP_HF10_IDX_O_UDP_LENGTH           = 22,
+	BNXT_ULP_HF10_IDX_O_UDP_CSUM             = 23
 };
 
 enum bnxt_ulp_hf_bitmask1 {
-	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000
 };
 
 enum bnxt_ulp_hf_bitmask2 {
-	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_VID         = 0x0000000010000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_VER          = 0x0000000004000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_NEXT_PID     = 0x0000000000100000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_CSUM          = 0x0000000000002000
+	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask3 {
+	BNXT_ULP_HF3_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask4 {
+	BNXT_ULP_HF4_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask5 {
+	BNXT_ULP_HF5_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask6 {
+	BNXT_ULP_HF6_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask7 {
+	BNXT_ULP_HF7_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask8 {
+	BNXT_ULP_HF8_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+};
+
+enum bnxt_ulp_hf_bitmask9 {
+	BNXT_ULP_HF9_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_VID         = 0x0000000010000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_VER          = 0x0000000004000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_PROTO_ID     = 0x0000000000100000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_CSUM          = 0x0000000000002000
 };
 
+enum bnxt_ulp_hf_bitmask10 {
+	BNXT_ULP_HF10_BITMASK_SVIF_INDEX         = 0x8000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_DMAC         = 0x4000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_SMAC         = 0x2000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_TYPE         = 0x1000000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_CFI_PRI    = 0x0800000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_VID        = 0x0400000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_TYPE       = 0x0200000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_CFI_PRI    = 0x0100000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_VID        = 0x0080000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_TYPE       = 0x0040000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_VER         = 0x0020000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TOS         = 0x0010000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_LEN         = 0x0008000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_ID     = 0x0004000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_OFF    = 0x0002000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TTL         = 0x0001000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_PROTO_ID    = 0x0000800000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_CSUM        = 0x0000400000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR    = 0x0000200000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR    = 0x0000100000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT     = 0x0000080000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT     = 0x0000040000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_LENGTH       = 0x0000020000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_CSUM         = 0x0000010000000000
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 7c440e3a4..f0a57cf65 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -294,60 +294,72 @@ struct bnxt_ulp_rte_act_info ulp_act_info[] = {
 
 struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[] = {
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	}
 };
 
 struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
-	.flow_mem_type          = BNXT_ULP_FLOW_MEM_TYPE_EXT,
-	.byte_order             = BNXT_ULP_BYTE_ORDER_LE,
-	.encap_byte_swap        = 1,
-	.flow_db_num_entries    = 32768,
-	.mark_db_lfid_entries   = 65536,
-	.mark_db_gfid_entries   = 65536,
-	.flow_count_db_entries  = 16384,
-	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2,
-	.ext_cntr_table_type    = 0,
-	.byte_count_mask        = 0x00000003ffffffff,
-	.packet_count_mask      = 0xfffffffc00000000,
-	.byte_count_shift       = 0,
-	.packet_count_shift     = 36
+		.flow_mem_type           = BNXT_ULP_FLOW_MEM_TYPE_EXT,
+		.byte_order              = BNXT_ULP_BYTE_ORDER_LE,
+		.encap_byte_swap         = 1,
+		.flow_db_num_entries     = 32768,
+		.mark_db_lfid_entries    = 65536,
+		.mark_db_gfid_entries    = 65536,
+		.flow_count_db_entries   = 16384,
+		.num_resources_per_flow  = 8,
+		.num_phy_ports           = 2,
+		.ext_cntr_table_type     = 0,
+		.byte_count_mask         = 0x0000000fffffffff,
+		.packet_count_mask       = 0xffffffff00000000,
+		.byte_count_shift        = 0,
+		.packet_count_shift      = 36
 	}
 };
 
 struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[] = {
 	[0] = {
-	.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index       = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction               = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_RX
 	},
 	[1] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction          = TF_DIR_TX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_TX
 	},
 	[2] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_L2_CTXT,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
-	.direction          = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_RX
+	},
+	[3] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_TX
+	},
+	[4] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+		.resource_type           = TF_TBL_TYPE_FULL_ACT_RECORD,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR,
+		.direction               = TF_DIR_TX
 	}
 };
 
@@ -547,10 +559,11 @@ struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
 };
 
 uint32_t bnxt_ulp_encap_vtag_map[] = {
-	[0] = BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
-	[1] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
-	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
 
 uint32_t ulp_glb_template_tbl[] = {
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC
 };
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 46/51] net/bnxt: create default flow rules for the VF-rep conduit
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (44 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
                     ` (5 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Invoke three new APIs for default flow create/destroy and to get the
action pointer for a default flow.
Change ulp_intf_update() to accept an rte_eth_dev as input and invoke
it from the VF representor start function.
The ULP Mark Manager indicates whether the cfa_code returned in the
Rx completion descriptor belongs to one of the default flow rules
created for the VF representor conduit. In that case the mark_id
returned is the VF representor's DPDK port ID, which is used to look
up the corresponding rte_eth_dev in bnxt_vfr_recv().
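
As a rough illustration of the receive-side effect (the helper name
below is made up for illustration; the real dispatch happens inline in
bnxt_rx_pkt(), as shown in the diff):

    /* Sketch only, driver-internal types assumed: hand an mbuf whose
     * mark resolves to a VF representor over to that representor's Rx
     * path. Returns 0 when the representor consumed the packet.
     */
    static int
    bnxt_rx_mbuf_to_rep(struct bnxt *bp, struct bnxt_rx_queue *rxq,
                        struct rte_mbuf *mbuf, uint32_t mark_id,
                        uint32_t vfr_flag)
    {
        if (!BNXT_TRUFLOW_EN(bp) || !vfr_flag)
            return -1;      /* not a VF-rep conduit packet */

        /* mark_id carries the representor's DPDK port id */
        return bnxt_vfr_recv(mark_id, rxq->queue_id, mbuf) ? -1 : 0;
    }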

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |   4 +-
 drivers/net/bnxt/bnxt_reps.c | 134 ++++++++++++++++++++++++-----------
 drivers/net/bnxt/bnxt_reps.h |   3 +-
 drivers/net/bnxt/bnxt_rxr.c  |  25 +++----
 drivers/net/bnxt/bnxt_txq.h  |   1 +
 5 files changed, 111 insertions(+), 56 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 32acced60..f16bf3319 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -806,8 +806,10 @@ struct bnxt_vf_representor {
 	uint16_t		fw_fid;
 	uint16_t		dflt_vnic_id;
 	uint16_t		svif;
-	uint16_t		tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	uint16_t		rx_cfa_code;
+	uint32_t		rep2vf_flow_id;
+	uint32_t		vf2rep_flow_id;
 	/* Private data store of associated PF/Trusted VF */
 	struct rte_eth_dev	*parent_dev;
 	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index ea6f0010f..a37a06184 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -12,6 +12,9 @@
 #include "bnxt_txr.h"
 #include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
+#include "bnxt_tf_common.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
@@ -29,30 +32,20 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 };
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf)
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf)
 {
 	struct bnxt_sw_rx_bd *prod_rx_buf;
 	struct bnxt_rx_ring_info *rep_rxr;
 	struct bnxt_rx_queue *rep_rxq;
 	struct rte_eth_dev *vfr_eth_dev;
 	struct bnxt_vf_representor *vfr_bp;
-	uint16_t vf_id;
 	uint16_t mask;
 	uint8_t que;
 
-	vf_id = bp->cfa_code_map[cfa_code];
-	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
-	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
-		return 1;
-	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	vfr_eth_dev = &rte_eth_devices[port_id];
 	if (!vfr_eth_dev)
 		return 1;
 	vfr_bp = vfr_eth_dev->data->dev_private;
-	if (vfr_bp->rx_cfa_code != cfa_code) {
-		/* cfa_code not meant for this VF rep!!?? */
-		return 1;
-	}
 	/* If rxq_id happens to be > max rep_queue, use rxq0 */
 	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
 	rep_rxq = vfr_bp->rx_queues[que];
@@ -127,7 +120,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	pthread_mutex_lock(&parent->rep_info->vfr_lock);
 	ptxq = parent->tx_queues[qid];
 
-	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+	ptxq->vfr_tx_cfa_action = vf_rep_bp->vfr_tx_cfa_action;
 
 	for (i = 0; i < nb_pkts; i++) {
 		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
@@ -135,7 +128,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	}
 
 	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
-	ptxq->tx_cfa_action = 0;
+	ptxq->vfr_tx_cfa_action = 0;
 	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
 
 	return rc;
@@ -252,10 +245,67 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
-static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+static int bnxt_tf_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
+{
+	int rc;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
+	struct rte_eth_dev *parent_dev = vfr->parent_dev;
+	struct bnxt *parent_bp = parent_dev->data->dev_private;
+	uint16_t vfr_port_id = vfr_ethdev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(vfr_port_id >> 8) & 0xff, vfr_port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	ulp_port_db_dev_port_intf_update(parent_bp->ulp_ctx, vfr_ethdev);
+
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VFREP_TO_VF,
+				     &vfr->rep2vf_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VFR->VF failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VFR->VF! ***\n");
+	BNXT_TF_DBG(DEBUG, "rep2vf_flow_id = %d\n", vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_db_cfa_action_get(parent_bp->ulp_ctx,
+						vfr->rep2vf_flow_id,
+						&vfr->vfr_tx_cfa_action);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Failed to get action_ptr for VFR->VF dflt rule\n");
+		return -EIO;
+	}
+	BNXT_TF_DBG(DEBUG, "tx_cfa_action = %d\n", vfr->vfr_tx_cfa_action);
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VF_TO_VFREP,
+				     &vfr->vf2rep_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VF->VFR failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VF->VFR! ***\n");
+	BNXT_TF_DBG(DEBUG, "vf2rep_flow_id = %d\n", vfr->vf2rep_flow_id);
+
+	return 0;
+}
+
+static int bnxt_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
 {
 	int rc = 0;
-	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
 
 	if (!vfr || !vfr->parent_dev) {
 		PMD_DRV_LOG(ERR,
@@ -263,10 +313,8 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 		return -ENOMEM;
 	}
 
-	parent_bp = vfr->parent_dev->data->dev_private;
-
 	/* Check if representor has been already allocated in FW */
-	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+	if (vfr->vfr_tx_cfa_action && vfr->rx_cfa_code)
 		return 0;
 
 	/*
@@ -274,24 +322,14 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 	 * Otherwise the FW will create the VF-rep rules with
 	 * default drop action.
 	 */
-
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
-				     vfr->vf_id,
-				     &vfr->tx_cfa_action,
-				     &vfr->rx_cfa_code);
-	 */
-	if (!rc) {
-		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+	rc = bnxt_tf_vfr_alloc(vfr_ethdev);
+	if (!rc)
 		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
 			    vfr->vf_id);
-	} else {
+	else
 		PMD_DRV_LOG(ERR,
 			    "Failed to alloc representor %d in FW\n",
 			    vfr->vf_id);
-	}
 
 	return rc;
 }
@@ -312,7 +350,7 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
 	int rc;
 
-	rc = bnxt_vfr_alloc(rep_bp);
+	rc = bnxt_vfr_alloc(eth_dev);
 
 	if (!rc) {
 		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
@@ -327,6 +365,25 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	return rc;
 }
 
+static int bnxt_tf_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->rep2vf_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed rep2vf flowid: %d\n",
+			    vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->vf2rep_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed vf2rep flowid: %d\n",
+			    vfr->vf2rep_flow_id);
+	return 0;
+}
+
 static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 {
 	int rc = 0;
@@ -341,15 +398,10 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp = vfr->parent_dev->data->dev_private;
 
 	/* Check if representor has been already freed in FW */
-	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+	if (!vfr->vfr_tx_cfa_action && !vfr->rx_cfa_code)
 		return 0;
 
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
-				    vfr->vf_id);
-	 */
+	rc = bnxt_tf_vfr_free(vfr);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
 			    "Failed to free representor %d in FW\n",
@@ -360,7 +412,7 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
 	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
 		    vfr->vf_id);
-	vfr->tx_cfa_action = 0;
+	vfr->vfr_tx_cfa_action = 0;
 	vfr->rx_cfa_code = 0;
 
 	return rc;
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index 5c2e0a0b9..418b95afc 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -13,8 +13,7 @@
 #define BNXT_VF_IDX_INVALID             0xffff
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf);
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 37b534fc2..64058879e 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -403,9 +403,9 @@ bnxt_get_rx_ts_thor(struct bnxt *bp, uint32_t rx_ts_cmpl)
 }
 #endif
 
-static void
+static uint32_t
 bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
-			  struct rte_mbuf *mbuf)
+			  struct rte_mbuf *mbuf, uint32_t *vfr_flag)
 {
 	uint32_t cfa_code;
 	uint32_t meta_fmt;
@@ -415,8 +415,6 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	uint32_t flags2;
 	uint32_t gfid_support = 0;
 	int rc;
-	uint32_t vfr_flag;
-
 
 	if (BNXT_GFID_ENABLED(bp))
 		gfid_support = 1;
@@ -485,19 +483,21 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	}
 
 	rc = ulp_mark_db_mark_get(bp->ulp_ctx, gfid,
-				  cfa_code, &vfr_flag, &mark_id);
+				  cfa_code, vfr_flag, &mark_id);
 	if (!rc) {
 		/* Got the mark, write it to the mbuf and return */
 		mbuf->hash.fdir.hi = mark_id;
 		mbuf->udata64 = (cfa_code & 0xffffffffull) << 32;
 		mbuf->hash.fdir.id = rxcmp1->cfa_code;
 		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-		return;
+		return mark_id;
 	}
 
 skip_mark:
 	mbuf->hash.fdir.hi = 0;
 	mbuf->hash.fdir.id = 0;
+
+	return 0;
 }
 
 void bnxt_set_mark_in_mbuf(struct bnxt *bp,
@@ -553,7 +553,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	int rc = 0;
 	uint8_t agg_buf = 0;
 	uint16_t cmp_type;
-	uint32_t flags2_f = 0;
+	uint32_t flags2_f = 0, vfr_flag = 0, mark_id = 0;
 	uint16_t flags_type;
 	struct bnxt *bp = rxq->bp;
 
@@ -632,7 +632,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	}
 
 	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+		mark_id = bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf,
+						    &vfr_flag);
 	else
 		bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
 
@@ -736,10 +737,10 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
-	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
-	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
-		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
-				   mbuf)) {
+	if (BNXT_TRUFLOW_EN(bp) &&
+	    (BNXT_VF_IS_TRUSTED(bp) || BNXT_PF(bp)) &&
+	    vfr_flag) {
+		if (!bnxt_vfr_recv(mark_id, rxq->queue_id, mbuf)) {
 			/* Now return an error so that nb_rx_pkts is not
 			 * incremented.
 			 * This packet was meant to be given to the representor.
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 69ff89aab..a1ab3f39a 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -30,6 +30,7 @@ struct bnxt_tx_queue {
 	int			index;
 	int			tx_wake_thresh;
 	uint32_t                tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 47/51] net/bnxt: add port default rules for ingress and egress
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (45 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 46/51] net/bnxt: create default flow rules for the VF-rep conduit Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
                     ` (4 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Ingress and egress port default rules are needed to send packets in
the port_to_dpdk and dpdk_to_port directions, respectively.
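
Condensed from bnxt_create_df_rules() in the diff below (the wrapper
name here is illustrative and error unwinding is omitted), the two
rules and the egress action pointer are set up roughly as follows:

    /* Sketch: create both per-port default rules and cache the Tx
     * action pointer of the egress (app-to-port) rule for later use
     * in the Tx buffer descriptor.
     */
    static int bnxt_port_df_rules_sketch(struct bnxt *bp)
    {
        struct bnxt_ulp_data *cfg = bp->ulp_ctx->cfg_data;
        int rc;

        /* ingress: wire-to-DPDK default rule */
        rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_PORT_TO_VS,
                                          &cfg->port_to_app_flow_id);
        if (rc)
            return rc;

        /* egress: DPDK-to-wire default rule */
        rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_VS_TO_PORT,
                                          &cfg->app_to_port_flow_id);
        if (rc)
            return rc;

        /* the egress rule's action pointer feeds the Tx descriptors */
        return ulp_default_flow_db_cfa_action_get(bp->ulp_ctx,
                                                  cfg->app_to_port_flow_id,
                                                  &cfg->tx_cfa_action);
    }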

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c     | 76 +++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h |  3 ++
 2 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de8e11a6e..2a19c5040 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -29,6 +29,7 @@
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
 #include "bnxt_tf_common.h"
+#include "ulp_flow_db.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -1162,6 +1163,73 @@ static int bnxt_handle_if_change_status(struct bnxt *bp)
 	return rc;
 }
 
+static int32_t
+bnxt_create_port_app_df_rule(struct bnxt *bp, uint8_t flow_type,
+			     uint32_t *flow_id)
+{
+	uint16_t port_id = bp->eth_dev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(port_id >> 8) & 0xff, port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	return ulp_default_flow_create(bp->eth_dev, param_list, flow_type,
+				       flow_id);
+}
+
+static int32_t
+bnxt_create_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+	int rc;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_PORT_TO_VS,
+					  &cfg_data->port_to_app_flow_id);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to create port to app default rule\n");
+		return rc;
+	}
+
+	BNXT_TF_DBG(DEBUG, "***** created port to app default rule ******\n");
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_VS_TO_PORT,
+					  &cfg_data->app_to_port_flow_id);
+	if (!rc) {
+		rc = ulp_default_flow_db_cfa_action_get(bp->ulp_ctx,
+							cfg_data->app_to_port_flow_id,
+							&cfg_data->tx_cfa_action);
+		if (rc)
+			goto err;
+
+		BNXT_TF_DBG(DEBUG,
+			    "***** created app to port default rule *****\n");
+		return 0;
+	}
+
+err:
+	BNXT_TF_DBG(DEBUG, "Failed to create app to port default rule\n");
+	return rc;
+}
+
+static void
+bnxt_destroy_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->port_to_app_flow_id);
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->app_to_port_flow_id);
+}
+
 static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = eth_dev->data->dev_private;
@@ -1330,8 +1398,11 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
-	if (BNXT_TRUFLOW_EN(bp))
+	if (BNXT_TRUFLOW_EN(bp)) {
+		if (bp->rep_info != NULL)
+			bnxt_destroy_df_rules(bp);
 		bnxt_ulp_deinit(bp);
+	}
 
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
@@ -1581,6 +1652,9 @@ static int bnxt_promiscuous_disable_op(struct rte_eth_dev *eth_dev)
 	if (rc != 0)
 		vnic->flags = old_flags;
 
+	if (BNXT_TRUFLOW_EN(bp) && bp->rep_info != NULL)
+		bnxt_create_df_rules(bp);
+
 	return rc;
 }
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 3563f63fa..4843da562 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,9 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	uint32_t			port_to_app_flow_id;
+	uint32_t			app_to_port_flow_id;
+	uint32_t			tx_cfa_action;
 };
 
 struct bnxt_ulp_context {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 48/51] net/bnxt: fill cfa action in the Tx descriptor
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (46 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
                     ` (3 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Currently, only VF-representor transmit requires cfa_action to be
filled in the Tx buffer descriptor. With truflow, however, DPDK
(non-VF-rep) to port traffic also requires cfa_action to be filled
in the Tx buffer descriptor.

This patch selects the correct cfa_action pointer while transmitting
a packet: depending on whether the packet is sent on a VF representor
or not, vfr_tx_cfa_action or tx_cfa_action is written into the Tx
buffer descriptor.
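
Condensed, the selection in bnxt_start_xmit() amounts to the following
(same field names as in the diff below):

    /* Sketch: pick the action pointer written into the long Tx BD.
     * A representor Tx burst sets txq->vfr_tx_cfa_action for its
     * duration; plain DPDK-to-port traffic falls back to the per-port
     * default rule's action kept in the ULP context.
     */
    if (BNXT_TRUFLOW_EN(txq->bp)) {
        if (txq->vfr_tx_cfa_action)
            cfa_action = txq->vfr_tx_cfa_action;
        else
            cfa_action = txq->bp->ulp_ctx->cfg_data->tx_cfa_action;
        txbd1->cfa_action = cfa_action;
    }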

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index d7e193d38..f5884268e 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
+				PKT_TX_QINQ_PKT) ||
+	     txq->bp->ulp_ctx->cfg_data->tx_cfa_action ||
+	     txq->vfr_tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +186,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = txq->tx_cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp)) {
+			if (txq->vfr_tx_cfa_action)
+				cfa_action = txq->vfr_tx_cfa_action;
+			else
+				cfa_action =
+				      txq->bp->ulp_ctx->cfg_data->tx_cfa_action;
+		}
+
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
@@ -212,7 +222,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 					&txr->tx_desc_ring[txr->tx_prod];
 		txbd1->lflags = 0;
 		txbd1->cfa_meta = vlan_tag_flags;
-		txbd1->cfa_action = cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp))
+			txbd1->cfa_action = cfa_action;
 
 		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
 			uint16_t hdr_size;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 49/51] net/bnxt: add ULP Flow counter Manager
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (47 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
                     ` (2 subsequent siblings)
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

The Flow counter manager allocates two memory blocks: one holds the
software view of the counters, where the on-chip counter data is
accumulated, and the other shadows the on-chip counters, i.e. it is
the buffer into which the raw counter data is DMAed from the chip.
It also keeps track of the first HW counter ID, since that is needed
to retrieve the counter data in bulk using a TF API. The bulk read is
issued from an rte_alarm callback that reschedules itself every
second.
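
Condensed from ulp_bulk_get_flow_stats() in the patch below (tfp,
fc_info, dir and num_counters are assumed to be set up as in the
patch), one per-second accumulation pass per direction looks roughly
like this:

    struct tf_tbl_get_bulk_parms parms = {
        .dir               = dir,
        .type              = TF_TBL_TYPE_ACT_STATS_64,
        .starting_idx      = fc_info->shadow_hw_tbl[dir].start_idx,
        .num_entries       = num_counters,
        .entry_sz_in_bytes = sizeof(uint64_t),
        .physical_mem_addr = (uintptr_t)fc_info->shadow_hw_tbl[dir].mem_pa,
    };
    uint64_t *stats = (uint64_t *)fc_info->shadow_hw_tbl[dir].mem_va;
    uint16_t i;

    /* DMA the 64-bit stat words into the shadow buffer ... */
    if (!tf_tbl_bulk_get(tfp, &parms)) {
        /* ... then fold them into the software accumulators */
        for (i = 0; i < num_counters; i++) {
            struct sw_acc_counter *c = &fc_info->sw_acc_tbl[dir][i];

            if (!c->valid)
                continue;
            c->pkt_count  += FLOW_CNTR_PKTS(stats[i]);
            c->byte_count += FLOW_CNTR_BYTES(stats[i]);
        }
    }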

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_ulp/Makefile      |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    |  35 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h    |   8 +
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c  | 465 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h  | 148 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c |  27 ++
 7 files changed, 685 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 2939857ca..5fb0ed380 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -46,6 +46,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
 	'tf_core/tf_em_host.c',
+	'tf_ulp/ulp_fc_mgr.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 3f1b43bae..abb68150d 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -17,3 +17,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_fc_mgr.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index e5e7e5f43..c05861150 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -18,6 +18,7 @@
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "ulp_mark_mgr.h"
+#include "ulp_fc_mgr.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 #include "ulp_port_db.h"
@@ -705,6 +706,12 @@ bnxt_ulp_init(struct bnxt *bp)
 		goto jump_to_error;
 	}
 
+	rc = ulp_fc_mgr_init(bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize ulp flow counter mgr\n");
+		goto jump_to_error;
+	}
+
 	return rc;
 
 jump_to_error:
@@ -752,6 +759,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	/* cleanup the ulp mapper */
 	ulp_mapper_deinit(bp->ulp_ctx);
 
+	/* Delete the Flow Counter Manager */
+	ulp_fc_mgr_deinit(bp->ulp_ctx);
+
 	/* Delete the Port database */
 	ulp_port_db_deinit(bp->ulp_ctx);
 
@@ -963,3 +973,28 @@ bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx)
 
 	return ulp_ctx->cfg_data->port_db;
 }
+
+/* Function to set the flow counter info into the context */
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->fc_info = ulp_fc_info;
+
+	return 0;
+}
+
+/* Function to retrieve the flow counter info from the context. */
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->fc_info;
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 4843da562..a13328426 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,7 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	struct bnxt_ulp_fc_info		*fc_info;
 	uint32_t			port_to_app_flow_id;
 	uint32_t			app_to_port_flow_id;
 	uint32_t			tx_cfa_action;
@@ -154,4 +155,11 @@ int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *error);
 
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info);
+
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
new file mode 100644
index 000000000..f70d4a295
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -0,0 +1,465 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_alarm.h>
+#include "bnxt.h"
+#include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
+#include "ulp_fc_mgr.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_struct.h"
+#include "tf_tbl.h"
+
+static int
+ulp_fc_mgr_shadow_mem_alloc(struct hw_fc_mem_info *parms, int size)
+{
+	/* Allocate memory*/
+	if (parms == NULL)
+		return -EINVAL;
+
+	parms->mem_va = rte_zmalloc("ulp_fc_info",
+				    RTE_CACHE_LINE_ROUNDUP(size),
+				    4096);
+	if (parms->mem_va == NULL) {
+		BNXT_TF_DBG(ERR, "Allocate failed mem_va\n");
+		return -ENOMEM;
+	}
+
+	rte_mem_lock_page(parms->mem_va);
+
+	parms->mem_pa = (void *)(uintptr_t)rte_mem_virt2phy(parms->mem_va);
+	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
+		BNXT_TF_DBG(ERR, "Allocate failed mem_pa\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void
+ulp_fc_mgr_shadow_mem_free(struct hw_fc_mem_info *parms)
+{
+	rte_free(parms->mem_va);
+}
+
+/*
+ * Allocate and Initialize all Flow Counter Manager resources for this ulp
+ * context.
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager.
+ *
+ */
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id, sw_acc_cntr_tbl_sz, hw_fc_mem_info_sz;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i, rc;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(DEBUG, "Invalid ULP CTXT\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return -EINVAL;
+	}
+
+	ulp_fc_info = rte_zmalloc("ulp_fc_info", sizeof(*ulp_fc_info), 0);
+	if (!ulp_fc_info)
+		goto error;
+
+	rc = pthread_mutex_init(&ulp_fc_info->fc_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Failed to initialize fc mutex\n");
+		goto error;
+	}
+
+	/* Add the FC info tbl to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, ulp_fc_info);
+
+	sw_acc_cntr_tbl_sz = sizeof(struct sw_acc_counter) *
+				dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		ulp_fc_info->sw_acc_tbl[i] = rte_zmalloc("ulp_sw_acc_cntr_tbl",
+							 sw_acc_cntr_tbl_sz, 0);
+		if (!ulp_fc_info->sw_acc_tbl[i])
+			goto error;
+	}
+
+	hw_fc_mem_info_sz = sizeof(uint64_t) * dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_fc_mgr_shadow_mem_alloc(&ulp_fc_info->shadow_hw_tbl[i],
+						 hw_fc_mem_info_sz);
+		if (rc)
+			goto error;
+	}
+
+	return 0;
+
+error:
+	ulp_fc_mgr_deinit(ctxt);
+	BNXT_TF_DBG(DEBUG,
+		    "Failed to allocate memory for fc mgr\n");
+
+	return -ENOMEM;
+}
+
+/*
+ * Release all resources in the Flow Counter Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EINVAL;
+
+	ulp_fc_mgr_thread_cancel(ctxt);
+
+	pthread_mutex_destroy(&ulp_fc_info->fc_lock);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		rte_free(ulp_fc_info->sw_acc_tbl[i]);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		ulp_fc_mgr_shadow_mem_free(&ulp_fc_info->shadow_hw_tbl[i]);
+
+
+	rte_free(ulp_fc_info);
+
+	/* Safe to ignore on deinit */
+	(void)bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, NULL);
+
+	return 0;
+}
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	return !!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD);
+}
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD)) {
+		rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+				  ulp_fc_mgr_alarm_cb,
+				  (void *)ctxt);
+		ulp_fc_info->flags |= ULP_FLAG_FC_THREAD;
+	}
+
+	return 0;
+}
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	ulp_fc_info->flags &= ~ULP_FLAG_FC_THREAD;
+	rte_eal_alarm_cancel(ulp_fc_mgr_alarm_cb, (void *)ctxt);
+}
+
+/*
+ * DMA-in the raw counter data from the HW and accumulate in the
+ * local accumulator table using the TF-Core API
+ *
+ * tfp [in] The TF-Core context
+ *
+ * fc_info [in] The ULP Flow counter info ptr
+ *
+ * dir [in] The direction of the flow
+ *
+ * num_counters [in] The number of counters
+ *
+ */
+static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+				       struct bnxt_ulp_fc_info *fc_info,
+				       enum tf_dir dir, uint32_t num_counters)
+{
+	int rc = 0;
+	struct tf_tbl_get_bulk_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD: Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t *stats = NULL;
+	uint16_t i = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.starting_idx = fc_info->shadow_hw_tbl[dir].start_idx;
+	parms.num_entries = num_counters;
+	/*
+	 * TODO:
+	 * Size of an entry needs to obtained from template
+	 */
+	parms.entry_sz_in_bytes = sizeof(uint64_t);
+	stats = (uint64_t *)fc_info->shadow_hw_tbl[dir].mem_va;
+	parms.physical_mem_addr = (uintptr_t)fc_info->shadow_hw_tbl[dir].mem_pa;
+
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Memory not initialized id:0x%x dir:%d\n",
+			    parms.starting_idx, dir);
+		return -EINVAL;
+	}
+
+	rc = tf_tbl_bulk_get(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Get failed for id:0x%x rc:%d\n",
+			    parms.starting_idx, rc);
+		return rc;
+	}
+
+	for (i = 0; i < num_counters; i++) {
+		/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+		sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][i];
+		if (!sw_acc_tbl_entry->valid)
+			continue;
+		sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats[i]);
+		sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats[i]);
+	}
+
+	return rc;
+}
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg)
+{
+	int rc = 0, i;
+	struct bnxt_ulp_context *ctxt = arg;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct bnxt_ulp_device_params *dparms;
+	struct tf *tfp;
+	uint32_t dev_id;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return;
+	}
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ctxt);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return;
+	}
+
+	/*
+	 * Take the fc_lock to ensure no flow is destroyed
+	 * during the bulk get
+	 */
+	if (pthread_mutex_trylock(&ulp_fc_info->fc_lock))
+		goto out;
+
+	if (!ulp_fc_info->num_entries) {
+		pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
+					     dparms->flow_count_db_entries);
+		if (rc)
+			break;
+	}
+
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	/*
+	 * If cmd fails once, no need of
+	 * invoking again every second
+	 */
+
+	if (rc) {
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+out:
+	rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+			  ulp_fc_mgr_alarm_cb,
+			  (void *)ctxt);
+}
+
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager for the given direction
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * Returns true if the starting counter ID has been recorded
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	/* Assuming start_idx of 0 is invalid */
+	return (ulp_fc_info->shadow_hw_tbl[dir].start_idx != 0);
+}
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+				 uint32_t start_idx)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EIO;
+
+	/* Assuming that 0 is an invalid counter ID ? */
+	if (ulp_fc_info->shadow_hw_tbl[dir].start_idx == 0)
+		ulp_fc_info->shadow_hw_tbl[dir].start_idx = start_idx;
+
+	return 0;
+}
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			    uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->num_entries++;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
+
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			      uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
+	ulp_fc_info->num_entries--;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
new file mode 100644
index 000000000..faa77dd75
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_FC_MGR_H_
+#define _ULP_FC_MGR_H_
+
+#include "bnxt_ulp.h"
+#include "tf_core.h"
+
+#define ULP_FLAG_FC_THREAD			BIT(0)
+#define ULP_FC_TIMER	1 /* Flow counter timer frequency in seconds */
+
+/* Macros to extract packet/byte counters from a 64-bit flow counter. */
+#define FLOW_CNTR_BYTE_WIDTH 36
+#define FLOW_CNTR_BYTE_MASK  (((uint64_t)1 << FLOW_CNTR_BYTE_WIDTH) - 1)
+
+#define FLOW_CNTR_PKTS(v) ((v) >> FLOW_CNTR_BYTE_WIDTH)
+#define FLOW_CNTR_BYTES(v) ((v) & FLOW_CNTR_BYTE_MASK)
+
+struct sw_acc_counter {
+	uint64_t pkt_count;
+	uint64_t byte_count;
+	bool	valid;
+};
+
+struct hw_fc_mem_info {
+	/*
+	 * [out] mem_va, pointer to the allocated memory.
+	 */
+	void *mem_va;
+	/*
+	 * [out] mem_pa, physical address of the allocated memory.
+	 */
+	void *mem_pa;
+	uint32_t start_idx;
+};
+
+struct bnxt_ulp_fc_info {
+	struct sw_acc_counter	*sw_acc_tbl[TF_DIR_MAX];
+	struct hw_fc_mem_info	shadow_hw_tbl[TF_DIR_MAX];
+	uint32_t		flags;
+	uint32_t		num_entries;
+	pthread_mutex_t		fc_lock;
+};
+
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Release all resources in the flow counter manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg);
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			     uint32_t start_idx);
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			uint32_t hw_cntr_id);
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			  uint32_t hw_cntr_id);
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
+
+#endif /* _ULP_FC_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 7696de2a5..a3cfe54bf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -10,6 +10,7 @@
 #include "ulp_utils.h"
 #include "ulp_template_struct.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 
 #define ULP_FLOW_DB_RES_DIR_BIT		31
 #define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
@@ -484,6 +485,21 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 		ulp_flow_db_res_params_to_info(fid_resource, params);
 	}
 
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		/* Store the first HW counter ID for this table */
+		if (!ulp_fc_mgr_start_idx_isset(ulp_ctxt, params->direction))
+			ulp_fc_mgr_start_idx_set(ulp_ctxt, params->direction,
+						 params->resource_hndl);
+
+		ulp_fc_mgr_cntr_set(ulp_ctxt, params->direction,
+				    params->resource_hndl);
+
+		if (!ulp_fc_mgr_thread_isstarted(ulp_ctxt))
+			ulp_fc_mgr_thread_start(ulp_ctxt);
+	}
+
 	/* all good, return success */
 	return 0;
 }
@@ -574,6 +590,17 @@ int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
 					nxt_idx);
 	}
 
+	/* Now that the HW Flow counter resource is deleted, reset its
+	 * corresponding slot in the SW accumulation table in the Flow Counter
+	 * manager
+	 */
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		ulp_fc_mgr_cntr_reset(ulp_ctxt, params->direction,
+				      params->resource_hndl);
+	}
+
 	/* all good, return success */
 	return 0;
 }
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 50/51] net/bnxt: add support for count action in flow query
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (48 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 51/51] doc: update release notes Ajit Khaparde
  2020-07-01 14:26   ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Use the flow counter manager to fetch the accumulated stats for
a flow.

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
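
For context (not part of the patch), a minimal application-side sketch of the query path this enables, assuming the flow was created earlier with a COUNT action; port_id and flow are placeholders from that earlier rte_flow_create() call:

#include <inttypes.h>
#include <stdio.h>
#include <rte_flow.h>

/* Sketch: read accumulated hit/byte counts for an existing flow. */
static int
query_flow_count(uint16_t port_id, struct rte_flow *flow)
{
	struct rte_flow_query_count count = { .reset = 0 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
	};
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_query(port_id, flow, &action, &count, &error);
	if (ret == 0 && count.hits_set && count.bytes_set)
		printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
		       count.hits, count.bytes);
	return ret;
}
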
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c |  45 +++++++-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c    | 141 +++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h    |  17 ++-
 3 files changed, 196 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 7ef306e58..36a014184 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -9,6 +9,7 @@
 #include "ulp_matcher.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 #include <rte_malloc.h>
 
 static int32_t
@@ -289,11 +290,53 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	return ret;
 }
 
+/* Function to query the rte flows. */
+static int32_t
+bnxt_ulp_flow_query(struct rte_eth_dev *eth_dev,
+		    struct rte_flow *flow,
+		    const struct rte_flow_action *action,
+		    void *data,
+		    struct rte_flow_error *error)
+{
+	int rc = 0;
+	struct bnxt_ulp_context *ulp_ctx;
+	struct rte_flow_query_count *count;
+	uint32_t flow_id;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to query flow.");
+		return -EINVAL;
+	}
+
+	flow_id = (uint32_t)(uintptr_t)flow;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		count = data;
+		rc = ulp_fc_mgr_query_count_get(ulp_ctx, flow_id, count);
+		if (rc) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to query flow.");
+		}
+		break;
+	default:
+		rte_flow_error_set(error, -rc, RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "Unsupported action item");
+	}
+
+	return rc;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
 	.flush = bnxt_ulp_flow_flush,
-	.query = NULL,
+	.query = bnxt_ulp_flow_query,
 	.isolate = NULL
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
index f70d4a295..9944e9e5c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -11,6 +11,7 @@
 #include "bnxt_ulp.h"
 #include "bnxt_tf_common.h"
 #include "ulp_fc_mgr.h"
+#include "ulp_flow_db.h"
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "tf_tbl.h"
@@ -226,9 +227,10 @@ void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
  * num_counters [in] The number of counters
  *
  */
-static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+__rte_unused static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 				       struct bnxt_ulp_fc_info *fc_info,
 				       enum tf_dir dir, uint32_t num_counters)
+/* Marked unused for now to avoid compilation errors until the bulk-get API is resolved */
 {
 	int rc = 0;
 	struct tf_tbl_get_bulk_parms parms = { 0 };
@@ -275,6 +277,45 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 
 	return rc;
 }
+
+static int ulp_get_single_flow_stat(struct tf *tfp,
+				    struct bnxt_ulp_fc_info *fc_info,
+				    enum tf_dir dir,
+				    uint32_t hw_cntr_id)
+{
+	int rc = 0;
+	struct tf_get_tbl_entry_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD:Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t stats = 0;
+	uint32_t sw_cntr_indx = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.idx = hw_cntr_id;
+	/*
+	 * TODO:
+	 * Size of an entry needs to be obtained from the template
+	 */
+	parms.data_sz_in_bytes = sizeof(uint64_t);
+	parms.data = (uint8_t *)&stats;
+	rc = tf_get_tbl_entry(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Get failed for id:0x%x rc:%d\n",
+			    parms.idx, rc);
+		return rc;
+	}
+
+	/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+	sw_cntr_indx = hw_cntr_id - fc_info->shadow_hw_tbl[dir].start_idx;
+	sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][sw_cntr_indx];
+	sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats);
+	sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats);
+
+	return rc;
+}
+
 /*
  * Alarm handler that will issue the TF-Core API to fetch
  * data from the chip's internal flow counters
@@ -282,15 +323,18 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
+
 void
 ulp_fc_mgr_alarm_cb(void *arg)
 {
-	int rc = 0, i;
+	int rc = 0;
+	unsigned int j;
+	enum tf_dir i;
 	struct bnxt_ulp_context *ctxt = arg;
 	struct bnxt_ulp_fc_info *ulp_fc_info;
 	struct bnxt_ulp_device_params *dparms;
 	struct tf *tfp;
-	uint32_t dev_id;
+	uint32_t dev_id, hw_cntr_id = 0;
 
 	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
 	if (!ulp_fc_info)
@@ -325,13 +369,27 @@ ulp_fc_mgr_alarm_cb(void *arg)
 		ulp_fc_mgr_thread_cancel(ctxt);
 		return;
 	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
+	/*
+	 * Bulk get is commented out until GET_BULK is resolved; fetch the
+	 * stats for each valid flow individually for now.
+	 for (i = 0; i < TF_DIR_MAX; i++) {
 		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
 					     dparms->flow_count_db_entries);
 		if (rc)
 			break;
 	}
+	*/
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		for (j = 0; j < ulp_fc_info->num_entries; j++) {
+			if (!ulp_fc_info->sw_acc_tbl[i][j].valid)
+				continue;
+			hw_cntr_id = ulp_fc_info->sw_acc_tbl[i][j].hw_cntr_id;
+			rc = ulp_get_single_flow_stat(tfp, ulp_fc_info, i,
+						      hw_cntr_id);
+			if (rc)
+				break;
+		}
+	}
 
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -425,6 +483,7 @@ int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = hw_cntr_id;
 	ulp_fc_info->num_entries++;
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -456,6 +515,7 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
 	ulp_fc_info->num_entries--;
@@ -463,3 +523,74 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 
 	return 0;
 }
+
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ctxt,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count)
+{
+	int rc = 0;
+	uint32_t nxt_resource_index = 0;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct ulp_flow_db_res_params params;
+	enum tf_dir dir;
+	uint32_t hw_cntr_id = 0, sw_cntr_idx = 0;
+	struct sw_acc_counter sw_acc_tbl_entry;
+	bool found_cntr_resource = false;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -ENODEV;
+
+	do {
+		rc = ulp_flow_db_resource_get(ctxt,
+					      BNXT_ULP_REGULAR_FLOW_TABLE,
+					      flow_id,
+					      &nxt_resource_index,
+					      &params);
+		if (params.resource_func ==
+		     BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE &&
+		     (params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT ||
+		      params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT)) {
+			found_cntr_resource = true;
+			break;
+		}
+
+	} while (!rc);
+
+	if (rc)
+		return rc;
+
+	if (found_cntr_resource) {
+		dir = params.direction;
+		hw_cntr_id = params.resource_hndl;
+		sw_cntr_idx = hw_cntr_id -
+				ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+		sw_acc_tbl_entry = ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx];
+		if (params.resource_sub_type ==
+			BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+			count->hits_set = 1;
+			count->bytes_set = 1;
+			count->hits = sw_acc_tbl_entry.pkt_count;
+			count->bytes = sw_acc_tbl_entry.byte_count;
+		} else {
+			/* TBD: Handle External counters */
+			rc = -EINVAL;
+		}
+	}
+
+	return rc;
+}
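
As a side note (not part of the patch), the per-counter polling above follows the usual DPDK one-shot alarm pattern; a minimal standalone sketch, with the poll period, the accumulator struct and the raw-read helper all placeholder assumptions:

#include <stdint.h>
#include <rte_alarm.h>

#define FC_POLL_US 1000000	/* assumed 1-second poll period */

struct fc_acc {
	uint64_t pkts;
	uint64_t bytes;
};

/* Placeholder for a raw 64-bit stats read; the driver itself goes through
 * tf_get_tbl_entry() as in ulp_get_single_flow_stat() above.
 */
static uint64_t fc_read_raw(void) { return 0; }

static void fc_poll_cb(void *arg)
{
	struct fc_acc *acc = arg;
	uint64_t raw = fc_read_raw();

	/* Fold the read into the SW accumulator: byte count in the low
	 * 36 bits, packet count in the remaining upper bits.
	 */
	acc->bytes += raw & ((1ULL << 36) - 1);
	acc->pkts  += raw >> 36;

	/* EAL alarms are one-shot, so re-arm to keep polling. */
	rte_eal_alarm_set(FC_POLL_US, fc_poll_cb, acc);
}

/* Kick off polling once, e.g. from an init path:
 *	static struct fc_acc acc;
 *	rte_eal_alarm_set(FC_POLL_US, fc_poll_cb, &acc);
 */
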
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
index faa77dd75..207267049 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -23,6 +23,7 @@ struct sw_acc_counter {
 	uint64_t pkt_count;
 	uint64_t byte_count;
 	bool	valid;
+	uint32_t hw_cntr_id;
 };
 
 struct hw_fc_mem_info {
@@ -142,7 +143,21 @@ bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
-
 bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ulp_ctx,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count);
 #endif /* _ULP_FC_MGR_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v2 51/51] doc: update release notes
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (49 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
@ 2020-07-01  6:52   ` Ajit Khaparde
  2020-07-01 14:26   ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
  51 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01  6:52 UTC (permalink / raw)
  To: dev

Update the release notes with enhancements in the Broadcom PMD.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 doc/guides/rel_notes/release_20_08.rst | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 5cbc4ce14..12256dd08 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -91,6 +91,15 @@ New Features
 
   * Added support for DCF datapath configuration.
 
+* **Updated Broadcom bnxt driver.**
+
+  Updated the Broadcom bnxt driver with new features and improvements, including:
+
+  * Added support for VF representors.
+  * Added support for multiple devices.
+  * Added support for new resource manager API.
+  * Added support for VXLAN encap/decap.
+
 * **Added support for BPF_ABS/BPF_IND load instructions.**
 
   Added support for two BPF non-generic instructions:
@@ -107,7 +116,6 @@ New Features
   * Dump ``rte_flow`` memory consumption.
   * Measure packet per second forwarding.
 
-
 Removed Items
 -------------
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management
  2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
                     ` (50 preceding siblings ...)
  2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 51/51] doc: update release notes Ajit Khaparde
@ 2020-07-01 14:26   ` Ajit Khaparde
  2020-07-01 21:31     ` Ferruh Yigit
  51 siblings, 1 reply; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-01 14:26 UTC (permalink / raw)
  To: dpdk-dev

On Tue, Jun 30, 2020 at 11:52 PM Ajit Khaparde <ajit.khaparde@broadcom.com>
wrote:

> This patchset introduces support for VF representors, flow
> counters and on-chip exact match flows.
> Also implements the driver hook for the rte_flow_query API.
>
> v1->v2:
>  - update commit message
>  - rebase patches against latest changes in the tree
>  - fix signed-off-by tags
>  - update release notes
>

Applied to dpdk-next-net-brcm. Thanks

>
> Ajit Khaparde (1):
>   doc: update release notes
>
> Jay Ding (5):
>   net/bnxt: implement support for TCAM access
>   net/bnxt: support two level priority for TCAMs
>   net/bnxt: add external action alloc and free
>   net/bnxt: implement IF tables set and get
>   net/bnxt: add global config set and get APIs
>
> Kishore Padmanabha (8):
>   net/bnxt: integrate with the latest tf core changes
>   net/bnxt: add support for if table processing
>   net/bnxt: disable Tx vector mode if truflow is enabled
>   net/bnxt: add index opcode and operand to mapper table
>   net/bnxt: add support for global resource templates
>   net/bnxt: add support for internal exact match entries
>   net/bnxt: add support for conditional execution of mapper tables
>   net/bnxt: add VF-rep and stat templates
>
> Lance Richardson (1):
>   net/bnxt: initialize parent PF information
>
> Michael Wildt (7):
>   net/bnxt: add multi device support
>   net/bnxt: update multi device design support
>   net/bnxt: multiple device implementation
>   net/bnxt: update identifier with remap support
>   net/bnxt: update RM with residual checker
>   net/bnxt: update table get to use new design
>   net/bnxt: add TF register and unregister
>
> Mike Baucom (1):
>   net/bnxt: add support for internal encap records
>
> Peter Spreadborough (7):
>   net/bnxt: add support for exact match
>   net/bnxt: modify EM insert and delete to use HWRM direct
>   net/bnxt: support EM and TCAM lookup with table scope
>   net/bnxt: remove table scope from session
>   net/bnxt: add HCAPI interface support
>   net/bnxt: update RM to support HCAPI only
>   net/bnxt: add support for EEM System memory
>
> Randy Schacher (2):
>   net/bnxt: add core changes for EM and EEM lookups
>   net/bnxt: align CFA resources with RM
>
> Shahaji Bhosle (2):
>   net/bnxt: support bulk table get and mirror
>   net/bnxt: support two-level priority for TCAMs
>
> Somnath Kotur (7):
>   net/bnxt: add basic infrastructure for VF representors
>   net/bnxt: add support for VF-reps data path
>   net/bnxt: get IDs for VF-Rep endpoint
>   net/bnxt: parse representor along with other dev-args
>   net/bnxt: create default flow rules for the VF-rep conduit
>   net/bnxt: add ULP Flow counter Manager
>   net/bnxt: add support for count action in flow query
>
> Venkat Duvvuru (10):
>   net/bnxt: modify port db dev interface
>   net/bnxt: get port and function info
>   net/bnxt: add support for hwrm port phy qcaps
>   net/bnxt: modify port db to handle more info
>   net/bnxt: enable port MAC qcfg command for trusted VF
>   net/bnxt: enhancements for port db
>   net/bnxt: manage VF to VFR conduit
>   net/bnxt: fill mapper parameters with default rules info
>   net/bnxt: add port default rules for ingress and egress
>   net/bnxt: fill cfa action in the Tx descriptor
>
>  config/common_base                            |    1 +
>  doc/guides/rel_notes/release_20_08.rst        |   10 +-
>  drivers/net/bnxt/Makefile                     |    8 +-
>  drivers/net/bnxt/bnxt.h                       |  121 +-
>  drivers/net/bnxt/bnxt_ethdev.c                |  519 +-
>  drivers/net/bnxt/bnxt_hwrm.c                  |  122 +-
>  drivers/net/bnxt/bnxt_hwrm.h                  |    7 +
>  drivers/net/bnxt/bnxt_reps.c                  |  773 +++
>  drivers/net/bnxt/bnxt_reps.h                  |   45 +
>  drivers/net/bnxt/bnxt_rxr.c                   |   39 +-
>  drivers/net/bnxt/bnxt_rxr.h                   |    1 +
>  drivers/net/bnxt/bnxt_txq.h                   |    2 +
>  drivers/net/bnxt/bnxt_txr.c                   |   18 +-
>  drivers/net/bnxt/hcapi/Makefile               |   10 +
>  drivers/net/bnxt/hcapi/cfa_p40_hw.h           |  781 +++
>  drivers/net/bnxt/hcapi/cfa_p40_tbl.h          |  303 +
>  drivers/net/bnxt/hcapi/hcapi_cfa.h            |  276 +
>  drivers/net/bnxt/hcapi/hcapi_cfa_defs.h       |  672 +++
>  drivers/net/bnxt/hcapi/hcapi_cfa_p4.c         |  399 ++
>  drivers/net/bnxt/hcapi/hcapi_cfa_p4.h         |  467 ++
>  drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++--
>  drivers/net/bnxt/meson.build                  |   21 +-
>  drivers/net/bnxt/tf_core/Makefile             |   29 +-
>  drivers/net/bnxt/tf_core/bitalloc.c           |  107 +
>  drivers/net/bnxt/tf_core/bitalloc.h           |    5 +
>  drivers/net/bnxt/tf_core/cfa_resource_types.h |  293 +
>  drivers/net/bnxt/tf_core/hwrm_tf.h            |  995 +---
>  drivers/net/bnxt/tf_core/ll.c                 |   52 +
>  drivers/net/bnxt/tf_core/ll.h                 |   46 +
>  drivers/net/bnxt/tf_core/lookup3.h            |    1 -
>  drivers/net/bnxt/tf_core/stack.c              |   10 +-
>  drivers/net/bnxt/tf_core/stack.h              |   10 +
>  drivers/net/bnxt/tf_core/tf_common.h          |   43 +
>  drivers/net/bnxt/tf_core/tf_core.c            | 1495 +++--
>  drivers/net/bnxt/tf_core/tf_core.h            |  874 ++-
>  drivers/net/bnxt/tf_core/tf_device.c          |  271 +
>  drivers/net/bnxt/tf_core/tf_device.h          |  650 ++
>  drivers/net/bnxt/tf_core/tf_device_p4.c       |  147 +
>  drivers/net/bnxt/tf_core/tf_device_p4.h       |  104 +
>  drivers/net/bnxt/tf_core/tf_em.c              |  515 --
>  drivers/net/bnxt/tf_core/tf_em.h              |  492 +-
>  drivers/net/bnxt/tf_core/tf_em_common.c       | 1048 ++++
>  drivers/net/bnxt/tf_core/tf_em_common.h       |  134 +
>  drivers/net/bnxt/tf_core/tf_em_host.c         |  531 ++
>  drivers/net/bnxt/tf_core/tf_em_internal.c     |  352 ++
>  drivers/net/bnxt/tf_core/tf_em_system.c       |  533 ++
>  drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
>  drivers/net/bnxt/tf_core/tf_global_cfg.c      |  199 +
>  drivers/net/bnxt/tf_core/tf_global_cfg.h      |  170 +
>  drivers/net/bnxt/tf_core/tf_identifier.c      |  186 +
>  drivers/net/bnxt/tf_core/tf_identifier.h      |  147 +
>  drivers/net/bnxt/tf_core/tf_if_tbl.c          |  178 +
>  drivers/net/bnxt/tf_core/tf_if_tbl.h          |  236 +
>  drivers/net/bnxt/tf_core/tf_msg.c             | 1681 +++---
>  drivers/net/bnxt/tf_core/tf_msg.h             |  409 +-
>  drivers/net/bnxt/tf_core/tf_resources.h       |  531 --
>  drivers/net/bnxt/tf_core/tf_rm.c              | 3840 +++---------
>  drivers/net/bnxt/tf_core/tf_rm.h              |  554 +-
>  drivers/net/bnxt/tf_core/tf_session.c         |  776 +++
>  drivers/net/bnxt/tf_core/tf_session.h         |  565 +-
>  drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |   63 +
>  drivers/net/bnxt/tf_core/tf_shadow_tbl.h      |  240 +
>  drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |   63 +
>  drivers/net/bnxt/tf_core/tf_shadow_tcam.h     |  239 +
>  drivers/net/bnxt/tf_core/tf_tbl.c             | 1930 +-----
>  drivers/net/bnxt/tf_core/tf_tbl.h             |  469 +-
>  drivers/net/bnxt/tf_core/tf_tcam.c            |  430 ++
>  drivers/net/bnxt/tf_core/tf_tcam.h            |  360 ++
>  drivers/net/bnxt/tf_core/tf_util.c            |  176 +
>  drivers/net/bnxt/tf_core/tf_util.h            |   98 +
>  drivers/net/bnxt/tf_core/tfp.c                |   33 +-
>  drivers/net/bnxt/tf_core/tfp.h                |  153 +-
>  drivers/net/bnxt/tf_ulp/Makefile              |    2 +
>  drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   16 +
>  drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |  129 +-
>  drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |   35 +
>  drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |   84 +-
>  drivers/net/bnxt/tf_ulp/ulp_def_rules.c       |  385 ++
>  drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c          |  596 ++
>  drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h          |  163 +
>  drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   42 +-
>  drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  481 +-
>  drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    6 +-
>  drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   10 +
>  drivers/net/bnxt/tf_ulp/ulp_port_db.c         |  235 +-
>  drivers/net/bnxt/tf_ulp/ulp_port_db.h         |  122 +-
>  drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |   30 +-
>  drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  433 +-
>  .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5217 +++++++++++++----
>  .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  537 +-
>  .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
>  drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   85 +-
>  drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   23 +-
>  drivers/net/bnxt/tf_ulp/ulp_utils.c           |    2 +-
>  94 files changed, 28009 insertions(+), 11248 deletions(-)
>  create mode 100644 drivers/net/bnxt/bnxt_reps.c
>  create mode 100644 drivers/net/bnxt/bnxt_reps.h
>  create mode 100644 drivers/net/bnxt/hcapi/Makefile
>  create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
>  create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
>  create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
>  create mode 100644 drivers/net/bnxt/tf_core/ll.c
>  create mode 100644 drivers/net/bnxt/tf_core/ll.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_common.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
>  delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_util.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
>
> --
> 2.21.1 (Apple Git-122.3)
>
>

^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management
  2020-07-01 14:26   ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
@ 2020-07-01 21:31     ` Ferruh Yigit
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                         ` (2 more replies)
  0 siblings, 3 replies; 271+ messages in thread
From: Ferruh Yigit @ 2020-07-01 21:31 UTC (permalink / raw)
  To: Ajit Khaparde, dpdk-dev

On 7/1/2020 3:26 PM, Ajit Khaparde wrote:
> On Tue, Jun 30, 2020 at 11:52 PM Ajit Khaparde <ajit.khaparde@broadcom.com>
> wrote:
> 
>> This patchset introduces support for VF representors, flow
>> counters and on-chip exact match flows.
>> Also implements the driver hook for the rte_flow_query API.
>>
>> v1->v2:
>>  - update commit message
>>  - rebase patches against latest changes in the tree
>>  - fix signed-off-by tags
>>  - update release notes
>>
> 
> Applied to dpdk-next-net-brcm. Thanks

Hi Ajit,

The build is failing; patchwork also reports these build errors [1]. Can you please
check?

[1]
http://mails.dpdk.org/archives/test-report/2020-July/140070.html

^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 00/51] add features for host-based flow management
  2020-07-01 21:31     ` Ferruh Yigit
@ 2020-07-02  4:10       ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
                           ` (50 more replies)
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
  2 siblings, 51 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev

v1->v2:
 - update commit message
 - rebase patches against latest changes in the tree
 - fix signed-off-by tags
 - update release notes

v2->v3:
 - fix compilation issues

Ajit Khaparde (1):
  doc: update release notes

Jay Ding (5):
  net/bnxt: implement support for TCAM access
  net/bnxt: support two level priority for TCAMs
  net/bnxt: add external action alloc and free
  net/bnxt: implement IF tables set and get
  net/bnxt: add global config set and get APIs

Kishore Padmanabha (8):
  net/bnxt: integrate with the latest tf core changes
  net/bnxt: add support for if table processing
  net/bnxt: disable Tx vector mode if truflow is enabled
  net/bnxt: add index opcode and operand to mapper table
  net/bnxt: add support for global resource templates
  net/bnxt: add support for internal exact match entries
  net/bnxt: add support for conditional execution of mapper tables
  net/bnxt: add VF-rep and stat templates

Lance Richardson (1):
  net/bnxt: initialize parent PF information

Michael Wildt (7):
  net/bnxt: add multi device support
  net/bnxt: update multi device design support
  net/bnxt: multiple device implementation
  net/bnxt: update identifier with remap support
  net/bnxt: update RM with residual checker
  net/bnxt: update table get to use new design
  net/bnxt: add TF register and unregister

Mike Baucom (1):
  net/bnxt: add support for internal encap records

Peter Spreadborough (7):
  net/bnxt: add support for exact match
  net/bnxt: modify EM insert and delete to use HWRM direct
  net/bnxt: add HCAPI interface support
  net/bnxt: support EM and TCAM lookup with table scope
  net/bnxt: update RM to support HCAPI only
  net/bnxt: remove table scope from session
  net/bnxt: add support for EEM System memory

Randy Schacher (2):
  net/bnxt: add core changes for EM and EEM lookups
  net/bnxt: align CFA resources with RM

Shahaji Bhosle (2):
  net/bnxt: support bulk table get and mirror
  net/bnxt: support two-level priority for TCAMs

Somnath Kotur (7):
  net/bnxt: add basic infrastructure for VF representors
  net/bnxt: add support for VF-reps data path
  net/bnxt: get IDs for VF-Rep endpoint
  net/bnxt: parse representor along with other dev-args
  net/bnxt: create default flow rules for the VF-rep conduit
  net/bnxt: add ULP Flow counter Manager
  net/bnxt: add support for count action in flow query

Venkat Duvvuru (10):
  net/bnxt: modify port db dev interface
  net/bnxt: get port and function info
  net/bnxt: add support for hwrm port phy qcaps
  net/bnxt: modify port db to handle more info
  net/bnxt: enable port MAC qcfg command for trusted VF
  net/bnxt: enhancements for port db
  net/bnxt: manage VF to VFR conduit
  net/bnxt: fill mapper parameters with default rules info
  net/bnxt: add port default rules for ingress and egress
  net/bnxt: fill cfa action in the Tx descriptor

 config/common_base                            |    1 +
 doc/guides/rel_notes/release_20_08.rst        |   11 +-
 drivers/net/bnxt/Makefile                     |    8 +-
 drivers/net/bnxt/bnxt.h                       |  121 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  519 +-
 drivers/net/bnxt/bnxt_hwrm.c                  |  122 +-
 drivers/net/bnxt/bnxt_hwrm.h                  |    7 +
 drivers/net/bnxt/bnxt_reps.c                  |  773 +++
 drivers/net/bnxt/bnxt_reps.h                  |   45 +
 drivers/net/bnxt/bnxt_rxr.c                   |   39 +-
 drivers/net/bnxt/bnxt_rxr.h                   |    1 +
 drivers/net/bnxt/bnxt_txq.h                   |    2 +
 drivers/net/bnxt/bnxt_txr.c                   |   18 +-
 drivers/net/bnxt/hcapi/Makefile               |   10 +
 drivers/net/bnxt/hcapi/cfa_p40_hw.h           |  781 +++
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h          |  303 +
 drivers/net/bnxt/hcapi/hcapi_cfa.h            |  276 +
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h       |  672 +++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c         |  399 ++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h         |  467 ++
 drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++--
 drivers/net/bnxt/meson.build                  |   21 +-
 drivers/net/bnxt/tf_core/Makefile             |   29 +-
 drivers/net/bnxt/tf_core/bitalloc.c           |  107 +
 drivers/net/bnxt/tf_core/bitalloc.h           |    5 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h |  293 +
 drivers/net/bnxt/tf_core/hwrm_tf.h            |  995 +---
 drivers/net/bnxt/tf_core/ll.c                 |   52 +
 drivers/net/bnxt/tf_core/ll.h                 |   46 +
 drivers/net/bnxt/tf_core/lookup3.h            |    1 -
 drivers/net/bnxt/tf_core/stack.c              |    8 +
 drivers/net/bnxt/tf_core/stack.h              |   10 +
 drivers/net/bnxt/tf_core/tf_common.h          |   43 +
 drivers/net/bnxt/tf_core/tf_core.c            | 1495 +++--
 drivers/net/bnxt/tf_core/tf_core.h            |  874 ++-
 drivers/net/bnxt/tf_core/tf_device.c          |  271 +
 drivers/net/bnxt/tf_core/tf_device.h          |  650 ++
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  147 +
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  104 +
 drivers/net/bnxt/tf_core/tf_em.c              |  515 --
 drivers/net/bnxt/tf_core/tf_em.h              |  492 +-
 drivers/net/bnxt/tf_core/tf_em_common.c       | 1048 ++++
 drivers/net/bnxt/tf_core/tf_em_common.h       |  134 +
 drivers/net/bnxt/tf_core/tf_em_host.c         |  531 ++
 drivers/net/bnxt/tf_core/tf_em_internal.c     |  352 ++
 drivers/net/bnxt/tf_core/tf_em_system.c       |  533 ++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c      |  199 +
 drivers/net/bnxt/tf_core/tf_global_cfg.h      |  170 +
 drivers/net/bnxt/tf_core/tf_identifier.c      |  186 +
 drivers/net/bnxt/tf_core/tf_identifier.h      |  147 +
 drivers/net/bnxt/tf_core/tf_if_tbl.c          |  178 +
 drivers/net/bnxt/tf_core/tf_if_tbl.h          |  236 +
 drivers/net/bnxt/tf_core/tf_msg.c             | 1681 +++---
 drivers/net/bnxt/tf_core/tf_msg.h             |  409 +-
 drivers/net/bnxt/tf_core/tf_resources.h       |  531 --
 drivers/net/bnxt/tf_core/tf_rm.c              | 3840 +++---------
 drivers/net/bnxt/tf_core/tf_rm.h              |  554 +-
 drivers/net/bnxt/tf_core/tf_session.c         |  776 +++
 drivers/net/bnxt/tf_core/tf_session.h         |  565 +-
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      |  240 +
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h     |  239 +
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1930 +-----
 drivers/net/bnxt/tf_core/tf_tbl.h             |  469 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |  430 ++
 drivers/net/bnxt/tf_core/tf_tcam.h            |  360 ++
 drivers/net/bnxt/tf_core/tf_util.c            |  176 +
 drivers/net/bnxt/tf_core/tf_util.h            |   98 +
 drivers/net/bnxt/tf_core/tfp.c                |   33 +-
 drivers/net/bnxt/tf_core/tfp.h                |  153 +-
 drivers/net/bnxt/tf_ulp/Makefile              |    2 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   16 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |  129 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |   35 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |   84 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c       |  385 ++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c          |  596 ++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h          |  163 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   42 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  481 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    6 +-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   10 +
 drivers/net/bnxt/tf_ulp/ulp_port_db.c         |  235 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.h         |  122 +-
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |   30 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  433 +-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5217 +++++++++++++----
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  537 +-
 .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   85 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   23 +-
 drivers/net/bnxt/tf_ulp/ulp_utils.c           |    2 +-
 94 files changed, 28009 insertions(+), 11247 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 01/51] net/bnxt: add basic infrastructure for VF reps
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
                           ` (49 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru, Kalesh AP

From: Somnath Kotur <somnath.kotur@broadcom.com>

Defines data structures and code to init/uninit
VF representors during pci_probe and pci_remove
respectively.
Most of the dev_ops for the VF representor are just
stubs for now and will be filled out in the next patch.

To create a representor using testpmd:
testpmd -c 0xff -wB:D.F,representor=1 -- -i
testpmd -c 0xff -w05:02.0,representor=[1] -- -i

To create a representor using ovs-dpdk:
1. First add the trusted VF port to a bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0
2. Add the representor port to the bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0,representor=1

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
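
For context (not part of the patch), a small sketch of how a representor devargs string like the ones above is parsed on the probe path, using the same driver-internal rte_eth_devargs_parse() helper the driver calls; the "representor=[0-3]" string is just an example:

#include <stdio.h>
#include <rte_ethdev_driver.h>

/* Sketch: turn "representor=[0-3]" into a list of VF representor IDs. */
static int
parse_rep_devargs(void)
{
	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
	int ret, i;

	ret = rte_eth_devargs_parse("representor=[0-3]", &eth_da);
	if (ret)
		return ret;

	for (i = 0; i < eth_da.nb_representor_ports; i++)
		printf("representor VF id %d\n", eth_da.representor_ports[i]);

	return 0;
}
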
---
 drivers/net/bnxt/Makefile      |   2 +
 drivers/net/bnxt/bnxt.h        |  64 +++++++-
 drivers/net/bnxt/bnxt_ethdev.c | 225 ++++++++++++++++++++------
 drivers/net/bnxt/bnxt_reps.c   | 287 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_reps.h   |  35 ++++
 drivers/net/bnxt/meson.build   |   1 +
 6 files changed, 566 insertions(+), 48 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index a375299c3..365627499 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -14,6 +14,7 @@ LIB = librte_pmd_bnxt.a
 EXPORT_MAP := rte_pmd_bnxt_version.map
 
 CFLAGS += -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += $(WERROR_FLAGS)
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
@@ -38,6 +39,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_reps.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += rte_pmd_bnxt.c
 ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index d455f8d84..9b7b87cee 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -220,6 +220,7 @@ struct bnxt_child_vf_info {
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
+#define BNXT_MAX_VF_REPS	64
 #define BNXT_TOTAL_VFS(bp)	((bp)->pf->total_vfs)
 #define BNXT_FIRST_VF_FID	128
 #define BNXT_PF_RINGS_USED(bp)	bnxt_get_num_queues(bp)
@@ -492,6 +493,10 @@ struct bnxt_mark_info {
 	bool		valid;
 };
 
+struct bnxt_rep_info {
+	struct rte_eth_dev	*vfr_eth_dev;
+};
+
 /* address space location of register */
 #define BNXT_FW_STATUS_REG_TYPE_MASK	3
 /* register is located in PCIe config space */
@@ -515,6 +520,40 @@ struct bnxt_mark_info {
 #define BNXT_FW_STATUS_HEALTHY		0x8000
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
+#define BNXT_ETH_RSS_SUPPORT (	\
+	ETH_RSS_IPV4 |		\
+	ETH_RSS_NONFRAG_IPV4_TCP |	\
+	ETH_RSS_NONFRAG_IPV4_UDP |	\
+	ETH_RSS_IPV6 |		\
+	ETH_RSS_NONFRAG_IPV6_TCP |	\
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
+				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_CKSUM | \
+				     DEV_TX_OFFLOAD_UDP_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_TSO | \
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_QINQ_INSERT | \
+				     DEV_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
+				     DEV_RX_OFFLOAD_VLAN_STRIP | \
+				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_UDP_CKSUM | \
+				     DEV_RX_OFFLOAD_TCP_CKSUM | \
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
+				     DEV_RX_OFFLOAD_KEEP_CRC | \
+				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
+				     DEV_RX_OFFLOAD_TCP_LRO | \
+				     DEV_RX_OFFLOAD_SCATTER | \
+				     DEV_RX_OFFLOAD_RSS_HASH)
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -682,6 +721,9 @@ struct bnxt {
 #define BNXT_MAX_RINGS(bp) \
 	(RTE_MIN((((bp)->max_cp_rings - BNXT_NUM_ASYNC_CPR(bp)) / 2U), \
 		 BNXT_MAX_TX_RINGS(bp)))
+
+#define BNXT_MAX_VF_REP_RINGS	8
+
 	uint16_t		max_nq_rings;
 	uint16_t		max_l2_ctx;
 	uint16_t		max_rx_em_flows;
@@ -711,7 +753,9 @@ struct bnxt {
 
 	uint16_t		fw_reset_min_msecs;
 	uint16_t		fw_reset_max_msecs;
-
+	uint16_t		switch_domain_id;
+	uint16_t		num_reps;
+	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -732,6 +776,18 @@ struct bnxt {
 
 #define BNXT_FC_TIMER	1 /* Timer freq in Sec Flow Counters */
 
+/**
+ * Structure to store private data for each VF representor instance
+ */
+struct bnxt_vf_representor {
+	uint16_t switch_domain_id;
+	uint16_t vf_id;
+	/* Private data store of associated PF/Trusted VF */
+	struct bnxt	*parent_priv;
+	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
 int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 		     bool exp_link_status);
@@ -744,7 +800,13 @@ void bnxt_schedule_fw_health_check(struct bnxt *bp);
 
 bool is_bnxt_supported(struct rte_eth_dev *dev);
 bool bnxt_stratus_device(struct bnxt *bp);
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete);
+
 extern const struct rte_flow_ops bnxt_flow_ops;
+
 #define bnxt_acquire_flow_lock(bp) \
 	pthread_mutex_lock(&(bp)->flow_lock)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7022f6d52..4911745af 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -18,6 +18,7 @@
 #include "bnxt_filter.h"
 #include "bnxt_hwrm.h"
 #include "bnxt_irq.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxq.h"
 #include "bnxt_rxr.h"
@@ -93,40 +94,6 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-#define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
-				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_VLAN_STRIP | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
-
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
@@ -163,7 +130,6 @@ static int bnxt_devarg_max_num_kflow_invalid(uint16_t max_num_kflows)
 }
 
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev);
 static int bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev);
@@ -198,7 +164,7 @@ static uint16_t bnxt_rss_ctxts(const struct bnxt *bp)
 				    BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
-static uint16_t  bnxt_rss_hash_tbl_size(const struct bnxt *bp)
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 {
 	if (!BNXT_CHIP_THOR(bp))
 		return HW_HASH_INDEX_SIZE;
@@ -1047,7 +1013,7 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	return -ENOSPC;
 }
 
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 {
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
@@ -1273,6 +1239,12 @@ static int bnxt_dev_set_link_down_op(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static void bnxt_free_switch_domain(struct bnxt *bp)
+{
+	if (bp->switch_domain_id)
+		rte_eth_switch_domain_free(bp->switch_domain_id);
+}
+
 /* Unload the driver, release resources */
 static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
@@ -1341,6 +1313,8 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
+	bnxt_free_switch_domain(bp);
+
 	bnxt_uninit_resources(bp, false);
 
 	bnxt_free_leds_info(bp);
@@ -1522,8 +1496,8 @@ int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 	return rc;
 }
 
-static int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
-			       int wait_to_complete)
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete)
 {
 	return bnxt_link_update(eth_dev, wait_to_complete, ETH_LINK_UP);
 }
@@ -5477,8 +5451,26 @@ bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)
 	rte_kvargs_free(kvlist);
 }
 
+static int bnxt_alloc_switch_domain(struct bnxt *bp)
+{
+	int rc = 0;
+
+	if (BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) {
+		rc = rte_eth_switch_domain_alloc(&bp->switch_domain_id);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "Failed to alloc switch domain: %d\n", rc);
+		else
+			PMD_DRV_LOG(INFO,
+				    "Switch domain allocated %d\n",
+				    bp->switch_domain_id);
+	}
+
+	return rc;
+}
+
 static int
-bnxt_dev_init(struct rte_eth_dev *eth_dev)
+bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	static int version_printed;
@@ -5557,6 +5549,8 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 	if (rc)
 		goto error_free;
 
+	bnxt_alloc_switch_domain(bp);
+
 	/* Pass the information to the rte_eth_dev_close() that it should also
 	 * release the private port resources.
 	 */
@@ -5689,25 +5683,162 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *bp = eth_dev->data->dev_private;
+	struct rte_eth_dev *vf_rep_eth_dev;
+	int ret = 0, i;
+
+	if (!bp)
+		return -EINVAL;
+
+	for (i = 0; i < bp->num_reps; i++) {
+		vf_rep_eth_dev = bp->rep_info[i].vfr_eth_dev;
+		if (!vf_rep_eth_dev)
+			continue;
+		rte_eth_dev_destroy(vf_rep_eth_dev, bnxt_vf_representor_uninit);
+	}
+	ret = rte_eth_dev_destroy(eth_dev, bnxt_dev_uninit);
+
+	return ret;
+}
+
 static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	struct rte_pci_device *pci_dev)
 {
-	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct bnxt),
-		bnxt_dev_init);
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
+	uint16_t num_rep;
+	int i, ret = 0;
+	struct bnxt *backing_bp;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after the first level of probe has already been
+	 * invoked as part of an application bring-up (OVS-DPDK vswitchd), so
+	 * first check for an already allocated eth_dev for the backing device
+	 * (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+	}
+
+	if (num_rep > BNXT_MAX_VF_REPS) {
+		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
+			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
+		ret = -EINVAL;
+		return ret;
+	}
+
+	/* probe representor ports now */
+	if (!backing_eth_dev)
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = -ENODEV;
+		return ret;
+	}
+	backing_bp = backing_eth_dev->data->dev_private;
+
+	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
+		PMD_DRV_LOG(ERR,
+			    "Not a PF or trusted VF. No Representor support\n");
+		/* Returning an error is not an option.
+		 * Applications are not handling this correctly
+		 */
+		return ret;
+	}
+
+	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+		struct bnxt_vf_representor representor = {
+			.vf_id = eth_da.representor_ports[i],
+			.switch_domain_id = backing_bp->switch_domain_id,
+			.parent_priv = backing_bp
+		};
+
+		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
+			PMD_DRV_LOG(ERR, "VF-Rep id %d >= %d MAX VF ID\n",
+				    representor.vf_id, BNXT_MAX_VF_REPS);
+			continue;
+		}
+
+		/* representor port net_bdf_port */
+		snprintf(name, sizeof(name), "net_%s_representor_%d",
+			 pci_dev->device.name, eth_da.representor_ports[i]);
+
+		ret = rte_eth_dev_create(&pci_dev->device, name,
+					 sizeof(struct bnxt_vf_representor),
+					 NULL, NULL,
+					 bnxt_vf_representor_init,
+					 &representor);
+
+		if (!ret) {
+			vf_rep_eth_dev = rte_eth_dev_allocated(name);
+			if (!vf_rep_eth_dev) {
+				PMD_DRV_LOG(ERR, "Failed to find the eth_dev"
+					    " for VF-Rep: %s.", name);
+				bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+				ret = -ENODEV;
+				return ret;
+			}
+			backing_bp->rep_info[representor.vf_id].vfr_eth_dev =
+				vf_rep_eth_dev;
+			backing_bp->num_reps++;
+		} else {
+			PMD_DRV_LOG(ERR, "failed to create bnxt vf "
+				    "representor %s.", name);
+			bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+		}
+	}
+
+	return ret;
 }
 
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
-		return rte_eth_dev_pci_generic_remove(pci_dev,
-				bnxt_dev_uninit);
-	else
+	struct rte_eth_dev *eth_dev;
+
+	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!eth_dev)
+		return ENODEV; /* Typically invoked only by OVS-DPDK; by the
+				* time it gets here the eth_dev has already been
+				* deleted by rte_eth_dev_close(), so returning a
+				* positive value at least helps with proper
+				* cleanup
+				*/
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		if (eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_vf_representor_uninit);
+		else
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_dev_uninit);
+	} else {
 		return rte_eth_dev_pci_generic_remove(pci_dev, NULL);
+	}
 }
 
 static struct rte_pci_driver bnxt_rte_pmd = {
 	.id_table = bnxt_pci_id_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+			RTE_PCI_DRV_PROBE_AGAIN, /* Needed in case of VF-REPs
+						  * and OVS-DPDK
+						  */
 	.probe = bnxt_pci_probe,
 	.remove = bnxt_pci_remove,
 };
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
new file mode 100644
index 000000000..21f1b0765
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -0,0 +1,287 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "bnxt_ring.h"
+#include "bnxt_reps.h"
+#include "hsi_struct_def_dpdk.h"
+
+static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
+	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
+	.dev_configure = bnxt_vf_rep_dev_configure_op,
+	.dev_start = bnxt_vf_rep_dev_start_op,
+	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.link_update = bnxt_vf_rep_link_update_op,
+	.dev_close = bnxt_vf_rep_dev_close_op,
+	.dev_stop = bnxt_vf_rep_dev_stop_op
+};
+
+static uint16_t
+bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
+		     __rte_unused struct rte_mbuf **rx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
+		     __rte_unused struct rte_mbuf **tx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
+{
+	struct bnxt_vf_representor *vf_rep_bp = eth_dev->data->dev_private;
+	struct bnxt_vf_representor *rep_params =
+				 (struct bnxt_vf_representor *)params;
+	struct rte_eth_link *link;
+	struct bnxt *parent_bp;
+
+	vf_rep_bp->vf_id = rep_params->vf_id;
+	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
+	vf_rep_bp->parent_priv = rep_params->parent_priv;
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+	eth_dev->data->representor_id = rep_params->vf_id;
+
+	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
+	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
+	       sizeof(vf_rep_bp->mac_addr));
+	eth_dev->data->mac_addrs =
+		(struct rte_ether_addr *)&vf_rep_bp->mac_addr;
+	eth_dev->dev_ops = &bnxt_vf_rep_dev_ops;
+
+	/* No data-path, but need stub Rx/Tx functions to avoid crash
+	 * when testing with ovs-dpdk
+	 */
+	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
+	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
+	/* Link state. Inherited from PF or trusted VF */
+	parent_bp = vf_rep_bp->parent_priv;
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+
+	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
+	bnxt_print_link_info(eth_dev);
+
+	/* Pass the information to the rte_eth_dev_close() that it should also
+	 * release the private port resources.
+	 */
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+	PMD_DRV_LOG(INFO,
+		    "Switch domain id %d: Representor Device %d init done\n",
+		    vf_rep_bp->switch_domain_id, vf_rep_bp->vf_id);
+
+	return 0;
+}
+
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+
+	uint16_t vf_id;
+
+	eth_dev->data->mac_addrs = NULL;
+
+	parent_bp = rep->parent_priv;
+	if (parent_bp) {
+		parent_bp->num_reps--;
+		vf_id = rep->vf_id;
+		if (parent_bp->rep_info) {
+			memset(&parent_bp->rep_info[vf_id], 0,
+			       sizeof(parent_bp->rep_info[vf_id]));
+			/* mark that this representor has been freed */
+		}
+	}
+	eth_dev->dev_ops = NULL;
+	return 0;
+}
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+	struct rte_eth_link *link;
+	int rc;
+
+	parent_bp = rep->parent_priv;
+	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
+
+	/* Link state. Inherited from PF or trusted VF */
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+	bnxt_print_link_info(eth_dev);
+
+	return rc;
+}
+
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_rep_link_update_op(eth_dev, 1);
+
+	return 0;
+}
+
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
+{
+	eth_dev = eth_dev;
+}
+
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_representor_uninit(eth_dev);
+}
+
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp;
+	uint16_t max_vnics, i, j, vpool, vrxq;
+	unsigned int max_rx_rings;
+	int rc = 0;
+
+	/* MAC Specifics */
+	parent_bp = rep_bp->parent_priv;
+	if (!parent_bp) {
+		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
+		return rc;
+	}
+	PMD_DRV_LOG(DEBUG, "Representor dev_info_get_op\n");
+	dev_info->max_mac_addrs = parent_bp->max_l2_ctx;
+	dev_info->max_hash_mac_addrs = 0;
+
+	max_rx_rings = BNXT_MAX_VF_REP_RINGS;
+	/* For the sake of symmetry, max_rx_queues = max_tx_queues */
+	dev_info->max_rx_queues = max_rx_rings;
+	dev_info->max_tx_queues = max_rx_rings;
+	dev_info->reta_size = bnxt_rss_hash_tbl_size(parent_bp);
+	dev_info->hash_key_size = 40;
+	max_vnics = parent_bp->max_vnics;
+
+	/* MTU specifics */
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+	dev_info->max_mtu = BNXT_MAX_MTU;
+
+	/* Fast path specifics */
+	dev_info->min_rx_bufsize = 1;
+	dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+
+	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
+	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
+		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
+	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
+
+	/* *INDENT-OFF* */
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 0,
+		},
+		.rx_free_thresh = 32,
+		/* If no descriptors available, pkts are dropped by default */
+		.rx_drop_en = 1,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = 32,
+			.hthresh = 0,
+			.wthresh = 0,
+		},
+		.tx_free_thresh = 32,
+		.tx_rs_thresh = 32,
+	};
+	eth_dev->data->dev_conf.intr_conf.lsc = 1;
+
+	eth_dev->data->dev_conf.intr_conf.rxq = 1;
+	dev_info->rx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->rx_desc_lim.nb_max = BNXT_MAX_RX_RING_DESC;
+	dev_info->tx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->tx_desc_lim.nb_max = BNXT_MAX_TX_RING_DESC;
+
+	/* *INDENT-ON* */
+
+	/*
+	 * TODO: default_rxconf, default_txconf, rx_desc_lim, and tx_desc_lim
+	 *       need further investigation.
+	 */
+
+	/* VMDq resources */
+	vpool = 64; /* ETH_64_POOLS */
+	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	for (i = 0; i < 4; vpool >>= 1, i++) {
+		if (max_vnics > vpool) {
+			for (j = 0; j < 5; vrxq >>= 1, j++) {
+				if (dev_info->max_rx_queues > vrxq) {
+					if (vpool > vrxq)
+						vpool = vrxq;
+					goto found;
+				}
+			}
+			/* Not enough resources to support VMDq */
+			break;
+		}
+	}
+	/* Not enough resources to support VMDq */
+	vpool = 0;
+	vrxq = 0;
+found:
+	dev_info->max_vmdq_pools = vpool;
+	dev_info->vmdq_queue_num = vrxq;
+
+	dev_info->vmdq_pool_base = 0;
+	dev_info->vmdq_queue_base = 0;
+
+	return 0;
+}
+
+int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
+{
+	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	return 0;
+}
+
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
+
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
new file mode 100644
index 000000000..6048faf08
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_REPS_H_
+#define _BNXT_REPS_H_
+
+#include <rte_malloc.h>
+#include <rte_ethdev.h>
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info);
+int bnxt_vf_rep_dev_configure_op(struct rte_eth_dev *eth_dev);
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl);
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp);
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf);
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+#endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 4306c6039..5c7859cb5 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -21,6 +21,7 @@ sources = files('bnxt_cpr.c',
 	'bnxt_txr.c',
 	'bnxt_util.c',
 	'bnxt_vnic.c',
+	'bnxt_reps.c',
 
 	'tf_core/tf_core.c',
 	'tf_core/bitalloc.c',
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 02/51] net/bnxt: add support for VF-reps data path
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
                           ` (48 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Added code to support Tx/Rx from a VF representor port.
The VF-reps use the RX/TX rings of the Trusted VF/PF.
For each VF-rep, the Trusted VF/PF driver issues a VFR_ALLOC FW cmd that
returns "cfa_code" and "cfa_action" values.
The FW sets up the filter tables in such a way that VF traffic by
default (in the absence of other rules) gets punted to the parent
function, i.e. either the Trusted VF or the PF.
The cfa_code value in the Rx completion informs the driver of the
source VF. For traffic transmitted from the VF-rep, the Tx BD is
tagged with a cfa_action value that instructs the HW to punt it to the
corresponding VF.
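
For reference, below is a minimal, self-contained sketch of the
cfa_code demux idea described above. It is an illustration only: the
array name, sizes and helper functions are simplified stand-ins, not
the driver's actual symbols.

    #include <stdint.h>

    #define MAX_CFA_CODE    65536
    #define VF_IDX_INVALID  0xffff

    /* cfa_code (returned by VFR_ALLOC) -> VF-rep index */
    static uint16_t cfa_code_map[MAX_CFA_CODE];

    static void demux_init(void)
    {
            for (uint32_t i = 0; i < MAX_CFA_CODE; i++)
                    cfa_code_map[i] = VF_IDX_INVALID;
    }

    /* Record the cfa_code handed back by the FW for a given VF-rep */
    static void vfr_register(uint16_t rx_cfa_code, uint16_t vf_id)
    {
            cfa_code_map[rx_cfa_code] = vf_id;
    }

    /* Called per Rx completion on the parent port: a valid mapping
     * means the packet belongs to a VF-rep and should be queued on
     * that representor's Rx ring instead of being returned on the
     * parent port's Rx burst.
     */
    static int rx_belongs_to_vfr(uint16_t cfa_code, uint16_t *vf_id)
    {
            *vf_id = cfa_code_map[cfa_code];
            return *vf_id != VF_IDX_INVALID;
    }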

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  30 ++-
 drivers/net/bnxt/bnxt_ethdev.c | 150 ++++++++---
 drivers/net/bnxt/bnxt_reps.c   | 476 +++++++++++++++++++++++++++++++--
 drivers/net/bnxt/bnxt_reps.h   |  11 +
 drivers/net/bnxt/bnxt_rxr.c    |  22 +-
 drivers/net/bnxt/bnxt_rxr.h    |   1 +
 drivers/net/bnxt/bnxt_txq.h    |   1 +
 drivers/net/bnxt/bnxt_txr.c    |   4 +-
 8 files changed, 616 insertions(+), 79 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 9b7b87cee..443d9fee4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -495,6 +495,7 @@ struct bnxt_mark_info {
 
 struct bnxt_rep_info {
 	struct rte_eth_dev	*vfr_eth_dev;
+	pthread_mutex_t		vfr_lock;
 };
 
 /* address space location of register */
@@ -755,7 +756,8 @@ struct bnxt {
 	uint16_t		fw_reset_max_msecs;
 	uint16_t		switch_domain_id;
 	uint16_t		num_reps;
-	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
+	struct bnxt_rep_info	*rep_info;
+	uint16_t                *cfa_code_map;
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -780,12 +782,28 @@ struct bnxt {
  * Structure to store private data for each VF representor instance
  */
 struct bnxt_vf_representor {
-	uint16_t switch_domain_id;
-	uint16_t vf_id;
+	uint16_t		switch_domain_id;
+	uint16_t		vf_id;
+	uint16_t		tx_cfa_action;
+	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
-	struct bnxt	*parent_priv;
-	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
-	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct rte_eth_dev	*parent_dev;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t			dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct bnxt_rx_queue	**rx_queues;
+	unsigned int		rx_nr_rings;
+	unsigned int		tx_nr_rings;
+	uint64_t                tx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                tx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_bytes[BNXT_MAX_VF_REP_RINGS];
+};
+
+struct bnxt_vf_rep_tx_queue {
+	struct bnxt_tx_queue *txq;
+	struct bnxt_vf_representor *bp;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4911745af..4202904c9 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -137,6 +137,7 @@ static void bnxt_cancel_fw_health_check(struct bnxt *bp);
 static int bnxt_restore_vlan_filters(struct bnxt *bp);
 static void bnxt_dev_recover(void *arg);
 static void bnxt_free_error_recovery_info(struct bnxt *bp);
+static void bnxt_free_rep_info(struct bnxt *bp);
 
 int is_bnxt_in_error(struct bnxt *bp)
 {
@@ -5243,7 +5244,7 @@ bnxt_init_locks(struct bnxt *bp)
 
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev)
 {
-	int rc;
+	int rc = 0;
 
 	rc = bnxt_init_fw(bp);
 	if (rc)
@@ -5642,6 +5643,8 @@ bnxt_uninit_locks(struct bnxt *bp)
 {
 	pthread_mutex_destroy(&bp->flow_lock);
 	pthread_mutex_destroy(&bp->def_cp_lock);
+	if (bp->rep_info)
+		pthread_mutex_destroy(&bp->rep_info->vfr_lock);
 }
 
 static int
@@ -5664,6 +5667,7 @@ bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev)
 
 	bnxt_uninit_locks(bp);
 	bnxt_free_flow_stats_info(bp);
+	bnxt_free_rep_info(bp);
 	rte_free(bp->ptp_cfg);
 	bp->ptp_cfg = NULL;
 	return rc;
@@ -5703,56 +5707,73 @@ static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-	struct rte_pci_device *pci_dev)
+static void bnxt_free_rep_info(struct bnxt *bp)
 {
-	char name[RTE_ETH_NAME_MAX_LEN];
-	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
-	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
-	uint16_t num_rep;
-	int i, ret = 0;
-	struct bnxt *backing_bp;
+	rte_free(bp->rep_info);
+	bp->rep_info = NULL;
+	rte_free(bp->cfa_code_map);
+	bp->cfa_code_map = NULL;
+}
 
-	if (pci_dev->device.devargs) {
-		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
-					    &eth_da);
-		if (ret)
-			return ret;
-	}
+static int bnxt_init_rep_info(struct bnxt *bp)
+{
+	int i = 0, rc;
 
-	num_rep = eth_da.nb_representor_ports;
-	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
-		    num_rep);
+	if (bp->rep_info)
+		return 0;
 
-	/* We could come here after first level of probe is already invoked
-	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
-	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
-	 */
-	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
-					 sizeof(struct bnxt),
-					 eth_dev_pci_specific_init, pci_dev,
-					 bnxt_dev_init, NULL);
+	bp->rep_info = rte_zmalloc("bnxt_rep_info",
+				   sizeof(bp->rep_info[0]) * BNXT_MAX_VF_REPS,
+				   0);
+	if (!bp->rep_info) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for rep info\n");
+		return -ENOMEM;
+	}
+	bp->cfa_code_map = rte_zmalloc("bnxt_cfa_code_map",
+				       sizeof(*bp->cfa_code_map) *
+				       BNXT_MAX_CFA_CODE, 0);
+	if (!bp->cfa_code_map) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for cfa_code_map\n");
+		bnxt_free_rep_info(bp);
+		return -ENOMEM;
+	}
 
-		if (ret || !num_rep)
-			return ret;
+	for (i = 0; i < BNXT_MAX_CFA_CODE; i++)
+		bp->cfa_code_map[i] = BNXT_VF_IDX_INVALID;
+
+	rc = pthread_mutex_init(&bp->rep_info->vfr_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Unable to initialize vfr_lock\n");
+		bnxt_free_rep_info(bp);
+		return rc;
 	}
+	return rc;
+}
+
+static int bnxt_rep_port_probe(struct rte_pci_device *pci_dev,
+			       struct rte_eth_devargs eth_da,
+			       struct rte_eth_dev *backing_eth_dev)
+{
+	struct rte_eth_dev *vf_rep_eth_dev;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct bnxt *backing_bp;
+	uint16_t num_rep;
+	int i, ret = 0;
 
+	num_rep = eth_da.nb_representor_ports;
 	if (num_rep > BNXT_MAX_VF_REPS) {
 		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
-			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
-		ret = -EINVAL;
-		return ret;
+			    num_rep, BNXT_MAX_VF_REPS);
+		return -EINVAL;
 	}
 
-	/* probe representor ports now */
-	if (!backing_eth_dev)
-		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = -ENODEV;
-		return ret;
+	if (num_rep > RTE_MAX_ETHPORTS) {
+		PMD_DRV_LOG(ERR,
+			    "nb_representor_ports = %d > %d MAX ETHPORTS\n",
+			    num_rep, RTE_MAX_ETHPORTS);
+		return -EINVAL;
 	}
+
 	backing_bp = backing_eth_dev->data->dev_private;
 
 	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
@@ -5761,14 +5782,17 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		/* Returning an error is not an option.
 		 * Applications are not handling this correctly
 		 */
-		return ret;
+		return 0;
 	}
 
-	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+	if (bnxt_init_rep_info(backing_bp))
+		return 0;
+
+	for (i = 0; i < num_rep; i++) {
 		struct bnxt_vf_representor representor = {
 			.vf_id = eth_da.representor_ports[i],
 			.switch_domain_id = backing_bp->switch_domain_id,
-			.parent_priv = backing_bp
+			.parent_dev = backing_eth_dev
 		};
 
 		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
@@ -5809,6 +5833,48 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return ret;
 }
 
+static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			  struct rte_pci_device *pci_dev)
+{
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev;
+	uint16_t num_rep;
+	int ret = 0;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after first level of probe is already invoked
+	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
+	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	}
+
+	/* probe representor ports now */
+	ret = bnxt_rep_port_probe(pci_dev, eth_da, backing_eth_dev);
+
+	return ret;
+}
+
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct rte_eth_dev *eth_dev;
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 21f1b0765..777179558 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -6,6 +6,11 @@
 #include "bnxt.h"
 #include "bnxt_ring.h"
 #include "bnxt_reps.h"
+#include "bnxt_rxq.h"
+#include "bnxt_rxr.h"
+#include "bnxt_txq.h"
+#include "bnxt_txr.h"
+#include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
@@ -13,25 +18,128 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_configure = bnxt_vf_rep_dev_configure_op,
 	.dev_start = bnxt_vf_rep_dev_start_op,
 	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.rx_queue_release = bnxt_vf_rep_rx_queue_release_op,
 	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.tx_queue_release = bnxt_vf_rep_tx_queue_release_op,
 	.link_update = bnxt_vf_rep_link_update_op,
 	.dev_close = bnxt_vf_rep_dev_close_op,
-	.dev_stop = bnxt_vf_rep_dev_stop_op
+	.dev_stop = bnxt_vf_rep_dev_stop_op,
+	.stats_get = bnxt_vf_rep_stats_get_op,
+	.stats_reset = bnxt_vf_rep_stats_reset_op,
 };
 
-static uint16_t
-bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
-		     __rte_unused struct rte_mbuf **rx_pkts,
-		     __rte_unused uint16_t nb_pkts)
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf)
 {
+	struct bnxt_sw_rx_bd *prod_rx_buf;
+	struct bnxt_rx_ring_info *rep_rxr;
+	struct bnxt_rx_queue *rep_rxq;
+	struct rte_eth_dev *vfr_eth_dev;
+	struct bnxt_vf_representor *vfr_bp;
+	uint16_t vf_id;
+	uint16_t mask;
+	uint8_t que;
+
+	vf_id = bp->cfa_code_map[cfa_code];
+	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
+	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
+		return 1;
+	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	if (!vfr_eth_dev)
+		return 1;
+	vfr_bp = vfr_eth_dev->data->dev_private;
+	if (vfr_bp->rx_cfa_code != cfa_code) {
+		/* cfa_code not meant for this VF rep!!?? */
+		return 1;
+	}
+	/* If rxq_id happens to be > max rep_queue, use rxq0 */
+	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
+	rep_rxq = vfr_bp->rx_queues[que];
+	rep_rxr = rep_rxq->rx_ring;
+	mask = rep_rxr->rx_ring_struct->ring_mask;
+
+	/* Put this mbuf on the RxQ of the Representor */
+	prod_rx_buf =
+		&rep_rxr->rx_buf_ring[rep_rxr->rx_prod++ & mask];
+	if (!prod_rx_buf->mbuf) {
+		prod_rx_buf->mbuf = mbuf;
+		vfr_bp->rx_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_pkts[que]++;
+	} else {
+		vfr_bp->rx_drop_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_drop_pkts[que]++;
+		rte_free(mbuf); /* Representor Rx ring full, drop pkt */
+	}
+
 	return 0;
 }
 
 static uint16_t
-bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
-		     __rte_unused struct rte_mbuf **tx_pkts,
+bnxt_vf_rep_rx_burst(void *rx_queue,
+		     struct rte_mbuf **rx_pkts,
+		     uint16_t nb_pkts)
+{
+	struct bnxt_rx_queue *rxq = rx_queue;
+	struct bnxt_sw_rx_bd *cons_rx_buf;
+	struct bnxt_rx_ring_info *rxr;
+	uint16_t nb_rx_pkts = 0;
+	uint16_t mask, i;
+
+	if (!rxq)
+		return 0;
+
+	rxr = rxq->rx_ring;
+	mask = rxr->rx_ring_struct->ring_mask;
+	for (i = 0; i < nb_pkts; i++) {
+		cons_rx_buf = &rxr->rx_buf_ring[rxr->rx_cons & mask];
+		if (!cons_rx_buf->mbuf)
+			return nb_rx_pkts;
+		rx_pkts[nb_rx_pkts] = cons_rx_buf->mbuf;
+		rx_pkts[nb_rx_pkts]->port = rxq->port_id;
+		cons_rx_buf->mbuf = NULL;
+		nb_rx_pkts++;
+		rxr->rx_cons++;
+	}
+
+	return nb_rx_pkts;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
 		     __rte_unused uint16_t nb_pkts)
 {
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+	struct bnxt_tx_queue *ptxq;
+	struct bnxt *parent;
+	struct  bnxt_vf_representor *vf_rep_bp;
+	int qid;
+	int rc;
+	int i;
+
+	if (!vfr_txq)
+		return 0;
+
+	qid = vfr_txq->txq->queue_id;
+	vf_rep_bp = vfr_txq->bp;
+	parent = vf_rep_bp->parent_dev->data->dev_private;
+	pthread_mutex_lock(&parent->rep_info->vfr_lock);
+	ptxq = parent->tx_queues[qid];
+
+	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+
+	for (i = 0; i < nb_pkts; i++) {
+		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
+		vf_rep_bp->tx_pkts[qid]++;
+	}
+
+	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
+	ptxq->tx_cfa_action = 0;
+	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
+
+	return rc;
+
 	return 0;
 }
 
@@ -45,7 +153,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
-	vf_rep_bp->parent_priv = rep_params->parent_priv;
+	vf_rep_bp->parent_dev = rep_params->parent_dev;
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	eth_dev->data->representor_id = rep_params->vf_id;
@@ -63,7 +171,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
 	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
 	/* Link state. Inherited from PF or trusted VF */
-	parent_bp = vf_rep_bp->parent_priv;
+	parent_bp = vf_rep_bp->parent_dev->data->dev_private;
 	link = &parent_bp->eth_dev->data->dev_link;
 
 	eth_dev->data->dev_link.link_speed = link->link_speed;
@@ -94,18 +202,18 @@ int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
 	uint16_t vf_id;
 
 	eth_dev->data->mac_addrs = NULL;
-
-	parent_bp = rep->parent_priv;
-	if (parent_bp) {
-		parent_bp->num_reps--;
-		vf_id = rep->vf_id;
-		if (parent_bp->rep_info) {
-			memset(&parent_bp->rep_info[vf_id], 0,
-			       sizeof(parent_bp->rep_info[vf_id]));
-			/* mark that this representor has been freed */
-		}
-	}
 	eth_dev->dev_ops = NULL;
+
+	parent_bp = rep->parent_dev->data->dev_private;
+	if (!parent_bp)
+		return 0;
+
+	parent_bp->num_reps--;
+	vf_id = rep->vf_id;
+	if (parent_bp->rep_info)
+		memset(&parent_bp->rep_info[vf_id], 0,
+		       sizeof(parent_bp->rep_info[vf_id]));
+		/* mark that this representor has been freed */
 	return 0;
 }
 
@@ -117,7 +225,7 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	struct rte_eth_link *link;
 	int rc;
 
-	parent_bp = rep->parent_priv;
+	parent_bp = rep->parent_dev->data->dev_private;
 	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
 
 	/* Link state. Inherited from PF or trusted VF */
@@ -132,16 +240,134 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
+static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already allocated in FW */
+	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * Alloc VF rep rules in CFA after default VNIC is created.
+	 * Otherwise the FW will create the VF-rep rules with
+	 * default drop action.
+	 */
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more/less the same job
+	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
+				     vfr->vf_id,
+				     &vfr->tx_cfa_action,
+				     &vfr->rx_cfa_code);
+	 */
+	if (!rc) {
+		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
+			    vfr->vf_id);
+	} else {
+		PMD_DRV_LOG(ERR,
+			    "Failed to alloc representor %d in FW\n",
+			    vfr->vf_id);
+	}
+
+	return rc;
+}
+
+static void bnxt_vf_rep_free_rx_mbufs(struct bnxt_vf_representor *rep_bp)
+{
+	struct bnxt_rx_queue *rxq;
+	unsigned int i;
+
+	for (i = 0; i < rep_bp->rx_nr_rings; i++) {
+		rxq = rep_bp->rx_queues[i];
+		bnxt_rx_queue_release_mbufs(rxq);
+	}
+}
+
 int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 {
-	bnxt_vf_rep_link_update_op(eth_dev, 1);
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int rc;
 
-	return 0;
+	rc = bnxt_vfr_alloc(rep_bp);
+
+	if (!rc) {
+		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
+		eth_dev->tx_pkt_burst = &bnxt_vf_rep_tx_burst;
+
+		bnxt_vf_rep_link_update_op(eth_dev, 1);
+	} else {
+		eth_dev->data->dev_link.link_status = 0;
+		bnxt_vf_rep_free_rx_mbufs(rep_bp);
+	}
+
+	return rc;
+}
+
+static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already freed in FW */
+	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more/less the same job
+	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
+				    vfr->vf_id);
+	 */
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to free representor %d in FW\n",
+			    vfr->vf_id);
+		return rc;
+	}
+
+	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
+	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
+		    vfr->vf_id);
+	vfr->tx_cfa_action = 0;
+	vfr->rx_cfa_code = 0;
+
+	return rc;
 }
 
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *vfr_bp = eth_dev->data->dev_private;
+
+	/* Avoid crashes as we are about to free queues */
+	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
+	eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+
+	bnxt_vfr_free(vfr_bp);
+
+	if (eth_dev->data->dev_started)
+		eth_dev->data->dev_link.link_status = 0;
+
+	bnxt_vf_rep_free_rx_mbufs(vfr_bp);
 }
 
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
@@ -159,7 +385,7 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	int rc = 0;
 
 	/* MAC Specifics */
-	parent_bp = rep_bp->parent_priv;
+	parent_bp = rep_bp->parent_dev->data->dev_private;
 	if (!parent_bp) {
 		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
 		return rc;
@@ -257,7 +483,13 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
 {
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+
 	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	rep_bp->rx_queues = (void *)eth_dev->data->rx_queues;
+	rep_bp->tx_nr_rings = eth_dev->data->nb_tx_queues;
+	rep_bp->rx_nr_rings = eth_dev->data->nb_rx_queues;
+
 	return 0;
 }
 
@@ -269,9 +501,94 @@ int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  rx_conf,
 				  __rte_unused struct rte_mempool *mp)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_rx_queue *parent_rxq;
+	struct bnxt_rx_queue *rxq;
+	struct bnxt_sw_rx_bd *buf_ring;
+	int rc = 0;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Rx ring %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_RX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid\n", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_rxq = parent_bp->rx_queues[queue_idx];
+	if (!parent_rxq) {
+		PMD_DRV_LOG(ERR, "Parent RxQ has not been configured yet\n");
+		return -EINVAL;
+	}
+
+	if (nb_desc != parent_rxq->nb_rx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d does not match parent rxq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->rx_queues) {
+		rxq = eth_dev->data->rx_queues[queue_idx];
+		if (rxq)
+			bnxt_rx_queue_release_op(rxq);
+	}
+
+	rxq = rte_zmalloc_socket("bnxt_vfr_rx_queue",
+				 sizeof(struct bnxt_rx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_rx_queue allocation failed!\n");
+		return -ENOMEM;
+	}
+
+	rxq->nb_rx_desc = nb_desc;
+
+	rc = bnxt_init_rx_ring_struct(rxq, socket_id);
+	if (rc)
+		goto out;
+
+	buf_ring = rte_zmalloc_socket("bnxt_rx_vfr_buf_ring",
+				      sizeof(struct bnxt_sw_rx_bd) *
+				      rxq->rx_ring->rx_ring_struct->ring_size,
+				      RTE_CACHE_LINE_SIZE, socket_id);
+	if (!buf_ring) {
+		PMD_DRV_LOG(ERR, "bnxt_rx_vfr_buf_ring allocation failed!\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	rxq->rx_ring->rx_buf_ring = buf_ring;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = eth_dev->data->port_id;
+	eth_dev->data->rx_queues[queue_idx] = rxq;
 
 	return 0;
+
+out:
+	if (rxq)
+		bnxt_rx_queue_release_op(rxq);
+
+	return rc;
+}
+
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue)
+{
+	struct bnxt_rx_queue *rxq = (struct bnxt_rx_queue *)rx_queue;
+
+	if (!rxq)
+		return;
+
+	bnxt_rx_queue_release_mbufs(rxq);
+
+	bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
+	bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
+	bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
+
+	rte_free(rxq);
 }
 
 int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
@@ -281,7 +598,112 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_tx_queue *parent_txq, *txq;
+	struct bnxt_vf_rep_tx_queue *vfr_txq;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Tx rings %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_TX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_txq = parent_bp->tx_queues[queue_idx];
+	if (!parent_txq) {
+		PMD_DRV_LOG(ERR, "Parent TxQ has not been configured yet\n");
+		return -EINVAL;
+	}
 
+	if (nb_desc != parent_txq->nb_tx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d does not match parent txq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->tx_queues) {
+		vfr_txq = eth_dev->data->tx_queues[queue_idx];
+		bnxt_vf_rep_tx_queue_release_op(vfr_txq);
+		vfr_txq = NULL;
+	}
+
+	vfr_txq = rte_zmalloc_socket("bnxt_vfr_tx_queue",
+				     sizeof(struct bnxt_vf_rep_tx_queue),
+				     RTE_CACHE_LINE_SIZE, socket_id);
+	if (!vfr_txq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_tx_queue allocation failed!");
+		return -ENOMEM;
+	}
+	txq = rte_zmalloc_socket("bnxt_tx_queue",
+				 sizeof(struct bnxt_tx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "bnxt_tx_queue allocation failed!");
+		rte_free(vfr_txq);
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->queue_id = queue_idx;
+	txq->port_id = eth_dev->data->port_id;
+	vfr_txq->txq = txq;
+	vfr_txq->bp = rep_bp;
+	eth_dev->data->tx_queues[queue_idx] = vfr_txq;
+
+	return 0;
+}
+
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue)
+{
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+
+	if (!vfr_txq)
+		return;
+
+	rte_free(vfr_txq->txq);
+	rte_free(vfr_txq);
+}
+
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		stats->obytes += rep_bp->tx_bytes[i];
+		stats->opackets += rep_bp->tx_pkts[i];
+		stats->ibytes += rep_bp->rx_bytes[i];
+		stats->ipackets += rep_bp->rx_pkts[i];
+		stats->imissed += rep_bp->rx_drop_pkts[i];
+
+		stats->q_ipackets[i] = rep_bp->rx_pkts[i];
+		stats->q_ibytes[i] = rep_bp->rx_bytes[i];
+		stats->q_opackets[i] = rep_bp->tx_pkts[i];
+		stats->q_obytes[i] = rep_bp->tx_bytes[i];
+		stats->q_errors[i] = rep_bp->rx_drop_pkts[i];
+	}
+
+	return 0;
+}
+
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		rep_bp->tx_pkts[i] = 0;
+		rep_bp->tx_bytes[i] = 0;
+		rep_bp->rx_pkts[i] = 0;
+		rep_bp->rx_bytes[i] = 0;
+		rep_bp->rx_drop_pkts[i] = 0;
+	}
 	return 0;
 }
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index 6048faf08..5c2e0a0b9 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -9,6 +9,12 @@
 #include <rte_malloc.h>
 #include <rte_ethdev.h>
 
+#define BNXT_MAX_CFA_CODE               65536
+#define BNXT_VF_IDX_INVALID             0xffff
+
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
@@ -30,6 +36,11 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused unsigned int socket_id,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf);
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue);
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue);
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats);
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev);
 #endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 11807f409..37b534fc2 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -12,6 +12,7 @@
 #include <rte_memory.h>
 
 #include "bnxt.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxr.h"
 #include "bnxt_rxq.h"
@@ -539,7 +540,7 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp,
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
-			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
+		       struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
@@ -735,6 +736,20 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
+	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
+	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
+		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
+				   mbuf)) {
+			/* Now return an error so that nb_rx_pkts is not
+			 * incremented.
+			 * This packet was meant to be given to the representor.
+			 * So no need to account the packet and give it to
+			 * parent Rx burst function.
+			 */
+			rc = -ENODEV;
+		}
+	}
+
 next_rx:
 
 	*raw_cons = tmp_raw_cons;
@@ -751,6 +766,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint32_t raw_cons = cpr->cp_raw_cons;
 	uint32_t cons;
 	int nb_rx_pkts = 0;
+	int nb_rep_rx_pkts = 0;
 	struct rx_pkt_cmpl *rxcmp;
 	uint16_t prod = rxr->rx_prod;
 	uint16_t ag_prod = rxr->ag_prod;
@@ -784,6 +800,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				nb_rx_pkts++;
 			if (rc == -EBUSY)	/* partial completion */
 				break;
+			if (rc == -ENODEV)	/* completion for representor */
+				nb_rep_rx_pkts++;
 		} else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
 			evt =
 			bnxt_event_hwrm_resp_handler(rxq->bp,
@@ -802,7 +820,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 
 	cpr->cp_raw_cons = raw_cons;
-	if (!nb_rx_pkts && !evt) {
+	if (!nb_rx_pkts && !nb_rep_rx_pkts && !evt) {
 		/*
 		 * For PMD, there is no need to keep on pushing to REARM
 		 * the doorbell if there are no new completions
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 811dcd86b..e60c97fa1 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -188,6 +188,7 @@ struct bnxt_sw_rx_bd {
 struct bnxt_rx_ring_info {
 	uint16_t		rx_prod;
 	uint16_t		ag_prod;
+	uint16_t                rx_cons; /* Needed for representor */
 	struct bnxt_db_info     rx_db;
 	struct bnxt_db_info     ag_db;
 
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 37a3f9539..69ff89aab 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -29,6 +29,7 @@ struct bnxt_tx_queue {
 	struct bnxt		*bp;
 	int			index;
 	int			tx_wake_thresh;
+	uint32_t                tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 16021407e..d7e193d38 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT))
+				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +184,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = 0;
+		cfa_action = txq->tx_cfa_action;
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 03/51] net/bnxt: get IDs for VF-Rep endpoint
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
                           ` (47 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Use 'first_vf_id' and the 'vf_id' supplied when adding a representor
to obtain the PCI function ID (FID) of the VF (the VFR endpoint).
Pass that FID to the FUNC_QCFG HWRM cmd to obtain the default vNIC ID
of the VF, and, from the same HWRM_FUNC_QCFG response, also obtain and
store its function svif.
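
As a tiny worked example of the derivation above (the numbers are
hypothetical, for illustration only):

    uint16_t first_vf_id = 8; /* first VF FID reported by FW for the parent */
    uint16_t vf_id = 3;       /* representor index given in the devargs */
    uint16_t fw_fid = first_vf_id + vf_id; /* == 11; 'fid' for FUNC_QCFG */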

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |  3 +++
 drivers/net/bnxt/bnxt_hwrm.c | 27 +++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h |  4 ++++
 drivers/net/bnxt/bnxt_reps.c | 12 ++++++++++++
 4 files changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 443d9fee4..7afbd5cab 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -784,6 +784,9 @@ struct bnxt {
 struct bnxt_vf_representor {
 	uint16_t		switch_domain_id;
 	uint16_t		vf_id;
+	uint16_t		fw_fid;
+	uint16_t		dflt_vnic_id;
+	uint16_t		svif;
 	uint16_t		tx_cfa_action;
 	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 945bc9018..ed42e58d4 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,33 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t svif_info;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	req.fid = rte_cpu_to_le_16(fid);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	if (vnic_id)
+		*vnic_id = rte_le_to_cpu_16(resp->dflt_vnic_id);
+
+	svif_info = rte_le_to_cpu_16(resp->svif_info);
+	if (svif && (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID))
+		*svif = svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 {
 	struct hwrm_port_mac_qcfg_input req = {0};
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 58b414d4f..8d19998df 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -270,4 +270,8 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 				 enum bnxt_flow_dir dir,
 				 uint16_t cntr,
 				 uint16_t num_entries);
+int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
+			       uint16_t *vnic_id);
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif);
 #endif
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 777179558..ea6f0010f 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -150,6 +150,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 				 (struct bnxt_vf_representor *)params;
 	struct rte_eth_link *link;
 	struct bnxt *parent_bp;
+	int rc = 0;
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
@@ -179,6 +180,17 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_link.link_status = link->link_status;
 	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
 
+	vf_rep_bp->fw_fid = rep_params->vf_id + parent_bp->first_vf_id;
+	PMD_DRV_LOG(INFO, "vf_rep->fw_fid = %d\n", vf_rep_bp->fw_fid);
+	rc = bnxt_hwrm_get_dflt_vnic_svif(parent_bp, vf_rep_bp->fw_fid,
+					  &vf_rep_bp->dflt_vnic_id,
+					  &vf_rep_bp->svif);
+	if (rc)
+		PMD_DRV_LOG(ERR, "Failed to get default vnic id of VF\n");
+	else
+		PMD_DRV_LOG(INFO, "vf_rep->dflt_vnic_id = %d\n",
+			    vf_rep_bp->dflt_vnic_id);
+
 	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
 	bnxt_print_link_info(eth_dev);
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 04/51] net/bnxt: initialize parent PF information
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (2 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
                           ` (46 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev
  Cc: Lance Richardson, Venkat Duvvuru, Somnath Kotur, Kalesh AP,
	Kishore Padmanabha

From: Lance Richardson <lance.richardson@broadcom.com>

Add support to query parent PF information (MAC address,
function ID, port ID and default VNIC) from firmware.

Current firmware returns zero for the parent default VNIC;
a temporary Wh+-specific workaround is included until
that is fixed.

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  9 ++++++++
 drivers/net/bnxt/bnxt_ethdev.c | 23 +++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 42 ++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 75 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 7afbd5cab..2b87899a4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -217,6 +217,14 @@ struct bnxt_child_vf_info {
 	bool			persist_stats;
 };
 
+struct bnxt_parent_info {
+#define	BNXT_PF_FID_INVALID	0xFFFF
+	uint16_t		fid;
+	uint16_t		vnic;
+	uint16_t		port_id;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
@@ -738,6 +746,7 @@ struct bnxt {
 #define BNXT_OUTER_TPID_BD_SHFT	16
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
+	struct bnxt_parent_info	*parent;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4202904c9..bf018be16 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -97,6 +97,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+
 static const char *const bnxt_dev_args[] = {
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
@@ -173,6 +174,11 @@ uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 	return bnxt_rss_ctxts(bp) * BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
+static void bnxt_free_parent_info(struct bnxt *bp)
+{
+	rte_free(bp->parent);
+}
+
 static void bnxt_free_pf_info(struct bnxt *bp)
 {
 	rte_free(bp->pf);
@@ -223,6 +229,16 @@ static void bnxt_free_mem(struct bnxt *bp, bool reconfig)
 	bp->grp_info = NULL;
 }
 
+static int bnxt_alloc_parent_info(struct bnxt *bp)
+{
+	bp->parent = rte_zmalloc("bnxt_parent_info",
+				 sizeof(struct bnxt_parent_info), 0);
+	if (bp->parent == NULL)
+		return -ENOMEM;
+
+	return 0;
+}
+
 static int bnxt_alloc_pf_info(struct bnxt *bp)
 {
 	bp->pf = rte_zmalloc("bnxt_pf_info", sizeof(struct bnxt_pf_info), 0);
@@ -1322,6 +1338,7 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	bnxt_free_cos_queues(bp);
 	bnxt_free_link_info(bp);
 	bnxt_free_pf_info(bp);
+	bnxt_free_parent_info(bp);
 
 	eth_dev->dev_ops = NULL;
 	eth_dev->rx_pkt_burst = NULL;
@@ -5210,6 +5227,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_port_mac_qcfg(bp);
 
+	bnxt_hwrm_parent_pf_qcfg(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
@@ -5528,6 +5547,10 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 	if (rc)
 		goto error_free;
 
+	rc = bnxt_alloc_parent_info(bp);
+	if (rc)
+		goto error_free;
+
 	rc = bnxt_alloc_hwrm_resources(bp);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index ed42e58d4..347e1c71e 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,48 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	int rc;
+
+	if (!BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	if (!bp->parent)
+		return -EINVAL;
+
+	bp->parent->fid = BNXT_PF_FID_INVALID;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+
+	req.fid = rte_cpu_to_le_16(0xfffe); /* Request parent PF information. */
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	memcpy(bp->parent->mac_addr, resp->mac_address, RTE_ETHER_ADDR_LEN);
+	bp->parent->vnic = rte_le_to_cpu_16(resp->dflt_vnic_id);
+	bp->parent->fid = rte_le_to_cpu_16(resp->fid);
+	bp->parent->port_id = rte_le_to_cpu_16(resp->port_id);
+
+	/* FIXME: Temporary workaround - remove when firmware issue is fixed. */
+	if (bp->parent->vnic == 0) {
+		PMD_DRV_LOG(ERR, "Error: parent VNIC unavailable.\n");
+		/* Use hard-coded values appropriate for current Wh+ fw. */
+		if (bp->parent->fid == 2)
+			bp->parent->vnic = 0x100;
+		else
+			bp->parent->vnic = 1;
+	}
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 8d19998df..ef8997500 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -274,4 +274,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 			       uint16_t *vnic_id);
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 #endif
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 05/51] net/bnxt: modify port db dev interface
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (3 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 06/51] net/bnxt: get port and function info Ajit Khaparde
                           ` (45 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Modify ulp_port_db_dev_port_intf_update prototype to take
"struct rte_eth_dev *" as the second parameter.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    | 4 ++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 5 +++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.h | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 0c3c638ce..c7281ab9a 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -548,7 +548,7 @@ bnxt_ulp_init(struct bnxt *bp)
 		}
 
 		/* update the port database */
-		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 		if (rc) {
 			BNXT_TF_DBG(ERR,
 				    "Failed to update port database\n");
@@ -584,7 +584,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	}
 
 	/* update the port database */
-	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to update port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index e3b924289..66b584026 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -104,10 +104,11 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp)
+					 struct rte_eth_dev *eth_dev)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint32_t port_id = bp->eth_dev->data->port_id;
+	struct bnxt *bp = eth_dev->data->dev_private;
+	uint32_t port_id = eth_dev->data->port_id;
 	uint32_t ifindex;
 	struct ulp_interface_info *intf;
 	int32_t rc;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 271c29a47..929a5a510 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -71,7 +71,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp);
+					 struct rte_eth_dev *eth_dev);
 
 /*
  * Api to get the ulp ifindex for a given device port.
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 06/51] net/bnxt: get port and function info
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (4 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
                           ` (44 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kalesh AP, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Add helper functions to get port- and function-related information
such as parif, physical port ID and vport ID.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  8 ++++
 drivers/net/bnxt/bnxt_ethdev.c           | 58 ++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h | 10 ++++
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    | 10 ----
 4 files changed, 76 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 2b87899a4..0bdf8f5ba 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -855,6 +855,9 @@ extern const struct rte_flow_ops bnxt_flow_ops;
 	} \
 } while (0)
 
+#define	BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)	\
+		((eth_dev)->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+
 extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG_RAW(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
@@ -870,6 +873,11 @@ void bnxt_ulp_deinit(struct bnxt *bp);
 uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 uint16_t bnxt_get_fw_func_id(uint16_t port);
+uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_phy_port_id(uint16_t port);
+uint16_t bnxt_get_vport(uint16_t port);
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port);
 
 void bnxt_cancel_fc_thread(struct bnxt *bp);
 void bnxt_flow_cnt_alarm_cb(void *arg);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index bf018be16..af88b360f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -28,6 +28,7 @@
 #include "bnxt_vnic.h"
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
+#include "bnxt_tf_common.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -5101,6 +5102,63 @@ bnxt_get_fw_func_id(uint16_t port)
 	return bp->fw_fid;
 }
 
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev))
+		return BNXT_ULP_INTF_TYPE_VF_REP;
+
+	bp = eth_dev->data->dev_private;
+	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
+			   : BNXT_ULP_INTF_TYPE_VF;
+}
+
+uint16_t
+bnxt_get_phy_port_id(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->pf->port_id : bp->parent->port_id;
+}
+
+uint16_t
+bnxt_get_parif(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->fw_fid - 1 : bp->parent->fid - 1;
+}
+
+uint16_t
+bnxt_get_vport(uint16_t port_id)
+{
+	return (1 << bnxt_get_phy_port_id(port_id));
+}
+
 static void bnxt_alloc_error_recovery_info(struct bnxt *bp)
 {
 	struct bnxt_error_recovery_info *info = bp->recovery_info;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f41757908..f772d4919 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -44,6 +44,16 @@ enum ulp_direction_type {
 	ULP_DIR_EGRESS,
 };
 
+/* enumeration of the interface types */
+enum bnxt_ulp_intf_type {
+	BNXT_ULP_INTF_TYPE_INVALID = 0,
+	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_VF,
+	BNXT_ULP_INTF_TYPE_PF_REP,
+	BNXT_ULP_INTF_TYPE_VF_REP,
+	BNXT_ULP_INTF_TYPE_LAST
+};
+
 struct bnxt_ulp_mark_tbl *
 bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 929a5a510..604c4385a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -10,16 +10,6 @@
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
 
-/* enumeration of the interface types */
-enum bnxt_ulp_intf_type {
-	BNXT_ULP_INTF_TYPE_INVALID = 0,
-	BNXT_ULP_INTF_TYPE_PF = 1,
-	BNXT_ULP_INTF_TYPE_VF,
-	BNXT_ULP_INTF_TYPE_PF_REP,
-	BNXT_ULP_INTF_TYPE_VF_REP,
-	BNXT_ULP_INTF_TYPE_LAST
-};
-
 /* Structure for the Port database resource information. */
 struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 07/51] net/bnxt: add support for hwrm port phy qcaps
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (5 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 06/51] net/bnxt: get port and function info Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
                           ` (43 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kalesh AP, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue HWRM_PORT_PHY_QCAPS to the firmware to get the physical
port count of the device.
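
As a minimal usage sketch (hypothetical, not part of this patch): once
bnxt_init_fw() has issued the query, the cached count can be consulted
directly. Note the query is skipped for untrusted VFs, so port_cnt may
remain 0 there.

    #include <stdbool.h>
    #include "bnxt.h"

    /* Hypothetical predicate: valid only after bnxt_init_fw() has run,
     * since that is where bnxt_hwrm_port_phy_qcaps() caches port_cnt.
     */
    static bool
    bnxt_is_multi_port(struct bnxt *bp)
    {
            return bp->port_cnt > 1;
    }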

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c |  2 ++
 drivers/net/bnxt/bnxt_hwrm.c   | 22 ++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 26 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0bdf8f5ba..65862abdc 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -747,6 +747,7 @@ struct bnxt {
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
 	struct bnxt_parent_info	*parent;
+	uint8_t			port_cnt;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index af88b360f..697cd6651 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5287,6 +5287,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_parent_pf_qcfg(bp);
 
+	bnxt_hwrm_port_phy_qcaps(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 347e1c71e..e6a28d07c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1330,6 +1330,28 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	return rc;
 }
 
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp)
+{
+	int rc = 0;
+	struct hwrm_port_phy_qcaps_input req = {0};
+	struct hwrm_port_phy_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	HWRM_PREP(&req, HWRM_PORT_PHY_QCAPS, BNXT_USE_CHIMP_MB);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	bp->port_cnt = resp->port_cnt;
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 static bool bnxt_find_lossy_profile(struct bnxt *bp)
 {
 	int i = 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index ef8997500..87cd40779 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -275,4 +275,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
 #endif
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 08/51] net/bnxt: modify port db to handle more info
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (6 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 09/51] net/bnxt: add support for exact match Ajit Khaparde
                           ` (42 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Apart from func_svif, func_id and vnic, port_db now stores and
retrieves func_spif, func_parif, phy_port_id, port_svif, port_spif,
port_parif and port_vport. New helper functions have been added to
retrieve these fields.
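
For illustration, a hypothetical lookup sketch (not part of this patch)
that resolves a DPDK port id to its ulp ifindex and reads back two of
the new attributes. It assumes ULP_DIR_INGRESS is defined alongside
ULP_DIR_EGRESS in enum ulp_direction_type:

    #include <errno.h>
    #include <stdint.h>
    #include "ulp_port_db.h"

    /* Hypothetical helper: fetch the ingress parif and the vport for a
     * given DPDK port id using the accessors added in this patch.
     */
    static int32_t
    ulp_port_attrs_get(struct bnxt_ulp_context *ulp_ctx, uint32_t port_id,
                       uint16_t *parif, uint16_t *vport)
    {
            uint32_t ifindex = 0;

            if (ulp_port_db_dev_port_to_ulp_index(ulp_ctx, port_id, &ifindex))
                    return -EINVAL;
            /* ULP_DIR_INGRESS is assumed to exist next to ULP_DIR_EGRESS. */
            if (ulp_port_db_parif_get(ulp_ctx, ifindex, ULP_DIR_INGRESS, parif))
                    return -EINVAL;
            return ulp_port_db_vport_get(ulp_ctx, ifindex, vport);
    }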

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 145 +++++++++++++++++++++-----
 drivers/net/bnxt/tf_ulp/ulp_port_db.h |  72 ++++++++++---
 2 files changed, 179 insertions(+), 38 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 66b584026..ea27ef41f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -106,13 +106,12 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 					 struct rte_eth_dev *eth_dev)
 {
-	struct bnxt_ulp_port_db *port_db;
-	struct bnxt *bp = eth_dev->data->dev_private;
 	uint32_t port_id = eth_dev->data->port_id;
-	uint32_t ifindex;
+	struct ulp_phy_port_info *port_data;
+	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	uint32_t ifindex;
 	int32_t rc;
-	struct bnxt_vnic_info *vnic;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db) {
@@ -133,22 +132,22 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 
 	/* update the interface details */
 	intf = &port_db->ulp_intf_list[ifindex];
-	if (BNXT_PF(bp) || BNXT_VF(bp)) {
-		if (BNXT_PF(bp)) {
-			intf->type = BNXT_ULP_INTF_TYPE_PF;
-			intf->port_svif = bp->port_svif;
-		} else {
-			intf->type = BNXT_ULP_INTF_TYPE_VF;
-		}
-		intf->func_id = bp->fw_fid;
-		intf->func_svif = bp->func_svif;
-		vnic = BNXT_GET_DEFAULT_VNIC(bp);
-		if (vnic)
-			intf->default_vnic = vnic->fw_vnic_id;
-		intf->bp = bp;
-		memcpy(intf->mac_addr, bp->mac_addr, sizeof(intf->mac_addr));
-	} else {
-		BNXT_TF_DBG(ERR, "Invalid interface type\n");
+
+	intf->type = bnxt_get_interface_type(port_id);
+
+	intf->func_id = bnxt_get_fw_func_id(port_id);
+	intf->func_svif = bnxt_get_svif(port_id, 1);
+	intf->func_spif = bnxt_get_phy_port_id(port_id);
+	intf->func_parif = bnxt_get_parif(port_id);
+	intf->default_vnic = bnxt_get_vnic_id(port_id);
+	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+
+	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
+		port_data = &port_db->phy_port_list[intf->phy_port_id];
+		port_data->port_svif = bnxt_get_svif(port_id, 0);
+		port_data->port_spif = bnxt_get_phy_port_id(port_id);
+		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_vport = bnxt_get_vport(port_id);
 	}
 
 	return 0;
@@ -209,7 +208,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 }
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -225,16 +224,88 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS)
+	if (dir == ULP_DIR_EGRESS) {
 		*svif = port_db->ulp_intf_list[ifindex].func_svif;
-	else
-		*svif = port_db->ulp_intf_list[ifindex].port_svif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*svif = port_db->phy_port_list[phy_port_id].port_svif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *spif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*spif = port_db->phy_port_list[phy_port_id].port_spif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *parif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*parif = port_db->phy_port_list[phy_port_id].port_parif;
+	}
+
 	return 0;
 }
 
@@ -262,3 +333,29 @@ ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
 	return 0;
 }
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint16_t *vport)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+	*vport = port_db->phy_port_list[phy_port_id].port_vport;
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 604c4385a..87de3bcbc 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -15,11 +15,17 @@ struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
 	uint16_t		func_id;
 	uint16_t		func_svif;
-	uint16_t		port_svif;
+	uint16_t		func_spif;
+	uint16_t		func_parif;
 	uint16_t		default_vnic;
-	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
-	/* back pointer to the bnxt driver, it is null for rep ports */
-	struct bnxt		*bp;
+	uint16_t		phy_port_id;
+};
+
+struct ulp_phy_port_info {
+	uint16_t	port_svif;
+	uint16_t	port_spif;
+	uint16_t	port_parif;
+	uint16_t	port_vport;
 };
 
 /* Structure for the Port database */
@@ -29,6 +35,7 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
 };
 
 /*
@@ -74,8 +81,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
-				  uint32_t port_id,
-				  uint32_t *ifindex);
+				  uint32_t port_id, uint32_t *ifindex);
 
 /*
  * Api to get the function id for a given ulp ifindex.
@@ -88,11 +94,10 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex,
-			    uint16_t *func_id);
+			    uint32_t ifindex, uint16_t *func_id);
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -103,9 +108,36 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
-		     uint32_t ifindex,
-		     uint32_t dir,
-		     uint16_t *svif);
+		     uint32_t ifindex, uint32_t dir, uint16_t *svif);
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex, uint32_t dir, uint16_t *spif);
+
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint32_t dir, uint16_t *parif);
 
 /*
  * Api to get the vnic id for a given ulp ifindex.
@@ -118,7 +150,19 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex,
-			     uint16_t *vnic);
+			     uint32_t ifindex, uint16_t *vnic);
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex,	uint16_t *vport);
 
 #endif /* _ULP_PORT_DB_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 09/51] net/bnxt: add support for exact match
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (7 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
                           ` (41 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Add Exact Match support
- Create an EM table pool of memory indices (sketched below)
- Add an API to insert internal exact match entries
- Send EM internal insert and delete requests to the firmware
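
As a conceptual aside (not the driver code), the EM table pool of
memory indices can be pictured as a stack of free indices: an insert
pops a free index for the new entry and a delete pushes it back. A
standalone sketch with a hypothetical fixed pool size:

    #include <stdbool.h>
    #include <stdint.h>

    #define EM_POOL_SIZE 16         /* hypothetical pool size */

    struct em_index_pool {
            uint32_t free_idx[EM_POOL_SIZE];
            uint32_t top;           /* number of free indices on the stack */
    };

    /* Fill the pool so every table index starts out free. */
    static void em_pool_init(struct em_index_pool *p)
    {
            for (p->top = 0; p->top < EM_POOL_SIZE; p->top++)
                    p->free_idx[p->top] = p->top;
    }

    /* Pop a free index for a new EM entry; false if the pool is empty. */
    static bool em_pool_alloc(struct em_index_pool *p, uint32_t *idx)
    {
            if (p->top == 0)
                    return false;
            *idx = p->free_idx[--p->top];
            return true;
    }

    /* Return an index to the pool; idx must come from em_pool_alloc(). */
    static void em_pool_free(struct em_index_pool *p, uint32_t idx)
    {
            p->free_idx[p->top++] = idx;
    }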

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++++++++---
 drivers/net/bnxt/tf_core/hwrm_tf.h            |    9 +
 drivers/net/bnxt/tf_core/lookup3.h            |    1 -
 drivers/net/bnxt/tf_core/stack.c              |    8 +
 drivers/net/bnxt/tf_core/stack.h              |   10 +
 drivers/net/bnxt/tf_core/tf_core.c            |  144 +-
 drivers/net/bnxt/tf_core/tf_core.h            |  383 +-
 drivers/net/bnxt/tf_core/tf_em.c              |   98 +-
 drivers/net/bnxt/tf_core/tf_em.h              |   31 +
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
 drivers/net/bnxt/tf_core/tf_msg.c             |   86 +-
 drivers/net/bnxt/tf_core/tf_msg.h             |   13 +
 drivers/net/bnxt/tf_core/tf_session.h         |   18 +
 drivers/net/bnxt/tf_core/tf_tbl.c             |   99 +-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   57 +-
 drivers/net/bnxt/tf_core/tfp.h                |  123 +-
 16 files changed, 3493 insertions(+), 690 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 7e30c9ffc..30516eb75 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -611,6 +611,10 @@ struct cmd_nums {
 	#define HWRM_FUNC_VF_BW_QCFG                      UINT32_C(0x196)
 	/* Queries pf ids belong to specified host(s) */
 	#define HWRM_FUNC_HOST_PF_IDS_QUERY               UINT32_C(0x197)
+	/* Queries extended stats per function */
+	#define HWRM_FUNC_QSTATS_EXT                      UINT32_C(0x198)
+	/* Queries extended statistics context */
+	#define HWRM_STAT_EXT_CTX_QUERY                   UINT32_C(0x199)
 	/* Experimental */
 	#define HWRM_SELFTEST_QLIST                       UINT32_C(0x200)
 	/* Experimental */
@@ -647,41 +651,49 @@ struct cmd_nums {
 	/* Experimental */
 	#define HWRM_TF_SESSION_ATTACH                    UINT32_C(0x2c7)
 	/* Experimental */
-	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2c8)
+	#define HWRM_TF_SESSION_REGISTER                  UINT32_C(0x2c8)
 	/* Experimental */
-	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2c9)
+	#define HWRM_TF_SESSION_UNREGISTER                UINT32_C(0x2c9)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2ca)
+	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2ca)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cb)
+	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2cb)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2cc)
+	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2cc)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cd)
+	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cd)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2d0)
+	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2ce)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2d1)
+	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cf)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2da)
+	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2da)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2db)
+	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2db)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2dc)
+	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2e4)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2dd)
+	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2e5)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2de)
+	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2e6)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2df)
+	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2e7)
 	/* Experimental */
-	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2ee)
+	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2e8)
 	/* Experimental */
-	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2ef)
+	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2e9)
 	/* Experimental */
-	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2f0)
+	#define HWRM_TF_EM_INSERT                         UINT32_C(0x2ea)
 	/* Experimental */
-	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2f1)
+	#define HWRM_TF_EM_DELETE                         UINT32_C(0x2eb)
+	/* Experimental */
+	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2f8)
+	/* Experimental */
+	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2f9)
+	/* Experimental */
+	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2fa)
+	/* Experimental */
+	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2fb)
 	/* Experimental */
 	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
@@ -715,6 +727,13 @@ struct cmd_nums {
 	#define HWRM_DBG_CRASHDUMP_ERASE                  UINT32_C(0xff1e)
 	/* Send driver debug information to firmware */
 	#define HWRM_DBG_DRV_TRACE                        UINT32_C(0xff1f)
+	/* Query debug capabilities of firmware */
+	#define HWRM_DBG_QCAPS                            UINT32_C(0xff20)
+	/* Retrieve debug settings of firmware */
+	#define HWRM_DBG_QCFG                             UINT32_C(0xff21)
+	/* Set destination parameters for crashdump medium */
+	#define HWRM_DBG_CRASHDUMP_MEDIUM_CFG             UINT32_C(0xff22)
+	#define HWRM_NVM_REQ_ARBITRATION                  UINT32_C(0xffed)
 	/* Experimental */
 	#define HWRM_NVM_FACTORY_DEFAULTS                 UINT32_C(0xffee)
 	#define HWRM_NVM_VALIDATE_OPTION                  UINT32_C(0xffef)
@@ -914,8 +933,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 30
-#define HWRM_VERSION_STR "1.10.1.30"
+#define HWRM_VERSION_RSVD 45
+#define HWRM_VERSION_STR "1.10.1.45"
 
 /****************
  * hwrm_ver_get *
@@ -2292,6 +2311,35 @@ struct cmpl_base {
 	 * Completion of TX packet. Length = 16B
 	 */
 	#define CMPL_BASE_TYPE_TX_L2             UINT32_C(0x0)
+	/*
+	 * NO-OP completion:
+	 * Completion of NO-OP. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_NO_OP             UINT32_C(0x1)
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of coalesced TX packet. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_COAL        UINT32_C(0x2)
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of PTP TX packet. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_PTP         UINT32_C(0x3)
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_TPA_START_V2   UINT32_C(0xd)
+	/*
+	 * RX L2 V2 completion:
+	 * Completion of and L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_L2_V2          UINT32_C(0xf)
 	/*
 	 * RX L2 completion:
 	 * Completion of and L2 RX packet. Length = 32B
@@ -2321,6 +2369,24 @@ struct cmpl_base {
 	 * Length = 16B
 	 */
 	#define CMPL_BASE_TYPE_STAT_EJECT        UINT32_C(0x1a)
+	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by
+	 * the Primate and processed by the VEE hardware to ensure that
+	 * all completions on a VEE function have been processed by the
+	 * VEE hardware before FLR process is completed.
+	 */
+	#define CMPL_BASE_TYPE_VEE_FLUSH         UINT32_C(0x1c)
+	/*
+	 * Mid Path Short Completion :
+	 * Completion of a Mid Path Command. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_SHORT    UINT32_C(0x1e)
+	/*
+	 * Mid Path Long Completion :
+	 * Completion of a Mid Path Command. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_LONG     UINT32_C(0x1f)
 	/*
 	 * HWRM Command Completion:
 	 * Completion of an HWRM command.
@@ -2398,7 +2464,9 @@ struct tx_cmpl {
 	uint16_t	unused_0;
 	/*
 	 * This is a copy of the opaque field from the first TX BD of this
-	 * transmitted packet.
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
 	 */
 	uint32_t	opaque;
 	uint16_t	errors_v;
@@ -2407,58 +2475,352 @@ struct tx_cmpl {
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	#define TX_CMPL_V                              UINT32_C(0x1)
-	#define TX_CMPL_ERRORS_MASK                    UINT32_C(0xfffe)
-	#define TX_CMPL_ERRORS_SFT                     1
+	#define TX_CMPL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_ERRORS_SFT                         1
 	/*
 	 * This error indicates that there was some sort of problem
 	 * with the BDs for the packet.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK        UINT32_C(0xe)
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT             1
 	/* No error */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR      (UINT32_C(0x0) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
 	/*
 	 * Bad Format:
 	 * BDs were not formatted correctly.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT       (UINT32_C(0x2) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
 	#define TX_CMPL_ERRORS_BUFFER_ERROR_LAST \
 		TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT
 	/*
 	 * When this bit is '1', it indicates that the length of
 	 * the packet was zero. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT          UINT32_C(0x10)
+	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
 	/*
 	 * When this bit is '1', it indicates that the packet
 	 * was longer than the programmed limit in TDI. No
 	 * packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH      UINT32_C(0x20)
+	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
 	/*
 	 * When this bit is '1', it indicates that one or more of the
 	 * BDs associated with this packet generated a PCI error.
 	 * This probably means the address was not valid.
 	 */
-	#define TX_CMPL_ERRORS_DMA_ERROR                UINT32_C(0x40)
+	#define TX_CMPL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
 	/*
 	 * When this bit is '1', it indicates that the packet was longer
 	 * than indicated by the hint. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_HINT_TOO_SHORT           UINT32_C(0x80)
+	#define TX_CMPL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
 	/*
 	 * When this bit is '1', it indicates that the packet was
 	 * dropped due to Poison TLP error on one or more of the
 	 * TLPs in the PXP completion.
 	 */
-	#define TX_CMPL_ERRORS_POISON_TLP_ERROR         UINT32_C(0x100)
+	#define TX_CMPL_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such completions
+	 * are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
 	/* unused2 is 16 b */
 	uint16_t	unused_1;
 	/* unused3 is 32 b */
 	uint32_t	unused_2;
 } __rte_packed;
 
+/* tx_cmpl_coal (size:128b/16B) */
+struct tx_cmpl_coal {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_COAL_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_COAL_TYPE_SFT        0
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of TX packet. Length = 16B
+	 */
+	#define TX_CMPL_COAL_TYPE_TX_L2_COAL   UINT32_C(0x2)
+	#define TX_CMPL_COAL_TYPE_LAST        TX_CMPL_COAL_TYPE_TX_L2_COAL
+	#define TX_CMPL_COAL_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_COAL_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_COAL_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had not push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_COAL_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of the packet
+	 * which corresponds with the reported sq_cons_idx. Note that, with
+	 * coalesced completions, completions are generated for only some of the
+	 * packets. The driver will see the opaque field for only those packets.
+	 * Note that, if the packet was described by a short CSO or short CSO
+	 * inline BD, then the 16-bit opaque field from the short CSO BD will
+	 * appear in the bottom 16 bits of this field. For TX rings with
+	 * completion coalescing enabled (which would use the coalesced
+	 * completion record), it is suggested that the driver populate the
+	 * opaque field to indicate the specific TX ring with which the
+	 * completion is associated, then utilize the opaque and sq_cons_idx
+	 * fields in the coalesced completion record to determine the specific
+	 * packets that are to be completed on that ring.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_COAL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_COAL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define TX_CMPL_COAL_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_COAL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_COAL_ERRORS_POISON_TLP_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_COAL_ERRORS_INTERNAL_ERROR \
+		UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_COAL_ERRORS_TIMESTAMP_INVALID_ERROR \
+		UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	uint32_t	sq_cons_idx;
+	/*
+	 * This value is SQ index for the start of the packet following the
+	 * last completed packet.
+	 */
+	#define TX_CMPL_COAL_SQ_CONS_IDX_MASK UINT32_C(0xffffff)
+	#define TX_CMPL_COAL_SQ_CONS_IDX_SFT 0
+} __rte_packed;
+
+/* tx_cmpl_ptp (size:128b/16B) */
+struct tx_cmpl_ptp {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_PTP_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_PTP_TYPE_SFT        0
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of TX packet. Length = 32B
+	 */
+	#define TX_CMPL_PTP_TYPE_TX_L2_PTP    UINT32_C(0x2)
+	#define TX_CMPL_PTP_TYPE_LAST        TX_CMPL_PTP_TYPE_TX_L2_PTP
+	#define TX_CMPL_PTP_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_PTP_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_PTP_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had not push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_PTP_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of this
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_PTP_V                                  UINT32_C(0x1)
+	#define TX_CMPL_PTP_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_PTP_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_PTP_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_PTP_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped due
+	 * to a transient internal error in TDC. The packet or LSO can be
+	 * retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_PTP_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_PTP_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	/*
+	 * This is timestamp value (lower 32bits) read from PM for the PTP
+	 * timestamp enabled packet.
+	 */
+	uint32_t	timestamp_lo;
+} __rte_packed;
+
+/* tx_cmpl_ptp_hi (size:128b/16B) */
+struct tx_cmpl_ptp_hi {
+	/*
+	 * This is timestamp value (lower 32bits) read from PM for the PTP
+	 * timestamp enabled packet.
+	 */
+	uint16_t	timestamp_hi[3];
+	uint16_t	reserved16;
+	uint64_t	v2;
+	/*
+	 * This value is written by the NIC such that it will be different for
+	 * each pass through the completion queue.The even passes will write 1.
+	 * The odd passes will write 0
+	 */
+	#define TX_CMPL_PTP_HI_V2     UINT32_C(0x1)
+} __rte_packed;
+
 /* rx_pkt_cmpl (size:128b/16B) */
 struct rx_pkt_cmpl {
 	uint16_t	flags_type;
@@ -3003,12 +3365,8 @@ struct rx_pkt_cmpl_hi {
 	#define RX_PKT_CMPL_REORDER_SFT 0
 } __rte_packed;
 
-/*
- * This TPA completion structure is used on devices where the
- * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
- */
-/* rx_tpa_start_cmpl (size:128b/16B) */
-struct rx_tpa_start_cmpl {
+/* rx_pkt_v2_cmpl (size:128b/16B) */
+struct rx_pkt_v2_cmpl {
 	uint16_t	flags_type;
 	/*
 	 * This field indicates the exact type of the completion.
@@ -3017,84 +3375,143 @@ struct rx_tpa_start_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	#define RX_PKT_V2_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_PKT_V2_CMPL_TYPE_SFT                       0
 	/*
-	 * RX L2 TPA Start Completion:
-	 * Completion at the beginning of a TPA operation.
-	 * Length = 32B
+	 * RX L2 V2 completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
-	#define RX_TPA_START_CMPL_TYPE_LAST \
-		RX_TPA_START_CMPL_TYPE_RX_TPA_START
-	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_START_CMPL_FLAGS_SFT                6
-	/* This bit will always be '0' for TPA start completions. */
-	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_PKT_V2_CMPL_TYPE_RX_L2_V2                    UINT32_C(0xf)
+	#define RX_PKT_V2_CMPL_TYPE_LAST \
+		RX_PKT_V2_CMPL_TYPE_RX_L2_V2
+	#define RX_PKT_V2_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_PKT_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Normal:
+	 * Packet was placed using normal algorithm.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_NORMAL \
+		(UINT32_C(0x0) << 7)
 	/*
 	 * Jumbo:
-	 * TPA Packet was placed using jumbo algorithm. This means
-	 * that the first buffer will be filled with data before
-	 * moving to aggregation buffers. Each aggregation buffer
-	 * will be filled before moving to the next aggregation
-	 * buffer.
+	 * Packet was placed using jumbo algorithm.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
 		(UINT32_C(0x1) << 7)
 	/*
 	 * Header/Data Separation:
 	 * Packet was placed using Header/Data separation algorithm.
 	 * The separation location is indicated by the itype field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
 	/*
-	 * GRO/Jumbo:
-	 * Packet will be placed using GRO/Jumbo where the first
-	 * packet is filled with data. Subsequent packets will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * Truncation:
+	 * Packet was placed using truncation algorithm. The
+	 * placed (truncated) length is indicated in the payload_offset
+	 * field. The original length is indicated in the len field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
-		(UINT32_C(0x5) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION \
+		(UINT32_C(0x3) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_PKT_V2_CMPL_FLAGS_RSS_VALID                 UINT32_C(0x400)
 	/*
-	 * GRO/Header-Data Separation:
-	 * Packet will be placed using GRO/HDS where the header
-	 * is in the first packet.
-	 * Payload of each packet will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
-		(UINT32_C(0x6) << 7)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* This bit is '1' if the RSS field in this completion is valid. */
-	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
-	/* unused is 1 b */
-	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	#define RX_PKT_V2_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_MASK                UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * Not Known:
+	 * Indicates that the packet type was not known.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_NOT_KNOWN \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * IP Packet:
+	 * Indicates that the packet was an IP packet, but further
+	 * classification was not possible.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_IP \
+		(UINT32_C(0x1) << 12)
 	/*
 	 * TCP Packet:
 	 * Indicates that the packet was IP and TCP.
+	 * This indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_TCP \
 		(UINT32_C(0x2) << 12)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
-		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
 	/*
-	 * This value indicates the amount of packet data written to the
-	 * buffer the opaque field in this completion corresponds to.
+	 * UDP Packet:
+	 * Indicates that the packet was IP and UDP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_UDP \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * FCoE Packet:
+	 * Indicates that the packet was recognized as a FCoE.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_FCOE \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * RoCE Packet:
+	 * Indicates that the packet was recognized as a RoCE.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ROCE \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * ICMP Packet:
+	 * Indicates that the packet was recognized as ICMP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ICMP \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * PtP packet wo/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_WO_TIMESTAMP \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * PtP packet w/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet and that a timestamp was taken for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP \
+		(UINT32_C(0x9) << 12)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP
+	/*
+	 * This is the length of the data for the packet stored in the
+	 * buffer(s) identified by the opaque value. This includes
+	 * the packet BD and any associated buffer BDs. This does not include
+	 * the length of any data places in aggregation BDs.
 	 */
 	uint16_t	len;
 	/*
@@ -3102,19 +3519,597 @@ struct rx_tpa_start_cmpl {
 	 * corresponds to.
 	 */
 	uint32_t	opaque;
+	uint8_t	agg_bufs_v1;
 	/*
 	 * This value is written by the NIC such that it will be different
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	uint8_t	v1;
+	#define RX_PKT_V2_CMPL_V1           UINT32_C(0x1)
 	/*
-	 * This value is written by the NIC such that it will be different
-	 * for each pass through the completion queue. The even passes
-	 * will write 1. The odd passes will write 0.
+	 * This value is the number of aggregation buffers that follow this
+	 * entry in the completion ring that are a part of this packet.
+	 * If the value is zero, then the packet is completely contained
+	 * in the buffer space provided for the packet in the RX ring.
 	 */
-	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
-	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
+	#define RX_PKT_V2_CMPL_AGG_BUFS_MASK UINT32_C(0x3e)
+	#define RX_PKT_V2_CMPL_AGG_BUFS_SFT 1
+	/* unused1 is 2 b */
+	#define RX_PKT_V2_CMPL_UNUSED1_MASK UINT32_C(0xc0)
+	#define RX_PKT_V2_CMPL_UNUSED1_SFT  6
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extrac_op[1:0],rss_profile_id[4:0],tuple_extrac_op[2]}.
+	 *
+	 * The value of tuple_extrac_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that 4-tuples values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	uint16_t	metadata1_payload_offset;
+	/*
+	 * This is data from the CFA as indicated by the meta_format field.
+	 * If truncation placement is not used, this value indicates the offset
+	 * in bytes from the beginning of the packet where the inner payload
+	 * starts. This value is valid for TCP, UDP, FCoE, and RoCE packets. If
+	 * truncation placement is used, this value represents the placed
+	 * (truncated) length of the packet.
+	 */
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_MASK    UINT32_C(0x1ff)
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_SFT     0
+	/* This is data from the CFA as indicated by the meta_format field. */
+	#define RX_PKT_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_PKT_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC. When vee_cmpl_mode
+	 * is set in VNIC context, this is the lower 32b of the host address
+	 * from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
+
+/* Last 16 bytes of RX Packet V2 Completion Record */
+/* rx_pkt_v2_cmpl_hi (size:128b/16B) */
+struct rx_pkt_v2_cmpl_hi {
+	uint32_t	flags2;
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:-
+	 * ip_cs_ok[2:0] = The number of header groups with a valid IP checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. -
+	 * l4_cs_ok[5:3] = The number of header groups with a valid L4 checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. When this
+	 * bit is '1', the cs_ok field has the following definition: -
+	 * hdr_cnt[2:0] = The number of header groups that were parsed by the
+	 * chip and passed in the delivered packet. - ip_cs_all_ok[3] =This bit
+	 * will be '1' if all the parsed header groups with an IP checksum are
+	 * valid. - l4_cs_all_ok[4] = This bit will be '1' if all the parsed
+	 * header groups with an L4 checksum are valid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information: - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],
+	 * de, vid[11:0]} The metadata2 field contains the table scope
+	 * and action record pointer. - metadata2[25:0] contains the
+	 * action record pointer. - metadata2[31:26] contains the table
+	 * scope.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_LAST \
+		RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_PKT_V2_CMPL_HI_V2 \
+		UINT32_C(0x1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_SFT                               1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet that was found after part of the
+	 * packet was already placed. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_SFT                   1
+	/* No buffer error */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit: Packet did not fit into packet buffer provided.
+	 * For regular placement, this means the packet did not fit in
+	 * the buffer provided. For HDS and jumbo placement, this means
+	 * that the packet could not be placed into 8 physical buffers
+	 * (if fixed-size buffers are used), or that the packet could
+	 * not be placed in the number of physical buffers configured
+	 * for the VNIC (if variable-size buffers are used)
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Not On Chip: All BDs needed for the packet were not on-chip
+	 * when the packet arrived. For regular placement, this error is
+	 * not valid. For HDS and jumbo placement, this means that not
+	 * enough agg BDs were posted to place the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NOT_ON_CHIP \
+		(UINT32_C(0x2) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This indicates that there was an error in the outer tunnel
+	 * portion of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_MASK \
+		UINT32_C(0x70)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_SFT                   4
+	/*
+	 * No additional error occurred on the outer tunnel portion
+	 * of the packet or the packet does not have an outer tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the outer tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * Indicates that header length is out of range in the outer
+	 * tunnel header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * Indicates that physical packet is shorter than that claimed
+	 * by the outer tunnel l3 header length. Valid for IPv4, or
+	 * IPv6 outer tunnel packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the outer tunnel UDP header length for a outer
+	 * tunnel UDP packet that is not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 4)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has
+	 * failed (e.g. TTL = 0) in the outer tunnel header. Valid for
+	 * IPv4, and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_TTL \
+		(UINT32_C(0x5) << 4)
+	/*
+	 * Indicates that the IP checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_CS_ERROR \
+		(UINT32_C(0x6) << 4)
+	/*
+	 * Indicates that the L4 checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR \
+		(UINT32_C(0x7) << 4)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR
+	/*
+	 * This indicates that there was a CRC error on either an FCoE
+	 * or RoCE packet. The itype indicates the packet type.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_CRC_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that there was an error in the tunnel portion
+	 * of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_MASK \
+		UINT32_C(0xe00)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_SFT                    9
+	/*
+	 * No additional error occurred on the tunnel portion
+	 * of the packet or the packet does not have a tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 9)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 9)
+	/*
+	 * Indicates that header length is out of range in the tunnel
+	 * header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 9)
+	/*
+	 * Indicates that physical packet is shorter than that claimed
+	 * by the tunnel l3 header length. Valid for IPv4 or IPv6 tunnel
+	 * packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 9)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel UDP header length for a tunnel UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 9)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has failed
+	 * (e.g. TTL = 0) in the tunnel header. Valid for IPv4, and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL \
+		(UINT32_C(0x5) << 9)
+	/*
+	 * Indicates that the IP checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_CS_ERROR \
+		(UINT32_C(0x6) << 9)
+	/*
+	 * Indicates that the L4 checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR \
+		(UINT32_C(0x7) << 9)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR
+	/*
+	 * This indicates that there was an error in the inner
+	 * portion of the packet when this
+	 * field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_MASK \
+		UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_SFT                      12
+	/*
+	 * No additional error occurred on the inner portion of
+	 * the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * Indicates that IP header version does not match
+	 * expectation from L2 Ethertype for IPv4 and IPv6 or that
+	 * option other than VFT was parsed on
+	 * FCoE packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 12)
+	/*
+	 * Indicates that header length is out of range. Valid for
+	 * IPv4 and RoCE
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 12)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check
+	 * has failed (e.g. TTL = 0). Valid for IPv4, and IPv6
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_TTL \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * Indicates that physical packet is shorter than that
+	 * claimed by the l3 header length. Valid for IPv4,
+	 * IPv6 packet or RoCE packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the UDP header length for a UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_UDP_TOTAL_ERROR \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * Indicates that TCP header length > IP payload. Valid for
+	 * TCP packets only.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN \
+		(UINT32_C(0x6) << 12)
+	/* Indicates that TCP header length < 5. Valid for TCP. */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN_TOO_SMALL \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * Indicates that TCP option headers result in a TCP header
+	 * size that does not match data offset in TCP header. Valid
+	 * for TCP.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * Indicates that the IP checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_CS_ERROR \
+		(UINT32_C(0x9) << 12)
+	/*
+	 * Indicates that the L4 checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR \
+		(UINT32_C(0xa) << 12)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format=1, this value is the VLAN VID. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_SFT 0
+	/* When meta_format=1, this value is the VLAN DE. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format=1, this value is the VLAN PRI. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_SFT 13
+	/*
+	 * The timestamp field contains the 32b timestamp for the packet from
+	 * the MAC.
+	 */
+	uint32_t	timestamp;
+} __rte_packed;
+
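Again as an illustration only (not part of the patch): a sketch of how a driver might screen a v2 completion for buffer errors and unpack metadata2 for the act_rec_ptr metadata format, following the field layout documented above. The function name is hypothetical; DPDK's rte_le_to_cpu_*() helpers are assumed.

#include <rte_byteorder.h>

/* Hypothetical helper: reject packets with BD (buffer) errors and, for the
 * act_rec_ptr metadata format, split metadata2 into its documented parts.
 */
static inline int
example_rx_v2_check_hi(const struct rx_pkt_v2_cmpl_hi *rxcmp1)
{
	uint16_t errors = rte_le_to_cpu_16(rxcmp1->errors_v2);
	uint32_t flags2 = rte_le_to_cpu_32(rxcmp1->flags2);
	uint32_t meta_fmt = flags2 & RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_MASK;

	if ((errors & RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_MASK) !=
	    RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NO_BUFFER)
		return -1; /* BD problem: treat the packet as invalid */

	if (meta_fmt == RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_ACT_REC_PTR) {
		uint32_t md2 = rte_le_to_cpu_32(rxcmp1->metadata2);
		uint32_t act_rec_ptr = md2 & 0x3ffffff; /* metadata2[25:0]  */
		uint32_t table_scope = md2 >> 26;       /* metadata2[31:26] */

		(void)act_rec_ptr;
		(void)table_scope;
	}
	return 0;
}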
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_cmpl (size:128b/16B) */
+struct rx_tpa_start_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
+	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	/*
+	 * RX L2 TPA Start Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 */
+	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
+	#define RX_TPA_START_CMPL_TYPE_LAST \
+		RX_TPA_START_CMPL_TYPE_RX_TPA_START
+	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
+	#define RX_TPA_START_CMPL_FLAGS_SFT                6
+	/* This bit will always be '0' for TPA start completions. */
+	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
+	/* unused is 1 b */
+	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
 	/*
 	 * This is the RSS hash type for the packet. The value is packed
 	 * {tuple_extrac_op[1:0],rss_profile_id[4:0],tuple_extrac_op[2]}.
@@ -3285,6 +4280,430 @@ struct rx_tpa_start_cmpl_hi {
 	#define RX_TPA_START_CMPL_INNER_L4_SIZE_SFT   27
 } __rte_packed;
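As a brief aside (not part of the patch): the v1 phase bit of rx_tpa_start_cmpl defined above is consumed by comparing it against the phase expected for the current pass through the completion ring, roughly as in the hypothetical sketch below.

/* Hypothetical helper: report whether this TPA start completion has been
 * written for the current pass through the completion ring.
 */
static inline int
example_tpa_start_cmpl_valid(const struct rx_tpa_start_cmpl *tpa_start,
			     uint8_t expected_phase)
{
	return (tpa_start->v1 & RX_TPA_START_CMPL_V1) == expected_phase;
}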
 
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ * RX L2 TPA Start V2 Completion Record (32 bytes split to 2 16-byte
+ * struct)
+ */
+/* rx_tpa_start_v2_cmpl (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define RX_TPA_START_V2_CMPL_TYPE_SFT                       0
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2 \
+		UINT32_C(0xd)
+	#define RX_TPA_START_V2_CMPL_TYPE_LAST \
+		RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2
+	#define RX_TPA_START_V2_CMPL_FLAGS_MASK \
+		UINT32_C(0xffc0)
+	#define RX_TPA_START_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an error
+	 * of some type. Type of error is indicated in error_flags.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ERROR \
+		UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_MASK \
+		UINT32_C(0x380)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_RSS_VALID \
+		UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement. It
+	 * starts at the first 32B boundary after the end of the header for
+	 * HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PKT_METADATA_PRESENT \
+		UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to. If the VNIC is configured to not use an Rx BD for
+	 * the TPA Start completion, then this is a copy of the opaque field
+	 * from the first BD used to place the TPA Start packet.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_LAST RX_TPA_START_V2_CMPL_V1
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extract_op[1:0],rss_profile_id[4:0],tuple_extract_op[2]}.
+	 *
+	 * The value of tuple_extract_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that the 4-tuple values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	uint16_t	agg_id;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	#define RX_TPA_START_V2_CMPL_AGG_ID_MASK            UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_AGG_ID_SFT             0
+	#define RX_TPA_START_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN valid. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC.
+	 * When vee_cmpl_mode is set in VNIC context, this is the lower
+	 * 32b of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
+
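Illustration only (not part of the patch): the agg_id field above is what pairs a TPA start completion with its matching TPA end completion; a hypothetical extraction helper, assuming rte_le_to_cpu_16(), could look like this.

#include <rte_byteorder.h>

/* Hypothetical helper: extract the aggregation ID used to correlate TPA
 * start and TPA end completions.
 */
static inline uint16_t
example_tpa_start_v2_agg_id(const struct rx_tpa_start_v2_cmpl *tpa_start)
{
	uint16_t agg = rte_le_to_cpu_16(tpa_start->agg_id);

	return (agg & RX_TPA_START_V2_CMPL_AGG_ID_MASK) >>
	       RX_TPA_START_V2_CMPL_AGG_ID_SFT;
}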
+/*
+ * Last 16 bytes of RX L2 TPA Start V2 Completion Record
+ *
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_v2_cmpl_hi (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl_hi {
+	uint32_t	flags2;
+	/* This indicates that the aggregation was done using GRO rules. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_AGG_GRO \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:-
+	 * ip_cs_ok[2:0] = The number of header groups with a valid IP checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. -
+	 * l4_cs_ok[5:3] = The number of header groups with a valid L4 checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. When this
+	 * bit is '1', the cs_ok field has the following definition: -
+	 * hdr_cnt[2:0] = The number of header groups that were parsed by the
+	 * chip and passed in the delivered packet. - ip_cs_all_ok[3] =This bit
+	 * will be '1' if all the parsed header groups with an IP checksum are
+	 * valid. - l4_cs_all_ok[4] = This bit will be '1' if all the parsed
+	 * header groups with an L4 checksum are valid.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information: - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],
+	 * de, vid[11:0]} The metadata2 field contains the table scope
+	 * and action record pointer. - metadata2[25:0] contains the
+	 * action record pointer. - metadata2[31:26] contains the table
+	 * scope.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet in the aggregation.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 * CS status for TPA packets is always valid. This means that "all_ok"
+	 * status will always be set. The ok count status will be set
+	 * appropriately for the packet header, such that all existing CS
+	 * values are ok.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set. For TPA Start completions,
+	 * the complete checksum is calculated for the first packet in the
+	 * aggregation only.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V2 \
+		UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_SFT                     1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	/* No buffer error */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit:
+	 * Packet did not fit into packet buffer provided. This means
+	 * that the TPA Start packet was too big to be placed into the
+	 * per-packet maximum number of physical buffers configured for
+	 * the VNIC, or that it was too big to be placed into the
+	 * per-aggregation maximum number of physical buffers configured
+	 * for the VNIC. This error only occurs when the VNIC is
+	 * configured for variable size receive buffers.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_LAST \
+		RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format != 0, this value is the VLAN VID. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_SFT 0
+	/* When meta_format != 0, this value is the VLAN DE. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format != 0, this value is the VLAN PRI. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_SFT 13
+	/*
+	 * This field contains the outer_l3_offset, inner_l2_offset,
+	 * inner_l3_offset, and inner_l4_size.
+	 *
+	 * hdr_offsets[8:0] contains the outer_l3_offset.
+	 * hdr_offsets[17:9] contains the inner_l2_offset.
+	 * hdr_offsets[26:18] contains the inner_l3_offset.
+	 * hdr_offsets[31:27] contains the inner_l4_size.
+	 */
+	uint32_t	hdr_offsets;
+} __rte_packed;
+
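Illustration only (not part of the patch): a sketch that unpacks hdr_offsets exactly as documented in the comment above; the helper name is hypothetical and rte_le_to_cpu_32() is assumed for byte order.

#include <rte_byteorder.h>

/* Hypothetical helper: unpack the four sub-fields of hdr_offsets. */
static inline void
example_tpa_start_v2_hdr_offsets(const struct rx_tpa_start_v2_cmpl_hi *tpa_hi,
				 uint16_t *outer_l3_off,
				 uint16_t *inner_l2_off,
				 uint16_t *inner_l3_off,
				 uint16_t *inner_l4_size)
{
	uint32_t off = rte_le_to_cpu_32(tpa_hi->hdr_offsets);

	*outer_l3_off  = off & 0x1ff;         /* hdr_offsets[8:0]   */
	*inner_l2_off  = (off >> 9) & 0x1ff;  /* hdr_offsets[17:9]  */
	*inner_l3_off  = (off >> 18) & 0x1ff; /* hdr_offsets[26:18] */
	*inner_l4_size = off >> 27;           /* hdr_offsets[31:27] */
}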
 /*
  * This TPA completion structure is used on devices where the
  * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
@@ -3299,27 +4718,27 @@ struct rx_tpa_end_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_END_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_END_CMPL_TYPE_SFT                 0
+	#define RX_TPA_END_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_TPA_END_CMPL_TYPE_SFT                       0
 	/*
 	 * RX L2 TPA End Completion:
 	 * Completion at the end of a TPA operation.
 	 * Length = 32B
 	 */
-	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END            UINT32_C(0x15)
+	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END                  UINT32_C(0x15)
 	#define RX_TPA_END_CMPL_TYPE_LAST \
 		RX_TPA_END_CMPL_TYPE_RX_TPA_END
-	#define RX_TPA_END_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_END_CMPL_FLAGS_SFT                6
+	#define RX_TPA_END_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_TPA_END_CMPL_FLAGS_SFT                      6
 	/*
 	 * When this bit is '1', it indicates a packet that has an
 	 * error of some type. Type of error is indicated in
 	 * error_flags.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_TPA_END_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT             7
 	/*
 	 * Jumbo:
 	 * TPA Packet was placed using jumbo algorithm. This means
@@ -3337,6 +4756,15 @@ struct rx_tpa_end_cmpl {
 	 */
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
+	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
 	/*
 	 * GRO/Jumbo:
 	 * Packet will be placed using GRO/Jumbo where the first
@@ -3358,11 +4786,28 @@ struct rx_tpa_end_cmpl {
 	 */
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS \
 		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* unused is 2 b */
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_MASK         UINT32_C(0xc00)
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_SFT          10
+		RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* unused is 1 b */
+	#define RX_TPA_END_CMPL_FLAGS_UNUSED                    UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
@@ -3372,8 +4817,9 @@ struct rx_tpa_end_cmpl {
 	 *     field is valid and contains the TCP checksum.
 	 *     This also indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT                 12
 	/*
 	 * This value is zero for TPA End completions.
 	 * There is no data in the buffer that corresponds to the opaque
@@ -4243,6 +5689,52 @@ struct rx_abuf_cmpl {
 	uint32_t	unused_2;
 } __rte_packed;
 
+/* VEE FLUSH Completion Record (16 bytes) */
+/* vee_flush (size:128b/16B) */
+struct vee_flush {
+	uint32_t	downstream_path_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define VEE_FLUSH_TYPE_MASK           UINT32_C(0x3f)
+	#define VEE_FLUSH_TYPE_SFT            0
+	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by the Primate and processed
+	 * by the VEE hardware to ensure that all completions on a VEE
+	 * function have been processed by the VEE hardware before FLR
+	 * process is completed.
+	 */
+	#define VEE_FLUSH_TYPE_VEE_FLUSH        UINT32_C(0x1c)
+	#define VEE_FLUSH_TYPE_LAST            VEE_FLUSH_TYPE_VEE_FLUSH
+	/* downstream_path is 1 b */
+	#define VEE_FLUSH_DOWNSTREAM_PATH     UINT32_C(0x40)
+	/* This completion is associated with VEE Transmit */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_TX    (UINT32_C(0x0) << 6)
+	/* This completion is associated with VEE Receive */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_RX    (UINT32_C(0x1) << 6)
+	#define VEE_FLUSH_DOWNSTREAM_PATH_LAST VEE_FLUSH_DOWNSTREAM_PATH_RX
+	/*
+	 * This is an opaque value that is passed through the completion
+	 * to the VEE handler SW and is used to indicate what VEE VQ or
+	 * function has completed FLR processing.
+	 */
+	uint32_t	opaque;
+	uint32_t	v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes will
+	 * write 1. The odd passes will write 0.
+	 */
+	#define VEE_FLUSH_V     UINT32_C(0x1)
+	/* unused3 is 32 b */
+	uint32_t	unused_3;
+} __rte_packed;
+
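Illustration only (not part of the patch): a hypothetical check of a VEE flush completion, combining the phase ('v') bit with the downstream path selector defined above; rte_le_to_cpu_32() is assumed.

#include <stdbool.h>
#include <rte_byteorder.h>

/* Hypothetical helper: return true when the completion is valid for the
 * current ring pass and is associated with the VEE receive path.
 */
static inline bool
example_vee_flush_is_rx(const struct vee_flush *cmpl, uint32_t expected_phase)
{
	uint32_t word0 = rte_le_to_cpu_32(cmpl->downstream_path_type);
	uint32_t phase = rte_le_to_cpu_32(cmpl->v) & VEE_FLUSH_V;

	if (phase != expected_phase)
		return false; /* not yet written for this ring pass */

	return (word0 & VEE_FLUSH_DOWNSTREAM_PATH) ==
	       VEE_FLUSH_DOWNSTREAM_PATH_RX;
}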
 /* eject_cmpl (size:128b/16B) */
 struct eject_cmpl {
 	uint16_t	type;
@@ -6562,7 +8054,7 @@ struct hwrm_async_event_cmpl_deferred_response {
 	/*
 	 * The PF's mailbox is clear to issue another command.
 	 * A command with this seq_id is still in progress
-	 * and will return a regular HWRM completion when done.
+	 * and will return a regular HWRM completion when done.
 	 * 'event_data1' field, if non-zero, contains the estimated
 	 * execution time for the command.
 	 */
@@ -7476,6 +8968,8 @@ struct hwrm_func_qcaps_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID): a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -7729,6 +9223,12 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PFC_WD_STATS_SUPPORTED \
 		UINT32_C(0x40000000)
+	/*
+	 * When this bit is '1', it indicates that the core firmware
+	 * supports the DBG_QCAPS command.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_DBG_QCAPS_CMD_SUPPORTED \
+		UINT32_C(0x80000000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -7854,6 +9354,19 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_STATS_SUPPORTED \
 		UINT32_C(0x2)
+	/*
+	 * If 1, the device can report extended hw statistics (including
+	 * additional tpa statistics).
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_EXT_HW_STATS_SUPPORTED \
+		UINT32_C(0x4)
+	/*
+	 * If set to 1, then the core firmware has support to enable/
+	 * disable hot reset support for interface dynamically through
+	 * HWRM_FUNC_CFG.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x8)
 	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -7904,6 +9417,8 @@ struct hwrm_func_qcfg_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID): a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -8013,6 +9528,15 @@ struct hwrm_func_qcfg_output {
 	 */
 	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x100)
+	/*
+	 * If set to 1, then the firmware and all currently registered driver
+	 * instances support hot reset. The hot reset support will be updated
+	 * dynamically based on the driver interface advertisement.
+	 * If set to 0, then the adapter is not currently able to initiate
+	 * hot reset.
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_HOT_RESET_ALLOWED \
+		UINT32_C(0x200)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -8565,6 +10089,17 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x2000000)
+	/*
+	 * If this bit is set to 0, then the interface does not support the
+	 * hot reset capability which it advertised with the hot_reset_support
+	 * flag in HWRM_FUNC_DRV_RGTR. If any function has set this flag to 0,
+	 * the adapter cannot perform a hot reset. In this state, if the
+	 * firmware receives a hot reset request, the firmware must fail the
+	 * request. If this bit is set to 1, then the interface is re-enabling
+	 * the hot reset capability.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_FLAGS_HOT_RESET_IF_EN_DIS \
+		UINT32_C(0x4000000)
 	uint32_t	enables;
 	/*
 	 * This bit must be '1' for the mtu field to be
@@ -8704,6 +10239,12 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_ENABLES_ADMIN_LINK_STATE \
 		UINT32_C(0x400000)
+	/*
+	 * This bit must be '1' for the hot_reset_if_en_dis field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x800000)
 	/*
 	 * The maximum transmission unit of the function.
 	 * The HWRM should make sure that the mtu of
@@ -9036,15 +10577,21 @@ struct hwrm_func_qstats_input {
 	/* This flags indicates the type of statistics request. */
 	uint8_t	flags;
 	/* This value is not used to avoid backward compatibility issues. */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED    UINT32_C(0x0)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
 	/*
 	 * flags should be set to 1 when request is for only RoCE statistics.
 	 * This will be honored only if the caller_fid is a privileged PF.
 	 * In all other cases FID and caller_fid should be the same.
 	 */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY UINT32_C(0x1)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
 	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_LAST \
-		HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY
+		HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK
 	uint8_t	unused_0[5];
 } __rte_packed;
 
@@ -9130,6 +10677,132 @@ struct hwrm_func_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/************************
+ * hwrm_func_qstats_ext *
+ ************************/
+
+
+/* hwrm_func_qstats_ext_input (size:192b/24B) */
+struct hwrm_func_qstats_ext_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Function ID of the function that is being queried.
+	 * 0xFF... (All Fs) if the query is for the requesting
+	 * function.
+	 * A privileged PF can query another function's statistics.
+	 */
+	uint16_t	fid;
+	/* This flag indicates the type of statistics request. */
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * flags should be set to 1 when request is for only RoCE statistics.
+	 * This will be honored only if the caller_fid is a privileged PF.
+	 * In all other cases FID and caller_fid should be the same.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
+} __rte_packed;
+
+/* hwrm_func_qstats_ext_output (size:1472b/184B) */
+struct hwrm_func_qstats_ext_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on received path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
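Illustration only (not part of the patch): a minimal sketch of filling the new request to ask for counter masks (widths) instead of the counters themselves. The helper name is hypothetical, and the common HWRM header fields (req_type, cmpl_ring, seq_id, resp_addr) are assumed to be populated by the driver's usual HWRM request preparation path before the message is sent.

#include <rte_byteorder.h>

/* Hypothetical helper: request counter widths rather than counter values. */
static inline void
example_fill_qstats_ext_mask(struct hwrm_func_qstats_ext_input *req,
			     uint16_t fid)
{
	req->fid = rte_cpu_to_le_16(fid);
	req->flags = HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK;
}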
 /***********************
  * hwrm_func_clr_stats *
  ***********************/
@@ -10116,7 +11789,7 @@ struct hwrm_func_backing_store_qcaps_output {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
@@ -11039,7 +12712,7 @@ struct hwrm_func_backing_store_cfg_input {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
@@ -16149,7 +17822,18 @@ struct hwrm_port_qstats_input {
 	uint64_t	resp_addr;
 	/* Port ID of port that is being queried. */
 	uint16_t	port_id;
-	uint8_t	unused_0[6];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -16382,7 +18066,7 @@ struct rx_port_stats_ext {
  * Port Rx Statistics extended PFC WatchDog Format.
  * StormDetect and StormRevert event determination is based
  * on an integration period and a percentage threshold.
- * StormDetect event - when percentage of XOFF frames received
+ * StormDetect event - when percentage of XOFF frames received
  * within an integration period exceeds the configured threshold.
  * StormRevert event - when percentage of XON frames received
  * within an integration period exceeds the configured threshold.
@@ -16843,7 +18527,18 @@ struct hwrm_port_qstats_ext_input {
 	 * statistics block in bytes
 	 */
 	uint16_t	rx_stat_size;
-	uint8_t	unused_0[2];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for the counter mask,
+	 * representing width of each of the stats counters, rather than
+	 * counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0;
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -25312,95 +27007,104 @@ struct hwrm_ring_free_input {
 	/* Ring Type. */
 	uint8_t	ring_type;
 	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	/* TX Ring (TR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	/* RX Ring (RR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	/* RoCE Notification Completion Ring (ROCE_CR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	/* RX Aggregation Ring */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
+	/* Notification Queue */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
+		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
+	uint8_t	unused_0;
+	/* Physical number of ring allocated. */
+	uint16_t	ring_id;
+	uint8_t	unused_1[4];
+} __rte_packed;
+
+/* hwrm_ring_free_output (size:128b/16B) */
+struct hwrm_ring_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/*******************
+ * hwrm_ring_reset *
+ *******************/
+
+
+/* hwrm_ring_reset_input (size:192b/24B) */
+struct hwrm_ring_reset_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Ring Type. */
+	uint8_t	ring_type;
+	/* L2 Completion Ring (CR) */
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL     UINT32_C(0x0)
 	/* TX Ring (TR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX          UINT32_C(0x1)
 	/* RX Ring (RR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX          UINT32_C(0x2)
 	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
-	/* RX Aggregation Ring */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
-	/* Notification Queue */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
-		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
-	uint8_t	unused_0;
-	/* Physical number of ring allocated. */
-	uint16_t	ring_id;
-	uint8_t	unused_1[4];
-} __rte_packed;
-
-/* hwrm_ring_free_output (size:128b/16B) */
-struct hwrm_ring_free_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	uint8_t	unused_0[7];
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL   UINT32_C(0x3)
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM.  This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * Rx Ring Group. This resets the rx and aggregation rings in an
+	 * atomic operation. The completion ring associated with this ring
+	 * group is not reset.
 	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/*******************
- * hwrm_ring_reset *
- *******************/
-
-
-/* hwrm_ring_reset_input (size:192b/24B) */
-struct hwrm_ring_reset_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
-	 */
-	uint64_t	resp_addr;
-	/* Ring Type. */
-	uint8_t	ring_type;
-	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
-	/* TX Ring (TR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX        UINT32_C(0x1)
-	/* RX Ring (RR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX        UINT32_C(0x2)
-	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP UINT32_C(0x6)
 	#define HWRM_RING_RESET_INPUT_RING_TYPE_LAST \
-		HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL
+		HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP
 	uint8_t	unused_0;
-	/* Physical number of the ring. */
+	/*
+	 * Physical number of the ring. When the ring type is rx_ring_grp,
+	 * the ring id actually refers to the ring group id.
+	 */
 	uint16_t	ring_id;
 	uint8_t	unused_1[4];
 } __rte_packed;
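
For reference (not part of the diff), a minimal sketch of how a caller could use the
new rx_ring_grp ring type. hwrm_send_message() is a hypothetical stand-in for the
driver's real HWRM transport, which is also assumed to fill the common request
header fields (req_type, cmpl_ring, seq_id, resp_addr):

#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>

/* Hypothetical transport helper; the real HWRM plumbing lives in the driver. */
extern int hwrm_send_message(void *req, size_t req_len, void *resp, size_t resp_len);

static int reset_rx_ring_group(uint16_t ring_grp_id)
{
	struct hwrm_ring_reset_input req;

	memset(&req, 0, sizeof(req));
	req.ring_type = HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP;
	/* For rx_ring_grp, ring_id carries the ring group id, not a ring id. */
	req.ring_id = rte_cpu_to_le_16(ring_grp_id);

	/* Response handling omitted in this sketch. */
	return hwrm_send_message(&req, sizeof(req), NULL, 0);
}
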
@@ -25615,7 +27319,18 @@ struct hwrm_ring_cmpl_ring_qaggint_params_input {
 	uint64_t	resp_addr;
 	/* Physical number of completion ring. */
 	uint16_t	ring_id;
-	uint8_t	unused_0[6];
+	uint16_t	flags;
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_MASK \
+		UINT32_C(0x3)
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_SFT 0
+	/*
+	 * Set this flag to 1 when querying parameters on a notification
+	 * queue. Set this flag to 0 when querying parameters on a
+	 * completion queue or completion ring.
+	 */
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
+		UINT32_C(0x4)
+	uint8_t	unused_0[4];
 } __rte_packed;
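
As a rough sketch of the new IS_NQ flag in use, here is a query of coalescing
parameters on a notification queue, with the same hypothetical hwrm_send_message()
stand-in (this time with a response buffer) doing the transport work:

#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>

extern int hwrm_send_message(void *req, size_t req_len, void *resp, size_t resp_len);

static int query_nq_coalescing(uint16_t nq_ring_id,
			       uint16_t *tmr_min, uint16_t *tmr_max)
{
	struct hwrm_ring_cmpl_ring_qaggint_params_input req;
	struct hwrm_ring_cmpl_ring_qaggint_params_output resp;
	int rc;

	memset(&req, 0, sizeof(req));
	memset(&resp, 0, sizeof(resp));
	req.ring_id = rte_cpu_to_le_16(nq_ring_id);
	/* New in this revision: tell the firmware the target is an NQ. */
	req.flags = rte_cpu_to_le_16(
		HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_IS_NQ);

	rc = hwrm_send_message(&req, sizeof(req), &resp, sizeof(resp));
	if (rc)
		return rc;

	*tmr_min = rte_le_to_cpu_16(resp.int_lat_tmr_min);
	*tmr_max = rte_le_to_cpu_16(resp.int_lat_tmr_max);
	return 0;
}
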
 
 /* hwrm_ring_cmpl_ring_qaggint_params_output (size:256b/32B) */
@@ -25652,19 +27367,19 @@ struct hwrm_ring_cmpl_ring_qaggint_params_output {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA when in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
+	 * Maximum wait time spent aggregating
 	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
@@ -25738,7 +27453,7 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	/*
 	 * Set this flag to 1 when configuring parameters on a
 	 * notification queue. Set this flag to 0 when configuring
-	 * parameters on a completion queue.
+	 * parameters on a completion queue or completion ring.
 	 */
 	#define HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
 		UINT32_C(0x4)
@@ -25753,20 +27468,20 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA while in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
-	 * cmpls before signaling the interrupt after the
+	 * Maximum wait time spent aggregating
+	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
 	uint16_t	int_lat_tmr_max;
@@ -33339,78 +35054,246 @@ struct hwrm_tf_version_get_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-} __rte_packed;
-
-/* hwrm_tf_version_get_output (size:128b/16B) */
-struct hwrm_tf_version_get_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	/* Version Major number. */
-	uint8_t	major;
-	/* Version Minor number. */
-	uint8_t	minor;
-	/* Version Update number. */
-	uint8_t	update;
-	/* unused. */
-	uint8_t	unused0[4];
+} __rte_packed;
+
+/* hwrm_tf_version_get_output (size:128b/16B) */
+struct hwrm_tf_version_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Version Major number. */
+	uint8_t	major;
+	/* Version Minor number. */
+	uint8_t	minor;
+	/* Version Update number. */
+	uint8_t	update;
+	/* unused. */
+	uint8_t	unused0[4];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/************************
+ * hwrm_tf_session_open *
+ ************************/
+
+
+/* hwrm_tf_session_open_input (size:640b/80B) */
+struct hwrm_tf_session_open_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Name of the session. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_open_output (size:192b/24B) */
+struct hwrm_tf_session_open_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware.
+	 */
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the first client on
+	 * the newly created session.
+	 */
+	uint32_t	fw_session_client_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* unused. */
+	uint8_t	unused1[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
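
A sketch of the open exchange from the driver side, mainly to show that the first
session client id now comes back alongside the session id (hwrm_send_message()
remains a hypothetical stand-in for the real HWRM transport):

#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>

extern int hwrm_send_message(void *req, size_t req_len, void *resp, size_t resp_len);

static int session_open_sketch(const char *name,
			       uint32_t *fw_session_id,
			       uint32_t *fw_session_client_id)
{
	struct hwrm_tf_session_open_input req;
	struct hwrm_tf_session_open_output resp;
	int rc;

	memset(&req, 0, sizeof(req));
	memset(&resp, 0, sizeof(resp));
	strncpy((char *)req.session_name, name, sizeof(req.session_name) - 1);

	rc = hwrm_send_message(&req, sizeof(req), &resp, sizeof(resp));
	if (rc)
		return rc;

	*fw_session_id = rte_le_to_cpu_32(resp.fw_session_id);
	/* The first client of the new session is created implicitly. */
	*fw_session_client_id = rte_le_to_cpu_32(resp.fw_session_client_id);
	return 0;
}
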
+
+/**************************
+ * hwrm_tf_session_attach *
+ **************************/
+
+
+/* hwrm_tf_session_attach_input (size:704b/88B) */
+struct hwrm_tf_session_attach_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Unique session identifier for the session that the attach
+	 * request wants to attach to. This value originates from the
+	 * shared session memory that the attach request opened by
+	 * way of the 'attach name' that was passed in to the core
+	 * attach API.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
+	 */
+	uint32_t	attach_fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session itself. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_attach_output (size:128b/16B) */
+struct hwrm_tf_session_attach_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session. This fw_session_id is unique to the attach
+	 * request.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/****************************
+ * hwrm_tf_session_register *
+ ****************************/
+
+
+/* hwrm_tf_session_register_input (size:704b/88B) */
+struct hwrm_tf_session_register_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal
-	 * processor, the order of writes has to be such that this field is
-	 * written last.
-	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/************************
- * hwrm_tf_session_open *
- ************************/
-
-
-/* hwrm_tf_session_open_input (size:640b/80B) */
-struct hwrm_tf_session_open_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
+	 * Unique session identifier for the session that the
+	 * register request wants to create a new client on. This
+	 * value originates from the first open request.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
 	 */
-	uint64_t	resp_addr;
-	/* Name of the session. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session client. */
+	uint8_t	session_client_name[64];
 } __rte_packed;
 
-/* hwrm_tf_session_open_output (size:128b/16B) */
-struct hwrm_tf_session_open_output {
+/* hwrm_tf_session_register_output (size:128b/16B) */
+struct hwrm_tf_session_register_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33420,12 +35303,11 @@ struct hwrm_tf_session_open_output {
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
 	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session.
+	 * Unique session client identifier for the session created
+	 * by the firmware. It includes the session the client is
+	 * attached to and the session client info.
 	 */
-	uint32_t	fw_session_id;
+	uint32_t	fw_session_client_id;
 	/* unused. */
 	uint8_t	unused0[3];
 	/*
@@ -33439,13 +35321,13 @@ struct hwrm_tf_session_open_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/**************************
- * hwrm_tf_session_attach *
- **************************/
+/******************************
+ * hwrm_tf_session_unregister *
+ ******************************/
 
 
-/* hwrm_tf_session_attach_input (size:704b/88B) */
-struct hwrm_tf_session_attach_input {
+/* hwrm_tf_session_unregister_input (size:192b/24B) */
+struct hwrm_tf_session_unregister_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -33475,24 +35357,19 @@ struct hwrm_tf_session_attach_input {
 	 */
 	uint64_t	resp_addr;
 	/*
-	 * Unique session identifier for the session that the attach
-	 * request want to attach to. This value originates from the
-	 * shared session memory that the attach request opened by
-	 * way of the 'attach name' that was passed in to the core
-	 * attach API.
-	 * The fw_session_id of the attach session includes PCIe bus
-	 * info to distinguish the PF and session info to identify
-	 * the associated TruFlow session.
+	 * Unique session identifier for the session that the
+	 * unregister request wants to close a session client on.
 	 */
-	uint32_t	attach_fw_session_id;
-	/* unused. */
-	uint32_t	unused0;
-	/* Name of the session it self. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the session that the
+	 * unregister request wants to close.
+	 */
+	uint32_t	fw_session_client_id;
 } __rte_packed;
 
-/* hwrm_tf_session_attach_output (size:128b/16B) */
-struct hwrm_tf_session_attach_output {
+/* hwrm_tf_session_unregister_output (size:128b/16B) */
+struct hwrm_tf_session_unregister_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33501,16 +35378,8 @@ struct hwrm_tf_session_attach_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session. This fw_session_id is unique to the attach
-	 * request.
-	 */
-	uint32_t	fw_session_id;
 	/* unused. */
-	uint8_t	unused0[3];
+	uint8_t	unused0[7];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33746,15 +35615,17 @@ struct hwrm_tf_session_resc_qcaps_input {
 	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided qcaps_addr
+	 * Defines the size of the provided qcaps_addr array
 	 * buffer. The size should be set to the Resource Manager
-	 * provided max qcaps value that is device specific. This is
-	 * the max size possible.
+	 * provided max number of qcaps entries which is device
+	 * specific. Resource Manager gets the max size from HCAPI
+	 * RM.
 	 */
-	uint16_t	size;
+	uint16_t	qcaps_size;
 	/*
-	 * This is the DMA address for the qcaps output data
-	 * array. Array is of tf_rm_cap type and is device specific.
+	 * This is the DMA address for the qcaps output data array
+	 * buffer. Array is of tf_rm_resc_req_entry type and is
+	 * device specific.
 	 */
 	uint64_t	qcaps_addr;
 } __rte_packed;
@@ -33772,29 +35643,28 @@ struct hwrm_tf_session_resc_qcaps_output {
 	/* Control flags. */
 	uint32_t	flags;
 	/* Session reservation strategy. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_SFT \
 		0
 	/* Static partitioning. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_STATIC \
 		UINT32_C(0x0)
 	/* Strategy 1. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_1 \
 		UINT32_C(0x1)
 	/* Strategy 2. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_2 \
 		UINT32_C(0x2)
 	/* Strategy 3. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3 \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_LAST \
-		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3
 	/*
-	 * Size of the returned tf_rm_cap data array. The value
-	 * cannot exceed the size defined by the input msg. The data
-	 * array is returned using the qcaps_addr specified DMA
-	 * address also provided by the input msg.
+	 * Size of the returned qcaps_addr data array buffer. The
+	 * value cannot exceed the size defined by the input msg,
+	 * qcaps_size.
 	 */
 	uint16_t	size;
 	/* unused. */
@@ -33817,7 +35687,7 @@ struct hwrm_tf_session_resc_qcaps_output {
  ******************************/
 
 
-/* hwrm_tf_session_resc_alloc_input (size:256b/32B) */
+/* hwrm_tf_session_resc_alloc_input (size:320b/40B) */
 struct hwrm_tf_session_resc_alloc_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -33860,16 +35730,25 @@ struct hwrm_tf_session_resc_alloc_input {
 	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided num_addr
-	 * buffer.
+	 * Defines the array size of the provided req_addr and
+	 * resc_addr array buffers. Should be set to the number of
+	 * request entries.
 	 */
-	uint16_t	size;
+	uint16_t	req_size;
+	/*
+	 * This is the DMA address for the request input data array
+	 * buffer. Array is of tf_rm_resc_req_entry type. Size of the
+	 * array buffer is provided by the 'req_size' field in this
+	 * message.
+	 */
+	uint64_t	req_addr;
 	/*
-	 * This is the DMA address for the num input data array
-	 * buffer. Array is of tf_rm_num type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * This is the DMA address for the resc output data array
+	 * buffer. Array is of tf_rm_resc_entry type. Size of the array
+	 * buffer is provided by the 'req_size' field in this
+	 * message.
 	 */
-	uint64_t	num_addr;
+	uint64_t	resc_addr;
 } __rte_packed;
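
To make the renamed fields concrete, a sketch of the reservation request. The
assumptions: a hypothetical dma_alloc() that returns a host pointer plus IOVA, the
same hwrm_send_message() stand-in as above, and fw_session_id plus the direction
flag left out for brevity:

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>

/* Hypothetical helpers; the driver's tf_msg layer owns the real equivalents. */
extern void *dma_alloc(size_t len, uint64_t *iova);
extern int hwrm_send_message(void *req, size_t req_len, void *resp, size_t resp_len);

static int resc_alloc_sketch(const struct tf_rm_resc_req_entry *wants, uint16_t n)
{
	struct hwrm_tf_session_resc_alloc_input req;
	struct hwrm_tf_session_resc_alloc_output resp;
	struct tf_rm_resc_req_entry *req_arr;
	struct tf_rm_resc_entry *resc_arr;
	uint64_t req_iova, resc_iova;
	int rc;

	req_arr = dma_alloc(n * sizeof(*req_arr), &req_iova);
	resc_arr = dma_alloc(n * sizeof(*resc_arr), &resc_iova);
	if (req_arr == NULL || resc_arr == NULL)
		return -ENOMEM;
	memcpy(req_arr, wants, n * sizeof(*req_arr));

	memset(&req, 0, sizeof(req));
	memset(&resp, 0, sizeof(resp));
	/* req_size is an entry count covering both arrays, not a byte count. */
	req.req_size = rte_cpu_to_le_16(n);
	req.req_addr = rte_cpu_to_le_64(req_iova);
	req.resc_addr = rte_cpu_to_le_64(resc_iova);

	rc = hwrm_send_message(&req, sizeof(req), &resp, sizeof(resp));
	if (rc)
		return rc;

	/* resp.size reports how many tf_rm_resc_entry records were written back. */
	return rte_le_to_cpu_16(resp.size);
}
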
 
 /* hwrm_tf_session_resc_alloc_output (size:128b/16B) */
@@ -33882,8 +35761,15 @@ struct hwrm_tf_session_resc_alloc_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
+	/*
+	 * Size of the returned tf_rm_resc_entry data array. The value
+	 * cannot exceed the req_size defined by the input msg. The data
+	 * array is returned using the resc_addr specified DMA
+	 * address also provided by the input msg.
+	 */
+	uint16_t	size;
 	/* unused. */
-	uint8_t	unused0[7];
+	uint8_t	unused0[5];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33946,11 +35832,12 @@ struct hwrm_tf_session_resc_free_input {
 	 * Defines the size, in bytes, of the provided free_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	free_size;
 	/*
 	 * This is the DMA address for the free input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size field of this message.
+	 * buffer.  Array is of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'free_size' field of this
+	 * message.
 	 */
 	uint64_t	free_addr;
 } __rte_packed;
@@ -34029,11 +35916,12 @@ struct hwrm_tf_session_resc_flush_input {
 	 * Defines the size, in bytes, of the provided flush_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	flush_size;
 	/*
 	 * This is the DMA address for the flush input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * buffer.  Array of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'flush_size' field in this
+	 * message.
 	 */
 	uint64_t	flush_addr;
 } __rte_packed;
@@ -34062,12 +35950,9 @@ struct hwrm_tf_session_resc_flush_output {
 } __rte_packed;
 
 /* TruFlow RM capability of a resource. */
-/* tf_rm_cap (size:64b/8B) */
-struct tf_rm_cap {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_req_entry (size:64b/8B) */
+struct tf_rm_resc_req_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Minimum value. */
 	uint16_t	min;
@@ -34075,25 +35960,10 @@ struct tf_rm_cap {
 	uint16_t	max;
 } __rte_packed;
 
-/* TruFlow RM number of a resource. */
-/* tf_rm_num (size:64b/8B) */
-struct tf_rm_num {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
-	uint32_t	type;
-	/* Number of resources. */
-	uint32_t	num;
-} __rte_packed;
-
 /* TruFlow RM reservation information. */
-/* tf_rm_res (size:64b/8B) */
-struct tf_rm_res {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_entry (size:64b/8B) */
+struct tf_rm_resc_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Start offset. */
 	uint16_t	start;
@@ -34925,6 +36795,162 @@ struct hwrm_tf_ext_em_qcfg_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/*********************
+ * hwrm_tf_em_insert *
+ *********************/
+
+
+/* hwrm_tf_em_insert_input (size:832b/104B) */
+struct hwrm_tf_em_insert_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware Session Id. */
+	uint32_t	fw_session_id;
+	/* Control Flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit set to 0, then it indicates rx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates that tx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX
+	/* Reported match strength. */
+	uint16_t	strength;
+	/* Index to action. */
+	uint32_t	action_ptr;
+	/* Index of EM record. */
+	uint32_t	em_record_idx;
+	/* EM Key value. */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
+/* hwrm_tf_em_insert_output (size:128b/16B) */
+struct hwrm_tf_em_insert_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* EM record pointer index. */
+	uint16_t	rptr_index;
+	/* EM record offset 0~3. */
+	uint8_t	rptr_entry;
+	/* Number of word entries consumed by the key. */
+	uint8_t	num_of_entries;
+	/* unused. */
+	uint32_t	unused0;
+} __rte_packed;
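
A sketch of building the insert request, using the same hypothetical transport
helper: the key is copied into the fixed 64-byte em_key container and qualified by
em_key_bitlen, and the response reports where the record landed plus, new here, how
many word entries the key consumed:

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>

extern int hwrm_send_message(void *req, size_t req_len, void *resp, size_t resp_len);

static int em_insert_sketch(uint32_t fw_session_id,
			    const uint8_t *key, uint16_t key_bits,
			    uint32_t action_ptr, uint32_t record_idx)
{
	struct hwrm_tf_em_insert_input req;
	struct hwrm_tf_em_insert_output resp;
	int rc;

	if (key_bits > 8 * sizeof(req.em_key))
		return -EINVAL;

	memset(&req, 0, sizeof(req));
	memset(&resp, 0, sizeof(resp));
	req.fw_session_id = rte_cpu_to_le_32(fw_session_id);
	req.flags = rte_cpu_to_le_16(HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
	req.action_ptr = rte_cpu_to_le_32(action_ptr);
	req.em_record_idx = rte_cpu_to_le_32(record_idx);
	/* em_key is a fixed 64-byte container; only em_key_bitlen bits matter. */
	memcpy(req.em_key, key, (key_bits + 7) / 8);
	req.em_key_bitlen = rte_cpu_to_le_16(key_bits);

	rc = hwrm_send_message(&req, sizeof(req), &resp, sizeof(resp));
	if (rc)
		return rc;

	/* rptr_index/rptr_entry locate the record; num_of_entries is new here. */
	return resp.num_of_entries;
}
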
+
+/*********************
+ * hwrm_tf_em_delete *
+ *********************/
+
+
+/* hwrm_tf_em_delete_input (size:832b/104B) */
+struct hwrm_tf_em_delete_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Session Id. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX
+	/* Unused0 */
+	uint16_t	unused0;
+	/* EM internal flow handle. */
+	uint64_t	flow_handle;
+	/* EM Key value */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused1[3];
+} __rte_packed;
+
+/* hwrm_tf_em_delete_output (size:128b/16B) */
+struct hwrm_tf_em_delete_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Original stack allocation index. */
+	uint16_t	em_index;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
 /********************
  * hwrm_tf_tcam_set *
  ********************/
@@ -35582,10 +37608,10 @@ struct ctx_hw_stats {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35598,10 +37624,10 @@ struct ctx_hw_stats {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35618,7 +37644,11 @@ struct ctx_hw_stats {
 	uint64_t	tpa_aborts;
 } __rte_packed;
 
-/* Periodic statistics context DMA to host. */
+/*
+ * Extended periodic statistics context DMA to host. On cards that
+ * support TPA v2, additional TPA related stats exist and can be retrieved
+ * by DMA of ctx_hw_stats_ext, rather than legacy ctx_hw_stats structure.
+ */
 /* ctx_hw_stats_ext (size:1344b/168B) */
 struct ctx_hw_stats_ext {
 	/* Number of received unicast packets */
@@ -35627,10 +37657,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35643,10 +37673,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35912,7 +37942,14 @@ struct hwrm_stat_ctx_query_input {
 	uint64_t	resp_addr;
 	/* ID of the statistics context that is being queried. */
 	uint32_t	stat_ctx_id;
-	uint8_t	unused_0[4];
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when the request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_STAT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK     UINT32_C(0x1)
+	uint8_t	unused_0[3];
 } __rte_packed;
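
As a sketch of how the new flag is intended to be used: query once with
COUNTER_MASK set to learn each counter's width, then again with it clear for the
live values, so rollover can be handled correctly (hwrm_send_message() is the same
hypothetical stand-in as in the earlier sketches):

#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>

extern int hwrm_send_message(void *req, size_t req_len, void *resp, size_t resp_len);

static int query_rx_drops(uint32_t ctx_id, uint64_t *drops, uint64_t *drop_mask)
{
	struct hwrm_stat_ctx_query_input req;
	struct hwrm_stat_ctx_query_output resp;
	int rc;

	/* Pass 1: ask for the counter masks (widths) instead of the counters. */
	memset(&req, 0, sizeof(req));
	memset(&resp, 0, sizeof(resp));
	req.stat_ctx_id = rte_cpu_to_le_32(ctx_id);
	req.flags = HWRM_STAT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK;
	rc = hwrm_send_message(&req, sizeof(req), &resp, sizeof(resp));
	if (rc)
		return rc;
	*drop_mask = rte_le_to_cpu_64(resp.rx_drop_pkts);

	/* Pass 2: read the live values; deltas can be wrapped with the mask. */
	req.flags = 0;
	rc = hwrm_send_message(&req, sizeof(req), &resp, sizeof(resp));
	if (rc)
		return rc;
	*drops = rte_le_to_cpu_64(resp.rx_drop_pkts);
	return 0;
}

The same COUNTER_MASK flag is added to hwrm_port_qstats_ext and
hwrm_stat_ext_ctx_query in this patch, so the pattern carries over to those
queries as well.
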
 
 /* hwrm_stat_ctx_query_output (size:1408b/176B) */
@@ -35949,7 +37986,7 @@ struct hwrm_stat_ctx_query_output {
 	uint64_t	rx_bcast_pkts;
 	/* Number of received packets with error */
 	uint64_t	rx_err_pkts;
-	/* Number of dropped packets on received path */
+	/* Number of dropped packets on receive path */
 	uint64_t	rx_drop_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
@@ -35976,6 +38013,117 @@ struct hwrm_stat_ctx_query_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/***************************
+ * hwrm_stat_ext_ctx_query *
+ ***************************/
+
+
+/* hwrm_stat_ext_ctx_query_input (size:192b/24B) */
+struct hwrm_stat_ext_ctx_query_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* ID of the extended statistics context that is being queried. */
+	uint32_t	stat_ctx_id;
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when the request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_STAT_EXT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK \
+		UINT32_C(0x1)
+	uint8_t	unused_0[3];
+} __rte_packed;
+
+/* hwrm_stat_ext_ctx_query_output (size:1472b/184B) */
+struct hwrm_stat_ext_ctx_query_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on receive path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /***************************
  * hwrm_stat_ctx_eng_query *
  ***************************/
@@ -37565,6 +39713,13 @@ struct hwrm_nvm_install_update_input {
 	 */
 	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_ALLOWED_TO_DEFRAG \
 		UINT32_C(0x4)
+	/*
+	 * If set to 1, FW will verify the package in the "UPDATE" NVM item
+	 * without installing it. This flag is for FW internal use only.
+	 * Users should not set this flag; if they do, the request will fail.
+	 */
+	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_VERIFY_ONLY \
+		UINT32_C(0x8)
 	uint8_t	unused_0[2];
 } __rte_packed;
 
@@ -38115,6 +40270,72 @@ struct hwrm_nvm_validate_option_cmd_err {
 	uint8_t	unused_0[7];
 } __rte_packed;
 
+/****************
+ * hwrm_oem_cmd *
+ ****************/
+
+
+/* hwrm_oem_cmd_input (size:1024b/128B) */
+struct hwrm_oem_cmd_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific command data. */
+	uint32_t	oem_data[26];
+} __rte_packed;
+
+/* hwrm_oem_cmd_output (size:768b/96B) */
+struct hwrm_oem_cmd_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific response data. */
+	uint32_t	oem_data[18];
+	uint8_t	unused_1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
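
A sketch of a vendor pass-through call; the IANA enterprise number and payload
words are opaque to the HWRM layer (same hypothetical transport helper; RTE_DIM()
is the array-size macro from rte_common.h):

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>
#include <rte_common.h>

extern int hwrm_send_message(void *req, size_t req_len, void *resp, size_t resp_len);

static int oem_cmd_sketch(uint32_t iana_enterprise_num,
			  const uint32_t *words, unsigned int n_words)
{
	struct hwrm_oem_cmd_input req;
	struct hwrm_oem_cmd_output resp;

	if (n_words > RTE_DIM(req.oem_data))
		return -EINVAL;

	memset(&req, 0, sizeof(req));
	memset(&resp, 0, sizeof(resp));
	req.IANA = rte_cpu_to_le_32(iana_enterprise_num);
	memcpy(req.oem_data, words, n_words * sizeof(uint32_t));

	/* On success, resp.oem_data carries the vendor-specific reply. */
	return hwrm_send_message(&req, sizeof(req), &resp, sizeof(resp));
}
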
+
 /*****************
  * hwrm_fw_reset *
  ******************/
@@ -38338,6 +40559,55 @@ struct hwrm_port_ts_query_output {
 	uint8_t		valid;
 } __rte_packed;
 
+/*
+ * This structure is fixed at the beginning of the ChiMP SRAM (GRC
+ * offset: 0x31001F0). Host software is expected to read from this
+ * location for a defined signature. If it exists, the software can
+ * assume the presence of this structure and the validity of the
+ * FW_STATUS location in the next field.
+ */
+/* hcomm_status (size:64b/8B) */
+struct hcomm_status {
+	uint32_t	sig_ver;
+	/*
+	 * This field defines the version of the structure. The latest
+	 * version value is 1.
+	 */
+	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
+	#define HCOMM_STATUS_VER_SFT		0
+	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
+	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
+	/*
+	 * This field is to store the signature value to indicate the
+	 * presence of the structure.
+	 */
+	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
+	#define HCOMM_STATUS_SIGNATURE_SFT	8
+	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
+	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
+	uint32_t	fw_status_loc;
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
+	/* PCIE configuration space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
+	/* GRC space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
+	/* BAR0 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
+	/* BAR1 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
+		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
+	/*
+	 * This is the offset where the fw_status register is located. The value
+	 * is generally 4-byte aligned.
+	 */
+	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
+	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
+} __rte_packed;
+/* This is the GRC offset where the hcomm_status struct resides. */
+#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
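
A sketch of the signature check described in the comment block above, assuming a
hypothetical grc_read32() accessor over the mapped GRC window:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical 32-bit GRC read; the driver's BAR mapping provides the real one. */
extern uint32_t grc_read32(uint32_t grc_offset);

static bool fw_status_locate(uint32_t *addr_space, uint32_t *offset)
{
	uint32_t sig_ver = grc_read32(HCOMM_STATUS_STRUCT_LOC);
	uint32_t loc;

	/* Only trust the structure when the signature is present. */
	if ((sig_ver & HCOMM_STATUS_SIGNATURE_MASK) != HCOMM_STATUS_SIGNATURE_VAL)
		return false;

	loc = grc_read32(HCOMM_STATUS_STRUCT_LOC +
			 offsetof(struct hcomm_status, fw_status_loc));
	*addr_space = loc & HCOMM_STATUS_TRUE_ADDR_SPACE_MASK;
	*offset = loc & HCOMM_STATUS_TRUE_OFFSET_MASK;
	return true;
}
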
+
 /**************************
  * hwrm_cfa_counter_qcaps *
  **************************/
@@ -38622,53 +40892,4 @@ struct hwrm_cfa_counter_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/*
- * This structure is fixed at the beginning of the ChiMP SRAM (GRC
- * offset: 0x31001F0). Host software is expected to read from this
- * location for a defined signature. If it exists, the software can
- * assume the presence of this structure and the validity of the
- * FW_STATUS location in the next field.
- */
-/* hcomm_status (size:64b/8B) */
-struct hcomm_status {
-	uint32_t	sig_ver;
-	/*
-	 * This field defines the version of the structure. The latest
-	 * version value is 1.
-	 */
-	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
-	#define HCOMM_STATUS_VER_SFT		0
-	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
-	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
-	/*
-	 * This field is to store the signature value to indicate the
-	 * presence of the structure.
-	 */
-	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
-	#define HCOMM_STATUS_SIGNATURE_SFT	8
-	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
-	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
-	uint32_t	fw_status_loc;
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
-	/* PCIE configuration space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
-	/* GRC space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
-	/* BAR0 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
-	/* BAR1 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
-		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
-	/*
-	 * This offset where the fw_status register is located. The value
-	 * is generally 4-byte aligned.
-	 */
-	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
-	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
-} __rte_packed;
-/* This is the GRC offset where the hcomm_status struct resides. */
-#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
-
 #endif /* _HSI_STRUCT_DEF_DPDK_H_ */
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 341909573..439950e02 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -86,6 +86,7 @@ struct tf_tbl_type_get_output;
 struct tf_em_internal_insert_input;
 struct tf_em_internal_insert_output;
 struct tf_em_internal_delete_input;
+struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -949,6 +950,8 @@ typedef struct tf_em_internal_insert_output {
 	uint16_t			 rptr_index;
 	/* EM record offset 0~3 */
 	uint8_t			  rptr_entry;
+	/* Number of word entries consumed by the key */
+	uint8_t			  num_of_entries;
 } tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
 
 /* Input params for EM INTERNAL rule delete */
@@ -969,4 +972,10 @@ typedef struct tf_em_internal_delete_input {
 	uint16_t			 em_key_bitlen;
 } tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
 
+/* Output params for EM INTERNAL rule delete */
+typedef struct tf_em_internal_delete_output {
+	/* Original stack allocation index */
+	uint16_t			 em_index;
+} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
index e5abcc2f2..b1fd2cd43 100644
--- a/drivers/net/bnxt/tf_core/lookup3.h
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -152,7 +152,6 @@ static inline uint32_t hashword(const uint32_t *k,
 		final(a, b, c);
 		/* Falls through. */
 	case 0:	    /* case 0: nothing left to add */
-		/* FALLTHROUGH */
 		break;
 	}
 	/*------------------------------------------------- report the result */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
index 9cfbd244f..954806377 100644
--- a/drivers/net/bnxt/tf_core/stack.c
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -27,6 +27,14 @@ stack_init(int num_entries, uint32_t *items, struct stack *st)
 	return 0;
 }
 
+/*
+ * Return the address of the items
+ */
+uint32_t *stack_items(struct stack *st)
+{
+	return st->items;
+}
+
 /* Return the size of the stack
  */
 int32_t
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
index ebd055592..6732e0313 100644
--- a/drivers/net/bnxt/tf_core/stack.h
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -36,6 +36,16 @@ int stack_init(int num_entries,
 	       uint32_t *items,
 	       struct stack *st);
 
+/** Return the address of the stack contents
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    pointer to the stack contents
+ */
+uint32_t *stack_items(struct stack *st);
+
 /** Return the size of the stack
  *
  *  [in] st
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index cf9f36adb..1f6c33ab5 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -45,6 +45,100 @@ static void tf_seeds_init(struct tf_session *session)
 	}
 }
 
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   number of entries in the pool
+ *
+ * Return:
+ *  0       - Success, pool created and filled
+ *  -ENOMEM - Failure, pool memory could not be allocated
+ *  -EINVAL - Failure, stack initialization or fill failed
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(-ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ *
+ * Return:
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	tfp_free(ptr);
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -54,6 +148,7 @@ tf_open_session(struct tf                    *tfp,
 	struct tfp_calloc_parms alloc_parms;
 	unsigned int domain, bus, slot, device;
 	uint8_t fw_session_id;
+	int dir;
 
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
@@ -110,7 +205,7 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup;
 	}
 
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
+	tfp->session = alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -175,6 +270,16 @@ tf_open_session(struct tf                    *tfp,
 	/* Setup hash seeds */
 	tf_seeds_init(session);
 
+	/* Initialize EM pool */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EM Pool initialization failed\n");
+			goto cleanup_close;
+		}
+	}
+
 	session->ref_count++;
 
 	/* Return session ID */
@@ -239,6 +344,7 @@ tf_close_session(struct tf *tfp)
 	int rc_close = 0;
 	struct tf_session *tfs;
 	union tf_session_id session_id;
+	int dir;
 
 	if (tfp == NULL || tfp->session == NULL)
 		return -EINVAL;
@@ -268,6 +374,10 @@ tf_close_session(struct tf *tfp)
 
 	/* Final cleanup as we're last user of the session */
 	if (tfs->ref_count == 0) {
+		/* Free EM pool */
+		for (dir = 0; dir < TF_DIR_MAX; dir++)
+			tf_free_em_pool(tfs, dir);
+
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
 		tfp->session = NULL;
@@ -301,16 +411,25 @@ int tf_insert_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
+					 (tfp->session->core_data),
+					 parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
 	/* Process the EM entry per Table Scope type */
-	return tf_insert_eem_entry((struct tf_session *)tfp->session->core_data,
-				   tbl_scope_cb,
-				   parms);
+	if (parms->mem == TF_MEM_EXTERNAL) {
+		/* External EEM */
+		return tf_insert_eem_entry((struct tf_session *)
+					   (tfp->session->core_data),
+					   tbl_scope_cb,
+					   parms);
+	} else if (parms->mem == TF_MEM_INTERNAL) {
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp, parms);
+	}
+
+	return -EINVAL;
 }
 
 /** Delete EM hash entry API
@@ -327,13 +446,16 @@ int tf_delete_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
+					 (tfp->session->core_data),
+					 parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
-	return tf_delete_eem_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tfp, parms);
+	else
+		return tf_delete_em_internal_entry(tfp, parms);
 }
 
 /** allocate identifier resource
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 1eedd80e7..81ff7602f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -44,44 +44,7 @@ enum tf_mem {
 };
 
 /**
- * The size of the external action record (Wh+/Brd2)
- *
- * Currently set to 512.
- *
- * AR (16B) + encap (256B) + stats_ptrs (8) + resvd (8)
- * + stats (16) = 304 aligned on a 16B boundary
- *
- * Theoretically, the size should be smaller. ~304B
- */
-#define TF_ACTION_RECORD_SZ 512
-
-/**
- * External pool size
- *
- * Defines a single pool of external action records of
- * fixed size.  Currently, this is an index.
- */
-#define TF_EXT_POOL_ENTRY_SZ_BYTES 1
-
-/**
- *  External pool entry count
- *
- *  Defines the number of entries in the external action pool
- */
-#define TF_EXT_POOL_ENTRY_CNT (1 * 1024)
-
-/**
- * Number of external pools
- */
-#define TF_EXT_POOL_CNT_MAX 1
-
-/**
- * External pool Id
- */
-#define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
-#define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
-
-/** EEM record AR helper
+ * EEM record AR helper
  *
  * Helper to handle the Action Record Pointer in the EEM Record Entry.
  *
@@ -109,7 +72,8 @@ enum tf_mem {
  */
 
 
-/** Session Version defines
+/**
+ * Session Version defines
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -119,7 +83,8 @@ enum tf_mem {
 #define TF_SESSION_VER_MINOR  0   /**< Minor Version */
 #define TF_SESSION_VER_UPDATE 0   /**< Update Version */
 
-/** Session Name
+/**
+ * Session Name
  *
  * Name of the TruFlow control channel interface.  Expects
  * format to be RTE Name specific, i.e. rte_eth_dev_get_name_by_port()
@@ -128,7 +93,8 @@ enum tf_mem {
 
 #define TF_FW_SESSION_ID_INVALID  0xFF  /**< Invalid FW Session ID define */
 
-/** Session Identifier
+/**
+ * Session Identifier
  *
  * Unique session identifier which includes PCIe bus info to
  * distinguish the PF and session info to identify the associated
@@ -146,7 +112,8 @@ union tf_session_id {
 	} internal;
 };
 
-/** Session Version
+/**
+ * Session Version
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -160,8 +127,8 @@ struct tf_session_version {
 	uint8_t update;
 };
 
-/** Session supported device types
- *
+/**
+ * Session supported device types
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
@@ -171,6 +138,147 @@ enum tf_device_type {
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
+/** Identifier resource types
+ */
+enum tf_identifier_type {
+	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	 *  and can be used in WC TCAM or EM keys to virtualize further
+	 *  lookups.
+	 */
+	TF_IDENT_TYPE_L2_CTXT,
+	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	 *  to enable virtualization of the profile TCAM.
+	 */
+	TF_IDENT_TYPE_PROF_FUNC,
+	/** The WC profile ID is included in the WC lookup key
+	 *  to enable virtualization of the WC TCAM hardware.
+	 */
+	TF_IDENT_TYPE_WC_PROF,
+	/** The EM profile ID is included in the EM lookup key
+	 *  to enable virtualization of the EM hardware. (not required for SR2
+	 *  as it has table scope)
+	 */
+	TF_IDENT_TYPE_EM_PROF,
+	/** The L2 func is included in the ILT result and from recycling to
+	 *  enable virtualization of further lookups.
+	 */
+	TF_IDENT_TYPE_L2_FUNC,
+	TF_IDENT_TYPE_MAX
+};
+
+/**
+ * Enumeration of TruFlow table types. A table type is used to identify a
+ * resource object.
+ *
+ * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
+ * the only table type that is connected with a table scope.
+ */
+enum tf_tbl_type {
+	/* Internal */
+
+	/** Wh+/SR Action Record */
+	TF_TBL_TYPE_FULL_ACT_RECORD,
+	/** Wh+/SR/Th Multicast Groups */
+	TF_TBL_TYPE_MCAST_GROUPS,
+	/** Wh+/SR Action Encap 8 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_8B,
+	/** Wh+/SR Action Encap 16 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_16B,
+	/** Action Encap 32 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_32B,
+	/** Wh+/SR Action Encap 64 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_64B,
+	/** Action Source Properties SMAC */
+	TF_TBL_TYPE_ACT_SP_SMAC,
+	/** Wh+/SR Action Source Properties SMAC IPv4 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	/** Action Source Properties SMAC IPv6 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
+	/** Wh+/SR Action Statistics 64 Bits */
+	TF_TBL_TYPE_ACT_STATS_64,
+	/** Wh+/SR Action Modify L4 Src Port */
+	TF_TBL_TYPE_ACT_MODIFY_SPORT,
+	/** Wh+/SR Action Modify L4 Dest Port */
+	TF_TBL_TYPE_ACT_MODIFY_DPORT,
+	/** Wh+/SR Action Modify IPv4 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
+	/** Wh+/SR Action Modify IPv4 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
+	/** Action Modify IPv6 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
+	/** Action Modify IPv6 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
+	/** Meter Profiles */
+	TF_TBL_TYPE_METER_PROF,
+	/** Meter Instance */
+	TF_TBL_TYPE_METER_INST,
+	/** Mirror Config */
+	TF_TBL_TYPE_MIRROR_CONFIG,
+	/** UPAR */
+	TF_TBL_TYPE_UPAR,
+	/** SR2 Epoch 0 table */
+	TF_TBL_TYPE_EPOCH0,
+	/** SR2 Epoch 1 table  */
+	TF_TBL_TYPE_EPOCH1,
+	/** SR2 Metadata  */
+	TF_TBL_TYPE_METADATA,
+	/** SR2 CT State  */
+	TF_TBL_TYPE_CT_STATE,
+	/** SR2 Range Profile  */
+	TF_TBL_TYPE_RANGE_PROF,
+	/** SR2 Range Entry  */
+	TF_TBL_TYPE_RANGE_ENTRY,
+	/** SR2 LAG Entry  */
+	TF_TBL_TYPE_LAG,
+	/** SR2 VNIC/SVIF Table */
+	TF_TBL_TYPE_VNIC_SVIF,
+	/** Th/SR2 EM Flexible Key builder */
+	TF_TBL_TYPE_EM_FKB,
+	/** Th/SR2 WC Flexible Key builder */
+	TF_TBL_TYPE_WC_FKB,
+
+	/* External */
+
+	/** External table type - initially 1 poolsize entries.
+	 * All External table types are associated with a table
+	 * scope. Internal types are not.
+	 */
+	TF_TBL_TYPE_EXT,
+	TF_TBL_TYPE_MAX
+};
+
+/**
+ * TCAM table type
+ */
+enum tf_tcam_tbl_type {
+	/** L2 Context TCAM */
+	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	/** Profile TCAM */
+	TF_TCAM_TBL_TYPE_PROF_TCAM,
+	/** Wildcard TCAM */
+	TF_TCAM_TBL_TYPE_WC_TCAM,
+	/** Source Properties TCAM */
+	TF_TCAM_TBL_TYPE_SP_TCAM,
+	/** Connection Tracking Rule TCAM */
+	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
+	/** Virtual Edge Bridge TCAM */
+	TF_TCAM_TBL_TYPE_VEB_TCAM,
+	TF_TCAM_TBL_TYPE_MAX
+};
+
+/**
+ * EM Resources
+ * These defines are provisioned during
+ * tf_open_session()
+ */
+enum tf_em_tbl_type {
+	/** The number of internal EM records for the session */
+	TF_EM_TBL_TYPE_EM_RECORD,
+	/** The number of table scopes requested */
+	TF_EM_TBL_TYPE_TBL_SCOPE,
+	TF_EM_TBL_TYPE_MAX
+};
+
 /** TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
@@ -309,6 +417,30 @@ struct tf_open_session_parms {
 	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
 	 */
 	enum tf_device_type device_type;
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
 };
 
 /**
@@ -417,31 +549,6 @@ int tf_close_session(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
-	 *  and can be used in WC TCAM or EM keys to virtualize further
-	 *  lookups.
-	 */
-	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
-	 *  to enable virtualization of the profile TCAM.
-	 */
-	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
-	 *  to enable virtualization of the WC TCAM hardware.
-	 */
-	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
-	 *  to enable virtualization of the EM hardware. (not required for Brd4
-	 *  as it has table scope)
-	 */
-	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
-	 *  enable virtualization of further lookups.
-	 */
-	TF_IDENT_TYPE_L2_FUNC
-};
-
 /** tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
@@ -631,19 +738,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
 
-/**
- * TCAM table type
- */
-enum tf_tcam_tbl_type {
-	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	TF_TCAM_TBL_TYPE_PROF_TCAM,
-	TF_TCAM_TBL_TYPE_WC_TCAM,
-	TF_TCAM_TBL_TYPE_SP_TCAM,
-	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
-	TF_TCAM_TBL_TYPE_VEB_TCAM,
-	TF_TCAM_TBL_TYPE_MAX
-
-};
 
 /**
  * @page tcam TCAM Access
@@ -813,7 +907,8 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** get TCAM entry
+/*
+ * get TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -824,7 +919,8 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/** tf_free_tcam_entry parameter definition
+/*
+ * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
 	/**
@@ -845,8 +941,7 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free TCAM entry
- *
+/*
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -873,84 +968,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  */
 
 /**
- * Enumeration of TruFlow table types. A table type is used to identify a
- * resource object.
- *
- * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
- * the only table type that is connected with a table scope.
- */
-enum tf_tbl_type {
-	/** Wh+/Brd2 Action Record */
-	TF_TBL_TYPE_FULL_ACT_RECORD,
-	/** Multicast Groups */
-	TF_TBL_TYPE_MCAST_GROUPS,
-	/** Action Encap 8 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_8B,
-	/** Action Encap 16 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_16B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_32B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_64B,
-	/** Action Source Properties SMAC */
-	TF_TBL_TYPE_ACT_SP_SMAC,
-	/** Action Source Properties SMAC IPv4 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
-	/** Action Source Properties SMAC IPv6 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
-	/** Action Statistics 64 Bits */
-	TF_TBL_TYPE_ACT_STATS_64,
-	/** Action Modify L4 Src Port */
-	TF_TBL_TYPE_ACT_MODIFY_SPORT,
-	/** Action Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_DPORT,
-	/** Action Modify IPv4 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
-	/** Action _Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
-
-	/* HW */
-
-	/** Meter Profiles */
-	TF_TBL_TYPE_METER_PROF,
-	/** Meter Instance */
-	TF_TBL_TYPE_METER_INST,
-	/** Mirror Config */
-	TF_TBL_TYPE_MIRROR_CONFIG,
-	/** UPAR */
-	TF_TBL_TYPE_UPAR,
-	/** Brd4 Epoch 0 table */
-	TF_TBL_TYPE_EPOCH0,
-	/** Brd4 Epoch 1 table  */
-	TF_TBL_TYPE_EPOCH1,
-	/** Brd4 Metadata  */
-	TF_TBL_TYPE_METADATA,
-	/** Brd4 CT State  */
-	TF_TBL_TYPE_CT_STATE,
-	/** Brd4 Range Profile  */
-	TF_TBL_TYPE_RANGE_PROF,
-	/** Brd4 Range Entry  */
-	TF_TBL_TYPE_RANGE_ENTRY,
-	/** Brd4 LAG Entry  */
-	TF_TBL_TYPE_LAG,
-	/** Brd4 only VNIC/SVIF Table */
-	TF_TBL_TYPE_VNIC_SVIF,
-
-	/* External */
-
-	/** External table type - initially 1 poolsize entries.
-	 * All External table types are associated with a table
-	 * scope. Internal types are not.
-	 */
-	TF_TBL_TYPE_EXT,
-	TF_TBL_TYPE_MAX
-};
-
-/** tf_alloc_tbl_entry parameter definition
+ * tf_alloc_tbl_entry parameter definition
  */
 struct tf_alloc_tbl_entry_parms {
 	/**
@@ -993,7 +1011,8 @@ struct tf_alloc_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** allocate index table entries
+/**
+ * allocate index table entries
  *
  * Internal types:
  *
@@ -1023,7 +1042,8 @@ struct tf_alloc_tbl_entry_parms {
 int tf_alloc_tbl_entry(struct tf *tfp,
 		       struct tf_alloc_tbl_entry_parms *parms);
 
-/** tf_free_tbl_entry parameter definition
+/**
+ * tf_free_tbl_entry parameter definition
  */
 struct tf_free_tbl_entry_parms {
 	/**
@@ -1049,7 +1069,8 @@ struct tf_free_tbl_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free index table entry
+/**
+ * free index table entry
  *
  * Used to free a previously allocated table entry.
  *
@@ -1075,7 +1096,8 @@ struct tf_free_tbl_entry_parms {
 int tf_free_tbl_entry(struct tf *tfp,
 		      struct tf_free_tbl_entry_parms *parms);
 
-/** tf_set_tbl_entry parameter definition
+/**
+ * tf_set_tbl_entry parameter definition
  */
 struct tf_set_tbl_entry_parms {
 	/**
@@ -1104,7 +1126,8 @@ struct tf_set_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** set index table entry
+/**
+ * set index table entry
  *
  * Used to insert an application programmed index table entry into a
  * previous allocated table location.  A shadow copy of the table
@@ -1115,7 +1138,8 @@ struct tf_set_tbl_entry_parms {
 int tf_set_tbl_entry(struct tf *tfp,
 		     struct tf_set_tbl_entry_parms *parms);
 
-/** tf_get_tbl_entry parameter definition
+/**
+ * tf_get_tbl_entry parameter definition
  */
 struct tf_get_tbl_entry_parms {
 	/**
@@ -1140,7 +1164,8 @@ struct tf_get_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** get index table entry
+/**
+ * get index table entry
  *
  * Used to retrieve a previous set index table entry.
  *
@@ -1163,7 +1188,8 @@ int tf_get_tbl_entry(struct tf *tfp,
  * @ref tf_search_em_entry
  *
  */
-/** tf_insert_em_entry parameter definition
+/**
+ * tf_insert_em_entry parameter definition
  */
 struct tf_insert_em_entry_parms {
 	/**
@@ -1239,6 +1265,10 @@ struct tf_delete_em_entry_parms {
 	 * 2 element array with 2 ids. (Brd4 only)
 	 */
 	uint16_t *epochs;
+	/**
+	 * [out] The index of the entry
+	 */
+	uint16_t index;
 	/**
 	 * [in] structure containing flow delete handle information
 	 */
@@ -1291,7 +1321,8 @@ struct tf_search_em_entry_parms {
 	uint64_t flow_handle;
 };
 
-/** insert em hash entry in internal table memory
+/**
+ * insert em hash entry in internal table memory
  *
  * Internal:
  *
@@ -1328,7 +1359,8 @@ struct tf_search_em_entry_parms {
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms);
 
-/** delete em hash entry table memory
+/**
+ * delete em hash entry table memory
  *
  * Internal:
  *
@@ -1353,7 +1385,8 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms);
 
-/** search em hash entry table memory
+/**
+ * search em hash entry table memory
  *
  * Internal:
 
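A minimal sketch (not from the patch) of how the new per-type count arrays added to tf_open_session_parms could be filled before opening a session; the counts are arbitrary examples, the helper name is hypothetical, and the control-channel setup is omitted. Each array is indexed by the matching enum (tf_identifier_type, tf_tbl_type, tf_tcam_tbl_type, tf_em_tbl_type).

    /* Sketch only: request per-type session resources at open time. */
    static int example_open_session(struct tf *tfp)
    {
        struct tf_open_session_parms oparms = { 0 };

        oparms.device_type = TF_DEVICE_TYPE_WH;

        /* Example counts, indexed by the matching enum.  The field name
         * identifer_cnt is spelled as declared in the header above.
         */
        oparms.identifer_cnt[TF_IDENT_TYPE_L2_CTXT]   = 16;
        oparms.tbl_cnt[TF_TBL_TYPE_FULL_ACT_RECORD]   = 1024;
        oparms.tcam_tbl_cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 128;
        oparms.em_tbl_cnt[TF_EM_TBL_TYPE_EM_RECORD]   = 2048;

        return tf_open_session(tfp, &oparms);
    }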
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index bd8e2ba8a..fd1797e39 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -287,7 +287,7 @@ static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
 }
 
 static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t	       *in_key,
+				    uint8_t *in_key,
 				    struct tf_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
@@ -308,7 +308,7 @@ static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
  * EEXIST  - Key does exist in table at "index" in table "table".
  * TF_ERR     - Something went horribly wrong.
  */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 					  enum tf_dir dir,
 					  struct tf_eem_64b_entry *entry,
 					  uint32_t key0_hash,
@@ -368,8 +368,8 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session	   *session,
-			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
@@ -457,6 +457,96 @@ int tf_insert_eem_entry(struct tf_session	   *session,
 	return -EINVAL;
 }
 
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success
+ */
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms)
+{
+	int       rc;
+	uint32_t  gfid;
+	uint16_t  rptr_index = 0;
+	uint8_t   rptr_entry = 0;
+	uint8_t   num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG
+		   (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc != 0)
+		return -1;
+
+	PMD_DRV_LOG
+		   (ERR,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0
+ * -EINVAL
+ */
+int tf_delete_em_internal_entry(struct tf *tfp,
+				struct tf_delete_em_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
+
+
 /** delete EEM hash entry API
  *
  * returns:
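The internal insert/delete pair relies on a simple index convention: each stack slot covers TF_SESSION_EM_ENTRY_SIZE (4) record blocks, so insert scales the popped slot up to a record pointer index and delete divides the index returned by firmware back down to the slot it pushes onto the pool. An illustrative sketch with example values (the helper name is hypothetical):

    /* Illustration only: pool slot <-> EM record index mapping. */
    static void example_em_index_mapping(void)
    {
        uint32_t slot       = 7;                                /* from stack_pop() */
        uint32_t rptr_index = slot * TF_SESSION_EM_ENTRY_SIZE;  /* 7 * 4 = 28 */
        uint32_t freed_slot;

        /* On delete, firmware reports the record index (parms->index)
         * and the slot is recovered and pushed back onto the pool:
         */
        freed_slot = rptr_index / TF_SESSION_EM_ENTRY_SIZE;     /* 28 / 4 = 7 */
        (void)freed_slot;
    }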
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 8a3584fbd..c1805df73 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -12,6 +12,20 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+/*
+ * Used to build GFID:
+ *
+ *   15           2  0
+ *  +--------------+--+
+ *  |   Index      |E |
+ *  +--------------+--+
+ *
+ * E = Entry (bucket index)
+ */
+#define TF_EM_INTERNAL_INDEX_SHIFT 2
+#define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
+#define TF_EM_INTERNAL_ENTRY_MASK  0x3
+
 /** EEM Entry header
  *
  */
@@ -53,6 +67,17 @@ struct tf_eem_64b_entry {
 	struct tf_eem_entry_hdr hdr;
 };
 
+/** EM Entry
+ *  Each EM entry is 512-bit (64-bytes) but ordered differently to
+ *  EEM.
+ */
+struct tf_em_64b_entry {
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+};
+
 /**
  * Allocates EEM Table scope
  *
@@ -106,9 +131,15 @@ int tf_insert_eem_entry(struct tf_session *session,
 			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms);
 
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms);
+
 int tf_delete_eem_entry(struct tf *tfp,
 			struct tf_delete_em_entry_parms *parms);
 
+int tf_delete_em_internal_entry(struct tf                       *tfp,
+				struct tf_delete_em_entry_parms *parms);
+
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
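The new shift/mask defines pack the EM record index and the bucket entry into the low word used for the GFID, following the layout comment above. A small worked example (values and the helper name are arbitrary):

    /* Illustration only: index in bits 15:2, bucket entry in bits 1:0. */
    static void example_gfid_packing(void)
    {
        uint16_t rptr_index = 28;  /* EM record index (example) */
        uint8_t  rptr_entry = 2;   /* bucket entry, 0..3 (example) */
        uint16_t packed, index, entry;

        packed = (rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) | rptr_entry;
        /* packed == 114 (0x72) */

        index = (packed & TF_EM_INTERNAL_INDEX_MASK) >>
                TF_EM_INTERNAL_INDEX_SHIFT;                /* back to 28 */
        entry = packed & TF_EM_INTERNAL_ENTRY_MASK;        /* back to 2  */
        (void)index;
        (void)entry;
    }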
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
index 417a99cda..1491539ca 100644
--- a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -90,6 +90,18 @@ do {									\
 		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
 } while (0)
 
+#define TF_GET_NUM_KEY_ENTRIES_FROM_FLOW_HANDLE(flow_handle,		\
+					  num_key_entries)		\
+	(num_key_entries =						\
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		     TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT))		\
+
+#define TF_GET_ENTRY_NUM_FROM_FLOW_HANDLE(flow_handle,		\
+					  entry_num)		\
+	(entry_num =						\
+		(((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT))		\
+
 /*
  * 32 bit Flow ID handlers
  */
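A usage sketch for the two new flow-handle accessors (the mask and shift defines they reference are declared earlier in this header); the flow handle is whatever was packed by TF_SET_FIELDS_IN_FLOW_HANDLE() at insert time, and the helper name is hypothetical.

    /* Sketch only: read the two new fields back from a flow handle. */
    static void example_flow_handle_fields(uint64_t flow_handle)
    {
        uint32_t num_key_entries;
        uint32_t entry_num;

        TF_GET_NUM_KEY_ENTRIES_FROM_FLOW_HANDLE(flow_handle, num_key_entries);
        TF_GET_ENTRY_NUM_FROM_FLOW_HANDLE(flow_handle, entry_num);
        (void)num_key_entries;
        (void)entry_num;
    }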
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index beecafdeb..554a8491d 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -16,6 +16,7 @@
 #include "tf_msg.h"
 #include "hsi_struct_def_dpdk.h"
 #include "hwrm_tf.h"
+#include "tf_em.h"
 
 /**
  * Endian converts min and max values from the HW response to the query
@@ -1013,15 +1014,94 @@ int tf_msg_em_cfg(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_insert_input req = { 0 };
+	struct tf_em_internal_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
+		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_INSERT,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends EM delete insert request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_delete_input req = { 0 };
+	struct tf_em_internal_delete_output resp = { 0 };
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.tf_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_DELETE,
+		 req,
+		resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 /**
  * Sends EM operation request to Firmware
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op)
+		 int dir,
+		 uint16_t op)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_input req = {0};
 	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
 	struct tfp_send_msg_parms parms = { 0 };
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 030d1881e..89f7370cc 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -121,6 +121,19 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				    struct tf_insert_em_entry_parms *params,
+				    uint16_t *rptr_index,
+				    uint8_t *rptr_entry,
+				    uint8_t *num_of_entries);
+/**
+ * Sends EM internal delete request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms);
 /**
  * Sends EM mem register request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 50ef2d530..c9f4f8f04 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -13,12 +13,25 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "stack.h"
 
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
+/**
+ * Number of EM entries. Static for now; will be removed
+ * when a parameter is added at a later date. At this stage we
+ * are using fixed size entries so that each stack entry
+ * represents 4 RT (f/n)blocks. So we take the total block
+ * allocation for truflow and divide that by 4.
+ */
+#define TF_SESSION_TOTAL_FN_BLOCKS (1024 * 8) /* 8K blocks */
+#define TF_SESSION_EM_ENTRY_SIZE 4 /* 4 blocks per entry */
+#define TF_SESSION_EM_POOL_SIZE \
+	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
+
 /** Session
  *
  * Shared memory containing private TruFlow session information.
@@ -289,6 +302,11 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+	/**
+	 * EM Pools
+	 */
+	struct stack em_pool[TF_DIR_MAX];
 };
 
 #endif /* _TF_SESSION_H_ */
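With the fixed sizing above, each direction's pool holds TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE = 8192 / 4 = 2048 entries; tf_open_session() builds one such stack per direction and tf_close_session() frees them. A compile-time sketch of that arithmetic (assumes a C11 toolchain for _Static_assert):

    /* Illustration only: the derived per-direction pool size. */
    _Static_assert(TF_SESSION_EM_POOL_SIZE == 2048,
                   "8K f/n blocks at 4 blocks per entry -> 2048 pool slots");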
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d900c9c09..dda72c3d5 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -156,7 +156,7 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
 		if (tfp_calloc(&parms) != 0)
 			goto cleanup;
 
-		tp->pg_pa_tbl[i] = (uint64_t)(uintptr_t)parms.mem_pa;
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
 		tp->pg_va_tbl[i] = parms.mem_va;
 
 		memset(tp->pg_va_tbl[i], 0, pg_size);
@@ -792,7 +792,8 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	index = parms->idx;
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1179,7 +1180,8 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1330,7 +1332,8 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1801,3 +1804,91 @@ tf_free_tbl_entry(struct tf *tfp,
 			    rc);
 	return rc;
 }
+
+
+static void
+tf_dump_link_page_table(struct tf_em_page_tbl *tp,
+			struct tf_em_page_tbl *tp_next)
+{
+	uint64_t *pg_va;
+	uint32_t i;
+	uint32_t j;
+	uint32_t k = 0;
+
+	printf("pg_count:%d pg_size:0x%x\n",
+	       tp->pg_count,
+	       tp->pg_size);
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+		printf("\t%p\n", (void *)pg_va);
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
+			if (((pg_va[j] & 0x7) ==
+			     tfp_cpu_to_le_64(PTU_PTE_LAST |
+					      PTU_PTE_VALID)))
+				return;
+
+			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
+				printf("** Invalid entry **\n");
+				return;
+			}
+
+			if (++k >= tp_next->pg_count) {
+				printf("** Shouldn't get here **\n");
+				return;
+			}
+		}
+	}
+}
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
+{
+	struct tf_session      *session;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_page_tbl *tp;
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_table *tbl;
+	int i;
+	int j;
+	int dir;
+
+	printf("called %s\n", __func__);
+
+	/* find session struct */
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* find control block for table scope */
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		PMD_DRV_LOG(ERR, "No table scope\n");
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
+
+		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
+			printf
+	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
+			       j,
+			       tbl->type,
+			       tbl->num_entries,
+			       tbl->entry_size,
+			       tbl->num_lvl);
+			if (tbl->pg_tbl[0].pg_va_tbl &&
+			    tbl->pg_tbl[0].pg_pa_tbl)
+				printf("%p %p\n",
+			       tbl->pg_tbl[0].pg_va_tbl[0],
+			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
+			for (i = 0; i < tbl->num_lvl - 1; i++) {
+				printf("Level:%d\n", i);
+				tp = &tbl->pg_tbl[i];
+				tp_next = &tbl->pg_tbl[i + 1];
+				tf_dump_link_page_table(tp, tp_next);
+			}
+			printf("\n");
+		}
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index bdc6288ee..7a5443678 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -76,38 +76,51 @@ struct tf_tbl_scope_cb {
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Round-down other page sizes to the lower hardware page
+ * size supported.
  */
-#define BNXT_PAGE_SHIFT 22 /** 2M */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
 
-#if (BNXT_PAGE_SHIFT < 12)				/** < 4K >> 4K */
-#define TF_EM_PAGE_SHIFT 12
+#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (BNXT_PAGE_SHIFT <= 13)			/** 4K, 8K */
-#define TF_EM_PAGE_SHIFT 13
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (BNXT_PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
-#define TF_EM_PAGE_SHIFT 15
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
-#elif (BNXT_PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
-#define TF_EM_PAGE_SHIFT 16
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (BNXT_PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
-#define TF_EM_PAGE_SHIFT 18
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (BNXT_PAGE_SHIFT <= 21)			/** 1M */
-#define TF_EM_PAGE_SHIFT 20
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (BNXT_PAGE_SHIFT <= 22)			/** 2M, 4M */
-#define TF_EM_PAGE_SHIFT 21
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (BNXT_PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
-#define TF_EM_PAGE_SHIFT 22
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#else						/** >= 1G >> 1G */
-#define TF_EM_PAGE_SHIFT	30
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
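The page-size selection is now driven by an explicit BNXT_TF_PAGE_SIZE setting rather than rounding BNXT_PAGE_SHIFT down; with the default of TF_EM_PAGE_SIZE_2M the chain resolves as sketched below, and an unsupported value trips the #error branch at compile time. The asserts assume a C11 toolchain and are illustration only.

    /* Illustration only: what the default 2M selection resolves to. */
    #if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)
    _Static_assert(TF_EM_PAGE_SHIFT == 21, "2M pages use a 21-bit shift");
    _Static_assert(TF_EM_PAGE_SIZE == (1 << 21), "2 MB page size");
    #endif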
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8d5e94e1a..fe49b6304 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -3,14 +3,23 @@
  * All rights reserved.
  */
 
-/* This header file defines the Portability structures and APIs for
+/*
+ * This header file defines the Portability structures and APIs for
  * TruFlow.
  */
 
 #ifndef _TFP_H_
 #define _TFP_H_
 
+#include <rte_config.h>
 #include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_byteorder.h>
+
+/**
+ * DPDK/Driver specific log level for the BNXT Eth driver.
+ */
+extern int bnxt_logtype_driver;
 
 /** Spinlock
  */
@@ -18,13 +27,21 @@ struct tfp_spinlock_parms {
 	rte_spinlock_t slock;
 };
 
+#define TFP_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define TFP_DRV_LOG(level, fmt, args...) \
+	TFP_DRV_LOG_RAW(level, fmt, ## args)
+
 /**
  * @file
  *
  * TrueFlow Portability API Header File
  */
 
-/** send message parameter definition
+/**
+ * send message parameter definition
  */
 struct tfp_send_msg_parms {
 	/**
@@ -62,7 +79,8 @@ struct tfp_send_msg_parms {
 	uint32_t *resp_data;
 };
 
-/** calloc parameter definition
+/**
+ * calloc parameter definition
  */
 struct tfp_calloc_parms {
 	/**
@@ -96,43 +114,15 @@ struct tfp_calloc_parms {
  * @ref tfp_send_msg_tunneled
  *
  * @ref tfp_calloc
- * @ref tfp_free
  * @ref tfp_memcpy
+ * @ref tfp_free
  *
  * @ref tfp_spinlock_init
  * @ref tfp_spinlock_lock
  * @ref tfp_spinlock_unlock
  *
- * @ref tfp_cpu_to_le_16
- * @ref tfp_le_to_cpu_16
- * @ref tfp_cpu_to_le_32
- * @ref tfp_le_to_cpu_32
- * @ref tfp_cpu_to_le_64
- * @ref tfp_le_to_cpu_64
- * @ref tfp_cpu_to_be_16
- * @ref tfp_be_to_cpu_16
- * @ref tfp_cpu_to_be_32
- * @ref tfp_be_to_cpu_32
- * @ref tfp_cpu_to_be_64
- * @ref tfp_be_to_cpu_64
  */
 
-#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
-#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
-#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
-#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
-#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
-#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
-#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
-#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
-#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
-#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
-#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
-#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
-#define tfp_bswap_16(val) rte_bswap16(val)
-#define tfp_bswap_32(val) rte_bswap32(val)
-#define tfp_bswap_64(val) rte_bswap64(val)
-
 /**
  * Provides communication capability from the TrueFlow API layer to
  * the TrueFlow firmware. The portability layer internally provides
@@ -162,9 +152,24 @@ int tfp_send_msg_direct(struct tf *tfp,
  *   -1             - Global error like not supported
  *   -EINVAL        - Parameter Error
  */
-int tfp_send_msg_tunneled(struct tf                 *tfp,
+int tfp_send_msg_tunneled(struct tf *tfp,
 			  struct tfp_send_msg_parms *parms);
 
+/**
+ * Sends OEM command message to Chimp
+ *
+ * [in] session, pointer to session handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
 /**
  * Allocates zero'ed memory from the heap.
  *
@@ -179,10 +184,58 @@ int tfp_send_msg_tunneled(struct tf                 *tfp,
  *   -EINVAL        - Parameter error
  */
 int tfp_calloc(struct tfp_calloc_parms *parms);
-
-void tfp_free(void *addr);
 void tfp_memcpy(void *dest, void *src, size_t n);
+void tfp_free(void *addr);
+
 void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
+
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
+
+/*
+ * @ref tfp_cpu_to_le_16
+ * @ref tfp_le_to_cpu_16
+ * @ref tfp_cpu_to_le_32
+ * @ref tfp_le_to_cpu_32
+ * @ref tfp_cpu_to_le_64
+ * @ref tfp_le_to_cpu_64
+ * @ref tfp_cpu_to_be_16
+ * @ref tfp_be_to_cpu_16
+ * @ref tfp_cpu_to_be_32
+ * @ref tfp_be_to_cpu_32
+ * @ref tfp_cpu_to_be_64
+ * @ref tfp_be_to_cpu_64
+ */
+
+#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
+#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
+#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
+#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
+#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
+#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
+#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
+#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
+#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
+#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
+#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
+#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
+#define tfp_bswap_16(val) rte_bswap16(val)
+#define tfp_bswap_32(val) rte_bswap32(val)
+#define tfp_bswap_64(val) rte_bswap64(val)
+
 #endif /* _TFP_H_ */
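A short usage sketch for the helpers this header now provides: TFP_DRV_LOG() prefixes the caller's function name via rte_log() on the bnxt driver log type, and the tfp_* byte-order wrappers map directly onto the corresponding rte_* macros. The helper name and argument values are placeholders.

    /* Sketch only: logging and endian helpers from tfp.h. */
    static void example_tfp_helpers(int dir, uint32_t fw_session_id)
    {
        uint32_t wire, host;

        TFP_DRV_LOG(ERR, "dir:%d, EM entry index allocation failed\n", dir);

        wire = tfp_cpu_to_le_32(fw_session_id);  /* wraps rte_cpu_to_le_32() */
        host = tfp_le_to_cpu_32(wire);           /* wraps rte_le_to_cpu_32() */
        (void)host;
    }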
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 10/51] net/bnxt: modify EM insert and delete to use HWRM direct
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (8 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 09/51] net/bnxt: add support for exact match Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 11/51] net/bnxt: add multi device support Ajit Khaparde
                           ` (40 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

Modify Exact Match insert and delete to use the HWRM messages directly.
Remove tunneled EM insert and delete message types.

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/hwrm_tf.h | 70 ++----------------------------
 drivers/net/bnxt/tf_core/tf_msg.c  | 66 ++++++++++++++++------------
 2 files changed, 43 insertions(+), 93 deletions(-)
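The shape of the change is the same for insert and delete: the MSG_PREP()/tfp_send_msg_tunneled() pair and the locally defined tf_em_internal_* structures are replaced with the generated hwrm_tf_em_* request/response types sent through tfp_send_msg_direct(), and the direction is expressed with the HWRM flag defines instead of the raw tf_dir value. In sketch form (the helper name is hypothetical and the request filling is elided):

    /* Sketch only: direct HWRM send as used after this patch; filling of
     * req (fw_session_id, flags, key, strength, ...) is elided.
     */
    static int example_direct_em_insert(struct tf *tfp)
    {
        struct tfp_send_msg_parms parms = { 0 };
        struct hwrm_tf_em_insert_input req = { 0 };
        struct hwrm_tf_em_insert_output resp = { 0 };

        parms.tf_type   = HWRM_TF_EM_INSERT;
        parms.req_data  = (uint32_t *)&req;
        parms.req_size  = sizeof(req);
        parms.resp_data = (uint32_t *)&resp;
        parms.resp_size = sizeof(resp);
        parms.mailbox   = TF_KONG_MB;

        /* Replaces MSG_PREP() + tfp_send_msg_tunneled(); the tunneled
         * tf_resp_code unwrap is no longer needed.
         */
        return tfp_send_msg_direct(tfp, &parms);
    }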

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 439950e02..d342c695c 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -23,8 +23,6 @@ typedef enum tf_subtype {
 	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
 	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
 	HWRM_TFT_TBL_SCOPE_CFG = 731,
-	HWRM_TFT_EM_RULE_INSERT = 739,
-	HWRM_TFT_EM_RULE_DELETE = 740,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
@@ -83,10 +81,6 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_em_internal_insert_input;
-struct tf_em_internal_insert_output;
-struct tf_em_internal_delete_input;
-struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -351,7 +345,7 @@ typedef struct tf_session_hw_resc_alloc_output {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -453,7 +447,7 @@ typedef struct tf_session_hw_resc_free_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -555,7 +549,7 @@ typedef struct tf_session_hw_resc_flush_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -922,60 +916,4 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
-/* Input params for EM internal rule insert */
-typedef struct tf_em_internal_insert_input {
-	/* Firmware Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* strength */
-	uint16_t			 strength;
-	/* index to action */
-	uint32_t			 action_ptr;
-	/* index of em record */
-	uint32_t			 em_record_idx;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_insert_input_t, *ptf_em_internal_insert_input_t;
-
-/* Output params for EM internal rule insert */
-typedef struct tf_em_internal_insert_output {
-	/* EM record pointer index */
-	uint16_t			 rptr_index;
-	/* EM record offset 0~3 */
-	uint8_t			  rptr_entry;
-	/* Number of word entries consumed by the key */
-	uint8_t			  num_of_entries;
-} tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_input {
-	/* Session Id */
-	uint32_t			 tf_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* EM internal flow hanndle */
-	uint64_t			 flow_handle;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_output {
-	/* Original stack allocation index */
-	uint16_t			 em_index;
-} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
-
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 554a8491d..c8f6b88d3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1023,32 +1023,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				uint8_t *rptr_entry,
 				uint8_t *num_of_entries)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_insert_input req = { 0 };
-	struct tf_em_internal_insert_output resp = { 0 };
+	int                         rc;
+	struct tfp_send_msg_parms        parms = { 0 };
+	struct hwrm_tf_em_insert_input   req = { 0 };
+	struct hwrm_tf_em_insert_output  resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
 		TF_LKUP_RECORD_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_INSERT,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1056,7 +1062,7 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 	*rptr_index = resp.rptr_index;
 	*num_of_entries = resp.num_of_entries;
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
@@ -1065,32 +1071,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_delete_input req = { 0 };
-	struct tf_em_internal_delete_output resp = { 0 };
+	int                             rc;
+	struct tfp_send_msg_parms       parms = { 0 };
+	struct hwrm_tf_em_delete_input  req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
-	req.tf_session_id =
+	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_DELETE,
-		 req,
-		resp);
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
 	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 11/51] net/bnxt: add multi device support
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (9 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
                           ` (39 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Randy Schacher, Venkat Duvvuru

From: Michael Wildt <michael.wildt@broadcom.com>

Introduce new modules for Device, Resource Manager, Identifier,
Table Types, and TCAM for multi device support.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   8 +
 drivers/net/bnxt/tf_core/Makefile             |   9 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 266 +++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c            |   2 +
 drivers/net/bnxt/tf_core/tf_core.h            |  56 +--
 drivers/net/bnxt/tf_core/tf_device.c          |  50 +++
 drivers/net/bnxt/tf_core/tf_device.h          | 331 ++++++++++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  24 ++
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  64 +++
 drivers/net/bnxt/tf_core/tf_identifier.c      |  47 +++
 drivers/net/bnxt/tf_core/tf_identifier.h      | 140 +++++++
 drivers/net/bnxt/tf_core/tf_rm.c              |  54 +--
 drivers/net/bnxt/tf_core/tf_rm.h              |  18 -
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 102 +++++
 drivers/net/bnxt/tf_core/tf_rm_new.h          | 368 ++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_session.c         |  31 ++
 drivers/net/bnxt/tf_core/tf_session.h         |  54 +++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |  63 +++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      | 240 ++++++++++++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |  63 +++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h     | 239 ++++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.c             |   1 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  78 ++++
 drivers/net/bnxt/tf_core/tf_tbl_type.h        | 309 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_tcam.c            |  78 ++++
 drivers/net/bnxt/tf_core/tf_tcam.h            | 314 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_util.c            | 145 +++++++
 drivers/net/bnxt/tf_core/tf_util.h            |  41 ++
 28 files changed, 3101 insertions(+), 94 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 5c7859cb5..a50cb261d 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,14 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_device_p4.c',
+	'tf_core/tf_identifier.c',
+	'tf_core/tf_shadow_tbl.c',
+	'tf_core/tf_shadow_tcam.c',
+	'tf_core/tf_tbl_type.c',
+	'tf_core/tf_tcam.c',
+	'tf_core/tf_util.c',
+	'tf_core/tf_rm_new.c',
 
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index aa2d964e9..7a3c325a6 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,3 +14,12 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
new file mode 100644
index 000000000..c0c1e754e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -0,0 +1,266 @@
+/*
+ * Copyright(c) 2001-2020, Broadcom. All rights reserved. The
+ * term Broadcom refers to Broadcom Inc. and/or its subsidiaries.
+ * Proprietary and Confidential Information.
+ *
+ * This source file is the property of Broadcom Corporation, and
+ * may not be copied or distributed in any isomorphic form without
+ * the prior written consent of Broadcom Corporation.
+ *
+ * DO NOT MODIFY!!! This file is automatically generated.
+ */
+
+#ifndef _CFA_RESOURCE_TYPES_H_
+#define _CFA_RESOURCE_TYPES_H_
+
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+/* Wildcard TCAM Profile Id */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+/* Meter Profile */
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+/* Source Properties TCAM */
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+/* L2 Func */
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+/* EPOCH */
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+/* Metadata */
+#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+/* Connection Tracking Rule TCAM */
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+/* Range Profile */
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+/* Range */
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+/* Link Aggregation */
+#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+
+
+#endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 1f6c33ab5..6e15a4c5c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -6,6 +6,7 @@
 #include <stdio.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
 #include "tf_em.h"
@@ -229,6 +230,7 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize Session */
 	session->device_type = parms->device_type;
+	session->dev = NULL;
 	tf_rm_init(tfp);
 
 	/* Construct the Session ID */
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 81ff7602f..becc50c7f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -371,6 +371,35 @@ struct tf {
 	struct tf_session_info *session;
 };
 
+/**
+ * tf_session_resources parameter definition.
+ */
+struct tf_session_resources {
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+};
 
 /**
  * tf_open_session parameters definition.
@@ -414,33 +443,14 @@ struct tf_open_session_parms {
 	union tf_session_id session_id;
 	/** [in] device type
 	 *
-	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
+	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
 	enum tf_device_type device_type;
-	/** [in] Requested Identifier Resources
-	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
-	 */
-	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
-	/** [in] Requested Index Table resource counts
-	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
-	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
-	/** [in] Requested TCAM Table resource counts
-	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
-	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
-	/** [in] Requested EM resource counts
+	/** [in] resources
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * Requested per-direction resource allocations for the session
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
+	struct tf_session_resources resources;
 };
 
 /**
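
For reference, a minimal sketch of how a caller might fill the new
per-direction resource request before calling tf_open_session(). The
counts and the TF_TBL_TYPE_FULL_ACT_RECORD / TF_IDENT_TYPE_PROF_FUNC
indices are illustrative assumptions, and mandatory members such as
the control channel name are omitted:

static int example_open_session(struct tf *tfp)
{
	struct tf_open_session_parms oparms = { 0 };

	/* Other mandatory members (ctrl_chan_name, etc.) omitted. */
	oparms.device_type = TF_DEVICE_TYPE_WH;

	/* identifer_cnt is indexed [direction][identifier type]. */
	oparms.resources.identifer_cnt[TF_DIR_RX][TF_IDENT_TYPE_PROF_FUNC] = 8;

	/* tbl_cnt and tcam_tbl_cnt are indexed [type][direction]. */
	oparms.resources.tbl_cnt[TF_TBL_TYPE_FULL_ACT_RECORD][TF_DIR_RX] = 64;
	oparms.resources.tcam_tbl_cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM][TF_DIR_RX] = 16;

	return tf_open_session(tfp, &oparms);
}
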
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
new file mode 100644
index 000000000..3b368313e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_device_p4.h"
+#include "tfp.h"
+#include "bnxt.h"
+
+struct tf;
+
+/**
+ * Device specific bind function
+ */
+static int
+dev_bind_p4(struct tf *tfp __rte_unused,
+	    struct tf_session_resources *resources __rte_unused,
+	    struct tf_dev_info *dev_info)
+{
+	/* Initialize the modules */
+
+	dev_info->ops = &tf_dev_ops_p4;
+	return 0;
+}
+
+int
+dev_bind(struct tf *tfp __rte_unused,
+	 enum tf_device_type type,
+	 struct tf_session_resources *resources,
+	 struct tf_dev_info *dev_info)
+{
+	switch (type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_bind_p4(tfp,
+				   resources,
+				   dev_info);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "Device type not supported\n");
+		return -ENOTSUP;
+	}
+}
+
+int
+dev_unbind(struct tf *tfp __rte_unused,
+	   struct tf_dev_info *dev_handle __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
new file mode 100644
index 000000000..8b63ff178
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -0,0 +1,331 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_H_
+#define _TF_DEVICE_H_
+
+#include "tf_core.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+struct tf;
+struct tf_session;
+
+/**
+ * The Device module provides a general device template. A supported
+ * device type should implement one or more of the listed function
+ * pointers according to its capabilities.
+ *
+ * If a device function pointer is NULL the device capability is not
+ * supported.
+ */
+
+/**
+ * TF device information
+ */
+struct tf_dev_info {
+	const struct tf_dev_ops *ops;
+};
+
+/**
+ * @page device Device
+ *
+ * @ref tf_dev_bind
+ *
+ * @ref tf_dev_unbind
+ */
+
+/**
+ * Device bind handles the initialization of the specified device
+ * type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] type
+ *   Device type
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int dev_bind(struct tf *tfp,
+	     enum tf_device_type type,
+	     struct tf_session_resources *resources,
+	     struct tf_dev_info *dev_handle);
+
+/**
+ * Device release handles cleanup of the device specific information.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dev_handle
+ *   Device handle
+ */
+int dev_unbind(struct tf *tfp,
+	       struct tf_dev_info *dev_handle);
+
+/**
+ * Truflow device specific function hooks structure
+ *
+ * The following device hooks can be defined; unless noted otherwise,
+ * they are optional and can be filled with a null pointer. The
+ * purpose of these hooks is to support Truflow device operations for
+ * different device variants.
+ */
+struct tf_dev_ops {
+	/**
+	 * Allocation of an identifier element.
+	 *
+	 * This API allocates the specified identifier element from a
+	 * device specific identifier DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ident)(struct tf *tfp,
+				  struct tf_ident_alloc_parms *parms);
+
+	/**
+	 * Free of an identifier element.
+	 *
+	 * This API frees a previously allocated identifier element from a
+	 * device specific identifier DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ident)(struct tf *tfp,
+				 struct tf_ident_free_parms *parms);
+
+	/**
+	 * Allocation of a table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
+				     struct tf_tbl_type_alloc_parms *parms);
+
+	/**
+	 * Free of a table type element.
+	 *
+	 * This API frees a previously allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tbl_type)(struct tf *tfp,
+				    struct tf_tbl_type_free_parms *parms);
+
+	/**
+	 * Searches for the specified table type element in a shadow DB.
+	 *
+	 * This API searches for the specified table type element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the table type
+	 * DB and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tbl_type)
+			(struct tf *tfp,
+			struct tf_tbl_type_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_set_parms *parms);
+
+	/**
+	 * Retrieves the specified table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_get_parms *parms);
+
+	/**
+	 * Allocation of a tcam element.
+	 *
+	 * This API allocates the specified tcam element from a device
+	 * specific tcam DB. The allocated element is returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tcam)(struct tf *tfp,
+				 struct tf_tcam_alloc_parms *parms);
+
+	/**
+	 * Free of a tcam element.
+	 *
+	 * This API frees a previously allocated tcam element from a
+	 * device specific tcam DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tcam)(struct tf *tfp,
+				struct tf_tcam_free_parms *parms);
+
+	/**
+	 * Searches for the specified tcam element in a shadow DB.
+	 *
+	 * This API searches for the specified tcam element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the tcam DB
+	 * and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tcam)
+			(struct tf *tfp,
+			struct tf_tcam_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified tcam element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tcam)(struct tf *tfp,
+			       struct tf_tcam_set_parms *parms);
+
+	/**
+	 * Retrieves the specified tcam element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tcam)(struct tf *tfp,
+			       struct tf_tcam_get_parms *parms);
+};
+
+/**
+ * Supported device operation structures
+ */
+extern const struct tf_dev_ops tf_dev_ops_p4;
+
+#endif /* _TF_DEVICE_H_ */
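
A minimal sketch of how core code is expected to dispatch through the
bound device ops once dev_bind() has populated the session's device
info. The helper name is hypothetical; a NULL hook simply means the
capability is not supported by the device:

static int example_alloc_ident(struct tf *tfp,
			       struct tf_session *tfs,
			       struct tf_ident_alloc_parms *iparms)
{
	const struct tf_dev_ops *ops;

	if (tfs->dev == NULL)
		return -EINVAL;

	ops = tfs->dev->ops;
	if (ops->tf_dev_alloc_ident == NULL)
		return -ENOTSUP;	/* capability not supported */

	return ops->tf_dev_alloc_ident(tfp, iparms);
}
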
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
new file mode 100644
index 000000000..c3c4d1e05
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_alloc_ident = tf_ident_alloc,
+	.tf_dev_free_ident = tf_ident_free,
+	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
+	.tf_dev_free_tbl_type = tf_tbl_type_free,
+	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
+	.tf_dev_set_tbl_type = tf_tbl_type_set,
+	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tcam = tf_tcam_alloc,
+	.tf_dev_free_tcam = tf_tcam_free,
+	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_set_tcam = tf_tcam_set,
+	.tf_dev_get_tcam = tf_tcam_get,
+};
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
new file mode 100644
index 000000000..84d90e3a7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_P4_H_
+#define _TF_DEVICE_P4_H_
+
+#include <cfa_resource_types.h>
+
+#include "tf_core.h"
+#include "tf_rm_new.h"
+
+struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+};
+
+struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
+	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+};
+
+struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+};
+
+#endif /* _TF_DEVICE_P4_H_ */
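
These arrays are the static TF type to CFA/HCAPI type mapping referred
to in tf_rm_new.h. Assuming an RM DB has been created from
tf_tcam_p4[], the translation would be resolved roughly as in this
sketch (the helper and its usage are illustrative only):

static int example_hcapi_type(void *rm_db, uint16_t *hcapi_type)
{
	struct tf_rm_get_hcapi_parms hparms = {
		.tf_rm_db = rm_db,
		/* Resolves to CFA_RESOURCE_TYPE_P4_WC_TCAM per tf_tcam_p4[] */
		.db_index = TF_TCAM_TBL_TYPE_WC_TCAM,
		.hcapi_type = hcapi_type,
	};

	return tf_rm_get_hcapi_type(&hparms);
}
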
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
new file mode 100644
index 000000000..726d0b406
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_identifier.h"
+
+struct tf;
+
+/**
+ * Identifier DBs.
+ */
+/* static void *ident_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+int
+tf_ident_bind(struct tf *tfp __rte_unused,
+	      struct tf_ident_cfg *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_alloc(struct tf *tfp __rte_unused,
+	       struct tf_ident_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_free(struct tf *tfp __rte_unused,
+	      struct tf_ident_free_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
new file mode 100644
index 000000000..b77c91b9d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_IDENTIFIER_H_
+#define _TF_IDENTIFIER_H_
+
+#include "tf_core.h"
+
+/**
+ * The Identifier module provides processing of Identifiers.
+ */
+
+struct tf_ident_cfg {
+	/**
+	 * Number of identifier types in each of the configuration
+	 * arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Identifier allocation parameter definition
+ */
+struct tf_ident_alloc_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [out] Identifier allocated
+	 */
+	uint16_t id;
+};
+
+/**
+ * Identifier free parameter definition
+ */
+struct tf_ident_free_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [in] ID to free
+	 */
+	uint16_t id;
+};
+
+/**
+ * @page ident Identity Management
+ *
+ * @ref tf_ident_bind
+ *
+ * @ref tf_ident_unbind
+ *
+ * @ref tf_ident_alloc
+ *
+ * @ref tf_ident_free
+ */
+
+/**
+ * Initializes the Identifier module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_bind(struct tf *tfp,
+		  struct tf_ident_cfg *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_unbind(struct tf *tfp);
+
+/**
+ * Allocates a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_alloc(struct tf *tfp,
+		   struct tf_ident_alloc_parms *parms);
+
+/**
+ * Frees a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_free(struct tf *tfp,
+		  struct tf_ident_free_parms *parms);
+
+#endif /* _TF_IDENTIFIER_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 38b1e71cd..2264704d2 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -9,6 +9,7 @@
 
 #include "tf_rm.h"
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_resources.h"
 #include "tf_msg.h"
@@ -76,59 +77,6 @@
 			(dtype) = type ## _TX;	\
 	} while (0)
 
-const char
-*tf_dir_2_str(enum tf_dir dir)
-{
-	switch (dir) {
-	case TF_DIR_RX:
-		return "RX";
-	case TF_DIR_TX:
-		return "TX";
-	default:
-		return "Invalid direction";
-	}
-}
-
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
-{
-	switch (id_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		return "l2_ctxt_remap";
-	case TF_IDENT_TYPE_PROF_FUNC:
-		return "prof_func";
-	case TF_IDENT_TYPE_WC_PROF:
-		return "wc_prof";
-	case TF_IDENT_TYPE_EM_PROF:
-		return "em_prof";
-	case TF_IDENT_TYPE_L2_FUNC:
-		return "l2_func";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
-{
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		return "l2_ctxt_tcam";
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		return "prof_tcam";
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		return "wc_tcam";
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		return "veb_tcam";
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		return "sp_tcam";
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		return "ct_rule_tcam";
-	default:
-		return "Invalid tcam table type";
-	}
-}
-
 const char
 *tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index e69d443a8..1a09f13a7 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -124,24 +124,6 @@ struct tf_rm_db {
 	struct tf_rm_resc tx;
 };
 
-/**
- * Helper function converting direction to text string
- */
-const char
-*tf_dir_2_str(enum tf_dir dir);
-
-/**
- * Helper function converting identifier to text string
- */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
-
-/**
- * Helper function converting tcam type to text string
- */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
-
 /**
  * Helper function used to convert HW HCAPI resource type to a string.
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
new file mode 100644
index 000000000..51bb9ba3a
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_rm_new.h"
+
+/**
+ * Resource query single entry. Used when accessing HCAPI RM on the
+ * firmware.
+ */
+struct tf_rm_query_entry {
+	/** Minimum guaranteed number of elements */
+	uint16_t min;
+	/** Maximum non-guaranteed number of elements */
+	uint16_t max;
+};
+
+/**
+ * Generic RM Element data type that an RM DB is built upon.
+ */
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type type;
+
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
+
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
+
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
+
+/**
+ * TF RM DB definition
+ */
+struct tf_rm_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
+
+int
+tf_rm_create_db(struct tf *tfp __rte_unused,
+		struct tf_rm_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free_db(struct tf *tfp __rte_unused,
+	      struct tf_rm_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
new file mode 100644
index 000000000..72dba0984
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -0,0 +1,368 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_RM_H_
+#define TF_RM_H_
+
+#include "tf_core.h"
+#include "bitalloc.h"
+
+struct tf;
+
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exist within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
+ *
+ * The RM DB works on its initially allocated sizes, so the
+ * capability of dynamically growing a particular resource is not
+ * possible. If this capability later becomes a requirement then the
+ * MAX pool size of the chip needs to be added to the tf_rm_elem_info
+ * structure and several new APIs would need to be added to allow for
+ * growth of a single TF resource type.
+ */
+
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
+
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements that the DB consists of are to be
+ * configured at time of DB creation. The TF may present types to the
+ * ULP layer that are not controlled by HCAPI within the firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
+	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	TF_RM_TYPE_MAX
+};
+
+/**
+ * RM Element configuration structure, used by the Device to describe
+ * how an individual TF type relates to the corresponding HCAPI RM
+ * type.
+ */
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Allocation information for a single element.
+ */
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_entry entry;
+};
+
+/**
+ * Create RM DB parameters
+ */
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements in the parameter structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure
+	 */
+	struct tf_rm_element_cfg *parms;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Free RM DB parameters
+ */
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Allocate RM parameters for a single element
+ */
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the allocated index in normalized
+	 * form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+};
+
+/**
+ * Free RM parameters for a single element
+ */
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t index;
+};
+
+/**
+ * Is Allocated parameters for a single element
+ */
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [out] Pointer to flag that indicates whether the index is allocated
+	 */
+	uint8_t *allocated;
+};
+
+/**
+ * Get Allocation information for a single element
+ */
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * @page rm Resource Manager
+ *
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ */
+
+/**
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if DB already exist
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
+
+/**
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
+
+/**
+ * Allocates a single element for the type specified, within the DB.
+ *
+ * [in] parms
+ *   Pointer to allocate parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
+
+/**
+ * Frees a single element for the type specified, within the DB.
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free(struct tf_rm_free_parms *parms);
+
+/**
+ * Performs an allocation verification check on a specified element.
+ *
+ * [in] parms
+ *   Pointer to is allocated parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
+
+/**
+ * Retrieves an element's allocation information from the Resource
+ * Manager (RM) DB.
+ *
+ * [in] parms
+ *   Pointer to get info parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
+
+/**
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
+
+#endif /* TF_RM_H_ */
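
A sketch of the intended per-direction DB lifecycle (create, allocate,
destroy) once the stubs in tf_rm_new.c are implemented. tf_tbl_p4[] is
the device configuration from tf_device_p4.h; the
TF_TBL_TYPE_FULL_ACT_RECORD index is an assumed enum name used only
for illustration:

static int example_rm_lifecycle(struct tf *tfp)
{
	struct tf_rm_create_db_parms cparms = { 0 };
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_free_db_parms fparms = { 0 };
	uint32_t index;
	int rc;

	cparms.dir = TF_DIR_RX;
	cparms.num_elements = TF_TBL_TYPE_MAX;
	cparms.parms = tf_tbl_p4;	/* device cfg from tf_device_p4.h */
	rc = tf_rm_create_db(tfp, &cparms);
	if (rc)
		return rc;

	aparms.tf_rm_db = cparms.tf_rm_db;	/* handle returned by create */
	aparms.db_index = TF_TBL_TYPE_FULL_ACT_RECORD;
	aparms.index = &index;
	rc = tf_rm_allocate(&aparms);

	fparms.dir = TF_DIR_RX;
	fparms.tf_rm_db = cparms.tf_rm_db;
	tf_rm_free_db(tfp, &fparms);

	return rc;
}
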
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
new file mode 100644
index 000000000..c74994546
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_session.h"
+#include "tfp.h"
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session **tfs)
+{
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		TFP_DRV_LOG(ERR, "Session not created\n");
+		return -EINVAL;
+	}
+
+	*tfs = (struct tf_session *)(tfp->session->core_data);
+
+	return 0;
+}
+
+int
+tf_session_get_device(struct tf_session *tfs,
+		      struct tf_dev_info **tfd)
+{
+	if (tfs->dev == NULL) {
+		TFP_DRV_LOG(ERR, "Device not created\n");
+		return -EINVAL;
+	}
+	*tfd = tfs->dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index c9f4f8f04..b1cc7a4a7 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -11,10 +11,21 @@
 
 #include "bitalloc.h"
 #include "tf_core.h"
+#include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
 #include "stack.h"
 
+/**
+ * The Session module provides session control support. To the ULP
+ * layer a session is known as a session_info instance. The session
+ * private data is the actual session.
+ *
+ * Session manages:
+ *   - The device and all the resources related to the device.
+ *   - Any session sharing between ULP applications
+ */
+
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
@@ -90,6 +101,9 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
+	/** Device */
+	struct tf_dev_info *dev;
+
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
 
@@ -309,4 +323,44 @@ struct tf_session {
 	struct stack em_pool[TF_DIR_MAX];
 };
 
+/**
+ * @page session Session Management
+ *
+ * @ref tf_session_get_session
+ *
+ * @ref tf_session_get_device
+ */
+
+/**
+ * Looks up the private session information from the TF session info.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer to the session
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session(struct tf *tfp,
+			   struct tf_session **tfs);
+
+/**
+ * Looks up the device information from the TF Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfd
+ *   Pointer to the device
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_device(struct tf_session *tfs,
+			  struct tf_dev_info **tfd);
+
 #endif /* _TF_SESSION_H_ */
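
A sketch of the lookup chain from the public TF handle down to the
bound device ops using the accessors above (the helper is hypothetical
and not part of the patch):

static const struct tf_dev_ops *
example_get_ops(struct tf *tfp)
{
	struct tf_session *tfs = NULL;
	struct tf_dev_info *tfd = NULL;

	if (tf_session_get_session(tfp, &tfs) != 0)
		return NULL;

	if (tf_session_get_device(tfs, &tfd) != 0)
		return NULL;

	return tfd->ops;
}
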
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
new file mode 100644
index 000000000..8f2b6de70
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tbl.h"
+
+/**
+ * Shadow table DB element
+ */
+struct tf_shadow_tbl_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count array, sized to the number of table type entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow table DB definition
+ */
+struct tf_shadow_tbl_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tbl_element *db;
+};
+
+int
+tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
new file mode 100644
index 000000000..dfd336e53
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TBL_H_
+#define _TF_SHADOW_TBL_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow Table module provides shadow DB handling for table based
+ * TF types. A shadow DB allows TF resources that have already been
+ * allocated to be reused.
+ *
+ * A Shadow table DB is intended to be used by the Table Type module
+ * only.
+ */
+
+/**
+ * Shadow DB configuration information for a single table type.
+ *
+ * During Device initialization the HCAPI device specifics are learned
+ * and as well as the RM DB creation. From that those initial steps
+ * this structure can be populated.
+ *
+ * NOTE:
+ * If used in an array of table types then such array must be ordered
+ * by the TF type is represents.
+ */
+struct tf_shadow_tbl_cfg_parms {
+	/**
+	 * TF Table type
+	 */
+	enum tf_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow table DB creation parameters
+ */
+struct tf_shadow_tbl_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tbl_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table DB free parameters
+ */
+struct tf_shadow_tbl_free_db_parms {
+	/**
+	 * Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table search parameters
+ */
+struct tf_shadow_tbl_search_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Table type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table insert parameters
+ */
+struct tf_shadow_tbl_insert_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table remove parameters
+ */
+struct tf_shadow_tbl_remove_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tbl Shadow table DB
+ *
+ * @ref tf_shadow_tbl_create_db
+ *
+ * @ref tf_shadow_tbl_free_db
+ *
+ * @ref tf_shadow_tbl_search
+ *
+ * @ref tf_shadow_tbl_insert
+ *
+ * @ref tf_shadow_tbl_remove
+ */
+
+/**
+ * Creates and fills a Shadow table DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms);
+
+/**
+ * Closes the Shadow table DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms);
+
+/**
+ * Search Shadow table db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow table DB. Will fail if the
+ * element's ref_count is different from 0. Ref_count after insert will
+ * be incremented.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow table DB. Will fail if the
+ * element's ref_count is 0. Ref_count after removal will be
+ * decremented.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TBL_H_ */
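
A sketch of the reuse flow the shadow DB is meant to enable once the
stubs are filled in: search first and reuse the existing index on a
hit, otherwise record the newly allocated table entry. The table type
and helper are assumptions for illustration:

static int example_shadow_reuse(void *shadow_db,
				uint8_t *blob, uint16_t blob_sz,
				uint16_t new_index)
{
	struct tf_shadow_tbl_search_parms sparms = { 0 };
	struct tf_shadow_tbl_insert_parms iparms = { 0 };
	uint16_t hit_index, ref_cnt;

	sparms.tf_shadow_tbl_db = shadow_db;
	sparms.type = TF_TBL_TYPE_FULL_ACT_RECORD;	/* assumed type */
	sparms.entry = blob;
	sparms.entry_sz = blob_sz;
	sparms.index = &hit_index;
	sparms.ref_cnt = &ref_cnt;

	if (tf_shadow_tbl_search(&sparms) == 0)
		return 0;	/* hit: reuse hit_index, ref_cnt already bumped */

	iparms.tf_shadow_tbl_db = shadow_db;
	iparms.type = TF_TBL_TYPE_FULL_ACT_RECORD;
	iparms.entry = blob;
	iparms.entry_sz = blob_sz;
	iparms.index = new_index;	/* entry allocated elsewhere */
	iparms.ref_cnt = &ref_cnt;

	return tf_shadow_tbl_insert(&iparms);
}
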
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.c b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
new file mode 100644
index 000000000..c61b833d7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tcam.h"
+
+/**
+ * Shadow tcam DB element
+ */
+struct tf_shadow_tcam_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count array, sized to the number of tcam entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow tcam DB definition
+ */
+struct tf_shadow_tcam_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tcam_element *db;
+};
+
+int
+tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.h b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
new file mode 100644
index 000000000..e2c4e06c0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TCAM_H_
+#define _TF_SHADOW_TCAM_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow tcam module provides shadow DB handling for tcam based
+ * TF types. A shadow DB allows TF resources that have already been
+ * allocated to be reused.
+ *
+ * A Shadow tcam DB is intended to be used by the Tcam module only.
+ */
+
+/**
+ * Shadow DB configuration information for a single tcam type.
+ *
+ * During Device initialization the HCAPI device specifics are learned
+ * and as well as the RM DB creation. From that those initial steps
+ * this structure can be populated.
+ *
+ * NOTE:
+ * If used in an array of tcam types then such array must be ordered
+ * by the TF type is represents.
+ */
+struct tf_shadow_tcam_cfg_parms {
+	/**
+	 * TF tcam type
+	 */
+	enum tf_tcam_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow tcam DB creation parameters
+ */
+struct tf_shadow_tcam_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tcam_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam DB free parameters
+ */
+struct tf_shadow_tcam_free_db_parms {
+	/**
+	 * Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam search parameters
+ */
+struct tf_shadow_tcam_search_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam insert parameters
+ */
+struct tf_shadow_tcam_insert_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam remove parameters
+ */
+struct tf_shadow_tcam_remove_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tcam Shadow tcam DB
+ *
+ * @ref tf_shadow_tcam_create_db
+ *
+ * @ref tf_shadow_tcam_free_db
+ *
+ * @ref tf_shadow_tcam_search
+ *
+ * @ref tf_shadow_tcam_insert
+ *
+ * @ref tf_shadow_tcam_remove
+ */
+
+/**
+ * Creates and fills a Shadow tcam DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms);
+
+/**
+ * Closes the Shadow tcam DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms);
+
+/**
+ * Search Shadow tcam db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow tcam DB. Will fail if the
+ * element's ref_count is different from 0. Ref_count after insert will
+ * be incremented.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow tcam DB. Will fail if the
+ * element's ref_count is 0. The ref_count is decremented after
+ * removal.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TCAM_H_ */
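
A rough usage sketch of the shadow TCAM DB declared above may help review.
The helper name, table type, sizing and entry blob below are illustrative
assumptions only; the return-code handling simply follows the comments
above and is not a final implementation:

  #include <stdint.h>

  #include "tf_shadow_tcam.h"

  /* Hypothetical helper: mirror one L2 context TCAM entry in the shadow DB. */
  static int shadow_l2_ctxt_example(uint16_t hw_index)
  {
      struct tf_shadow_tcam_cfg_parms cfg = {
          .type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
          .num_entries = 1024,    /* illustrative sizing */
          .element_width = 16,    /* illustrative width */
      };
      struct tf_shadow_tcam_create_db_parms cparms = {
          .cfg = &cfg,
          .num_elements = 1,
      };
      struct tf_shadow_tcam_search_parms sparms = { 0 };
      struct tf_shadow_tcam_insert_parms iparms = { 0 };
      uint8_t blob[16] = { 0 };   /* entry blob to track */
      uint16_t found_idx = 0, ref_cnt = 0;
      int rc;

      rc = tf_shadow_tcam_create_db(&cparms);
      if (rc)
          return rc;

      /* Look for an identical entry before consuming a new TCAM slot */
      sparms.tf_shadow_tcam_db = cparms.tf_shadow_tcam_db;
      sparms.type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
      sparms.entry = blob;
      sparms.entry_sz = sizeof(blob);
      sparms.index = &found_idx;
      sparms.ref_cnt = &ref_cnt;
      rc = tf_shadow_tcam_search(&sparms);
      if (rc == 0)
          return 0;   /* hit: reuse found_idx, ref_cnt was incremented */

      /* Miss: record the entry programmed in HW at hw_index */
      iparms.tf_shadow_tcam_db = cparms.tf_shadow_tcam_db;
      iparms.type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
      iparms.entry = blob;
      iparms.entry_sz = sizeof(blob);
      iparms.index = hw_index;
      iparms.ref_cnt = &ref_cnt;
      return tf_shadow_tcam_insert(&iparms);
  }
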
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index dda72c3d5..17399a5b2 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,6 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
new file mode 100644
index 000000000..a57a5ddf2
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tbl_type.h"
+
+struct tf;
+
+/**
+ * Table Type DBs.
+ */
+/* static void *tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Table Type Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_type_bind(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc(struct tf *tfp __rte_unused,
+		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_free(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
+			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_set(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_get(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
new file mode 100644
index 000000000..c880b368b
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -0,0 +1,309 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Table Type module provides processing of Internal TF table types.
+ */
+
+/**
+ * Table Type configuration parameters
+ */
+struct tf_tbl_type_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Table Type allocation parameters
+ */
+struct tf_tbl_type_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of the allocated entry
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type free parameters
+ */
+struct tf_tbl_type_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+struct tf_tbl_type_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type set parameters
+ */
+struct tf_tbl_type_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type get parameters
+ */
+struct tf_tbl_type_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tbl_type Table Type
+ *
+ * @ref tf_tbl_type_bind
+ *
+ * @ref tf_tbl_type_unbind
+ *
+ * @ref tf_tbl_type_alloc
+ *
+ * @ref tf_tbl_type_free
+ *
+ * @ref tf_tbl_type_alloc_search
+ *
+ * @ref tf_tbl_type_set
+ *
+ * @ref tf_tbl_type_get
+ */
+
+/**
+ * Initializes the Table Type module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_bind(struct tf *tfp,
+		     struct tf_tbl_type_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc(struct tf *tfp,
+		      struct tf_tbl_type_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the
+ * shadow DB is enabled, it is searched first and, if the element is
+ * found, its refcount is decremented. If the refcount reaches 0, the
+ * element is returned to the table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_free(struct tf *tfp,
+		     struct tf_tbl_type_free_parms *parms);
+
+/**
+ * Supported only if the Shadow DB is configured. Searches the Shadow
+ * DB for a matching element. If found, the refcount in the shadow DB
+ * is updated accordingly. If not found, a new element is allocated
+ * and installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc_search(struct tf *tfp,
+			     struct tf_tbl_type_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_set(struct tf *tfp,
+		    struct tf_tbl_type_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_get(struct tf *tfp,
+		    struct tf_tbl_type_get_parms *parms);
+
+#endif /* TF_TBL_TYPE_H_ */
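
Although the functions above are still stubs in this patch, the intended
alloc-then-set sequence can be sketched as follows; the helper name, stats
table type, direction and 8-byte payload are assumptions for illustration:

  #include <stdint.h>

  #include "tf_tbl_type.h"

  /* Allocate one internal 64b stats record and clear it. */
  static int tbl_type_alloc_set_example(struct tf *tfp)
  {
      struct tf_tbl_type_alloc_parms aparms = { 0 };
      struct tf_tbl_type_set_parms sparms = { 0 };
      uint8_t zeros[8] = { 0 };   /* assumed entry size */
      int rc;

      aparms.dir = TF_DIR_TX;
      aparms.type = TF_TBL_TYPE_ACT_STATS_64;
      rc = tf_tbl_type_alloc(tfp, &aparms);
      if (rc)
          return rc;

      sparms.dir = TF_DIR_TX;
      sparms.type = TF_TBL_TYPE_ACT_STATS_64;
      sparms.data = zeros;
      sparms.data_sz_in_bytes = sizeof(zeros);
      sparms.idx = aparms.idx;    /* index returned by the allocator */
      return tf_tbl_type_set(tfp, &sparms);
  }
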
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
new file mode 100644
index 000000000..3ad99dd0d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tcam.h"
+
+struct tf;
+
+/**
+ * TCAM DBs.
+ */
+/* static void *tcam_db[TF_DIR_MAX]; */
+
+/**
+ * TCAM Shadow DBs
+ */
+/* static void *shadow_tcam_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tcam_bind(struct tf *tfp __rte_unused,
+	     struct tf_tcam_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc(struct tf *tfp __rte_unused,
+	      struct tf_tcam_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_free(struct tf *tfp __rte_unused,
+	     struct tf_tcam_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc_search(struct tf *tfp __rte_unused,
+		     struct tf_tcam_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_set(struct tf *tfp __rte_unused,
+	    struct tf_tcam_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_get(struct tf *tfp __rte_unused,
+	    struct tf_tcam_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
new file mode 100644
index 000000000..1420c9ed5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -0,0 +1,314 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_TCAM_H_
+#define _TF_TCAM_H_
+
+#include "tf_core.h"
+
+/**
+ * The TCAM module provides processing of Internal TCAM types.
+ */
+
+/**
+ * TCAM configuration parameters
+ */
+struct tf_tcam_cfg_parms {
+	/**
+	 * Number of tcam types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * TCAM configuration array
+	 */
+	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * TCAM allocation parameters
+ */
+struct tf_tcam_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Idx of the allocated entry
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM free parameters
+ */
+struct tf_tcam_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/**
+ * TCAM allocate search parameters
+ */
+struct tf_tcam_alloc_search_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Enable search for matching entry
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Key data to match on (if search)
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] Mask data to match on (if search)
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
+	 * [out] If search, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current refcnt after allocation
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx allocated
+	 */
+	uint16_t idx;
+};
+
+/**
+ * TCAM set parameters
+ */
+struct tf_tcam_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM get parameters
+ */
+struct tf_tcam_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tcam TCAM
+ *
+ * @ref tf_tcam_bind
+ *
+ * @ref tf_tcam_unbind
+ *
+ * @ref tf_tcam_alloc
+ *
+ * @ref tf_tcam_free
+ *
+ * @ref tf_tcam_alloc_search
+ *
+ * @ref tf_tcam_set
+ *
+ * @ref tf_tcam_get
+ *
+ */
+
+/**
+ * Initializes the TCAM module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_bind(struct tf *tfp,
+		 struct tf_tcam_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested tcam type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc(struct tf *tfp,
+		  struct tf_tcam_alloc_parms *parms);
+
+/**
+ * Frees the requested tcam type and returns it to the DB. If the
+ * shadow DB is enabled, it is searched first and, if the element is
+ * found, its refcount is decremented. If the refcount reaches 0, the
+ * element is returned to the tcam DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_free(struct tf *tfp,
+		 struct tf_tcam_free_parms *parms);
+
+/**
+ * Supported only if the Shadow DB is configured. Searches the Shadow
+ * DB for a matching element. If found, the refcount in the shadow DB
+ * is updated accordingly. If not found, a new element is allocated
+ * and installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc_search(struct tf *tfp,
+			 struct tf_tcam_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_set(struct tf *tfp,
+		struct tf_tcam_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_get(struct tf *tfp,
+		struct tf_tcam_get_parms *parms);
+
+#endif /* _TF_TCAM_H_ */
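
For completeness, a hedged sketch of how tf_tcam_alloc_search is expected
to be driven once implemented; the helper name is hypothetical, the
wildcard TCAM type and zero priority are illustrative, and the hit/idx
handling simply follows the field comments above:

  #include <stdint.h>

  #include "tf_tcam.h"

  /* Search-then-allocate a wildcard TCAM slot for a key/mask pair. */
  static int tcam_alloc_search_example(struct tf *tfp,
                                       uint8_t *key, uint8_t *mask,
                                       uint16_t key_sz_in_bits,
                                       uint16_t *idx)
  {
      struct tf_tcam_alloc_search_parms parms = { 0 };
      int rc;

      parms.dir = TF_DIR_RX;
      parms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
      parms.search_enable = 1;
      parms.key = key;
      parms.key_sz_in_bits = key_sz_in_bits;
      parms.mask = mask;
      parms.priority = 0;     /* priority semantics are TBD above */

      rc = tf_tcam_alloc_search(tfp, &parms);
      if (rc)
          return rc;

      /* parms.hit reports whether an existing entry was reused;
       * parms.idx is the slot to program (or already programmed).
       */
      *idx = parms.idx;
      return 0;
  }
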
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
new file mode 100644
index 000000000..a9010543d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+
+#include "tf_util.h"
+
+const char
+*tf_dir_2_str(enum tf_dir dir)
+{
+	switch (dir) {
+	case TF_DIR_RX:
+		return "RX";
+	case TF_DIR_TX:
+		return "TX";
+	default:
+		return "Invalid direction";
+	}
+}
+
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type)
+{
+	switch (id_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		return "l2_ctxt_remap";
+	case TF_IDENT_TYPE_PROF_FUNC:
+		return "prof_func";
+	case TF_IDENT_TYPE_WC_PROF:
+		return "wc_prof";
+	case TF_IDENT_TYPE_EM_PROF:
+		return "em_prof";
+	case TF_IDENT_TYPE_L2_FUNC:
+		return "l2_func";
+	default:
+		return "Invalid identifier";
+	}
+}
+
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+{
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		return "l2_ctxt_tcam";
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		return "prof_tcam";
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		return "wc_tcam";
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		return "veb_tcam";
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		return "sp_tcam";
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		return "ct_rule_tcam";
+	default:
+		return "Invalid tcam table type";
+	}
+}
+
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+{
+	switch (tbl_type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		return "Full Action record";
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		return "Multicast Groups";
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		return "Encap 8B";
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		return "Encap 16B";
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+		return "Encap 32B";
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		return "Encap 64B";
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		return "Source Properties SMAC";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		return "Source Properties SMAC IPv4";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		return "Source Properties SMAC IPv6";
+	case TF_TBL_TYPE_ACT_STATS_64:
+		return "Stats 64B";
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		return "NAT Source Port";
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+		return "NAT Destination Port";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		return "NAT IPv4 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		return "NAT IPv4 Destination";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+		return "NAT IPv6 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+		return "NAT IPv6 Destination";
+	case TF_TBL_TYPE_METER_PROF:
+		return "Meter Profile";
+	case TF_TBL_TYPE_METER_INST:
+		return "Meter";
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		return "Mirror";
+	case TF_TBL_TYPE_UPAR:
+		return "UPAR";
+	case TF_TBL_TYPE_EPOCH0:
+		return "EPOCH0";
+	case TF_TBL_TYPE_EPOCH1:
+		return "EPOCH1";
+	case TF_TBL_TYPE_METADATA:
+		return "Metadata";
+	case TF_TBL_TYPE_CT_STATE:
+		return "Connection State";
+	case TF_TBL_TYPE_RANGE_PROF:
+		return "Range Profile";
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		return "Range";
+	case TF_TBL_TYPE_LAG:
+		return "Link Aggregation";
+	case TF_TBL_TYPE_VNIC_SVIF:
+		return "VNIC SVIF";
+	case TF_TBL_TYPE_EM_FKB:
+		return "EM Flexible Key Builder";
+	case TF_TBL_TYPE_WC_FKB:
+		return "WC Flexible Key Builder";
+	case TF_TBL_TYPE_EXT:
+		return "External";
+	default:
+		return "Invalid tbl type";
+	}
+}
+
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+{
+	switch (em_type) {
+	case TF_EM_TBL_TYPE_EM_RECORD:
+		return "EM Record";
+	case TF_EM_TBL_TYPE_TBL_SCOPE:
+		return "Table Scope";
+	default:
+		return "Invalid EM type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
new file mode 100644
index 000000000..4099629ea
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_UTIL_H_
+#define _TF_UTIL_H_
+
+#include "tf_core.h"
+
+/**
+ * Helper function converting direction to text string
+ */
+const char
+*tf_dir_2_str(enum tf_dir dir);
+
+/**
+ * Helper function converting identifier to text string
+ */
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type);
+
+/**
+ * Helper function converting tcam type to text string
+ */
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+
+/**
+ * Helper function converting tbl type to text string
+ */
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+
+/**
+ * Helper function converting em tbl type to text string
+ */
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+
+#endif /* _TF_UTIL_H_ */
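
These helpers exist purely to keep log messages readable; a minimal
example of the intended usage, assuming the TFP_DRV_LOG macro from tfp.h
(the logging helper itself is hypothetical):

  #include "tf_util.h"
  #include "tfp.h"

  /* Log a failure with human-readable direction and table type names. */
  static void log_tbl_set_error(enum tf_dir dir, enum tf_tbl_type type, int rc)
  {
      TFP_DRV_LOG(ERR, "%s, %s set failed, rc:%d\n",
                  tf_dir_2_str(dir),
                  tf_tbl_type_2_str(type),
                  rc);
  }
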
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 12/51] net/bnxt: support bulk table get and mirror
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (10 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 11/51] net/bnxt: add multi device support Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 13/51] net/bnxt: update multi device design support Ajit Khaparde
                           ` (38 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Venkat Duvvuru

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add a new bulk table type get that uses FW to DMA the data back
  to the host (see the usage sketch below).
- Add a flag to allow records to be cleared on read where supported.
- Set mirror using tf_alloc_tbl_entry.
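
A minimal usage sketch of the new API, assuming an already-opened TruFlow
session and the tfp_calloc()/tfp_free() helpers; the helper name, stats
type, count and 8-byte entry size are illustrative only:

  #include <stdint.h>
  #include <stdbool.h>

  #include "tf_core.h"
  #include "tfp.h"

  /* Read 16 consecutive 64b stats counters with one DMA-backed request. */
  static int read_stats_bulk(struct tf *tfp)
  {
      struct tf_get_bulk_tbl_entry_parms bparms = { 0 };
      struct tfp_calloc_parms mem = { 0 };
      int rc;

      /* Firmware DMAs into host memory, so allocate a physical buffer. */
      mem.nitems = 16;
      mem.size = 8;          /* assumed TF_TBL_TYPE_ACT_STATS_64 entry size */
      mem.alignment = 4096;
      rc = tfp_calloc(&mem);
      if (rc)
          return rc;

      bparms.dir = TF_DIR_RX;
      bparms.type = TF_TBL_TYPE_ACT_STATS_64;
      bparms.clear_on_read = true;   /* only supported for stats */
      bparms.starting_idx = 0;       /* illustrative index */
      bparms.num_entries = 16;
      bparms.entry_sz_in_bytes = 8;
      bparms.physical_mem_addr = (uintptr_t)mem.mem_pa;

      rc = tf_get_bulk_tbl_entry(tfp, &bparms);
      /* On success the counters are available through mem.mem_va. */
      tfp_free(mem.mem_va);
      return rc;
  }
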

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/hwrm_tf.h      |  37 ++++++++-
 drivers/net/bnxt/tf_core/tf_common.h    |  54 +++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c      |   2 +
 drivers/net/bnxt/tf_core/tf_core.h      |  55 ++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.c       |  70 ++++++++++++----
 drivers/net/bnxt/tf_core/tf_msg.h       |  15 ++++
 drivers/net/bnxt/tf_core/tf_resources.h |   5 +-
 drivers/net/bnxt/tf_core/tf_tbl.c       | 103 ++++++++++++++++++++++++
 8 files changed, 319 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index d342c695c..c04d1034a 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Broadcom
+ * Copyright(c) 2019-2020 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -27,7 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET,
+	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -81,6 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
+struct tf_tbl_type_get_bulk_input;
+struct tf_tbl_type_get_bulk_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -902,6 +905,8 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* When set to 1, indicates clear-on-read for the entry */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -916,4 +921,32 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
+/* Input params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get apply to RX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+	/* When set to 1, indicates the get apply to TX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+	/* When set to 1, indicates clear-on-read for the entry */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+	/* Type of the object to set */
+	uint32_t			 type;
+	/* Starting index to get from */
+	uint32_t			 start_index;
+	/* Number of entries to get */
+	uint32_t			 num_entries;
+	/* Host memory where data will be stored */
+	uint64_t			 host_addr;
+} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+
+/* Output params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
new file mode 100644
index 000000000..2aa4b8640
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_COMMON_H_
+#define _TF_COMMON_H_
+
+/* Helper to check the parms */
+#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "%s: session error\n", \
+				    tf_dir_2_str((parms)->dir)); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_TFP_SESSION(tfp) do { \
+		if ((tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#endif /* _TF_COMMON_H_ */
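
The macros are meant to collapse the argument/session validation repeated
at the top of the public API entry points; a sketch of the intended
pattern, mirroring the tf_get_bulk_tbl_entry change later in this patch
(the entry point itself is hypothetical, and TFP_DRV_LOG is assumed to
come from tfp.h):

  #include "tf_common.h"
  #include "tf_core.h"
  #include "tf_util.h"
  #include "tfp.h"

  /* Hypothetical API entry point showing the intended validation pattern. */
  int tf_example_api(struct tf *tfp, struct tf_get_bulk_tbl_entry_parms *parms)
  {
      /* Rejects NULL tfp/parms and a missing or uninitialized session. */
      TF_CHECK_PARMS_SESSION(tfp, parms);

      /* ... real work would go here ... */
      return 0;
  }
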
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6e15a4c5c..a8236aec9 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -16,6 +16,8 @@
 #include "bitalloc.h"
 #include "bnxt.h"
 #include "rand.h"
+#include "tf_common.h"
+#include "hwrm_tf.h"
 
 static inline uint32_t SWAP_WORDS32(uint32_t val32)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index becc50c7f..96a1a794f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1165,7 +1165,7 @@ struct tf_get_tbl_entry_parms {
 	 */
 	uint8_t *data;
 	/**
-	 * [out] Entry size
+	 * [in] Entry size
 	 */
 	uint16_t data_sz_in_bytes;
 	/**
@@ -1188,6 +1188,59 @@ struct tf_get_tbl_entry_parms {
 int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
+/**
+ * tf_get_bulk_tbl_entry parameter definition
+ */
+struct tf_get_bulk_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Clear hardware entries on read; only
+	 * supported for TF_TBL_TYPE_ACT_STATS_64
+	 */
+	bool clear_on_read;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [out] Host physical address, where the data
+	 * will be copied to by the firmware.
+	 * Use tfp_calloc() API and mem_pa
+	 * variable of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * Bulk get index table entry
+ *
+ * Used to retrieve previously set index table entries.
+ *
+ * Reads and compares with the shadow table copy (if enabled; only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_bulk_tbl_entry(struct tf *tfp,
+		     struct tf_get_bulk_tbl_entry_parms *parms);
+
 /**
  * @page exact_match Exact Match Table
  *
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c8f6b88d3..c755c8555 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1216,12 +1216,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 static int
-tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
 {
 	struct tfp_calloc_parms alloc_parms;
 	int rc;
@@ -1229,15 +1225,10 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 	/* Allocate session */
 	alloc_parms.nitems = 1;
 	alloc_parms.size = size;
-	alloc_parms.alignment = 0;
+	alloc_parms.alignment = 4096;
 	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate tcam dma entry, rc:%d\n",
-			    rc);
+	if (rc)
 		return -ENOMEM;
-	}
 
 	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
 	buf->va_addr = alloc_parms.mem_va;
@@ -1245,6 +1236,52 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 	return 0;
 }
 
+int
+tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *params)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_get_bulk_input req = { 0 };
+	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	int data_size = 0;
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16((params->dir) |
+		((params->clear_on_read) ?
+		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+
+	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	if (resp.size < data_size)
+		return -EINVAL;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+#define TF_BYTES_PER_SLICE(tfp) 12
+#define NUM_SLICES(tfp, bytes) \
+	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
+
 int
 tf_msg_tcam_entry_set(struct tf *tfp,
 		      struct tf_set_tcam_entry_parms *parms)
@@ -1282,9 +1319,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	} else {
 		/* use dma buffer */
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_get_dma_buf(&buf, data_size);
-		if (rc != 0)
-			return rc;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
 		data = buf.va_addr;
 		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
 	}
@@ -1303,8 +1340,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
+cleanup:
 	if (buf.va_addr != NULL)
 		tfp_free(buf.va_addr);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 89f7370cc..8d050c402 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -267,4 +267,19 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 			 uint8_t *data,
 			 uint32_t index);
 
+/**
+ * Sends bulk get message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to table get bulk parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 05e131f8b..9b7f5a069 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -149,11 +149,10 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
+#define TF_RSVD_MIRROR_RX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
+#define TF_RSVD_MIRROR_TX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 17399a5b2..26313ed3c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
@@ -794,6 +795,7 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -915,6 +917,76 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Internal function to bulk get Table Entries. Supports all Table
+ * Types except TF_TBL_TYPE_EXT, as that is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_bulk_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->starting_idx;
+
+	/*
+	 * Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->starting_idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
 #if (TF_SHADOW == 1)
 /**
  * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
@@ -1182,6 +1254,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -1663,6 +1736,36 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
+/* API defined in tf_core.h */
+int
+tf_get_bulk_tbl_entry(struct tf *tfp,
+		 struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type,
+				    strerror(-rc));
+	}
+
+	return rc;
+}
+
 /* API defined in tf_core.h */
 int
 tf_alloc_tbl_scope(struct tf *tfp,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 13/51] net/bnxt: update multi device design support
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (11 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
                           ` (37 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Implement the modules RM, Device (WH+), Identifier.
- Update Session module.
- Implement new HWRMs for RM direct messaging.
- Add new parameter check macros and clean up the header includes
  (e.g. tfp) such that bnxt.h is not directly included in the new modules.
- Add cfa_resource_types, required for RM design.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   2 +
 drivers/net/bnxt/tf_core/Makefile             |   1 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 291 ++++++------
 drivers/net/bnxt/tf_core/tf_common.h          |  24 +
 drivers/net/bnxt/tf_core/tf_core.c            | 286 +++++++++++-
 drivers/net/bnxt/tf_core/tf_core.h            |  12 +-
 drivers/net/bnxt/tf_core/tf_device.c          | 150 +++++-
 drivers/net/bnxt/tf_core/tf_device.h          |  79 +++-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  78 +++-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  79 ++--
 drivers/net/bnxt/tf_core/tf_identifier.c      | 142 +++++-
 drivers/net/bnxt/tf_core/tf_identifier.h      |  25 +-
 drivers/net/bnxt/tf_core/tf_msg.c             | 268 +++++++++--
 drivers/net/bnxt/tf_core/tf_msg.h             |  59 +++
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 434 ++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h          |  72 ++-
 drivers/net/bnxt/tf_core/tf_session.c         | 256 ++++++++++-
 drivers/net/bnxt/tf_core/tf_session.h         | 118 ++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   4 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  30 +-
 drivers/net/bnxt/tf_core/tf_tbl_type.h        |  95 ++--
 drivers/net/bnxt/tf_core/tf_tcam.h            |  14 +-
 22 files changed, 2120 insertions(+), 399 deletions(-)

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index a50cb261d..1f7df9d06 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,8 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_session.c',
+	'tf_core/tf_device.c',
 	'tf_core/tf_device_p4.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 7a3c325a6..2c02e29e7 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,6 +14,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index c0c1e754e..11e8892f4 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -12,6 +12,11 @@
 
 #ifndef _CFA_RESOURCE_TYPES_H_
 #define _CFA_RESOURCE_TYPES_H_
+/*
+ * This is the constant used to define invalid CFA
+ * resource types across all devices.
+ */
+#define CFA_RESOURCE_TYPE_INVALID 65535
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
@@ -58,209 +63,205 @@
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index 2aa4b8640..ec3bca835 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -51,4 +51,28 @@
 		} \
 	} while (0)
 
+
+#define TF_CHECK_PARMS1(parms) do {					\
+		if ((parms) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS2(parms1, parms2) do {				\
+		if ((parms1) == NULL || (parms2) == NULL) {		\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
+		if ((parms1) == NULL ||					\
+		    (parms2) == NULL ||					\
+		    (parms3) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
 #endif /* _TF_COMMON_H_ */
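
The TF_CHECK_PARMSx additions above are NULL-argument guards intended for the top of tf_core entry points. A minimal usage sketch, assuming a hypothetical entry point and parameter struct (illustrative only, not part of this patch):

/* Hypothetical caller; tf_example_get() and struct tf_example_parms are
 * not part of this patch, they only illustrate the intended macro usage.
 */
int
tf_example_get(struct tf *tfp, struct tf_example_parms *parms)
{
	/* Logs "Invalid Argument(s)" and returns -EINVAL on any NULL */
	TF_CHECK_PARMS2(tfp, parms);

	/* ... normal processing once arguments are known valid ... */
	return 0;
}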
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index a8236aec9..81a88e211 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -85,7 +85,7 @@ tf_create_em_pool(struct tf_session *session,
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
 
 	if (rc != 0) {
 		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
@@ -231,7 +231,6 @@ tf_open_session(struct tf                    *tfp,
 		   TF_SESSION_NAME_MAX);
 
 	/* Initialize Session */
-	session->device_type = parms->device_type;
 	session->dev = NULL;
 	tf_rm_init(tfp);
 
@@ -276,7 +275,9 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		rc = tf_create_em_pool(session,
+				       (enum tf_dir)dir,
+				       TF_SESSION_EM_POOL_SIZE);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "EM Pool initialization failed\n");
@@ -313,6 +314,64 @@ tf_open_session(struct tf                    *tfp,
 	return -EINVAL;
 }
 
+int
+tf_open_session_new(struct tf *tfp,
+		    struct tf_open_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_open_session_parms oparms;
+
+	TF_CHECK_PARMS(tfp, parms);
+
+	/* Filter out any non-supported device types on the Core
+	 * side. It is assumed that the Firmware supports the device
+	 * if the firmware session open succeeds.
+	 */
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
+		return -ENOTSUP;
+	}
+
+	/* Verify control channel and build the beginning of session_id */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	oparms.open_cfg = parms;
+
+	rc = tf_session_open_session(tfp, &oparms);
+	/* Logging handled by tf_session_open_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Session created, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return 0;
+}
+
 int
 tf_attach_session(struct tf *tfp __rte_unused,
 		  struct tf_attach_session_parms *parms __rte_unused)
@@ -341,6 +400,69 @@ tf_attach_session(struct tf *tfp __rte_unused,
 	return -1;
 }
 
+int
+tf_attach_session_new(struct tf *tfp,
+		      struct tf_attach_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_attach_session_parms aparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Verify control channel */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Verify 'attach' channel */
+	rc = sscanf(parms->attach_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device attach_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Prepare the session_id return value using the ctrl_chan_name
+	 * device values, as these become the session id.
+	 */
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	aparms.attach_cfg = parms;
+	rc = tf_session_attach_session(tfp,
+				       &aparms);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Attached to session, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return rc;
+}
+
 int
 tf_close_session(struct tf *tfp)
 {
@@ -380,7 +502,7 @@ tf_close_session(struct tf *tfp)
 	if (tfs->ref_count == 0) {
 		/* Free EM pool */
 		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, dir);
+			tf_free_em_pool(tfs, (enum tf_dir)dir);
 
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
@@ -401,6 +523,39 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+int
+tf_close_session_new(struct tf *tfp)
+{
+	int rc;
+	struct tf_session_close_session_parms cparms = { 0 };
+	union tf_session_id session_id = { 0 };
+	uint8_t ref_count;
+
+	TF_CHECK_PARMS1(tfp);
+
+	cparms.ref_count = &ref_count;
+	cparms.session_id = &session_id;
+	rc = tf_session_close_session(tfp,
+				      &cparms);
+	/* Logging handled by tf_session_close_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    cparms.session_id->id,
+		    *cparms.ref_count);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    cparms.session_id->internal.domain,
+		    cparms.session_id->internal.bus,
+		    cparms.session_id->internal.device,
+		    cparms.session_id->internal.fw_session_id);
+
+	return rc;
+}
+
 /** insert EM hash entry API
  *
  *    returns:
@@ -539,10 +694,67 @@ int tf_alloc_identifier(struct tf *tfp,
 	return 0;
 }
 
-/** free identifier resource
- *
- * Returns success or failure code.
- */
+int
+tf_alloc_identifier_new(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_alloc_parms aparms;
+	uint16_t id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_ident_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.ident_type = parms->ident_type;
+	aparms.id = &id;
+	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->id = id;
+
+	return 0;
+}
+
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms)
 {
@@ -618,6 +830,64 @@ int tf_free_identifier(struct tf *tfp,
 	return 0;
 }
 
+int
+tf_free_identifier_new(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_ident_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.ident_type = parms->ident_type;
+	fparms.id = parms->id;
+	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
 int
 tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 96a1a794f..74ed24e5a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -380,7 +380,7 @@ struct tf_session_resources {
 	 * The number of identifier resources requested for the session.
 	 * The index used is tf_identifier_type.
 	 */
-	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
 	/** [in] Requested Index Table resource counts
 	 *
 	 * The number of index table resources requested for the session.
@@ -480,6 +480,9 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+int tf_open_session_new(struct tf *tfp,
+			struct tf_open_session_parms *parms);
+
 struct tf_attach_session_parms {
 	/** [in] ctrl_chan_name
 	 *
@@ -542,6 +545,8 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
+int tf_attach_session_new(struct tf *tfp,
+			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -551,6 +556,7 @@ int tf_attach_session(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
+int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -602,6 +608,8 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
+int tf_alloc_identifier_new(struct tf *tfp,
+			    struct tf_alloc_identifier_parms *parms);
 
 /** free identifier resource
  *
@@ -613,6 +621,8 @@ int tf_alloc_identifier(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
+int tf_free_identifier_new(struct tf *tfp,
+			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
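
For orientation, a minimal caller sketch of the new *_new session entry points declared above (illustrative only, not from this series; it assumes ctrl_chan_name is a TF_SESSION_NAME_MAX character array holding the PCI address, and error handling is elided):

/* Illustrative caller of the new session APIs; not part of this patch. */
struct tf tfp = { 0 };
struct tf_open_session_parms oparms = { 0 };

/* The DBDF string is parsed as "%x:%x:%x.%d" to seed the session id */
strncpy(oparms.ctrl_chan_name, "0000:06:02.0", TF_SESSION_NAME_MAX);
oparms.device_type = TF_DEVICE_TYPE_WH;	/* only Whitney+ is accepted */

if (tf_open_session_new(&tfp, &oparms) == 0) {
	/* ... allocate identifiers, tables, TCAM entries ... */
	tf_close_session_new(&tfp);
}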
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 3b368313e..4c46cadc6 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,45 +6,169 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
-#include "bnxt.h"
 
 struct tf;
 
+/* Forward declarations */
+static int dev_unbind_p4(struct tf *tfp);
+
 /**
- * Device specific bind function
+ * Device specific bind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] shadow_copy
+ *   Flag controlling shadow copy DB creation
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp __rte_unused,
-	    struct tf_session_resources *resources __rte_unused,
-	    struct tf_dev_info *dev_info)
+dev_bind_p4(struct tf *tfp,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
+	int rc;
+	int frc;
+	struct tf_ident_cfg_parms ident_cfg;
+	struct tf_tbl_cfg_parms tbl_cfg;
+	struct tf_tcam_cfg_parms tcam_cfg;
+
 	/* Initialize the modules */
 
-	dev_info->ops = &tf_dev_ops_p4;
+	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
+	ident_cfg.cfg = tf_ident_p4;
+	ident_cfg.shadow_copy = shadow_copy;
+	ident_cfg.resources = resources;
+	rc = tf_ident_bind(tfp, &ident_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier initialization failure\n");
+		goto fail;
+	}
+
+	tbl_cfg.num_elements = TF_TBL_TYPE_MAX;
+	tbl_cfg.cfg = tf_tbl_p4;
+	tbl_cfg.shadow_copy = shadow_copy;
+	tbl_cfg.resources = resources;
+	rc = tf_tbl_bind(tfp, &tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Table initialization failure\n");
+		goto fail;
+	}
+
+	tcam_cfg.num_elements = TF_TCAM_TBL_TYPE_MAX;
+	tcam_cfg.cfg = tf_tcam_p4;
+	tcam_cfg.shadow_copy = shadow_copy;
+	tcam_cfg.resources = resources;
+	rc = tf_tcam_bind(tfp, &tcam_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM initialization failure\n");
+		goto fail;
+	}
+
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	dev_handle->ops = &tf_dev_ops_p4;
+
 	return 0;
+
+ fail:
+	/* Cleanup of already created modules */
+	frc = dev_unbind_p4(tfp);
+	if (frc)
+		return frc;
+
+	return rc;
+}
+
+/**
+ * Device specific unbind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+dev_unbind_p4(struct tf *tfp)
+{
+	int rc = 0;
+	bool fail = false;
+
+	/* Unbind all the support modules. As this is only done on
+	 * close, we only report errors; everything has to be cleaned
+	 * up regardless.
+	 */
+	rc = tf_ident_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Identifier\n");
+		fail = true;
+	}
+
+	rc = tf_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Table Type\n");
+		fail = true;
+	}
+
+	rc = tf_tcam_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, TCAM\n");
+		fail = true;
+	}
+
+	if (fail)
+		return -1;
+
+	return rc;
 }
 
 int
 dev_bind(struct tf *tfp __rte_unused,
 	 enum tf_device_type type,
+	 bool shadow_copy,
 	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_info)
+	 struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
 		return dev_bind_p4(tfp,
+				   shadow_copy,
 				   resources,
-				   dev_info);
+				   dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
-			    "Device type not supported\n");
-		return -ENOTSUP;
+			    "No such device\n");
+		return -ENODEV;
 	}
 }
 
 int
-dev_unbind(struct tf *tfp __rte_unused,
-	   struct tf_dev_info *dev_handle __rte_unused)
+dev_unbind(struct tf *tfp,
+	   struct tf_dev_info *dev_handle)
 {
-	return 0;
+	switch (dev_handle->type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_unbind_p4(tfp);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "No such device\n");
+		return -ENODEV;
+	}
 }
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 8b63ff178..6aeb6fedb 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -27,6 +27,7 @@ struct tf_session;
  * TF device information
  */
 struct tf_dev_info {
+	enum tf_device_type type;
 	const struct tf_dev_ops *ops;
 };
 
@@ -56,10 +57,12 @@ struct tf_dev_info {
  *
  * Returns
  *   - (0) if successful.
- *   - (-EINVAL) on failure.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_bind(struct tf *tfp,
 	     enum tf_device_type type,
+	     bool shadow_copy,
 	     struct tf_session_resources *resources,
 	     struct tf_dev_info *dev_handle);
 
@@ -71,6 +74,11 @@ int dev_bind(struct tf *tfp,
  *
  * [in] dev_handle
  *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_unbind(struct tf *tfp,
 	       struct tf_dev_info *dev_handle);
@@ -84,6 +92,44 @@ int dev_unbind(struct tf *tfp,
  * different device variants.
  */
 struct tf_dev_ops {
+	/**
+	 * Retrieves the MAX number of resource types that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] max_types
+	 *   Pointer to MAX number of types the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_max_types)(struct tf *tfp,
+				    uint16_t *max_types);
+
+	/**
+	 * Retrieves the WC TCAM slice information that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] slice_size
+	 *   Pointer to slice size the device supports
+	 *
+	 * [out] num_slices_per_row
+	 *   Pointer to number of slices per row the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
+					 uint16_t *slice_size,
+					 uint16_t *num_slices_per_row);
+
 	/**
 	 * Allocation of an identifier element.
 	 *
@@ -134,14 +180,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation parameters
+	 *   Pointer to table allocation parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
-				     struct tf_tbl_type_alloc_parms *parms);
+	int (*tf_dev_alloc_tbl)(struct tf *tfp,
+				struct tf_tbl_alloc_parms *parms);
 
 	/**
 	 * Free of a table type element.
@@ -153,14 +199,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type free parameters
+	 *   Pointer to table free parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_free_tbl_type)(struct tf *tfp,
-				    struct tf_tbl_type_free_parms *parms);
+	int (*tf_dev_free_tbl)(struct tf *tfp,
+			       struct tf_tbl_free_parms *parms);
 
 	/**
 	 * Searches for the specified table type element in a shadow DB.
@@ -175,15 +221,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation and search parameters
+	 *   Pointer to table allocation and search parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_search_tbl_type)
-			(struct tf *tfp,
-			struct tf_tbl_type_alloc_search_parms *parms);
+	int (*tf_dev_alloc_search_tbl)(struct tf *tfp,
+				       struct tf_tbl_alloc_search_parms *parms);
 
 	/**
 	 * Sets the specified table type element.
@@ -195,14 +240,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type set parameters
+	 *   Pointer to table set parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_set_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_set_parms *parms);
+	int (*tf_dev_set_tbl)(struct tf *tfp,
+			      struct tf_tbl_set_parms *parms);
 
 	/**
 	 * Retrieves the specified table type element.
@@ -214,14 +259,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type get parameters
+	 *   Pointer to table get parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_get_parms *parms);
+	int (*tf_dev_get_tbl)(struct tf *tfp,
+			       struct tf_tbl_get_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c3c4d1e05..c235976fe 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -3,19 +3,87 @@
  * All rights reserved.
  */
 
+#include <rte_common.h>
+#include <cfa_resource_types.h>
+
 #include "tf_device.h"
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
 
+/**
+ * Device specific function that retrieves the MAX number of HCAPI
+ * types the device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] max_types
+ *   Pointer to the MAX number of HCAPI types supported
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
+			uint16_t *max_types)
+{
+	if (max_types == NULL)
+		return -EINVAL;
+
+	*max_types = CFA_RESOURCE_TYPE_P4_LAST + 1;
+
+	return 0;
+}
+
+/**
+ * Device specific function that retrieves the WC TCAM slices the
+ * device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] slice_size
+ *   Pointer to the WC TCAM slice size
+ *
+ * [out] num_slices_per_row
+ *   Pointer to the WC TCAM row slice configuration
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
+			     uint16_t *slice_size,
+			     uint16_t *num_slices_per_row)
+{
+#define CFA_P4_WC_TCAM_SLICE_SIZE       12
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+
+	if (slice_size == NULL || num_slices_per_row == NULL)
+		return -EINVAL;
+
+	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
+	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+
+	return 0;
+}
+
+/**
+ * Truflow P4 device specific functions
+ */
 const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
-	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
-	.tf_dev_free_tbl_type = tf_tbl_type_free,
-	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
-	.tf_dev_set_tbl_type = tf_tbl_type_set,
-	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
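
The ops table above is what keeps callers device-agnostic; a short sketch of how a consumer is expected to dispatch through it (illustrative only, loosely mirroring the RM usage later in this series):

/* Illustrative dispatch through tf_dev_ops; not part of this patch. */
uint16_t max_types;
int rc;

if (dev->ops->tf_dev_get_max_types == NULL)
	return -EOPNOTSUPP;	/* device does not implement the query */

rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
if (rc)
	return rc;
/* max_types then bounds the RESC_QCAPS request sent to firmware */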
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 84d90e3a7..5cd02b298 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,11 +12,12 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
@@ -24,41 +25,57 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
-	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
-	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+	/* CFA_RESOURCE_TYPE_P4_UPAR */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EPOC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_METADATA */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_CT_STATE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_PROF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_LAG */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EM_FBK */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_WC_FKB */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EXT */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 726d0b406..e89f9768b 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -6,42 +6,172 @@
 #include <rte_common.h>
 
 #include "tf_identifier.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Identifier DBs.
  */
-/* static void *ident_db[TF_DIR_MAX]; */
+static void *ident_db[TF_DIR_MAX];
 
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 int
-tf_ident_bind(struct tf *tfp __rte_unused,
-	      struct tf_ident_cfg *parms __rte_unused)
+tf_ident_bind(struct tf *tfp,
+	      struct tf_ident_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
+		db_cfg.rm_db = ident_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Identifier DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_ident_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; done silently to
+	 * allow for creation cleanup.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = ident_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		ident_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_ident_alloc(struct tf *tfp __rte_unused,
-	       struct tf_ident_alloc_parms *parms __rte_unused)
+	       struct tf_ident_alloc_parms *parms)
 {
+	int rc;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = (uint32_t *)&parms->id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type);
+		return rc;
+	}
+
 	return 0;
 }
 
 int
 tf_ident_free(struct tf *tfp __rte_unused,
-	      struct tf_ident_free_parms *parms __rte_unused)
+	      struct tf_ident_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = parms->id;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = ident_db[parms->dir];
+	fparms.db_index = parms->ident_type;
+	fparms.index = parms->id;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
 	return 0;
 }
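
A minimal allocation/free round trip against the reworked identifier module (illustrative only; TF_IDENT_TYPE_PROF_FUNC is assumed to be a valid tf_identifier_type value and the DBs are assumed to have been bound via tf_ident_bind()):

/* Illustrative only; not part of this patch. */
struct tf_ident_alloc_parms aparms = { 0 };
struct tf_ident_free_parms fparms = { 0 };
uint16_t id;

aparms.dir = TF_DIR_RX;
aparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;	/* assumed identifier type */
aparms.id = &id;				/* id is now returned by pointer */
if (tf_ident_alloc(tfp, &aparms) == 0) {
	fparms.dir = TF_DIR_RX;
	fparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	fparms.id = id;
	tf_ident_free(tfp, &fparms);
}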
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index b77c91b9d..1c5319b5e 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -12,21 +12,28 @@
  * The Identifier module provides processing of Identifiers.
  */
 
-struct tf_ident_cfg {
+struct tf_ident_cfg_parms {
 	/**
-	 * Number of identifier types in each of the configuration
-	 * arrays
+	 * [in] Number of identifier types in each of the
+	 * configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
-	 * TCAM configuration array
+	 * [in] Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Boolean controlling the request shadow copy.
 	 */
-	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+	bool shadow_copy;
+	/**
+	 * [in] Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Identifier allcoation parameter definition
+ * Identifier allocation parameter definition
  */
 struct tf_ident_alloc_parms {
 	/**
@@ -40,7 +47,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [out] Identifier allocated
 	 */
-	uint16_t id;
+	uint16_t *id;
 };
 
 /**
@@ -88,7 +95,7 @@ struct tf_ident_free_parms {
  *   - (-EINVAL) on failure.
  */
 int tf_ident_bind(struct tf *tfp,
-		  struct tf_ident_cfg *parms);
+		  struct tf_ident_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c755c8555..e08a96f23 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -6,15 +6,13 @@
 #include <inttypes.h>
 #include <stdbool.h>
 #include <stdlib.h>
-
-#include "bnxt.h"
-#include "tf_core.h"
-#include "tf_session.h"
-#include "tfp.h"
+#include <string.h>
 
 #include "tf_msg_common.h"
 #include "tf_msg.h"
-#include "hsi_struct_def_dpdk.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tfp.h"
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
@@ -140,6 +138,51 @@ tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
 	return rc;
 }
 
+/**
+ * Allocates a DMA buffer that can be used for message transfer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ *
+ * [in] size
+ *   Requested size of the buffer in bytes
+ *
+ * Returns:
+ *    0      - Success
+ *   -ENOMEM - Unable to allocate buffer, no memory
+ */
+static int
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
+{
+	struct tfp_calloc_parms alloc_parms;
+	int rc;
+
+	/* Allocate a DMA-able buffer for the message transfer */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = size;
+	alloc_parms.alignment = 4096;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc)
+		return -ENOMEM;
+
+	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
+	buf->va_addr = alloc_parms.mem_va;
+
+	return 0;
+}
+
+/**
+ * Frees a previously allocated DMA buffer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ */
+static void
+tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
+{
+	tfp_free(buf->va_addr);
+}
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -154,7 +197,7 @@ tf_msg_session_open(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
+	tfp_memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
 
 	parms.tf_type = HWRM_TF_SESSION_OPEN;
 	parms.req_data = (uint32_t *)&req;
@@ -870,6 +913,180 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+int
+tf_msg_session_resc_qcaps(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *query,
+			  enum tf_rm_resc_resv_strategy *resv_strategy)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_qcaps_input req = { 0 };
+	struct hwrm_tf_session_resc_qcaps_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf qcaps_buf = { 0 };
+	struct tf_rm_resc_req_entry *data;
+	int dma_size;
+
+	if (size == 0 || query == NULL || resv_strategy == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Resource QCAPS parameter error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffer */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&qcaps_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.qcaps_size = size;
+	req.qcaps_addr = qcaps_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: QCAPS message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].min = tfp_le_to_cpu_16(data[i].min);
+		query[i].max = tfp_le_to_cpu_16(data[i].max);
+	}
+
+	*resv_strategy = resp.flags &
+	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
+
+	tf_msg_free_dma_buf(&qcaps_buf);
+
+	return rc;
+}
+
+int
+tf_msg_session_resc_alloc(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *request,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_alloc_input req = { 0 };
+	struct hwrm_tf_session_resc_alloc_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf req_buf = { 0 };
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_req_entry *req_data;
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&req_buf, dma_size);
+	if (rc)
+		return rc;
+
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.req_size = size;
+
+	req_data = (struct tf_rm_resc_req_entry *)req_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		req_data[i].type = tfp_cpu_to_le_32(request[i].type);
+		req_data[i].min = tfp_cpu_to_le_16(request[i].min);
+		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
+	}
+
+	req.req_addr = req_buf.pa_addr;
+	req.resp_addr = resv_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
+		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
+		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+	}
+
+	tf_msg_free_dma_buf(&req_buf);
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1034,7 +1251,9 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
@@ -1216,26 +1435,6 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-static int
-tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
-{
-	struct tfp_calloc_parms alloc_parms;
-	int rc;
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = size;
-	alloc_parms.alignment = 4096;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc)
-		return -ENOMEM;
-
-	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
-	buf->va_addr = alloc_parms.mem_va;
-
-	return 0;
-}
-
 int
 tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 			  struct tf_get_bulk_tbl_entry_parms *params)
@@ -1323,12 +1522,14 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		if (rc)
 			goto cleanup;
 		data = buf.va_addr;
-		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
 	}
 
-	memcpy(&data[0], parms->key, key_bytes);
-	memcpy(&data[key_bytes], parms->mask, key_bytes);
-	memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, key_bytes);
+	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
+	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1343,8 +1544,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		goto cleanup;
 
 cleanup:
-	if (buf.va_addr != NULL)
-		tfp_free(buf.va_addr);
+	tf_msg_free_dma_buf(&buf);
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8d050c402..06f52ef00 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -6,8 +6,12 @@
 #ifndef _TF_MSG_H_
 #define _TF_MSG_H_
 
+#include <rte_common.h>
+#include <hsi_struct_def_dpdk.h>
+
 #include "tf_tbl.h"
 #include "tf_rm.h"
+#include "tf_rm_new.h"
 
 struct tf;
 
@@ -121,6 +125,61 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the query. Should be set to the max
+ *   elements for the device type
+ *
+ * [out] query
+ *   Pointer to an array of query elements
+ *
+ * [out] resv_strategy
+ *   Pointer to the reservation strategy
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_qcaps(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *query,
+			      enum tf_rm_resc_resv_strategy *resv_strategy);
+
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the req and resv arrays
+ *
+ * [in] req
+ *   Pointer to an array of request elements
+ *
+ * [out] resv
+ *   Pointer to an array of reserved elements
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_alloc(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *request,
+			      struct tf_rm_resc_entry *resv);
+
 /**
  * Sends EM internal insert request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 51bb9ba3a..7cadb231f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -3,20 +3,18 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
 #include <rte_common.h>
 
-#include "tf_rm_new.h"
+#include <cfa_resource_types.h>
 
-/**
- * Resource query single entry. Used when accessing HCAPI RM on the
- * firmware.
- */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
-};
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_msg.h"
 
 /**
  * Generic RM Element data type that an RM DB is build upon.
@@ -27,7 +25,7 @@ struct tf_rm_element {
 	 * hcapi_type can be ignored. If Null then the element is not
 	 * valid for the device.
 	 */
-	enum tf_rm_elem_cfg_type type;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/**
 	 * HCAPI RM Type for the element.
@@ -50,53 +48,435 @@ struct tf_rm_element {
 /**
  * TF RM DB definition
  */
-struct tf_rm_db {
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
+
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
+
 	/**
 	 * The DB consists of an array of elements
 	 */
 	struct tf_rm_element *db;
 };
 
+
+/**
+ * Resource Manager base index adjustment definitions.
+ */
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
+
+/**
+ * Adjust an index according to the allocation information.
+ *
+ * All resources are controlled in a 0 based pool. Some resources, by
+ * design, are not 0 based, e.g. Full Action Records (SRAM), thus they
+ * need to be adjusted before they are handed out.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
+{
+	int rc = 0;
+	uint32_t base_index;
+
+	base_index = db[db_index].alloc.entry.start;
+
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return rc;
+}
+
 int
-tf_rm_create_db(struct tf *tfp __rte_unused,
-		struct tf_rm_create_db_parms *parms __rte_unused)
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
+
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against db requirements */
+
+	/* Alloc request, alignment already set */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0; i < parms->num_elements; i++) {
+		/* Skip any non HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			req[i].type = parms->cfg[i].hcapi_type;
+			/* Check that we can get the full amount allocated */
+			if (parms->alloc_num[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[i].min = parms->alloc_num[i];
+				req[i].max = parms->alloc_num[i];
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_num[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		} else {
+			/* Skip the element */
+			req[i].type = CFA_RESOURCE_TYPE_INVALID;
+		}
+	}
+
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       parms->num_elements,
+				       req,
+				       resv);
+	if (rc)
+		return rc;
+
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db = (void *)cparms.mem_va;
+
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
+
+	db = rm_db->db;
+	for (i = 0; i < parms->num_elements; i++) {
+		/* If allocation failed for a single entry the DB
+		 * creation is considered a failure.
+		 */
+		if (parms->alloc_num[i] != resv[i].stride) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    i);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_num[i],
+				    resv[i].stride);
+			goto fail;
+		}
+
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+		db[i].alloc.entry.start = resv[i].start;
+		db[i].alloc.entry.stride = resv[i].stride;
+
+		/* Create pool */
+		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
+			     sizeof(struct bitalloc));
+		/* Alloc request, alignment already set */
+		cparms.nitems = pool_size;
+		cparms.size = sizeof(struct bitalloc);
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			return rc;
+		db[i].pool = (struct bitalloc *)cparms.mem_va;
+	}
+
+	rm_db->num_entries = i;
+	rm_db->dir = parms->dir;
+	parms->rm_db = (void *)rm_db;
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
 	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
 
 int
 tf_rm_free_db(struct tf *tfp __rte_unused,
-	      struct tf_rm_free_db_parms *parms __rte_unused)
+	      struct tf_rm_free_db_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int i;
+	struct tf_rm_new_db *rm_db;
+
+	/* Traverse the DB and clear each pool.
+	 * NOTE:
+	 *   Firmware is not cleared. It will be cleared on close only.
+	 */
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db[i].pool);
+
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
 
 int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int id;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -ENOMEM;
+	}
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				parms->index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -1;
+	}
+
+	return rc;
 }
 
 int
-tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; direction matters and that is not available here */
+	if (rc)
+		return rc;
+
+	return rc;
 }
 
 int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	parms->info = &rm_db->db[parms->db_index].alloc;
+
+	return rc;
 }
 
 int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 72dba0984..6d8234ddc 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
 #include "tf_core.h"
 #include "bitalloc.h"
@@ -32,13 +32,16 @@ struct tf;
  * MAX pool size of the Chip needs to be added to the tf_rm_elem_info
  * structure and several new APIs would need to be added to allow for
  * growth of a single TF resource type.
+ *
+ * The access functions do not check for NULL pointers as it's a
+ * support module, not called directly.
  */
 
 /**
  * Resource reservation single entry result. Used when accessing HCAPI
  * RM on the firmware.
  */
-struct tf_rm_entry {
+struct tf_rm_new_entry {
 	/** Starting index of the allocated resource */
 	uint16_t start;
 	/** Number of allocated elements */
@@ -52,12 +55,32 @@ struct tf_rm_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
-	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
-	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
 	TF_RM_TYPE_MAX
 };
 
+/**
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
+ */
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
+};
+
 /**
  * RM Element configuration structure, used by the Device to configure
  * how an individual TF type is configured in regard to the HCAPI RM
@@ -68,7 +91,7 @@ struct tf_rm_element_cfg {
 	 * RM Element config controls how the DB for that element is
 	 * processed.
 	 */
-	enum tf_rm_elem_cfg_type cfg;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/* If a HCAPI to TF type conversion is required then TF type
 	 * can be added here.
@@ -92,7 +115,7 @@ struct tf_rm_alloc_info {
 	 * In case of dynamic allocation support this would have
 	 * to be changed to linked list of tf_rm_entry instead.
 	 */
-	struct tf_rm_entry entry;
+	struct tf_rm_new_entry entry;
 };
 
 /**
@@ -104,17 +127,21 @@ struct tf_rm_create_db_parms {
 	 */
 	enum tf_dir dir;
 	/**
-	 * [in] Number of elements in the parameter structure
+	 * [in] Number of elements.
 	 */
 	uint16_t num_elements;
 	/**
-	 * [in] Parameter structure
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Allocation number array. Array size is num_elements.
 	 */
-	struct tf_rm_element_cfg *parms;
+	uint16_t *alloc_num;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -128,7 +155,7 @@ struct tf_rm_free_db_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -138,7 +165,7 @@ struct tf_rm_allocate_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -159,7 +186,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -168,7 +195,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t index;
+	uint16_t index;
 };
 
 /**
@@ -178,7 +205,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -191,7 +218,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] Pointer to flag that indicates the state of the query
 	 */
-	uint8_t *allocated;
+	int *allocated;
 };
 
 /**
@@ -201,7 +228,7 @@ struct tf_rm_get_alloc_info_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -221,7 +248,7 @@ struct tf_rm_get_hcapi_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -306,6 +333,7 @@ int tf_rm_free_db(struct tf *tfp,
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
 int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
@@ -317,7 +345,7 @@ int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
  *
  * Returns
  *   - (0) if successful.
- *   - (-EpINVAL) on failure.
+ *   - (-EINVAL) on failure.
  */
 int tf_rm_free(struct tf_rm_free_parms *parms);
 
@@ -365,4 +393,4 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index c74994546..1917f8100 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -3,29 +3,269 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
+#include <rte_common.h>
+
+#include "tf_session.h"
+#include "tf_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session *session = NULL;
+	struct tfp_calloc_parms cparms;
+	uint8_t fw_session_id;
+	union tf_session_id *session_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Open FW session and get a new session_id */
+	rc = tf_msg_session_open(tfp,
+				 parms->open_cfg->ctrl_chan_name,
+				 &fw_session_id);
+	if (rc) {
+		/* Log error */
+		if (rc == -EEXIST)
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
+		else
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
+
+		parms->open_cfg->session_id.id = TF_FW_SESSION_ID_INVALID;
+		return rc;
+	}
+
+	/* Allocate session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_info);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session = (struct tf_session_info *)cparms.mem_va;
+
+	/* Allocate core data for the session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session->core_data = cparms.mem_va;
+
+	/* Initialize Session and Device */
+	session = (struct tf_session *)tfp->session->core_data;
+	session->ver.major = 0;
+	session->ver.minor = 0;
+	session->ver.update = 0;
+
+	session_id = &parms->open_cfg->session_id;
+	session->session_id.internal.domain = session_id->internal.domain;
+	session->session_id.internal.bus = session_id->internal.bus;
+	session->session_id.internal.device = session_id->internal.device;
+	session->session_id.internal.fw_session_id = fw_session_id;
+	/* Return the allocated fw session id */
+	session_id->internal.fw_session_id = fw_session_id;
+
+	session->shadow_copy = parms->open_cfg->shadow_copy;
+
+	tfp_memcpy(session->ctrl_chan_name,
+		   parms->open_cfg->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	rc = dev_bind(tfp,
+		      parms->open_cfg->device_type,
+		      session->shadow_copy,
+		      &parms->open_cfg->resources,
+		      session->dev);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	/* Query for Session Config
+	 */
+	rc = tf_msg_session_qcfg(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup_close;
+	}
+
+	session->ref_count++;
+
+	return 0;
+
+ cleanup:
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
+	return rc;
+
+ cleanup_close:
+	tf_close_session(tfp);
+	return -EINVAL;
+}
+
+int
+tf_session_attach_session(struct tf *tfp __rte_unused,
+			  struct tf_session_attach_session_parms *parms __rte_unused)
+{
+	int rc = -EOPNOTSUPP;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	TFP_DRV_LOG(ERR,
+		    "Attach not yet supported, rc:%s\n",
+		    strerror(-rc));
+	return rc;
+}
+
+int
+tf_session_close_session(struct tf *tfp,
+			 struct tf_session_close_session_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs = NULL;
+	struct tf_dev_info *tfd;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (tfs->session_id.id == TF_SESSION_ID_INVALID) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid session id, unable to close, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Record the session we're closing so the caller knows the
+	 * details.
+	 */
+	*parms->session_id = tfs->session_id;
+
+	rc = tf_session_get_device(tfs, &tfd);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* In case we're attached, only the session client gets closed */
+	rc = tf_msg_session_close(tfp);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "FW Session close failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	tfs->ref_count--;
+
+	/* Final cleanup as we're last user of the session */
+	if (tfs->ref_count == 0) {
+		/* Unbind the device */
+		rc = dev_unbind(tfp, tfd);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "Device unbind failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		tfp_free(tfp->session->core_data);
+		tfp_free(tfp->session);
+		tfp->session = NULL;
+	}
+
+	return 0;
+}
+
 int
 tf_session_get_session(struct tf *tfp,
-		       struct tf_session *tfs)
+		       struct tf_session **tfs)
 {
+	int rc;
+
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		TFP_DRV_LOG(ERR, "Session not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	*tfs = (struct tf_session *)(tfp->session->core_data);
 
 	return 0;
 }
 
 int
 tf_session_get_device(struct tf_session *tfs,
-		      struct tf_device *tfd)
+		      struct tf_dev_info **tfd)
 {
+	int rc;
+
 	if (tfs->dev == NULL) {
-		TFP_DRV_LOG(ERR, "Device not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Device not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	*tfd = tfs->dev;
+
+	return 0;
+}
+
+int
+tf_session_get_fw_session_id(struct tf *tfp,
+			     uint8_t *fw_session_id)
+{
+	int rc;
+	struct tf_session *tfs = NULL;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
-	tfd = tfs->dev;
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*fw_session_id = tfs->session_id.internal.fw_session_id;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index b1cc7a4a7..92792518b 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -63,12 +63,7 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Device type, provided by tf_open_session().
-	 */
-	enum tf_device_type device_type;
-
-	/** Session ID, allocated by FW on tf_open_session().
-	 */
+	/** Session ID, allocated by FW on tf_open_session() */
 	union tf_session_id session_id;
 
 	/**
@@ -101,7 +96,7 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
-	/** Device */
+	/** Device handle */
 	struct tf_dev_info *dev;
 
 	/** Session HW and SRAM resources */
@@ -323,13 +318,97 @@ struct tf_session {
 	struct stack em_pool[TF_DIR_MAX];
 };
 
+/**
+ * Session open parameter definition
+ */
+struct tf_session_open_session_parms {
+	/**
+	 * [in] Pointer to the TF open session configuration
+	 */
+	struct tf_open_session_parms *open_cfg;
+};
+
+/**
+ * Session attach parameter definition
+ */
+struct tf_session_attach_session_parms {
+	/**
+	 * [in] Pointer to the TF attach session configuration
+	 */
+	struct tf_attach_session_parms *attach_cfg;
+};
+
+/**
+ * Session close parameter definition
+ */
+struct tf_session_close_session_parms {
+	uint8_t *ref_count;
+	union tf_session_id *session_id;
+};
+
 /**
  * @page session Session Management
  *
+ * @ref tf_session_open_session
+ *
+ * @ref tf_session_attach_session
+ *
+ * @ref tf_session_close_session
+ *
  * @ref tf_session_get_session
  *
  * @ref tf_session_get_device
+ *
+ * @ref tf_session_get_fw_session_id
+ */
+
+/**
+ * Creates a host session with a corresponding firmware session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
+int tf_session_open_session(struct tf *tfp,
+			    struct tf_session_open_session_parms *parms);
+
+/**
+ * Attaches a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session attach parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_attach_session(struct tf *tfp,
+			      struct tf_session_attach_session_parms *parms);
+
+/**
+ * Closes a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in/out] parms
+ *   Pointer to the session close parameters.
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_close_session(struct tf *tfp,
+			     struct tf_session_close_session_parms *parms);
 
 /**
  * Looks up the private session information from the TF session info.
@@ -338,14 +417,14 @@ struct tf_session {
  *   Pointer to TF handle
  *
  * [out] tfs
- *   Pointer to the session
+ *   Pointer to pointer to the session
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_session(struct tf *tfp,
-			   struct tf_session *tfs);
+			   struct tf_session **tfs);
 
 /**
  * Looks up the device information from the TF Session.
@@ -354,13 +433,30 @@ int tf_session_get_session(struct tf *tfp,
  *   Pointer to TF handle
  *
  * [out] tfd
- *   Pointer to the device
+ *   Pointer to pointer to the device
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_device(struct tf_session *tfs,
-			  struct tf_dev_info *tfd);
+			  struct tf_dev_info **tfd);
+
+/**
+ * Looks up the FW session id of the firmware connection for the
+ * requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] session_id
+ *   Pointer to the session_id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_fw_session_id(struct tf *tfp,
+				 uint8_t *fw_session_id);
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 7a5443678..a8bb0edab 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,8 +7,12 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+
+#include "tf_core.h"
 #include "stack.h"
 
+struct tf_session;
+
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
 	PT_LVL_1,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index a57a5ddf2..b79706f97 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -10,12 +10,12 @@
 struct tf;
 
 /**
- * Table Type DBs.
+ * Table DBs.
  */
 /* static void *tbl_db[TF_DIR_MAX]; */
 
 /**
- * Table Type Shadow DBs
+ * Table Shadow DBs
  */
 /* static void *shadow_tbl_db[TF_DIR_MAX]; */
 
@@ -30,49 +30,49 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_type_bind(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp __rte_unused,
+	    struct tf_tbl_cfg_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc(struct tf *tfp __rte_unused,
-		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_free(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_free_parms *parms __rte_unused)
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
-			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_set(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp __rte_unused,
+	   struct tf_tbl_set_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_get(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp __rte_unused,
+	   struct tf_tbl_get_parms *parms __rte_unused)
 {
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index c880b368b..11f2aa333 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -11,33 +11,39 @@
 struct tf;
 
 /**
- * The Table Type module provides processing of Internal TF table types.
+ * The Table module provides processing of Internal TF table types.
  */
 
 /**
- * Table Type configuration parameters
+ * Table configuration parameters
  */
-struct tf_tbl_type_cfg_parms {
+struct tf_tbl_cfg_parms {
 	/**
 	 * Number of table types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * Table Type element configuration array
 	 */
-	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Table Type allocation parameters
+ * Table allocation parameters
  */
-struct tf_tbl_type_alloc_parms {
+struct tf_tbl_alloc_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -53,9 +59,9 @@ struct tf_tbl_type_alloc_parms {
 };
 
 /**
- * Table Type free parameters
+ * Table free parameters
  */
-struct tf_tbl_type_free_parms {
+struct tf_tbl_free_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -75,7 +81,10 @@ struct tf_tbl_type_free_parms {
 	uint16_t ref_cnt;
 };
 
-struct tf_tbl_type_alloc_search_parms {
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -117,9 +126,9 @@ struct tf_tbl_type_alloc_search_parms {
 };
 
 /**
- * Table Type set parameters
+ * Table set parameters
  */
-struct tf_tbl_type_set_parms {
+struct tf_tbl_set_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -143,9 +152,9 @@ struct tf_tbl_type_set_parms {
 };
 
 /**
- * Table Type get parameters
+ * Table get parameters
  */
-struct tf_tbl_type_get_parms {
+struct tf_tbl_get_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -169,39 +178,39 @@ struct tf_tbl_type_get_parms {
 };
 
 /**
- * @page tbl_type Table Type
+ * @page tbl Table
  *
- * @ref tf_tbl_type_bind
+ * @ref tf_tbl_bind
  *
- * @ref tf_tbl_type_unbind
+ * @ref tf_tbl_unbind
  *
- * @ref tf_tbl_type_alloc
+ * @ref tf_tbl_alloc
  *
- * @ref tf_tbl_type_free
+ * @ref tf_tbl_free
  *
- * @ref tf_tbl_type_alloc_search
+ * @ref tf_tbl_alloc_search
  *
- * @ref tf_tbl_type_set
+ * @ref tf_tbl_set
  *
- * @ref tf_tbl_type_get
+ * @ref tf_tbl_get
  */
 
 /**
- * Initializes the Table Type module with the requested DBs. Must be
+ * Initializes the Table module with the requested DBs. Must be
  * invoked as the first thing before any of the access functions.
  *
  * [in] tfp
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table configuration parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_bind(struct tf *tfp,
-		     struct tf_tbl_type_cfg_parms *parms);
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
@@ -216,7 +225,7 @@ int tf_tbl_type_bind(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_unbind(struct tf *tfp);
+int tf_tbl_unbind(struct tf *tfp);
 
 /**
  * Allocates the requested table type from the internal RM DB.
@@ -225,14 +234,14 @@ int tf_tbl_type_unbind(struct tf *tfp);
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table allocation parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc(struct tf *tfp,
-		      struct tf_tbl_type_alloc_parms *parms);
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
 
 /**
  * Frees the requested table type and returns it to the DB. If shadow
@@ -244,14 +253,14 @@ int tf_tbl_type_alloc(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_free(struct tf *tfp,
-		     struct tf_tbl_type_free_parms *parms);
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
 
 /**
  * Supported if Shadow DB is configured. Searches the Shadow DB for
@@ -269,8 +278,8 @@ int tf_tbl_type_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc_search(struct tf *tfp,
-			     struct tf_tbl_type_alloc_search_parms *parms);
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
 
 /**
  * Configures the requested element by sending a firmware request which
@@ -280,14 +289,14 @@ int tf_tbl_type_alloc_search(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table set parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_set(struct tf *tfp,
-		    struct tf_tbl_type_set_parms *parms);
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
 
 /**
  * Retrieves the requested element by sending a firmware request to get
@@ -297,13 +306,13 @@ int tf_tbl_type_set(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table get parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_get(struct tf *tfp,
-		    struct tf_tbl_type_get_parms *parms);
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
 
 #endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 1420c9ed5..68c25eb1b 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -20,16 +20,22 @@ struct tf_tcam_cfg_parms {
 	 * Number of tcam types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * TCAM configuration array
 	 */
-	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tcam_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 14/51] net/bnxt: support two-level priority for TCAMs
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (12 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 13/51] net/bnxt: update multi device design support Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
                           ` (36 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Venkat Duvvuru, Randy Schacher

From: Shahaji Bhosle <sbhosle@broadcom.com>

Allow TCAM indexes to be allocated from the top or the bottom.
If the priority is set to 0, allocate from the lowest TCAM
indexes, i.e. from the top. For any other value, allocate from
the highest TCAM indexes, i.e. from the bottom.
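
Below is a minimal standalone sketch of the allocation order described
above. It uses a plain bit array in place of the driver's bitalloc
session pool, so the pool type and the tcam_alloc() helper here are
illustrative only and not part of the TruFlow API:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define POOL_SIZE 16

/* Illustrative stand-in for the driver's bitalloc session pool. */
static bool in_use[POOL_SIZE];

/*
 * priority == 0: take the lowest free index (top of the TCAM).
 * priority != 0: take the highest free index (bottom of the TCAM).
 * Returns the allocated index, or -1 when the pool is exhausted
 * (the driver returns -ENOMEM in that case).
 */
static int tcam_alloc(uint32_t priority)
{
	int i;

	if (priority) {
		for (i = POOL_SIZE - 1; i >= 0; i--) {
			if (!in_use[i]) {
				in_use[i] = true;
				return i;
			}
		}
	} else {
		for (i = 0; i < POOL_SIZE; i++) {
			if (!in_use[i]) {
				in_use[i] = true;
				return i;
			}
		}
	}
	return -1;
}

int main(void)
{
	printf("prio 0  -> index %d\n", tcam_alloc(0)); /* prints 0  */
	printf("prio !0 -> index %d\n", tcam_alloc(1)); /* prints 15 */
	return 0;
}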

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 36 ++++++++++++++++++++++++------
 drivers/net/bnxt/tf_core/tf_core.h |  4 +++-
 drivers/net/bnxt/tf_core/tf_em.c   |  6 ++---
 drivers/net/bnxt/tf_core/tf_tbl.c  |  2 +-
 4 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 81a88e211..eac57e7bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -893,7 +893,7 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
+	int index = 0;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
 
@@ -916,12 +916,34 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+	/*
+	 * priority  0: allocate from top of the tcam i.e. high
+	 * priority !0: allocate index from bottom i.e. lowest
+	 */
+	if (parms->priority) {
+		for (index = session_pool->size - 1; index >= 0; index--) {
+			if (ba_inuse(session_pool,
+					  index) == BA_ENTRY_FREE) {
+				break;
+			}
+		}
+		if (ba_alloc_index(session_pool,
+				   index) == BA_FAIL) {
+			TFP_DRV_LOG(ERR,
+				    "%s: %s: ba_alloc index %d failed\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+				    index);
+			return -ENOMEM;
+		}
+	} else {
+		index = ba_alloc(session_pool);
+		if (index == BA_FAIL) {
+			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+			return -ENOMEM;
+		}
 	}
 
 	parms->idx = index;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 74ed24e5a..f1ef00b30 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -799,7 +799,9 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested (definition TBD)
+	 * [in] Priority of entry requested
+	 * 0: index from top i.e. highest priority first
+	 * !0: index from bottom i.e. lowest priority first
 	 */
 	uint32_t priority;
 	/**
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index fd1797e39..91cbc6299 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -479,8 +479,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG
-		   (ERR,
+		TFP_DRV_LOG(ERR,
 		   "dir:%d, EM entry index allocation failed\n",
 		   parms->dir);
 		return rc;
@@ -495,8 +494,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	PMD_DRV_LOG
-		   (ERR,
+	TFP_DRV_LOG(INFO,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 26313ed3c..4e236d56c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -1967,7 +1967,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
+		TFP_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 15/51] net/bnxt: add HCAPI interface support
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (13 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
                           ` (35 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

Add new hardware shim APIs to support multiple
device generations
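
As a usage sketch only (not part of this patch), the PUT/GET helpers
added below can be exercised against a caller-defined layout. This
assumes hcapi_cfa_defs.h and hcapi_cfa_common.c from this series are
on the include/build path; the EX_FIELD_* IDs and the ex_layout values
are made up for illustration and do not come from a real chip layout
table:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#include "hcapi_cfa_defs.h"

/* Hypothetical two-field layout; real layouts come from the per-chip
 * layout tables, these values are illustrative only.
 */
enum { EX_FIELD_A, EX_FIELD_B, EX_FIELD_MAX };

static const struct hcapi_cfa_field ex_fields[EX_FIELD_MAX] = {
	[EX_FIELD_A] = { .bitpos = 0,  .bitlen = 12 },
	[EX_FIELD_B] = { .bitpos = 12, .bitlen = 8 },
};

static const struct hcapi_cfa_layout ex_layout = {
	.is_msb_order = false,
	.total_sz_in_bits = 64,
	.field_array = ex_fields,
	.array_sz = EX_FIELD_MAX,
};

int main(void)
{
	uint64_t obj[1] = { 0 };
	uint64_t val = 0;

	/* Program one field, then read it back through the same layout. */
	hcapi_cfa_put_field(obj, &ex_layout, EX_FIELD_A, 0xABC);
	hcapi_cfa_get_field(obj, &ex_layout, EX_FIELD_A, &val);
	printf("EX_FIELD_A = 0x%" PRIx64 "\n", val);

	return 0;
}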

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/Makefile           |  10 +
 drivers/net/bnxt/hcapi/hcapi_cfa.h        | 271 +++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 +++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h   | 672 ++++++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     | 399 +++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h     | 451 +++++++++++++++
 drivers/net/bnxt/meson.build              |   2 +
 drivers/net/bnxt/tf_core/tf_em.c          |  28 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  94 +--
 drivers/net/bnxt/tf_core/tf_tbl.h         |  24 +-
 10 files changed, 1970 insertions(+), 73 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h

diff --git a/drivers/net/bnxt/hcapi/Makefile b/drivers/net/bnxt/hcapi/Makefile
new file mode 100644
index 000000000..65cddd789
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/Makefile
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019-2020 Broadcom Limited.
+# All rights reserved.
+
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += hcapi/hcapi_cfa_p4.c
+
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_defs.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_p4.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/cfa_p40_hw.h
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
new file mode 100644
index 000000000..f60af4e56
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -0,0 +1,271 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _HCAPI_CFA_H_
+#define _HCAPI_CFA_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#include "hcapi_cfa_defs.h"
+
+#define SUPPORT_CFA_HW_P4  1
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL  1
+#endif
+
+/**
+ * Index used for the sram_entries field
+ */
+enum hcapi_cfa_resc_type_sram {
+	HCAPI_CFA_RESC_TYPE_SRAM_FULL_ACTION,
+	HCAPI_CFA_RESC_TYPE_SRAM_MCG,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_8B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_16B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	HCAPI_CFA_RESC_TYPE_SRAM_COUNTER_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_SPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_DPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_S_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_D_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_MAX
+};
+
+/**
+ * Index used for the hw_entries field in struct cfa_rm_db
+ */
+enum hcapi_cfa_resc_type_hw {
+	/* common HW resources for all chip variants */
+	HCAPI_CFA_RESC_TYPE_HW_L2_CTXT_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_EM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_EM_REC,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_METER_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_METER_INST,
+	HCAPI_CFA_RESC_TYPE_HW_MIRROR,
+	HCAPI_CFA_RESC_TYPE_HW_UPAR,
+	/* Wh+/SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_SP_TCAM,
+	/* Thor, SR2 common HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_FKB,
+	/* SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_TBL_SCOPE,
+	HCAPI_CFA_RESC_TYPE_HW_L2_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH0,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH1,
+	HCAPI_CFA_RESC_TYPE_HW_METADATA,
+	HCAPI_CFA_RESC_TYPE_HW_CT_STATE,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_LAG_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_MAX
+};
+
+struct hcapi_cfa_key_result {
+	uint64_t bucket_mem_ptr;
+	uint8_t bucket_idx;
+};
+
+/* common CFA register access macros */
+#define CFA_REG(x)		OFFSETOF(cfa_reg_t, cfa_##x)
+
+#ifndef REG_WR
+#define REG_WR(_p, x, y)  (*((uint32_t volatile *)(x)) = (y))
+#endif
+#ifndef REG_RD
+#define REG_RD(_p, x)  (*((uint32_t volatile *)(x)))
+#endif
+#define CFA_REG_RD(_p, x)	\
+	REG_RD(0, (uint32_t)(_p)->base_addr + CFA_REG(x))
+#define CFA_REG_WR(_p, x, y)	\
+	REG_WR(0, (uint32_t)(_p)->base_addr + CFA_REG(x), y)
+
+
+/* Constants used by Resource Manager Registration*/
+#define RM_CLIENT_NAME_MAX_LEN          32
+
+/**
+ *  Resource Manager Data Structures used for resource requests
+ */
+struct hcapi_cfa_resc_req_entry {
+	uint16_t min;
+	uint16_t max;
+};
+
+struct hcapi_cfa_resc_req {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_req_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of
+	 * resources in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_req_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_req_db {
+	struct hcapi_cfa_resc_req rx;
+	struct hcapi_cfa_resc_req tx;
+};
+
+struct hcapi_cfa_resc_entry {
+	uint16_t start;
+	uint16_t stride;
+	uint16_t tag;
+};
+
+struct hcapi_cfa_resc {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of resources
+	 * in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_db {
+	struct hcapi_cfa_resc rx;
+	struct hcapi_cfa_resc tx;
+};
+
+/**
+ * This is the main data structure used by the CFA Resource
+ * Manager.  This data structure holds all the state and table
+ * management information.
+ */
+typedef struct hcapi_cfa_rm_data {
+	uint32_t dummy_data;
+} hcapi_cfa_rm_data_t;
+
+/* End RM support */
+
+struct hcapi_cfa_devops;
+
+struct hcapi_cfa_devinfo {
+	uint8_t global_cfg_data[CFA_GLOBAL_CFG_DATA_SZ];
+	struct hcapi_cfa_layout_tbl layouts;
+	struct hcapi_cfa_devops *devops;
+};
+
+int hcapi_cfa_dev_bind(enum hcapi_cfa_ver hw_ver,
+		       struct hcapi_cfa_devinfo *dev_info);
+
+int hcapi_cfa_key_compile_layout(struct hcapi_cfa_key_template *key_template,
+				 struct hcapi_cfa_key_layout *key_layout);
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data, uint16_t bitlen);
+int
+hcapi_cfa_action_compile_layout(struct hcapi_cfa_action_template *act_template,
+				struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_init_obj(uint64_t *act_obj,
+			      struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_compute_ptr(uint64_t *act_obj,
+				 struct hcapi_cfa_action_layout *act_layout,
+				 uint32_t base_ptr);
+
+int hcapi_cfa_action_hw_op(struct hcapi_cfa_hwop *op,
+			   uint8_t *act_tbl,
+			   struct hcapi_cfa_data *act_obj);
+int hcapi_cfa_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_rm_register_client(hcapi_cfa_rm_data_t *data,
+				 const char *client_name,
+				 int *client_id);
+int hcapi_cfa_rm_unregister_client(hcapi_cfa_rm_data_t *data,
+				   int client_id);
+int hcapi_cfa_rm_query_resources(hcapi_cfa_rm_data_t *data,
+				 int client_id,
+				 uint16_t chnl_id,
+				 struct hcapi_cfa_resc_req_db *req_db);
+int hcapi_cfa_rm_query_resources_one(hcapi_cfa_rm_data_t *data,
+				     int client_id,
+				     struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_reserve_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_release_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_initialize(hcapi_cfa_rm_data_t *data);
+
+#if SUPPORT_CFA_HW_P4
+
+int hcapi_cfa_p4_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxt_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxtrmp_hwop(struct hcapi_cfa_hwop *op,
+				      struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcam_hwop(struct hcapi_cfa_hwop *op,
+				 struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcamrmp_hwop(struct hcapi_cfa_hwop *op,
+				    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
+			       struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+#endif /* SUPPORT_CFA_HW_P4 */
+/**
+ *  HCAPI CFA device HW operation function callback definition
+ *  This is standardized function callback hook to install different
+ *  CFA HW table programming function callback.
+ */
+
+struct hcapi_cfa_tbl_cb {
+	/**
+	 * This function callback provides the functionality to read/write
+	 * HW table entry from a HW table.
+	 *
+	 * @param[in] op
+	 *   A pointer to the Hardware operation parameter
+	 *
+	 * @param[in] obj_data
+	 *   A pointer to the HW data object for the hardware operation
+	 *
+	 * @return
+	 *   0 for SUCCESS, negative value for FAILURE
+	 */
+	int (*hwop_cb)(struct hcapi_cfa_hwop *op,
+		       struct hcapi_cfa_data *obj_data);
+};
+
+#endif  /* _HCAPI_CFA_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
new file mode 100644
index 000000000..39afd4dbc
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
@@ -0,0 +1,92 @@
+/*
+ *   Copyright(c) 2019-2020 Broadcom Limited.
+ *   All rights reserved.
+ */
+
+#include "bitstring.h"
+#include "hcapi_cfa_defs.h"
+#include <errno.h>
+#include "assert.h"
+
+/* HCAPI CFA common PUT APIs */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val)
+{
+	assert(layout);
+
+	if (field_id > layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		bs_put_msb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	else
+		bs_put_lsb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	return 0;
+}
+
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz)
+{
+	int i;
+	uint16_t bitpos;
+	uint8_t bitlen;
+	uint16_t field_id;
+
+	assert(layout);
+	assert(field_tbl);
+
+	if (layout->is_msb_order) {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id > layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_msb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	} else {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id > layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_lsb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	}
+	return 0;
+}
+
+/* HCAPI CFA common GET APIs */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id,
+			uint64_t *val)
+{
+	assert(layout);
+	assert(val);
+
+	if (field_id > layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		*val = bs_get_msb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	else
+		*val = bs_get_lsb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	return 0;
+}
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
new file mode 100644
index 000000000..ea8d99d01
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -0,0 +1,672 @@
+
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+/*!
+ *   \file
+ *   \brief Exported functions for CFA HW programming
+ */
+#ifndef _HCAPI_CFA_DEFS_H_
+#define _HCAPI_CFA_DEFS_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#define SUPPORT_CFA_HW_ALL 0
+#define SUPPORT_CFA_HW_P4  1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+
+#define CFA_BITS_PER_BYTE (8)
+#define __CFA_ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
+#define CFA_ALIGN(x, a) __CFA_ALIGN_MASK(x, (a) - 1)
+#define CFA_ALIGN_128(x) CFA_ALIGN(x, 128)
+#define CFA_ALIGN_32(x) CFA_ALIGN(x, 32)
+
+#define NUM_WORDS_ALIGN_32BIT(x)                                               \
+	(CFA_ALIGN_32(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+#define NUM_WORDS_ALIGN_128BIT(x)                                              \
+	(CFA_ALIGN_128(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+
+#define CFA_GLOBAL_CFG_DATA_SZ (100)
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL (1)
+#endif
+
+#include "hcapi_cfa_p4.h"
+#define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+#define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
+#define CFA_PROF_MAX_KEY_CFG_SZ sizeof(struct cfa_p4_prof_key_cfg)
+#define CFA_KEY_MAX_FIELD_CNT 41
+#define CFA_ACT_MAX_TEMPLATE_SZ sizeof(struct cfa_p4_action_template)
+
+/**
+ * CFA HW version definition
+ */
+enum hcapi_cfa_ver {
+	HCAPI_CFA_P40 = 0, /**< CFA phase 4.0 */
+	HCAPI_CFA_P45 = 1, /**< CFA phase 4.5 */
+	HCAPI_CFA_P58 = 2, /**< CFA phase 5.8 */
+	HCAPI_CFA_P59 = 3, /**< CFA phase 5.9 */
+	HCAPI_CFA_PMAX = 4
+};
+
+/**
+ * CFA direction definition
+ */
+enum hcapi_cfa_dir {
+	HCAPI_CFA_DIR_RX = 0, /**< Receive */
+	HCAPI_CFA_DIR_TX = 1, /**< Transmit */
+	HCAPI_CFA_DIR_MAX = 2
+};
+
+/**
+ * CFA HW OPCODE definition
+ */
+enum hcapi_cfa_hwops {
+	HCAPI_CFA_HWOPS_PUT, /**< Write to HW operation */
+	HCAPI_CFA_HWOPS_GET, /**< Read from HW operation */
+	HCAPI_CFA_HWOPS_ADD, /**< For operations which require more than simple
+			      * writes to HW, this operation is used. The
+			      * distinction with this operation when compared
+			      * to the PUT ops is that this operation is used
+			      * in conjunction with the HCAPI_CFA_HWOPS_DEL
+			      * op to remove the operations issued by the
+			      * ADD OP.
+			      */
+	HCAPI_CFA_HWOPS_DEL, /**< This issues operations to clear the hardware.
+			      * This operation is used in conjunction
+			      * with the HCAPI_CFA_HWOPS_ADD op and is the
+			      * way to undo/clear the ADD op.
+			      */
+	HCAPI_CFA_HWOPS_MAX
+};
+
+/**
+ * CFA HW KEY CONTROL OPCODE definition
+ */
+enum hcapi_cfa_key_ctrlops {
+	HCAPI_CFA_KEY_CTRLOPS_INSERT, /**< insert control bits */
+	HCAPI_CFA_KEY_CTRLOPS_STRIP, /**< strip control bits */
+	HCAPI_CFA_KEY_CTRLOPS_MAX
+};
+
+/**
+ * CFA HW field structure definition
+ */
+struct hcapi_cfa_field {
+	/** [in] Starting bit position of the HW field within a HW table
+	 *  entry.
+	 */
+	uint16_t bitpos;
+	/** [in] Number of bits for the HW field. */
+	uint8_t bitlen;
+};
+
+/**
+ * CFA HW table entry layout structure definition
+ */
+struct hcapi_cfa_layout {
+	/** [out] Bit order of layout */
+	bool is_msb_order;
+	/** [out] Size in bits of entry */
+	uint32_t total_sz_in_bits;
+	/** [out] data pointer of the HW layout fields array */
+	const struct hcapi_cfa_field *field_array;
+	/** [out] number of HW field entries in the HW layout field array */
+	uint32_t array_sz;
+};
+
+/**
+ * CFA HW data object definition
+ */
+struct hcapi_cfa_data_obj {
+	/** [in] HW field identifier. Used as an index to a HW table layout */
+	uint16_t field_id;
+	/** [in] Value of the HW field */
+	uint64_t val;
+};
+
+/**
+ * CFA HW definition
+ */
+struct hcapi_cfa_hw {
+	/** [in] HW table base address for the operation with optional device
+	 *  handle. For on-chip HW table operation, this is the either the TX
+	 *  or RX CFA HW base address. For off-chip table, this field is the
+	 *  base memory address of the off-chip table.
+	 */
+	uint64_t base_addr;
+	/** [in] Optional opaque device handle. It is generally used to access
+	 *  an GRC register space through PCIE BAR and passed to the BAR memory
+	 *  accessor routine.
+	 */
+	void *handle;
+};
+
+/**
+ * CFA HW operation definition
+ *
+ */
+struct hcapi_cfa_hwop {
+	/** [in] HW opcode */
+	enum hcapi_cfa_hwops opcode;
+	/** [in] CFA HW information used by accessor routines.
+	 */
+	struct hcapi_cfa_hw hw;
+};
+
+/**
+ * CFA HW data structure definition
+ */
+struct hcapi_cfa_data {
+	/** [in] physical offset to the HW table for the data to be
+	 *  written to.  If this is an array of registers, this is the
+	 *  index into the array of registers.  For writing keys, this
+	 *  is the byte offset into the memory where the key should be
+	 *  written.
+	 */
+	union {
+		uint32_t index;
+		uint32_t byte_offset;
+	} u;
+	/** [in] HW data buffer pointer */
+	uint8_t *data;
+	/** [in] HW data mask buffer pointer */
+	uint8_t *data_mask;
+	/** [in] size of the HW data buffer in bytes */
+	uint16_t data_sz;
+};
+
+/*********************** Truflow start ***************************/
+enum hcapi_cfa_pg_tbl_lvl {
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
+};
+
+enum hcapi_cfa_em_table_type {
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
+};
+
+struct hcapi_cfa_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct hcapi_cfa_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct hcapi_cfa_em_page_tbl    pg_tbl[TF_PT_LVL_MAX];
+};
+
+struct hcapi_cfa_em_ctx_mem_info {
+	struct hcapi_cfa_em_table		em_tables[TF_MAX_TABLE];
+};
+
+/*********************** Truflow end ****************************/
+
+/**
+ * CFA HW key table definition
+ *
+ * Applicable to EEM and off-chip EM table only.
+ */
+struct hcapi_cfa_key_tbl {
+	/** [in] For EEM, this is the KEY0 base mem pointer. For off-chip EM,
+	 *  this is the base mem pointer of the key table.
+	 */
+	uint8_t *base0;
+	/** [in] total size of the key table in bytes. For EEM, this size is
+	 *  same for both KEY0 and KEY1 table.
+	 */
+	uint32_t size;
+	/** [in] number of key buckets, applicable for newer chips */
+	uint32_t num_buckets;
+	/** [in] For EEM, this is KEY1 base mem pointer. For off-chip EM,
+	 *  this is the key record memory base pointer within the key table,
+	 *  applicable for newer chips
+	 */
+	uint8_t *base1;
+};
+
+/**
+ * CFA HW key buffer definition
+ */
+struct hcapi_cfa_key_obj {
+	/** [in] pointer to the key data buffer */
+	uint32_t *data;
+	/** [in] buffer len in bits */
+	uint32_t len;
+	/** [in] Pointer to the key layout */
+	struct hcapi_cfa_key_layout *layout;
+};
+
+/**
+ * CFA HW key data definition
+ */
+struct hcapi_cfa_key_data {
+	/** [in] For on-chip key table, it is the offset in unit of smallest
+	 *  key. For off-chip key table, it is the byte offset relative
+	 *  to the key record memory base.
+	 */
+	uint32_t offset;
+	/** [in] HW key data buffer pointer */
+	uint8_t *data;
+	/** [in] size of the key in bytes */
+	uint16_t size;
+};
+
+/**
+ * CFA HW key location definition
+ */
+struct hcapi_cfa_key_loc {
+	/** [out] on-chip EM bucket offset or off-chip EM bucket mem pointer */
+	uint64_t bucket_mem_ptr;
+	/** [out] index within the EM bucket */
+	uint8_t bucket_idx;
+};
+
+/**
+ * CFA HW layout table definition
+ */
+struct hcapi_cfa_layout_tbl {
+	/** [out] data pointer to an array of fix formatted layouts supported.
+	 *  The index to the array is the CFA HW table ID
+	 */
+	const struct hcapi_cfa_layout *tbl;
+	/** [out] number of fix formatted layouts in the layout array */
+	uint16_t num_layouts;
+};
+
+/**
+ * Key template consists of key fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_key_template {
+	/** [in] key field enable field array, set 1 to the corresponding
+	 *  field enable to make a field valid
+	 */
+	uint8_t field_en[CFA_KEY_MAX_FIELD_CNT];
+	/** [in] Identifies if the key template is for TCAM. If false, the
+	 *  key template is for EM. This field is mandatory for devices that
+	 *  only support fixed key formats.
+	 */
+	bool is_wc_tcam_key;
+};
+
+/**
+ * key layout consist of field array, key bitlen, key ID, and other meta data
+ * pertain to a key
+ */
+struct hcapi_cfa_key_layout {
+	/** [out] key layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual key size in number of bits */
+	uint16_t bitlen;
+	/** [out] key identifier; this field is only valid for devices
+	 *  that support fixed key formats
+	 */
+	uint16_t id;
+	/** [out] Identifies if the key layout is a WC TCAM key */
+	bool is_wc_tcam_key;
+	/** [out] total slices size, valid for WC TCAM key only. It can be
+	 *  used by the user to determine the total size of WC TCAM key slices
+	 *  in bytes.
+	 */
+	uint16_t slices_size;
+};
+
+/**
+ * key layout memory contents
+ */
+struct hcapi_cfa_key_layout_contents {
+	/** key layouts */
+	struct hcapi_cfa_key_layout key_layout;
+
+	/** layout */
+	struct hcapi_cfa_layout layout;
+
+	/** fields */
+	struct hcapi_cfa_field field_array[CFA_KEY_MAX_FIELD_CNT];
+};
+
+/**
+ * Action template consists of action fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_action_template {
+	/** [in] CFA version for the action template */
+	enum hcapi_cfa_ver hw_ver;
+	/** [in] action field enable field array, set 1 to the corresponding
+	 *  field enable to make a field valid
+	 */
+	uint8_t data[CFA_ACT_MAX_TEMPLATE_SZ];
+};
+
+/**
+ * action layout consist of field array, action wordlen and action format ID
+ */
+struct hcapi_cfa_action_layout {
+	/** [in] action identifier */
+	uint16_t id;
+	/** [out] action layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual action record size in number of bits */
+	uint16_t wordlen;
+};
+
+/**
+ *  \defgroup CFA_HCAPI_PUT_API
+ *  HCAPI used for writing to the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to program a specified value to a
+ * HW field based on the provided programming layout.
+ *
+ * @param[in,out] obj_data
+ *   A data pointer to a CFA HW key/mask data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be programmed
+ *
+ * @param[in] val
+ *   Value of the HW field to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val);
+
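A minimal usage sketch of the API above (illustrative only; the layout pointer
and field ID are assumed to be supplied by device-specific code such as the p4
tables, and the buffer size here is arbitrary):

    #include <stdint.h>
    #include "hcapi_cfa_defs.h"

    /* Program one field of a zeroed HW key/mask image */
    static int example_put_one_field(const struct hcapi_cfa_layout *layout,
                                     uint16_t field_id, uint64_t value)
    {
            uint64_t entry[8] = { 0 };      /* HW image, sized generously */

            return hcapi_cfa_put_field(entry, layout, field_id, value);
    }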
+/**
+ * This API provides the functionality to program an array of field values
+ * with corresponding field IDs to a number of profiler sub-block fields
+ * based on the fixed profiler sub-block hardware programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * This API provides the functionality to write a value to a
+ * field within the bit position and bit length of a HW data
+ * object based on a provided programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to the HW data object to be written
+ *
+ * @param[in] layout
+ *   A pointer to the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[in] val
+ *   HW field value to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t val);
+
+/*@}*/
+
+/**
+ *  \defgroup CFA_HCAPI_GET_API
+ *  HCAPI used for reading from the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to get the word length of
+ * a layout object.
+ *
+ * @param[in] layout
+ *   A pointer to the HW layout
+ *
+ * @return
+ *   Word length of the layout object
+ */
+uint16_t hcapi_cfa_get_wordlen(const struct hcapi_cfa_layout *layout);
+
+/**
+ * The API provides the functionality to get bit offset and bit
+ * length information of a field from a programming layout.
+ *
+ * @param[in] layout
+ *   A pointer to the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[out] slice
+ *   A pointer to the field slice (bit offset and bit length) info
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_slice(const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, struct hcapi_cfa_field *slice);
+
+/**
+ * This API provides the functionality to read the value of a
+ * CFA HW field from CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA HW key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be read
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t *val);
+
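A sketch of a put/get round trip built on the two APIs above (the layout and
field ID are again assumed to come from the device-specific tables; 0x5a is an
arbitrary test value):

    static int example_field_roundtrip(const struct hcapi_cfa_layout *layout,
                                       uint16_t field_id)
    {
            uint64_t entry[8] = { 0 };
            uint64_t value = 0;

            if (hcapi_cfa_put_field(entry, layout, field_id, 0x5a) != 0)
                    return -1;
            if (hcapi_cfa_get_field(entry, layout, field_id, &value) != 0)
                    return -1;

            /* A value written with put_field should read back unchanged */
            return value == 0x5a ? 0 : -1;
    }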
+/**
+ * This API provides the functionality to read a number of
+ * HW fields from a CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in, out] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * Get a value to a specific location relative to a HW field
+ *
+ * This API provides the functionality to read HW field from
+ * a section of a HW data object identified by the bit position
+ * and bit length from a given programming layout in order to avoid
+ * reading the entire HW data object.
+ *
+ * @param[in] obj_data
+ *   A pointer to the data object to read from
+ *
+ * @param[in] layout
+ *   A pointer to the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t *val);
+
+/**
+ * This function is used to initialize a layout_contents structure
+ *
+ * The struct hcapi_cfa_key_layout_contents is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * initialized.
+ *
+ * @param[in] layout_contents
+ *  A pointer to the layout contents to initialize
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int
+hcapi_cfa_init_key_layout_contents(struct hcapi_cfa_key_layout_contents *cont);
+
+/**
+ * This function is used to validate a key template
+ *
+ * The struct hcapi_cfa_key_template is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_template
+ *  A pointer to the key template contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int
+hcapi_cfa_is_valid_key_template(struct hcapi_cfa_key_template *key_template);
+
+/**
+ * This function is used to validate a key layout
+ *
+ * The struct hcapi_cfa_key_layout is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_layout
+ *  A pointer to the key layout contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_is_valid_key_layout(struct hcapi_cfa_key_layout *key_layout);
+
+/**
+ * This function is used to hash EM/EEM keys
+ *
+ *
+ * @param[in] key_data
+ *  A pointer to the key
+ *
+ * @param[in] bitlen
+ *  Number of bits in the key
+ *
+ * @return
+ *   CRC32 and Lookup3 hashes of the input key
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen);
+
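A sketch of consuming the returned value; the split into two 32-bit halves
follows the p4 implementation later in this patch (CRC32 hash in the upper
32 bits, Lookup3 hash in the lower 32 bits), and masking down to a table index
mirrors what the EM insert path does with its key tables:

    #include <stdint.h>
    #include "hcapi_cfa_defs.h"

    static void example_key_hash(uint64_t *key_data, uint16_t key_bitlen,
                                 uint32_t table_mask)
    {
            uint64_t hash = hcapi_cfa_key_hash(key_data, key_bitlen);
            uint32_t key0_index = (uint32_t)(hash >> 32) & table_mask; /* CRC32   */
            uint32_t key1_index = (uint32_t)hash & table_mask;         /* Lookup3 */

            (void)key0_index;
            (void)key1_index;
    }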
+/**
+ * This function is used to execute a HW key table operation
+ * (put/get/add/delete)
+ *
+ * @param[in] op
+ *  Operation descriptor
+ *
+ * @param[in] key_tbl
+ *  Key table
+ *
+ * @param[in] key_obj
+ *  Key data
+ *
+ * @param[out] key_loc
+ *  Key location
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc);
+
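A sketch of driving a simple PUT through this entry point. Only members
visible in this patch (opcode, base0, offset/data/size) are filled; the cast
applied to key_tbl.base0 is an assumption about its declared type, and all
other members are left zeroed:

    #include <stdint.h>
    #include <string.h>
    #include "hcapi_cfa_defs.h"

    static int example_key_put(struct hcapi_cfa_em_table *em_tbl,
                               uint8_t *key_rec, uint16_t rec_size,
                               uint32_t byte_offset)
    {
            struct hcapi_cfa_hwop op;
            struct hcapi_cfa_key_tbl key_tbl;
            struct hcapi_cfa_key_data key_obj;
            struct hcapi_cfa_key_loc key_loc;

            memset(&op, 0, sizeof(op));
            memset(&key_tbl, 0, sizeof(key_tbl));
            memset(&key_obj, 0, sizeof(key_obj));
            memset(&key_loc, 0, sizeof(key_loc));

            op.opcode = HCAPI_CFA_HWOPS_PUT;    /* simple write to table memory */
            key_tbl.base0 = (uint8_t *)em_tbl;  /* backing EM table; cast is an
                                                 * assumption about base0's type
                                                 */
            key_obj.offset = byte_offset;       /* byte offset into the table   */
            key_obj.data = key_rec;             /* key record to write          */
            key_obj.size = rec_size;            /* record size in bytes         */

            return hcapi_cfa_key_hw_op(&op, &key_tbl, &key_obj, &key_loc);
    }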
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset);
+#endif /* HCAPI_CFA_DEFS_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
new file mode 100644
index 000000000..ca0b1c923
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <string.h>
+#include "lookup3.h"
+#include "rand.h"
+
+#include "hcapi_cfa_defs.h"
+
+#define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
+#define TF_EM_PAGE_SIZE (1 << 21)
+uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
+uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
+bool hcapi_cfa_lkup_init;
+
+static inline uint32_t SWAP_WORDS32(uint32_t val32)
+{
+	return (((val32 & 0x0000ffff) << 16) |
+		((val32 & 0xffff0000) >> 16));
+}
+
+static void hcapi_cfa_seeds_init(void)
+{
+	int i;
+	uint32_t r;
+
+	if (hcapi_cfa_lkup_init)
+		return;
+
+	hcapi_cfa_lkup_init = true;
+
+	/* Initialize the lfsr */
+	rand_init();
+
+	/* RX and TX use the same seed values */
+	hcapi_cfa_lkup_lkup3_init_cfg = SWAP_WORDS32(rand32());
+
+	for (i = 0; i < HCAPI_CFA_LKUP_SEED_MEM_SIZE / 2; i++) {
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2] = r;
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2 + 1] = (r & 0x1);
+	}
+}
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t hcapi_cfa_crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "CRC2:");
+#endif
+	for (l = (len - 1); l >= 0; l--) {
+		crc = ucrc32(buf[l], crc);
+#ifdef TF_EEM_DEBUG
+		TFP_DRV_LOG(DEBUG,
+			    "%02X %08X %08X\n",
+			    (buf[l] & 0xff),
+			    crc,
+			    ~crc);
+#endif
+	}
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "\n");
+#endif
+
+	return ~crc;
+}
+
+static uint32_t hcapi_cfa_crc32_hash(uint8_t *key)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* Do byte-wise XOR of the 52-byte HASH key first. */
+	index = *key;
+	kptr--;
+
+	for (i = CFA_P4_EEM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = hcapi_cfa_lkup_em_seed_mem[index * 2];
+	val2 = hcapi_cfa_lkup_em_seed_mem[index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	val1 = hcapi_cfa_crc32i(~val1,
+		      (key - (CFA_P4_EEM_KEY_MAX_SIZE - 1)),
+		      CFA_P4_EEM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((const uint32_t *)(uintptr_t *)in_key) + 1,
+			 CFA_P4_EEM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 hcapi_cfa_lkup_lkup3_init_cfg);
+
+	return val1;
+}
+
+
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	uint64_t addr;
+
+	if (mem == NULL)
+		return 0;
+
+	/*
+	 * Use the lowest page table level (num_lvl - 1), which holds
+	 * the data pages
+	 */
+	level = mem->num_lvl - 1;
+
+	addr = (uintptr_t)mem->pg_tbl[level].pg_va_tbl[page];
+
+	return addr;
+}
+
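As a worked example of the arithmetic above: with the 2 MiB TF_EM_PAGE_SIZE
defined in this file, an offset of 5 MiB falls in page 5 MiB / 2 MiB = 2, so
the returned address is mem->pg_tbl[mem->num_lvl - 1].pg_va_tbl[2].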
+/** Approximation of HCAPI hcapi_cfa_key_hash()
+ *
+ * Return: 64-bit hash with the CRC32 (key0) hash in the upper 32 bits
+ * and the Lookup3 (key1) hash in the lower 32 bits.
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen)
+{
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+
+	/*
+	 * Init the seeds if needed
+	 */
+	if (!hcapi_cfa_lkup_init)
+		hcapi_cfa_seeds_init();
+
+	key0_hash = hcapi_cfa_crc32_hash(((uint8_t *)key_data) +
+					      (bitlen / 8) - 1);
+
+	key1_hash = hcapi_cfa_lookup3_hash((uint8_t *)key_data);
+
+	return ((uint64_t)key0_hash) << 32 | (uint64_t)key1_hash;
+}
+
+static int hcapi_cfa_key_hw_op_put(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_get(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy(key_obj->data,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_add(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Is entry free?
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If this entry is already valid then report failure
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT))
+		return -1;
+
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_del(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Read entry
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If the entry is valid, verify the provided key (if any) before
+	 * deleting; otherwise report failure.
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT)) {
+		/*
+		 * If a key has been provided then verify the key matches
+		 * before deleting the entry.
+		 */
+		if (key_obj->data != NULL) {
+			if (memcmp(&table_entry,
+				   key_obj->data,
+				   key_obj->size) != 0)
+				return -1;
+		}
+	} else {
+		return -1;
+	}
+
+
+	/*
+	 * Delete entry
+	 */
+	memset((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       0,
+	       key_obj->size);
+
+	return rc;
+}
+
+
+/** Approximation of hcapi_cfa_key_hw_op()
+ *
+ *
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc)
+{
+	int rc = 0;
+
+	if (op == NULL ||
+	    key_tbl == NULL ||
+	    key_obj == NULL ||
+	    key_loc == NULL)
+		return -1;
+
+	op->hw.base_addr =
+		hcapi_get_table_page((struct hcapi_cfa_em_table *)
+				     key_tbl->base0,
+				     key_obj->offset);
+
+	if (op->hw.base_addr == 0)
+		return -1;
+
+	switch (op->opcode) {
+	case HCAPI_CFA_HWOPS_PUT: /**< Write to HW operation */
+		rc = hcapi_cfa_key_hw_op_put(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_GET: /**< Read from HW operation */
+		rc = hcapi_cfa_key_hw_op_get(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_ADD:
+		/**< Used for operations which require more than a
+		 * simple write to HW. The distinction from the PUT
+		 * op is that an ADD is paired with the
+		 * HCAPI_CFA_HWOPS_DEL op, which removes the entry
+		 * inserted by the ADD op.
+		 */
+
+		rc = hcapi_cfa_key_hw_op_add(op, key_obj);
+
+		break;
+	case HCAPI_CFA_HWOPS_DEL:
+		rc = hcapi_cfa_key_hw_op_del(op, key_obj);
+		break;
+	default:
+		rc = -1;
+		break;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
new file mode 100644
index 000000000..0661d6363
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -0,0 +1,451 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _HCAPI_CFA_P4_H_
+#define _HCAPI_CFA_P4_H_
+
+#include "cfa_p40_hw.h"
+
+/** CFA phase 4 fixed-format table (layout) ID definition
+ *
+ */
+enum cfa_p4_tbl_id {
+	CFA_P4_TBL_L2CTXT_TCAM = 0,
+	CFA_P4_TBL_L2CTXT_REMAP,
+	CFA_P4_TBL_PROF_TCAM,
+	CFA_P4_TBL_PROF_TCAM_REMAP,
+	CFA_P4_TBL_WC_TCAM,
+	CFA_P4_TBL_WC_TCAM_REC,
+	CFA_P4_TBL_WC_TCAM_REMAP,
+	CFA_P4_TBL_VEB_TCAM,
+	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_MAX
+};
+
+#define CFA_P4_PROF_MAX_KEYS 4
+enum cfa_p4_mac_sel_mode {
+	CFA_P4_MAC_SEL_MODE_FIRST = 0,
+	CFA_P4_MAC_SEL_MODE_LOWEST = 1,
+};
+
+struct cfa_p4_prof_key_cfg {
+	uint8_t mac_sel[CFA_P4_PROF_MAX_KEYS];
+#define CFA_P4_PROF_MAC_SEL_DMAC0 (1 << 0)
+#define CFA_P4_PROF_MAC_SEL_T_MAC0 (1 << 1)
+#define CFA_P4_PROF_MAC_SEL_OUTERMOST_MAC0 (1 << 2)
+#define CFA_P4_PROF_MAC_SEL_DMAC1 (1 << 3)
+#define CFA_P4_PROF_MAC_SEL_T_MAC1 (1 << 4)
+#define CFA_P4_PROF_MAC_OUTERMOST_MAC1 (1 << 5)
+	uint8_t pass_cnt;
+	enum cfa_p4_mac_sel_mode mode;
+};
+
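An illustrative fill of the structure above for a two-pass profile that keys
on DMAC0 first and then the tunnel MAC; the values are example choices only,
not a recommended configuration:

    #include <string.h>
    #include "hcapi_cfa_p4.h"

    static void example_prof_key_cfg(struct cfa_p4_prof_key_cfg *cfg)
    {
            memset(cfg, 0, sizeof(*cfg));

            cfg->mac_sel[0] = CFA_P4_PROF_MAC_SEL_DMAC0;   /* pass 0: DMAC0      */
            cfg->mac_sel[1] = CFA_P4_PROF_MAC_SEL_T_MAC0;  /* pass 1: tunnel MAC */
            cfg->pass_cnt = 2;                             /* two passes used    */
            cfg->mode = CFA_P4_MAC_SEL_MODE_FIRST;         /* selection mode     */
    }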
+/**
+ * CFA action layout definition
+ */
+
+#define CFA_P4_ACTION_MAX_LAYOUT_SIZE 184
+
+/**
+ * Action object template structure
+ *
+ * The template structure presents the data fields that must be known
+ * at the beginning of Action Builder (AB) processing, i.e. before the
+ * AB compilation. One example is a template that is flexible in size
+ * (Encap Record), where the presence of these fields allows the
+ * template size to be determined as well as where the fields are
+ * located in the record.
+ *
+ * The template may also present fields that are not made visible to
+ * the caller by way of the action fields.
+ *
+ * Template fields also allow for additional checking on user visible
+ * fields. One such example could be the encap pointer behavior on a
+ * CFA_P4_ACT_OBJ_TYPE_ACT or CFA_P4_ACT_OBJ_TYPE_ACT_SRAM.
+ */
+struct cfa_p4_action_template {
+	/** Action Object type
+	 *
+	 * Controls the type of the Action Template
+	 */
+	enum {
+		/** Select this type to build an Action Record Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT,
+		/** Select this type to build an Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT,
+		/** Select this type to build a SRAM Action Record
+		 * Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT_SRAM,
+		/** Select this type to build a SRAM Action
+		 * Encapsulation Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ENCAP_SRAM,
+		/** Select this type to build a SRAM Action Modify
+		 * Object, with IPv4 capability.
+		 */
+		/* In case of Stingray the term Modify is used for the 'NAT
+		 * action'. Action builder is leveraged to fill in the NAT
+		 * object which then can be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_MODIFY_IPV4_SRAM,
+		/** Select this type to build a SRAM Action Source
+		 * Property Object.
+		 */
+		/* In case of Stingray this is not a 'pure' action record.
+		 * Action builder is leveraged to fill in the Source Property
+		 * object which can then be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM,
+		/** Select this type to build a SRAM Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT_SRAM,
+	} obj_type;
+
+	/** Action Control
+	 *
+	 * Controls the internals of the Action Template
+	 *
+	 * act is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_ACT)
+	 */
+	/*
+	 * Stat and encap are always inline for EEM as table scope
+	 * allocation does not allow for separate Stats allocation;
+	 * the xx_inline flags are kept for forward compatibility
+	 * with Stingray 2 and are always treated as TRUE.
+	 */
+	struct {
+		/** Set to CFA_HCAPI_TRUE to enable statistics
+		 */
+		uint8_t stat_enable;
+		/** Set to CFA_HCAPI_TRUE to enable statistics to be inlined
+		 */
+		uint8_t stat_inline;
+
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation
+		 */
+		uint8_t encap_enable;
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation to be inlined
+		 */
+		uint8_t encap_inline;
+	} act;
+
+	/** Modify Setting
+	 *
+	 * Controls the type of the Modify Action the template is
+	 * describing
+	 *
+	 * modify is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_MODIFY_IPV4_SRAM)
+	 */
+	enum {
+		/** Set to enable Modify of Source IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_SOURCE_IPV4 = 0,
+		/** Set to enable Modify of Destination IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_DEST_IPV4
+	} modify;
+
+	/** Encap Control
+	 * Controls the type of encapsulation the template is
+	 * describing
+	 *
+	 * encap is valid when:
+	 * ((obj_type == CFA_P4_ACT_OBJ_TYPE_ACT) &&
+	 *   act.encap_enable) ||
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM)
+	 */
+	struct {
+		/* Direction is required as Stingray Encap on RX is
+		 * limited to l2 and VTAG only.
+		 */
+		/** Receive or Transmit direction
+		 */
+		uint8_t direction;
+		/** Set to CFA_HCAPI_TRUE to enable L2 capability in the
+		 *  template
+		 */
+		uint8_t l2_enable;
+		/** vtag controls the Encap Vector - VTAG Encoding, 4 bits
+		 *
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_0, default, no VLAN
+		 *      Tags applied
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_1, adds capability to
+		 *      set 1 VLAN Tag. Action Template compile adds
+		 *      the following field to the action object
+		 *      ::TF_ER_VLAN1
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_2, adds capability to
+		 *      set 2 VLAN Tags. Action Template compile adds
+		 *      the following fields to the action object
+		 *      ::TF_ER_VLAN1 and ::TF_ER_VLAN2
+		 * </ul>
+		 */
+		enum { CFA_P4_ACT_ENCAP_VTAGS_PUSH_0 = 0,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_1,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_2 } vtag;
+
+		/*
+		 * The remaining fields are NOT supported when
+		 * direction is RX and ((obj_type ==
+		 * CFA_P4_ACT_OBJ_TYPE_ACT) && act.encap_enable).
+		 * ab_compile_layout will perform the checking and
+		 * skip remaining fields.
+		 */
+		/** L3 Encap controls the Encap Vector - L3 Encoding,
+		 *  3 bits. Defines the type of L3 Encapsulation the
+		 *  template is describing.
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_L3_NONE, default, no L3
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV4, enables L3 IPv4
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV6, enables L3 IPv6
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8847, enables L3 MPLS
+		 *      8847 Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8848, enables L3 MPLS
+		 *      8848 Encapsulation.
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable any L3 encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_L3_NONE = 0,
+			/** Set to enable L3 IPv4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV4 = 4,
+			/** Set to enable L3 IPv6 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV6 = 5,
+			/** Set to enable L3 MPLS 8847 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8847 = 6,
+			/** Set to enable L3 MPLS 8848 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8848 = 7
+		} l3;
+
+#define CFA_P4_ACT_ENCAP_MAX_MPLS_LABELS 8
+		/** 1-8 labels, valid when
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8847) ||
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8848)
+		 *
+		 * MAX number of MPLS Labels 8.
+		 */
+		uint8_t l3_num_mpls_labels;
+
+		/** Set to CFA_HCAPI_TRUE to enable L4 capability in the
+		 * template.
+		 *
+		 * CFA_HCAPI_TRUE adds ::TF_EN_UDP_SRC_PORT and
+		 * ::TF_EN_UDP_DST_PORT to the template.
+		 */
+		uint8_t l4_enable;
+
+		/** Tunnel Encap controls the Encap Vector - Tunnel
+		 *  Encap, 3 bits. Defines the type of Tunnel
+		 *  encapsulation the template is describing
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NONE, default, no Tunnel
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL
+		 * <li> CFA_P4_ACT_ENCAP_TNL_VXLAN. NOTE: Expects
+		 *      l4_enable set to CFA_P4_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NGE. NOTE: Expects l4_enable
+		 *      set to CFA_P4_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NVGRE. NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GRE.NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable Tunnel header encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NONE = 0,
+			/** Set to enable Tunnel Generic Full header
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL,
+			/** Set to enable VXLAN header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_VXLAN,
+			/** Set to enable NGE (VXLAN2) header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NGE,
+			/** Set to enable NVGRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NVGRE,
+			/** Set to enable GRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GRE,
+			/** Set to enable Generic header after Tunnel
+			 * L4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4,
+			/** Set to enable Generic header after Tunnel
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		} tnl;
+
+		/** Number of bytes of generic tunnel header,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL)
+		 */
+		uint8_t tnl_generic_size;
+		/** Number of 32b words of nge options,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_NGE)
+		 */
+		uint8_t tnl_nge_op_len;
+		/* Currently not planned */
+		/* Custom Header */
+		/*	uint8_t custom_enable; */
+	} encap;
+};
+
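A sketch of describing a TX action with inline statistics and a single-VLAN
IPv4 VXLAN encap using the template above; 1 stands in for the CFA_HCAPI_TRUE
value referenced in the field comments (defined outside this hunk), and the
direction encoding is whatever the caller already uses:

    #include <string.h>
    #include "hcapi_cfa_p4.h"

    static void example_action_template(struct cfa_p4_action_template *t,
                                        uint8_t tx_dir)
    {
            memset(t, 0, sizeof(*t));

            t->obj_type = CFA_P4_ACT_OBJ_TYPE_ACT;  /* full action record       */
            t->act.stat_enable = 1;                 /* 1 == CFA_HCAPI_TRUE here */
            t->act.stat_inline = 1;                 /* stats inline (EEM)       */
            t->act.encap_enable = 1;                /* action carries an encap  */
            t->act.encap_inline = 1;                /* encap inline (EEM)       */

            t->encap.direction = tx_dir;            /* caller's TX encoding     */
            t->encap.l2_enable = 1;
            t->encap.vtag = CFA_P4_ACT_ENCAP_VTAGS_PUSH_1;  /* push one VLAN tag */
            t->encap.l3 = CFA_P4_ACT_ENCAP_L3_IPV4;
            t->encap.l4_enable = 1;                 /* VXLAN expects L4 enabled */
            t->encap.tnl = CFA_P4_ACT_ENCAP_TNL_VXLAN;
    }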
+/**
+ * Enumeration of SRAM entry types, used for allocation of
+ * fixed SRAM entities. The memory model for CFA HCAPI
+ * determines if an SRAM entry type is supported.
+ */
+enum cfa_p4_action_sram_entry_type {
+	/* NOTE: Any additions to this enum must be reflected on FW
+	 * side as well.
+	 */
+
+	/** SRAM Action Record */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	/** SRAM Action Encap 8 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
+	/** SRAM Action Encap 16 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
+	/** SRAM Action Encap 64 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+	/** SRAM Action Modify IPv4 Source */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
+	/** SRAM Action Modify IPv4 Destination */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+	/** SRAM Action Source Properties SMAC */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
+	/** SRAM Action Source Properties SMAC IPv4 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV4,
+	/** SRAM Action Source Properties SMAC IPv6 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV6,
+	/** SRAM Action Statistics 64 Bits */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_STATS_64,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MAX
+};
+
+/**
+ * SRAM Action Record structure holding either an action index or an
+ * action ptr.
+ */
+union cfa_p4_action_sram_act_record {
+	/** SRAM Action idx specifies the offset of the SRAM
+	 * element within its SRAM Entry Type block. This
+	 * index can be written into i.e. an L2 Context. Use
+	 * this type for all SRAM Action Record types except
+	 * SRAM Full Action records. Use act_ptr instead.
+	 */
+	uint16_t act_idx;
+	/** SRAM Full Action is special in that it needs an
+	 * action record pointer. This pointer can be written
+	 * into i.e. a Wildcard TCAM entry.
+	 */
+	uint32_t act_ptr;
+};
+
+/**
+ * cfa_p4_action_param parameter definition
+ */
+struct cfa_p4_action_param {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	uint8_t dir;
+	/**
+	 * [in] type of the sram allocation type
+	 */
+	enum cfa_p4_action_sram_entry_type type;
+	/**
+	 * [in] action record to set. The 'type' specified lists the
+	 *	record definition to use in the passed in record.
+	 */
+	union cfa_p4_action_sram_act_record record;
+	/**
+	 * [in] number of elements in act_data
+	 */
+	uint32_t act_size;
+	/**
+	 * [in] ptr to array of action data
+	 */
+	uint64_t *act_data;
+};
+
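A sketch of filling cfa_p4_action_param for an SRAM full action record; the
direction encoding and the action data buffer are caller-supplied, and the
choice of act_ptr rather than act_idx follows the union comments above:

    #include <string.h>
    #include "hcapi_cfa_p4.h"

    static void example_action_param(struct cfa_p4_action_param *p, uint8_t dir,
                                     uint32_t act_rec_ptr, uint64_t *act_data,
                                     uint32_t act_size)
    {
            memset(p, 0, sizeof(*p));

            p->dir = dir;                                /* rx or tx           */
            p->type = CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT; /* SRAM full action   */
            p->record.act_ptr = act_rec_ptr;             /* full actions take a
                                                          * pointer; other types
                                                          * use act_idx        */
            p->act_size = act_size;                      /* elements in act_data */
            p->act_data = act_data;                      /* action record data */
    }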
+/**
+ * EEM Key entry sizes
+ */
+#define CFA_P4_EEM_KEY_MAX_SIZE 52
+#define CFA_P4_EEM_KEY_RECORD_SIZE 64
+
+/**
+ * cfa_eem_entry_hdr
+ */
+struct cfa_p4_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is made up of two words,
+			  * this is the first word. This field has multiple
+			  * subfields, there is no suitable single name for
+			  * it so just going with word1.
+			  */
+#define CFA_P4_EEM_ENTRY_VALID_SHIFT 31
+#define CFA_P4_EEM_ENTRY_VALID_MASK 0x80000000
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_SHIFT 30
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_MASK 0x40000000
+#define CFA_P4_EEM_ENTRY_STRENGTH_SHIFT 28
+#define CFA_P4_EEM_ENTRY_STRENGTH_MASK 0x30000000
+#define CFA_P4_EEM_ENTRY_RESERVED_SHIFT 17
+#define CFA_P4_EEM_ENTRY_RESERVED_MASK 0x0FFE0000
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT 8
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_MASK 0x0001FF00
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_SHIFT 3
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_MASK 0x000000F8
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_SHIFT 2
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK 0x00000004
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_SHIFT 1
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_MASK 0x00000002
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_SHIFT 0
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_MASK 0x00000001
+};
+
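A sketch of decoding word1 with the masks and shifts above; the valid-bit test
is the same one hcapi_cfa_key_hw_op_add() performs earlier in this patch:

    #include <stdint.h>
    #include "hcapi_cfa_p4.h"

    static int example_eem_hdr_decode(const struct cfa_p4_eem_entry_hdr *hdr,
                                      uint32_t *key_size, uint32_t *act_rec_size)
    {
            /* Only a valid entry carries meaningful fields */
            if (!(hdr->word1 & CFA_P4_EEM_ENTRY_VALID_MASK))
                    return -1;

            *key_size = (hdr->word1 & CFA_P4_EEM_ENTRY_KEY_SIZE_MASK) >>
                        CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT;
            *act_rec_size = (hdr->word1 & CFA_P4_EEM_ENTRY_ACT_REC_SIZE_MASK) >>
                            CFA_P4_EEM_ENTRY_ACT_REC_SIZE_SHIFT;

            return 0;
    }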
+/**
+ *  cfa_p4_eem_64b_entry
+ */
+struct cfa_p4_eem_64b_entry {
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[CFA_P4_EEM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct cfa_p4_eem_entry_hdr hdr;
+};
+
+#endif /* _HCAPI_CFA_P4_H_ */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 1f7df9d06..33e6ebd66 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,6 +43,8 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_rm_new.c',
 
+	'hcapi/hcapi_cfa_p4.c',
+
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
 	'tf_ulp/ulp_flow_db.c',
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index 91cbc6299..38f7fe419 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -189,7 +189,7 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
 		return NULL;
 
-	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
 		return NULL;
 
 	/*
@@ -325,7 +325,7 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key0_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key0_hash,
-					 KEY0_TABLE,
+					 TF_KEY0_TABLE,
 					 dir);
 
 	/*
@@ -334,23 +334,23 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key1_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key1_hash,
-					 KEY1_TABLE,
+					 TF_KEY1_TABLE,
 					 dir);
 
 	if (key0_entry == -EEXIST) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return -EEXIST;
 	} else if (key1_entry == -EEXIST) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return -EEXIST;
 	} else if (key0_entry == 0) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return 0;
 	} else if (key1_entry == 0) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return 0;
 	}
@@ -384,7 +384,7 @@ int tf_insert_eem_entry(struct tf_session *session,
 	int		   num_of_entry;
 
 	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[KEY0_TABLE].num_entries);
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
 
 	if (!mask)
 		return -EINVAL;
@@ -392,13 +392,13 @@ int tf_insert_eem_entry(struct tf_session *session,
 	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
 
 	key0_hash = tf_em_lkup_get_crc32_hash(session,
-				      &parms->key[num_of_entry] - 1,
-				      parms->dir);
+					      &parms->key[num_of_entry] - 1,
+					      parms->dir);
 	key0_index = key0_hash & mask;
 
 	key1_hash =
 	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
-					parms->key);
+				       parms->key);
 	key1_index = key1_hash & mask;
 
 	/*
@@ -420,14 +420,14 @@ int tf_insert_eem_entry(struct tf_session *session,
 				      key1_index,
 				      &index,
 				      &table_type) == 0) {
-		if (table_type == KEY0_TABLE) {
+		if (table_type == TF_KEY0_TABLE) {
 			TF_SET_GFID(gfid,
 				    key0_index,
-				    KEY0_TABLE);
+				    TF_KEY0_TABLE);
 		} else {
 			TF_SET_GFID(gfid,
 				    key1_index,
-				    KEY1_TABLE);
+				    TF_KEY1_TABLE);
 		}
 
 		/*
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 4e236d56c..35a7cfab5 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -285,8 +285,8 @@ tf_em_setup_page_table(struct tf_em_table *tbl)
 		tf_em_link_page_table(tp, tp_next, set_pte_last);
 	}
 
-	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
 }
 
 /**
@@ -317,7 +317,7 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 			uint64_t *num_data_pages)
 {
 	uint64_t lvl_data_size = page_size;
-	int lvl = PT_LVL_0;
+	int lvl = TF_PT_LVL_0;
 	uint64_t data_size;
 
 	*num_data_pages = 0;
@@ -326,10 +326,10 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 	while (lvl_data_size < data_size) {
 		lvl++;
 
-		if (lvl == PT_LVL_1)
+		if (lvl == TF_PT_LVL_1)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				page_size;
-		else if (lvl == PT_LVL_2)
+		else if (lvl == TF_PT_LVL_2)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				MAX_PAGE_PTRS(page_size) * page_size;
 		else
@@ -386,18 +386,18 @@ tf_em_size_page_tbls(int max_lvl,
 		     uint32_t page_size,
 		     uint32_t *page_cnt)
 {
-	if (max_lvl == PT_LVL_0) {
-		page_cnt[PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == PT_LVL_1) {
-		page_cnt[PT_LVL_1] = num_data_pages;
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
-	} else if (max_lvl == PT_LVL_2) {
-		page_cnt[PT_LVL_2] = num_data_pages;
-		page_cnt[PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
 	} else {
 		return;
 	}
@@ -434,7 +434,7 @@ tf_em_size_table(struct tf_em_table *tbl)
 	/* Determine number of page table levels and the number
 	 * of data pages needed to process the given eem table.
 	 */
-	if (tbl->type == RECORD_TABLE) {
+	if (tbl->type == TF_RECORD_TABLE) {
 		/*
 		 * For action records just a memory size is provided. Work
 		 * backwards to resolve to number of entries
@@ -480,9 +480,9 @@ tf_em_size_table(struct tf_em_table *tbl)
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
 		    num_data_pages,
-		    page_cnt[PT_LVL_0],
-		    page_cnt[PT_LVL_1],
-		    page_cnt[PT_LVL_2]);
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
 
 	return 0;
 }
@@ -508,7 +508,7 @@ tf_em_ctx_unreg(struct tf *tfp,
 	struct tf_em_table *tbl;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
@@ -544,7 +544,7 @@ tf_em_ctx_reg(struct tf *tfp,
 	int rc = 0;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
@@ -719,41 +719,41 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		return -EINVAL;
 	}
 	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
 		parms->rx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries =
 		0;
 
 	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
 		parms->tx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries =
 		0;
 
 	return 0;
@@ -1572,11 +1572,11 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 
 		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
 		rc = tf_msg_em_cfg(tfp,
-				   em_tables[KEY0_TABLE].num_entries,
-				   em_tables[KEY0_TABLE].ctx_id,
-				   em_tables[KEY1_TABLE].ctx_id,
-				   em_tables[RECORD_TABLE].ctx_id,
-				   em_tables[EFC_TABLE].ctx_id,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
@@ -1600,9 +1600,9 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 * actions related to a single table scope.
 		 */
 		rc = tf_create_tbl_pool_external(dir,
-					    tbl_scope_cb,
-					    em_tables[RECORD_TABLE].num_entries,
-					    em_tables[RECORD_TABLE].entry_size);
+				    tbl_scope_cb,
+				    em_tables[TF_RECORD_TABLE].num_entries,
+				    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
 			PMD_DRV_LOG(ERR,
 				    "%d TBL: Unable to allocate idx pools %s\n",
@@ -1672,7 +1672,7 @@ tf_set_tbl_entry(struct tf *tfp,
 		base_addr = tf_em_get_table_page(tbl_scope_cb,
 						 parms->dir,
 						 offset,
-						 RECORD_TABLE);
+						 TF_RECORD_TABLE);
 		if (base_addr == NULL) {
 			PMD_DRV_LOG(ERR,
 				    "dir:%d, Base address lookup failed\n",
@@ -1972,7 +1972,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
 
-		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
 			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
 			printf
 	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index a8bb0edab..ee8a14665 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -14,18 +14,18 @@
 struct tf_session;
 
 enum tf_pg_tbl_lvl {
-	PT_LVL_0,
-	PT_LVL_1,
-	PT_LVL_2,
-	PT_LVL_MAX
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
 };
 
 enum tf_em_table_type {
-	KEY0_TABLE,
-	KEY1_TABLE,
-	RECORD_TABLE,
-	EFC_TABLE,
-	MAX_TABLE
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
 };
 
 struct tf_em_page_tbl {
@@ -41,15 +41,15 @@ struct tf_em_table {
 	uint16_t			ctx_id;
 	uint32_t			entry_size;
 	int				num_lvl;
-	uint32_t			page_cnt[PT_LVL_MAX];
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
 	uint64_t			num_data_pages;
 	void				*l0_addr;
 	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
 };
 
 struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[MAX_TABLE];
+	struct tf_em_table		em_tables[TF_MAX_TABLE];
 };
 
 /** table scope control block content */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 16/51] net/bnxt: add core changes for EM and EEM lookups
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (14 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
@ 2020-07-02  4:10         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
                           ` (34 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:10 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher, Venkat Duvvuru, Shahaji Bhosle

From: Randy Schacher <stuart.schacher@broadcom.com>

- Move External Exact and Exact Match to device module using HCAPI
  to add and delete entries
- Make EM active through the device interface.

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Shahaji Bhosle <shahaji.bhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                 |   3 +-
 drivers/net/bnxt/hcapi/cfa_p40_hw.h       | 781 ++++++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 ---
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     |   2 +-
 drivers/net/bnxt/tf_core/Makefile         |   8 +
 drivers/net/bnxt/tf_core/hwrm_tf.h        |  24 +-
 drivers/net/bnxt/tf_core/tf_core.c        | 441 ++++++------
 drivers/net/bnxt/tf_core/tf_core.h        | 141 ++--
 drivers/net/bnxt/tf_core/tf_device.h      |  32 +
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   3 +
 drivers/net/bnxt/tf_core/tf_em.c          | 567 +++++-----------
 drivers/net/bnxt/tf_core/tf_em.h          |  72 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |  23 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |   4 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  25 +-
 drivers/net/bnxt/tf_core/tf_rm.c          | 156 +++--
 drivers/net/bnxt/tf_core/tf_tbl.c         | 437 +++++-------
 drivers/net/bnxt/tf_core/tf_tbl.h         |  49 +-
 18 files changed, 1627 insertions(+), 1233 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 delete mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 365627499..349b09c36 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -46,9 +46,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
-CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
+include $(SRCDIR)/hcapi/Makefile
 endif
 
 #
diff --git a/drivers/net/bnxt/hcapi/cfa_p40_hw.h b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
new file mode 100644
index 000000000..172706f12
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
@@ -0,0 +1,781 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_hw.h
+ *
+ * Description: header for SWE based on Truflow
+ *
+ * Date:  taken from 12/16/19 17:18:12
+ *
+ * Note:  This file was first generated using tflib_decode.py.
+ *
+ *        Changes have been made due to lack of availability of xml for
+ *        additional tables at this time (EEM Record and union table fields).
+ *        Changes not autogenerated are noted in comments.
+ */
+
+#ifndef _CFA_P40_HW_H_
+#define _CFA_P40_HW_H_
+
+/**
+ * Valid TCAM entry. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Key type (pass). (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS 164
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS 2
+/**
+ * Tunnel HDR type. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS 160
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Number of VLAN tags in tunnel l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS 158
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Number of VLAN tags in l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS 156
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS    108
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS  48
+/**
+ * Tunnel Outer VLAN Tag ID. (for idx 3 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS  96
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS 12
+/**
+ * Tunnel Inner VLAN Tag ID. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS  84
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS 12
+/**
+ * Source Partition. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  80
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+/**
+ * Source Virtual I/F. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  8
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS    24
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS  48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS    12
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS  12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS    0
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS  12
+
+enum cfa_p40_prof_l2_ctxt_tcam_flds {
+	CFA_P40_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 1,
+	CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 2,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 3,
+	CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 4,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC1_FLD = 5,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_FLD = 6,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_FLD = 7,
+	CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_FLD = 8,
+	CFA_P40_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P40_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P40_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 167
+
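A sketch tying these generated field IDs to the generic put API from
hcapi_cfa_defs.h; how the per-device layout table is obtained is not shown in
this patch, so the layout lookup below is an assumption:

    #include <stdint.h>
    #include "hcapi_cfa_defs.h"
    #include "hcapi_cfa_p4.h"

    static int example_set_l2ctxt_svif(const struct hcapi_cfa_layout_tbl *ltbl,
                                       uint8_t svif)
    {
            /* The layout array is indexed by the CFA HW table ID */
            const struct hcapi_cfa_layout *layout =
                    &ltbl->tbl[CFA_P4_TBL_L2CTXT_TCAM];
            /* Three 64-bit words cover the 167-bit L2 context TCAM key */
            uint64_t key[3] = { 0 };

            return hcapi_cfa_put_field(key, layout,
                                       CFA_P40_PROF_L2_CTXT_TCAM_SVIF_FLD, svif);
    }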
+/**
+ * Valid entry. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_VALID_BITPOS        79
+#define CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS      1
+/**
+ * reserved program to 0. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS     78
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS   1
+/**
+ * PF Parif Number. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS     74
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS   4
+/**
+ * Number of VLAN Tags. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS    72
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS  2
+/**
+ * Dest. MAC Address.
+ */
+#define CFA_P40_ACT_VEB_TCAM_MAC_BITPOS          24
+#define CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS        48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_OVID_BITPOS         12
+#define CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS       12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_IVID_BITPOS         0
+#define CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS       12
+
+enum cfa_p40_act_veb_tcam_flds {
+	CFA_P40_ACT_VEB_TCAM_VALID_FLD = 0,
+	CFA_P40_ACT_VEB_TCAM_RESERVED_FLD = 1,
+	CFA_P40_ACT_VEB_TCAM_PARIF_IN_FLD = 2,
+	CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_FLD = 3,
+	CFA_P40_ACT_VEB_TCAM_MAC_FLD = 4,
+	CFA_P40_ACT_VEB_TCAM_OVID_FLD = 5,
+	CFA_P40_ACT_VEB_TCAM_IVID_FLD = 6,
+	CFA_P40_ACT_VEB_TCAM_MAX_FLD
+};
+
+#define CFA_P40_ACT_VEB_TCAM_TOTAL_NUM_BITS 80
+
+/**
+ * Entry is valid.
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS 18
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS 1
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS 2
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS 16
+/**
+ * for resolving TCAM/EM conflicts
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS 0
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS 2
+
+enum cfa_p40_lkup_tcam_record_mem_flds {
+	CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_FLD = 0,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_FLD = 1,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_FLD = 2,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_MAX_FLD
+};
+
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_TOTAL_NUM_BITS 19
+
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS 62
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_tpid_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_MAX = 0x3UL
+};
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS 60
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_pri_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_MAX = 0x3UL
+};
+/**
+ * Bypass Source Properties Lookup. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS 59
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS 1
+/**
+ * SP Record Pointer. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS 43
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS 16
+/**
+ * BD Action pointer passing enable. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS 42
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS 1
+/**
+ * Default VLAN TPID. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS 39
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS 3
+/**
+ * Allowed VLAN TPIDs. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS 33
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS 6
+/**
+ * Default VLAN PRI.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS 30
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS 3
+/**
+ * Allowed VLAN PRIs.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS 22
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS 8
+/**
+ * Partition.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS 18
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS 4
+/**
+ * Bypass Lookup.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS 17
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS 1
+
+/**
+ * L2 Context Remap Data. Action bypass mode (1) {7'd0,prof_vnic[9:0]} Note:
+ * should also set byp_lkup_en. Action bypass mode (0) byp_lkup_en(0) -
+ * {prof_func[6:0],l2_context[9:0]} byp_lkup_en(1) - {1'b0,act_rec_ptr[15:0]}
+ */
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS 12
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS 10
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS 7
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS 10
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS 16
+
+enum cfa_p40_prof_ctxt_remap_mem_flds {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_FLD = 0,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_FLD = 1,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_FLD = 2,
+	CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_FLD = 3,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_FLD = 4,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_FLD = 5,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_FLD = 6,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_FLD = 7,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_FLD = 8,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_FLD = 9,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_FLD = 10,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_FLD = 11,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_FLD = 12,
+	CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_FLD = 13,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ARP_FLD = 14,
+	CFA_P40_PROF_CTXT_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TOTAL_NUM_BITS 64
+
+/**
+ * Bypass action pointer look up (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS 37
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS 1
+/**
+ * Exact match search enable (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS 36
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS 1
+/**
+ * Exact match profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS 28
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS 8
+/**
+ * Exact match key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS 5
+/**
+ * Exact match key mask
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS 13
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS 10
+/**
+ * TCAM search enable
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS 12
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS 1
+/**
+ * TCAM profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS 4
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS 8
+/**
+ * TCAM key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS 4
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS 2
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS 16
+
+enum cfa_p40_prof_profile_tcam_remap_mem_flds {
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_FLD = 9,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TOTAL_NUM_BITS 38
+
+/**
+ * Valid TCAM entry (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS   80
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS 1
+/**
+ * Packet type (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS 76
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS 4
+/**
+ * Pass through CFA (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS 74
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS 2
+/**
+ * Aggregate error (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS 73
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS 1
+/**
+ * Profile function (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS 66
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS 7
+/**
+ * Reserved for future use. Set to 0.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS 57
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS 9
+/**
+ * non-tunnel(0)/tunneled(1) packet (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS 56
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS 1
+/**
+ * Tunnel L2 tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS 55
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L2 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS 53
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped tunnel L2 dest_type UC(0)/MC(2)/BC(3) (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS 51
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS 2
+/**
+ * Tunnel L2 1+ VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS 50
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * Tunnel L2 2 VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS 49
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS 1
+/**
+ * Tunnel L3 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS 48
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS 1
+/**
+ * Tunnel L3 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS 47
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS 1
+/**
+ * Tunnel L3 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS 43
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L3 header is IPV4 or IPV6. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS 42
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 src address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS 41
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 dest address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS 40
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * Tunnel L4 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS 39
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L4 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS 38
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS 1
+/**
+ * Tunnel L4 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS 34
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L4 header is UDP or TCP (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS 33
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS 1
+/**
+ * Tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS 32
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS 31
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS 1
+/**
+ * Tunnel header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS 27
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel header flags
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS 24
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS 3
+/**
+ * L2 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS 1
+/**
+ * L2 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS 22
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS 1
+/**
+ * L2 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS 20
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped L2 dest_type UC(0)/MC(2)/BC(3)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS 18
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS 2
+/**
+ * L2 header 1+ VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS 17
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * L2 header 2 VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS 1
+/**
+ * L3 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS 15
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS 1
+/**
+ * L3 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS 14
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS 1
+/**
+ * L3 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS 10
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS 4
+/**
+ * L3 header is IPV4 or IPV6.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS 9
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS 1
+/**
+ * L3 header IPV6 src address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS 8
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * L3 header IPV6 dest address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS 7
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * L4 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS 6
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS 1
+/**
+ * L4 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS 5
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS 1
+/**
+ * L4 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS 1
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS 4
+/**
+ * L4 header is UDP or TCP
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS 1
+
+enum cfa_p40_prof_profile_tcam_flds {
+	CFA_P40_PROF_PROFILE_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_RESERVED_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_FLD = 9,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_FLD = 10,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_FLD = 11,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_FLD = 12,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_FLD = 13,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_FLD = 14,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_FLD = 15,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_FLD = 16,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_FLD = 17,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_FLD = 18,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_FLD = 19,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_FLD = 20,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_FLD = 21,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_FLD = 22,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_FLD = 23,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_FLD = 24,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_FLD = 25,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_FLD = 26,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_FLD = 27,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_FLD = 28,
+	CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_FLD = 29,
+	CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_FLD = 30,
+	CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_FLD = 31,
+	CFA_P40_PROF_PROFILE_TCAM_L3_VALID_FLD = 32,
+	CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_FLD = 33,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_FLD = 34,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_FLD = 35,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_FLD = 36,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_FLD = 37,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_FLD = 38,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_FLD = 39,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_FLD = 40,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_FLD = 41,
+	CFA_P40_PROF_PROFILE_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_TOTAL_NUM_BITS 81
+
+/**
+ * CFA flexible key layout definition
+ */
+enum cfa_p40_key_fld_id {
+	CFA_P40_KEY_FLD_ID_MAX
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+/**
+ * Valid
+ */
+#define CFA_P40_EEM_KEY_TBL_VALID_BITPOS 0
+#define CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS 1
+
+/**
+ * L1 Cacheable
+ */
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS 1
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS 1
+
+/**
+ * Strength
+ */
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS 2
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS 2
+
+/**
+ * Key Size
+ */
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS 15
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS 9
+
+/**
+ * Record Size
+ */
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS 24
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS 5
+
+/**
+ * Action Record Internal
+ */
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS 29
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS 1
+
+/**
+ * External Flow Counter
+ */
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS 30
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS 31
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS 33
+
+/**
+ * EEM key omitted; create it using keybuilder.
+ * Fields here cannot be larger than a uint64_t.
+ */
+
+#define CFA_P40_EEM_KEY_TBL_TOTAL_NUM_BITS 64
+
+enum cfa_p40_eem_key_tbl_flds {
+	CFA_P40_EEM_KEY_TBL_VALID_FLD = 0,
+	CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_FLD = 1,
+	CFA_P40_EEM_KEY_TBL_STRENGTH_FLD = 2,
+	CFA_P40_EEM_KEY_TBL_KEY_SZ_FLD = 3,
+	CFA_P40_EEM_KEY_TBL_REC_SZ_FLD = 4,
+	CFA_P40_EEM_KEY_TBL_ACT_REC_INT_FLD = 5,
+	CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_FLD = 6,
+	CFA_P40_EEM_KEY_TBL_AR_PTR_FLD = 7,
+	CFA_P40_EEM_KEY_TBL_MAX_FLD
+};
+
+/**
+ * Mirror Destination 0 Source Property Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_SP_PTR_BITPOS 0
+#define CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS 11
+
+/**
+ * Ignore or honor drop.
+ */
+#define CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS 13
+#define CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS 1
+
+/**
+ * Ingress or egress copy.
+ */
+#define CFA_P40_MIRROR_TBL_COPY_BITPOS 14
+#define CFA_P40_MIRROR_TBL_COPY_NUM_BITS 1
+
+/**
+ * Mirror Destination enable.
+ */
+#define CFA_P40_MIRROR_TBL_EN_BITPOS 15
+#define CFA_P40_MIRROR_TBL_EN_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_AR_PTR_BITPOS 16
+#define CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS 16
+
+#define CFA_P40_MIRROR_TBL_TOTAL_NUM_BITS 32
+
+enum cfa_p40_mirror_tbl_flds {
+	CFA_P40_MIRROR_TBL_SP_PTR_FLD = 0,
+	CFA_P40_MIRROR_TBL_IGN_DROP_FLD = 1,
+	CFA_P40_MIRROR_TBL_COPY_FLD = 2,
+	CFA_P40_MIRROR_TBL_EN_FLD = 3,
+	CFA_P40_MIRROR_TBL_AR_PTR_FLD = 4,
+	CFA_P40_MIRROR_TBL_MAX_FLD
+};
+
+/**
+ * P45 Specific Updates (SR) - Non-autogenerated
+ */
+/**
+ * Valid TCAM entry.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Source Partition.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  166
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+
+/**
+ * Source Virtual I/F.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  12
+
+
+/* The SR layout of the l2 ctxt key is different from the Wh+.  Switch to
+ * cfa_p45_hw.h definition when available.
+ */
+enum cfa_p45_prof_l2_ctxt_tcam_flds {
+	CFA_P45_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_FLD = 1,
+	CFA_P45_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 2,
+	CFA_P45_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 3,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 4,
+	CFA_P45_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 5,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC1_FLD = 6,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_OVID_FLD = 7,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_IVID_FLD = 8,
+	CFA_P45_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P45_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P45_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P45_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 171
+
+#endif /* _CFA_P40_HW_H_ */
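
One of the less obvious parts of this header is the overlaid low-order field of the
L2 context remap entry (see the CFA_P40_PROF_CTXT_REMAP_MEM layout comment above):
the same bits carry prof_vnic, prof_func/l2_context, or act_rec_ptr depending on the
bypass flags. A minimal, hypothetical C sketch of that decode; the function and its
arguments are illustrative only and not part of the driver:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of the overlaid low bits of an L2 context remap
 * entry, following the layout comment in cfa_p40_hw.h above. Assumes
 * the caller has already extracted the low-order bits and both flags.
 */
static void decode_remap_low_bits(uint32_t low_bits, bool act_bypass,
				  bool byp_lkup_en)
{
	if (act_bypass) {
		/* {7'd0, prof_vnic[9:0]} */
		printf("prof_vnic = 0x%x\n", low_bits & 0x3ff);
	} else if (!byp_lkup_en) {
		/* {prof_func[6:0], l2_context[9:0]} */
		printf("prof_func = 0x%x, l2_context = 0x%x\n",
		       (low_bits >> 10) & 0x7f, low_bits & 0x3ff);
	} else {
		/* {1'b0, act_rec_ptr[15:0]} */
		printf("act_rec_ptr = 0x%x\n", low_bits & 0xffff);
	}
}
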
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
deleted file mode 100644
index 39afd4dbc..000000000
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
+++ /dev/null
@@ -1,92 +0,0 @@
-/*
- *   Copyright(c) 2019-2020 Broadcom Limited.
- *   All rights reserved.
- */
-
-#include "bitstring.h"
-#include "hcapi_cfa_defs.h"
-#include <errno.h>
-#include "assert.h"
-
-/* HCAPI CFA common PUT APIs */
-int hcapi_cfa_put_field(uint64_t *data_buf,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id, uint64_t val)
-{
-	assert(layout);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		bs_put_msb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	else
-		bs_put_lsb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	return 0;
-}
-
-int hcapi_cfa_put_fields(uint64_t *obj_data,
-			 const struct hcapi_cfa_layout *layout,
-			 struct hcapi_cfa_data_obj *field_tbl,
-			 uint16_t field_tbl_sz)
-{
-	int i;
-	uint16_t bitpos;
-	uint8_t bitlen;
-	uint16_t field_id;
-
-	assert(layout);
-	assert(field_tbl);
-
-	if (layout->is_msb_order) {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_msb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	} else {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_lsb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	}
-	return 0;
-}
-
-/* HCAPI CFA common GET APIs */
-int hcapi_cfa_get_field(uint64_t *obj_data,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id,
-			uint64_t *val)
-{
-	assert(layout);
-	assert(val);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		*val = bs_get_msb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	else
-		*val = bs_get_lsb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	return 0;
-}
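
The deleted hcapi_cfa_put_field()/hcapi_cfa_get_field() helpers above were the generic
consumers of the BITPOS/NUM_BITS pairs defined in cfa_p40_hw.h. As a stand-alone
illustration of that convention (not the driver's actual bitstring code), here is
LSB-order packing of a 32-bit mirror-table word using the CFA_P40_MIRROR_TBL_*
positions from the header:

#include <stdint.h>

/* Positions copied from cfa_p40_hw.h above. */
#define MIRROR_SP_PTR_BITPOS   0
#define MIRROR_SP_PTR_NUM_BITS 11
#define MIRROR_EN_BITPOS       15
#define MIRROR_EN_NUM_BITS     1
#define MIRROR_AR_PTR_BITPOS   16
#define MIRROR_AR_PTR_NUM_BITS 16

/* Place val into word at the given LSB-order position/width. Purely
 * illustrative; the driver uses its own bitstring helpers for this.
 */
static uint32_t set_field(uint32_t word, uint32_t val,
			  unsigned int bitpos, unsigned int num_bits)
{
	uint32_t mask = ((1U << num_bits) - 1U) << bitpos;

	return (word & ~mask) | ((val << bitpos) & mask);
}

static uint32_t build_mirror_entry(uint16_t sp_ptr, uint16_t ar_ptr)
{
	uint32_t word = 0;

	word = set_field(word, sp_ptr, MIRROR_SP_PTR_BITPOS,
			 MIRROR_SP_PTR_NUM_BITS);
	word = set_field(word, 1, MIRROR_EN_BITPOS, MIRROR_EN_NUM_BITS);
	word = set_field(word, ar_ptr, MIRROR_AR_PTR_BITPOS,
			 MIRROR_AR_PTR_NUM_BITS);
	return word;
}
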
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index ca0b1c923..42b37da0f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2019-2020 Broadcom
  * All rights reserved.
  */
-
+#include <inttypes.h>
 #include <stdint.h>
 #include <stdlib.h>
 #include <stdbool.h>
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 2c02e29e7..5ed32f12a 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -24,3 +24,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
+
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_device.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index c04d1034a..1e78296c6 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -27,8 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
+	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -82,8 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_tbl_type_get_bulk_input;
-struct tf_tbl_type_get_bulk_output;
+struct tf_tbl_type_bulk_get_input;
+struct tf_tbl_type_bulk_get_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -905,8 +905,6 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -922,17 +920,17 @@ typedef struct tf_tbl_type_get_output {
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
-typedef struct tf_tbl_type_get_bulk_input {
+typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
 	uint16_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_TX	   (0x1)
 	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Starting index to get from */
@@ -941,12 +939,12 @@ typedef struct tf_tbl_type_get_bulk_input {
 	uint32_t			 num_entries;
 	/* Host memory where data will be stored */
 	uint64_t			 host_addr;
-} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+} tf_tbl_type_bulk_get_input_t, *ptf_tbl_type_bulk_get_input_t;
 
 /* Output params for table type get */
-typedef struct tf_tbl_type_get_bulk_output {
+typedef struct tf_tbl_type_bulk_get_output {
 	/* Size of the total data read in bytes */
 	uint16_t			 size;
-} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+} tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
 #endif /* _HWRM_TF_H_ */
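
For clarity, the renamed bulk-get request above can be pictured as below. This is a
hypothetical, stand-alone mirror of tf_tbl_type_bulk_get_input_t used only to show
the message layout; the driver populates the real structure through its tf_msg layer.

#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for tf_tbl_type_bulk_get_input_t, for
 * illustration of the renamed message only.
 */
struct bulk_get_input {
	uint32_t fw_session_id;
	uint16_t flags;        /* BULK_GET_INPUT_FLAGS_DIR_RX/TX */
	uint32_t type;         /* table type to read */
	uint32_t starting_idx;
	uint32_t num_entries;
	uint64_t host_addr;    /* DMA-able buffer for the results */
};

static void fill_bulk_get(struct bulk_get_input *req, uint32_t fw_session_id,
			  uint32_t type, uint32_t start, uint32_t count,
			  uint64_t dma_addr)
{
	memset(req, 0, sizeof(*req));
	req->fw_session_id = fw_session_id;
	req->flags = 0x1;	/* e.g. ..._FLAGS_DIR_TX */
	req->type = type;
	req->starting_idx = start;
	req->num_entries = count;
	req->host_addr = dma_addr;
}
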
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index eac57e7bd..648d0d1bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,33 +19,41 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static inline uint32_t SWAP_WORDS32(uint32_t val32)
+static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
+			       enum tf_device_type device,
+			       uint16_t key_sz_in_bits,
+			       uint16_t *num_slice_per_row)
 {
-	return (((val32 & 0x0000ffff) << 16) |
-		((val32 & 0xffff0000) >> 16));
-}
+	uint16_t key_bytes;
+	uint16_t slice_sz = 0;
+
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
+
+	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
+		if (device == TF_DEVICE_TYPE_WH) {
+			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
+			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		} else {
+			TFP_DRV_LOG(ERR,
+				    "Unsupported device type %d\n",
+				    device);
+			return -ENOTSUP;
+		}
 
-static void tf_seeds_init(struct tf_session *session)
-{
-	int i;
-	uint32_t r;
-
-	/* Initialize the lfsr */
-	rand_init();
-
-	/* RX and TX use the same seed values */
-	session->lkup_lkup3_init_cfg[TF_DIR_RX] =
-		session->lkup_lkup3_init_cfg[TF_DIR_TX] =
-						SWAP_WORDS32(rand32());
-
-	for (i = 0; i < TF_LKUP_SEED_MEM_SIZE / 2; i++) {
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2] = r;
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2] = r;
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2 + 1] = (r & 0x1);
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2 + 1] = (r & 0x1);
+		if (key_bytes > *num_slice_per_row * slice_sz) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Key size %d is not supported\n",
+				    tf_tcam_tbl_2_str(tcam_tbl_type),
+				    key_bytes);
+			return -ENOTSUP;
+		}
+	} else { /* for other TCAM types */
+		*num_slice_per_row = 1;
 	}
+
+	return 0;
 }
 
 /**
@@ -153,15 +161,18 @@ tf_open_session(struct tf                    *tfp,
 	uint8_t fw_session_id;
 	int dir;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
 	 * firmware open session succeeds.
 	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH)
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
 		return -ENOTSUP;
+	}
 
 	/* Build the beginning of session_id */
 	rc = sscanf(parms->ctrl_chan_name,
@@ -171,7 +182,7 @@ tf_open_session(struct tf                    *tfp,
 		    &slot,
 		    &device);
 	if (rc != 4) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "Failed to scan device ctrl_chan_name\n");
 		return -EINVAL;
 	}
@@ -183,13 +194,13 @@ tf_open_session(struct tf                    *tfp,
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
-			PMD_DRV_LOG(ERR,
-				    "Session is already open, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
 		else
-			PMD_DRV_LOG(ERR,
-				    "Open message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
 
 		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
 		return rc;
@@ -202,13 +213,13 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
-	tfp->session = alloc_parms.mem_va;
+	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -217,9 +228,9 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -240,12 +251,13 @@ tf_open_session(struct tf                    *tfp,
 	session->session_id.internal.device = device;
 	session->session_id.internal.fw_session_id = fw_session_id;
 
+	/* Query for Session Config
+	 */
 	rc = tf_msg_session_qcfg(tfp);
 	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
@@ -256,9 +268,9 @@ tf_open_session(struct tf                    *tfp,
 #if (TF_SHADOW == 1)
 		rc = tf_rm_shadow_db_init(tfs);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%d",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Shadow DB Initialization failed, rc:%s\n",
+				    strerror(-rc));
 		/* Add additional processing */
 #endif /* TF_SHADOW */
 	}
@@ -266,13 +278,12 @@ tf_open_session(struct tf                    *tfp,
 	/* Adjust the Session with what firmware allowed us to get */
 	rc = tf_rm_allocate_validate(tfp);
 	if (rc) {
-		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Rm allocate validate failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
-	/* Setup hash seeds */
-	tf_seeds_init(session);
-
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		rc = tf_create_em_pool(session,
@@ -290,11 +301,11 @@ tf_open_session(struct tf                    *tfp,
 	/* Return session ID */
 	parms->session_id = session->session_id;
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session created, session_id:%d\n",
 		    parms->session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
@@ -379,8 +390,7 @@ tf_attach_session(struct tf *tfp __rte_unused,
 #if (TF_SHARED == 1)
 	int rc;
 
-	if (tfp == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	/* - Open the shared memory for the attach_chan_name
 	 * - Point to the shared session for this Device instance
@@ -389,12 +399,10 @@ tf_attach_session(struct tf *tfp __rte_unused,
 	 *   than one client of the session.
 	 */
 
-	if (tfp->session) {
-		if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-			rc = tf_msg_session_attach(tfp,
-						   parms->ctrl_chan_name,
-						   parms->session_id);
-		}
+	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_attach(tfp,
+					   parms->ctrl_chan_name,
+					   parms->session_id);
 	}
 #endif /* TF_SHARED */
 	return -1;
@@ -472,8 +480,7 @@ tf_close_session(struct tf *tfp)
 	union tf_session_id session_id;
 	int dir;
 
-	if (tfp == NULL || tfp->session == NULL)
-		return -EINVAL;
+	TF_CHECK_TFP_SESSION(tfp);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -487,9 +494,9 @@ tf_close_session(struct tf *tfp)
 		rc = tf_msg_session_close(tfp);
 		if (rc) {
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "Message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Message send failed, rc:%s\n",
+				    strerror(-rc));
 		}
 
 		/* Update the ref_count */
@@ -509,11 +516,11 @@ tf_close_session(struct tf *tfp)
 		tfp->session = NULL;
 	}
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session closed, session_id:%d\n",
 		    session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    session_id.internal.domain,
 		    session_id.internal.bus,
@@ -565,27 +572,39 @@ tf_close_session_new(struct tf *tfp)
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
-					 (tfp->session->core_data),
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL) {
-		/* External EEM */
-		return tf_insert_eem_entry((struct tf_session *)
-					   (tfp->session->core_data),
-					   tbl_scope_cb,
-					   parms);
-	} else if (parms->mem == TF_MEM_INTERNAL) {
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM insert failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
 	return -EINVAL;
@@ -600,27 +619,44 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
-					 (tfp->session->core_data),
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tfp, parms);
-	else
-		return tf_delete_em_internal_entry(tfp, parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM delete failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
 }
 
-/** allocate identifier resource
- *
- * Returns success or failure code.
- */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms)
 {
@@ -629,14 +665,7 @@ int tf_alloc_identifier(struct tf *tfp,
 	int id;
 	int rc;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -662,30 +691,31 @@ int tf_alloc_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: %s\n",
+		TFP_DRV_LOG(ERR, "%s: %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: identifier pool %s failure\n",
+		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	id = ba_alloc(session_pool);
 
 	if (id == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		return -ENOMEM;
@@ -763,14 +793,7 @@ int tf_free_identifier(struct tf *tfp,
 	int ba_rc;
 	struct tf_session *tfs;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -796,29 +819,31 @@ int tf_free_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: %s Identifier pool access failed\n",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s Identifier pool access failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	ba_rc = ba_inuse(session_pool, (int)parms->id);
 
 	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type),
 			    parms->id);
@@ -893,21 +918,30 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index = 0;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
+	/* TEMP, due to device design. When TCAM is modularized, the
+	 * device type should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -916,36 +950,16 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority) {
-		for (index = session_pool->size - 1; index >= 0; index--) {
-			if (ba_inuse(session_pool,
-					  index) == BA_ENTRY_FREE) {
-				break;
-			}
-		}
-		if (ba_alloc_index(session_pool,
-				   index) == BA_FAIL) {
-			TFP_DRV_LOG(ERR,
-				    "%s: %s: ba_alloc index %d failed\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-				    index);
-			return -ENOMEM;
-		}
-	} else {
-		index = ba_alloc(session_pool);
-		if (index == BA_FAIL) {
-			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-			return -ENOMEM;
-		}
+	index = ba_alloc(session_pool);
+	if (index == BA_FAIL) {
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+		return -ENOMEM;
 	}
 
+	index *= num_slice_per_row;
+
 	parms->idx = index;
 	return 0;
 }
@@ -956,26 +970,29 @@ tf_set_tcam_entry(struct tf *tfp,
 {
 	int rc;
 	int id;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. When TCAM is modularized, the
+	 * device type should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
-	/*
-	 * Each tcam send msg function should check for key sizes range
-	 */
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
 
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
@@ -985,11 +1002,12 @@ tf_set_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 		   "%s: %s: Invalid or not allocated index, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
@@ -1006,21 +1024,8 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	int rc = -EOPNOTSUPP;
-
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	return rc;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+	return -EOPNOTSUPP;
 }
 
 int
@@ -1028,20 +1033,29 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row = 1;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. When TCAM is modularized, the
+	 * device type should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 0,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -1050,24 +1064,27 @@ tf_free_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	rc = ba_inuse(session_pool, (int)parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	rc = ba_inuse(session_pool, index);
 	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    index);
 		return -EINVAL;
 	}
 
-	ba_free(session_pool, (int)parms->idx);
+	ba_free(session_pool, index);
 
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d free failed",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    parms->idx,
+			    strerror(-rc));
 	}
 
 	return rc;
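
The new tf_check_tcam_entry() above hard-codes the Whitney+ WC TCAM geometry (two
12-byte slices per row), and the alloc/set/free paths scale the caller's index by
slices-per-row. A self-contained sketch of that arithmetic; BITS2BYTES_WORD_ALIGN
below is a plausible stand-in for TF_BITS2BYTES_WORD_ALIGN, whose definition is not
shown in this patch:

#include <stdint.h>
#include <stdio.h>

#define WC_TCAM_SLICES_PER_ROW 2	/* CFA_P4_WC_TCAM_SLICES_PER_ROW */
#define WC_TCAM_SLICE_SIZE     12	/* CFA_P4_WC_TCAM_SLICE_SIZE, bytes */
/* Assumed behaviour of TF_BITS2BYTES_WORD_ALIGN: round up to a 32-bit
 * boundary and convert to bytes.
 */
#define BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) / 32) * 4)

int main(void)
{
	uint16_t key_sz_in_bits = 160;
	uint16_t key_bytes = BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
	int pool_index = 7;	/* what ba_alloc() might hand back */

	if (key_bytes > WC_TCAM_SLICES_PER_ROW * WC_TCAM_SLICE_SIZE) {
		printf("key of %d bytes does not fit one row\n", key_bytes);
		return 1;
	}
	/* The index returned to the caller is slice-aligned; the set and
	 * free paths divide it back down before touching the pool.
	 */
	printf("key bytes %d, returned idx %d\n",
	       key_bytes, pool_index * WC_TCAM_SLICES_PER_ROW);
	return 0;
}
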
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index f1ef00b30..bb456bba7 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -10,7 +10,7 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <stdio.h>
-
+#include "hcapi/hcapi_cfa.h"
 #include "tf_project.h"
 
 /**
@@ -54,6 +54,7 @@ enum tf_mem {
 #define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
 #define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
 
+
 /*
  * Helper Macros
  */
@@ -132,34 +133,40 @@ struct tf_session_version {
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
-	TF_DEVICE_TYPE_BRD2,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD3,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD4,   /**< TBD       */
+	TF_DEVICE_TYPE_SR,     /**< Stingray  */
+	TF_DEVICE_TYPE_THOR,   /**< Thor      */
+	TF_DEVICE_TYPE_SR2,    /**< Stingray2 */
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
-/** Identifier resource types
+/**
+ * Identifier resource types
  */
 enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The L2 Context is returned from the L2 Ctxt TCAM lookup
 	 *  and can be used in WC TCAM or EM keys to virtualize further
 	 *  lookups.
 	 */
 	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The WC profile func is returned from the L2 Ctxt TCAM lookup
 	 *  to enable virtualization of the profile TCAM.
 	 */
 	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
+	/**
+	 *  The WC profile ID is included in the WC lookup key
 	 *  to enable virtualization of the WC TCAM hardware.
 	 */
 	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
+	/**
+	 *  The EM profile ID is included in the EM lookup key
 	 *  to enable virtualization of the EM hardware. (not required for SR2
 	 *  as it has table scope)
 	 */
 	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
+	/**
+	 *  The L2 func is included in the ILT result and from recycling to
 	 *  enable virtualization of further lookups.
 	 */
 	TF_IDENT_TYPE_L2_FUNC,
@@ -239,7 +246,8 @@ enum tf_tbl_type {
 
 	/* External */
 
-	/** External table type - initially 1 poolsize entries.
+	/**
+	 * External table type - initially 1 poolsize entries.
 	 * All External table types are associated with a table
 	 * scope. Internal types are not.
 	 */
@@ -279,13 +287,17 @@ enum tf_em_tbl_type {
 	TF_EM_TBL_TYPE_MAX
 };
 
-/** TruFlow Session Information
+/**
+ * TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
  * session. This structure is initialized at time of
  * tf_open_session(). It is passed to all of the TruFlow APIs as way
  * to prescribe and isolate resources between different TruFlow ULP
  * Applications.
+ *
+ * Ownership of the elements is split between ULP and TruFlow. Please
+ * see the individual elements.
  */
 struct tf_session_info {
 	/**
@@ -355,7 +367,8 @@ struct tf_session_info {
 	uint32_t              core_data_sz_bytes;
 };
 
-/** TruFlow handle
+/**
+ * TruFlow handle
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
@@ -405,7 +418,8 @@ struct tf_session_resources {
  * tf_open_session parameters definition.
  */
 struct tf_open_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -417,7 +431,8 @@ struct tf_open_session_parms {
 	 * shared memory allocation.
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
-	/** [in] shadow_copy
+	/**
+	 * [in] shadow_copy
 	 *
 	 * Boolean controlling the use and availability of shadow
 	 * copy. Shadow copy will allow the TruFlow to keep track of
@@ -430,7 +445,8 @@ struct tf_open_session_parms {
 	 * control channel.
 	 */
 	bool shadow_copy;
-	/** [in/out] session_id
+	/**
+	 * [in/out] session_id
 	 *
 	 * Session_id is unique per session.
 	 *
@@ -441,7 +457,8 @@ struct tf_open_session_parms {
 	 * The session_id allows a session to be shared between devices.
 	 */
 	union tf_session_id session_id;
-	/** [in] device type
+	/**
+	 * [in] device type
 	 *
 	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
@@ -484,7 +501,8 @@ int tf_open_session_new(struct tf *tfp,
 			struct tf_open_session_parms *parms);
 
 struct tf_attach_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -497,7 +515,8 @@ struct tf_attach_session_parms {
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] attach_chan_name
+	/**
+	 * [in] attach_chan_name
 	 *
 	 * String containing name of attach channel interface to be
 	 * used for this session.
@@ -510,7 +529,8 @@ struct tf_attach_session_parms {
 	 */
 	char attach_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] session_id
+	/**
+	 * [in] session_id
 	 *
 	 * Session_id is unique per session. For Attach the session_id
 	 * should be the session_id that was returned on the first
@@ -565,7 +585,8 @@ int tf_close_session_new(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-/** tf_alloc_identifier parameter definition
+/**
+ * tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
 	/**
@@ -582,7 +603,8 @@ struct tf_alloc_identifier_parms {
 	uint16_t id;
 };
 
-/** tf_free_identifier parameter definition
+/**
+ * tf_free_identifier parameter definition
  */
 struct tf_free_identifier_parms {
 	/**
@@ -599,7 +621,8 @@ struct tf_free_identifier_parms {
 	uint16_t id;
 };
 
-/** allocate identifier resource
+/**
+ * allocate identifier resource
  *
  * TruFlow core will allocate a free id from the per identifier resource type
  * pool reserved for the session during tf_open().  No firmware is involved.
@@ -611,7 +634,8 @@ int tf_alloc_identifier(struct tf *tfp,
 int tf_alloc_identifier_new(struct tf *tfp,
 			    struct tf_alloc_identifier_parms *parms);
 
-/** free identifier resource
+/**
+ * free identifier resource
  *
  * TruFlow core will return an id back to the per identifier resource type pool
  * reserved for the session.  No firmware is involved.  During tf_close, the
@@ -639,7 +663,8 @@ int tf_free_identifier_new(struct tf *tfp,
  */
 
 
-/** tf_alloc_tbl_scope_parms definition
+/**
+ * tf_alloc_tbl_scope_parms definition
  */
 struct tf_alloc_tbl_scope_parms {
 	/**
@@ -662,7 +687,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -684,7 +709,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -709,7 +734,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On Brd4 Firmware will allocate a scope ID.  On other devices, the scope
+ * On SR2, firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
  * divide the hash memory/buckets and records according to the device
  * device constraints based upon calculations using either the number of flows
@@ -719,7 +744,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -750,7 +775,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remains
- * (Brd4 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -758,7 +783,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
 
-
 /**
  * @page tcam TCAM Access
  *
@@ -771,7 +795,9 @@ int tf_free_tbl_scope(struct tf *tfp,
  * @ref tf_free_tcam_entry
  */
 
-/** tf_alloc_tcam_entry parameter definition
+
+/**
+ * tf_alloc_tcam_entry parameter definition
  */
 struct tf_alloc_tcam_entry_parms {
 	/**
@@ -799,9 +825,7 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested
-	 * 0: index from top i.e. highest priority first
-	 * !0: index from bottom i.e lowest priority first
+	 * [in] Priority of entry requested (definition TBD)
 	 */
 	uint32_t priority;
 	/**
@@ -819,7 +843,8 @@ struct tf_alloc_tcam_entry_parms {
 	uint16_t idx;
 };
 
-/** allocate TCAM entry
+/**
+ * allocate TCAM entry
  *
  * Allocate a TCAM entry - one of these types:
  *
@@ -844,7 +869,8 @@ struct tf_alloc_tcam_entry_parms {
 int tf_alloc_tcam_entry(struct tf *tfp,
 			struct tf_alloc_tcam_entry_parms *parms);
 
-/** tf_set_tcam_entry parameter definition
+/**
+ * tf_set_tcam_entry parameter definition
  */
 struct	tf_set_tcam_entry_parms {
 	/**
@@ -881,7 +907,8 @@ struct	tf_set_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** set TCAM entry
+/**
+ * set TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -892,7 +919,8 @@ struct	tf_set_tcam_entry_parms {
 int tf_set_tcam_entry(struct tf	*tfp,
 		      struct tf_set_tcam_entry_parms *parms);
 
-/** tf_get_tcam_entry parameter definition
+/**
+ * tf_get_tcam_entry parameter definition
  */
 struct tf_get_tcam_entry_parms {
 	/**
@@ -929,7 +957,7 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/*
+/**
  * get TCAM entry
  *
  * Read a TCAM table entry for a TruFlow session.
@@ -941,7 +969,7 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/*
+/**
  * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
@@ -963,7 +991,9 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/*
+/**
+ * free TCAM entry
+ *
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -989,6 +1019,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_get_tbl_entry
  */
 
+
 /**
  * tf_alloc_tbl_entry parameter definition
  */
@@ -1201,9 +1232,9 @@ int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
 /**
- * tf_get_bulk_tbl_entry parameter definition
+ * tf_bulk_get_tbl_entry parameter definition
  */
-struct tf_get_bulk_tbl_entry_parms {
+struct tf_bulk_get_tbl_entry_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -1212,11 +1243,6 @@ struct tf_get_bulk_tbl_entry_parms {
 	 * [in] Type of object to get
 	 */
 	enum tf_tbl_type type;
-	/**
-	 * [in] Clear hardware entries on reads only
-	 * supported for TF_TBL_TYPE_ACT_STATS_64
-	 */
-	bool clear_on_read;
 	/**
 	 * [in] Starting index to read from
 	 */
@@ -1250,8 +1276,8 @@ struct tf_get_bulk_tbl_entry_parms {
  * Returns success or failure code. Failure will be returned if the
  * provided data buffer is too small for the data type requested.
  */
-int tf_get_bulk_tbl_entry(struct tf *tfp,
-		     struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_bulk_get_tbl_entry(struct tf *tfp,
+		     struct tf_bulk_get_tbl_entry_parms *parms);
 
 /**
  * @page exact_match Exact Match Table
@@ -1280,7 +1306,7 @@ struct tf_insert_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1332,12 +1358,12 @@ struct tf_delete_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
 	 * [in] epoch group IDs of entry to delete
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * 2 element array with 2 ids. (SR2 only)
 	 */
 	uint16_t *epochs;
 	/**
@@ -1366,7 +1392,7 @@ struct tf_search_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1387,7 +1413,7 @@ struct tf_search_em_entry_parms {
 	uint16_t em_record_sz_in_bits;
 	/**
 	 * [in] epoch group IDs of entry to lookup
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * 2 element array with 2 ids. (SR2 only)
 	 */
 	uint16_t *epochs;
 	/**
@@ -1415,7 +1441,7 @@ struct tf_search_em_entry_parms {
  * specified direction and table scope.
  *
  * When inserting an entry into an exact match table, the TruFlow library may
- * need to allocate a dynamic bucket for the entry (Brd4 only).
+ * need to allocate a dynamic bucket for the entry (SR2 only).
  *
  * The insertion of duplicate entries in an EM table is not permitted.	If a
  * TruFlow application can guarantee that it will never insert duplicates, it
@@ -1490,4 +1516,5 @@ int tf_delete_em_entry(struct tf *tfp,
  */
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
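
A minimal usage sketch of the session API documented above, assuming the tf_core.h
include path and a PCI-style control-channel name; the real ULP code derives
ctrl_chan_name from the port's DBDF and checks the return code of every step:

#include <string.h>

#include "tf_core.h"	/* assumed include path for this sketch */

static int example_open_session(struct tf *tfp)
{
	struct tf_open_session_parms parms;

	memset(&parms, 0, sizeof(parms));
	/* Parsed by tf_open_session() as domain:bus:slot.device */
	strncpy(parms.ctrl_chan_name, "0000:0d:00.0",
		sizeof(parms.ctrl_chan_name) - 1);
	parms.shadow_copy = false;
	parms.device_type = TF_DEVICE_TYPE_WH;

	return tf_open_session(tfp, &parms);
}
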
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 6aeb6fedb..1501b20d9 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -366,6 +366,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_tcam)(struct tf *tfp,
 			       struct tf_tcam_get_parms *parms);
+
+	/**
+	 * Insert EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_em_entry)(struct tf *tfp,
+				      struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM delete parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_em_entry)(struct tf *tfp,
+				      struct tf_delete_em_entry_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c235976fe..f4bd95f1c 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
+#include "tf_em.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -89,4 +90,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_insert_em_entry = tf_em_insert_entry,
+	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
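
Illustrative sketch (not part of the patch): how the two new EM ops are reached through the device-ops table. The helper name is hypothetical; in tf_core the ops pointer is resolved from the session/device context.

#include <errno.h>
#include "tf_core.h"
#include "tf_device.h"

/* Hypothetical dispatch helper; for P4 devices tf_dev_ops_p4 routes
 * this to tf_em_insert_entry().
 */
static int example_dev_em_insert(struct tf *tfp,
				 const struct tf_dev_ops *ops,
				 struct tf_insert_em_entry_parms *parms)
{
	if (ops == NULL || ops->tf_dev_insert_em_entry == NULL)
		return -EINVAL; /* no EM insert support on this device */

	return ops->tf_dev_insert_em_entry(tfp, parms);
}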
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index 38f7fe419..fcbbd7eca 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -17,11 +17,6 @@
 
 #include "bnxt.h"
 
-/* Enable EEM table dump
- */
-#define TF_EEM_DUMP
-
-static struct tf_eem_64b_entry zero_key_entry;
 
 static uint32_t tf_em_get_key_mask(int num_entries)
 {
@@ -36,326 +31,22 @@ static uint32_t tf_em_get_key_mask(int num_entries)
 	return mask;
 }
 
-/* CRC32i support for Key0 hash */
-#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
-#define crc32(x, y) crc32i(~0, x, y)
-
-static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
-0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
-0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
-0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
-0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
-0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
-0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
-0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
-0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
-0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
-0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
-0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
-0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
-0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
-0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
-0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
-0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
-0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
-0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
-0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
-0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
-0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
-0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
-0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
-0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
-0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
-0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
-0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
-0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
-0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
-0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
-0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
-0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
-0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
-0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
-0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
-0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
-0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
-0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
-0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
-0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
-0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
-0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
-0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
-0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
-0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
-0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
-0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
-0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
-0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
-0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
-0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
-0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
-0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
-0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
-0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
-0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
-0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
-0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
-0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
-0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
-0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
-0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
-0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
-0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
-};
-
-static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
-{
-	int l;
-
-	for (l = (len - 1); l >= 0; l--)
-		crc = ucrc32(buf[l], crc);
-
-	return ~crc;
-}
-
-static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
-					  uint8_t *key,
-					  enum tf_dir dir)
-{
-	int i;
-	uint32_t index;
-	uint32_t val1, val2;
-	uint8_t temp[4];
-	uint8_t *kptr = key;
-
-	/* Do byte-wise XOR of the 52-byte HASH key first. */
-	index = *key;
-	kptr--;
-
-	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
-		index = index ^ *kptr;
-		kptr--;
-	}
-
-	/* Get seeds */
-	val1 = session->lkup_em_seed_mem[dir][index * 2];
-	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
-
-	temp[3] = (uint8_t)(val1 >> 24);
-	temp[2] = (uint8_t)(val1 >> 16);
-	temp[1] = (uint8_t)(val1 >> 8);
-	temp[0] = (uint8_t)(val1 & 0xff);
-	val1 = 0;
-
-	/* Start with seed */
-	if (!(val2 & 0x1))
-		val1 = crc32i(~val1, temp, 4);
-
-	val1 = crc32i(~val1,
-		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
-		      TF_HW_EM_KEY_MAX_SIZE);
-
-	/* End with seed */
-	if (val2 & 0x1)
-		val1 = crc32i(~val1, temp, 4);
-
-	return val1;
-}
-
-static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
-					    uint8_t *in_key)
-{
-	uint32_t val1;
-
-	val1 = hashword(((uint32_t *)in_key) + 1,
-			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
-			 lookup3_init_value);
-
-	return val1;
-}
-
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum tf_em_table_type table_type)
-{
-	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
-	void *addr = NULL;
-	struct tf_em_ctx_mem_info *ctx = &tbl_scope_cb->em_ctx_info[dir];
-
-	if (ctx == NULL)
-		return NULL;
-
-	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
-		return NULL;
-
-	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
-		return NULL;
-
-	/*
-	 * Use the level according to the num_level of page table
-	 */
-	level = ctx->em_tables[table_type].num_lvl - 1;
-
-	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
-
-	return addr;
-}
-
-/** Read Key table entry
- *
- * Entry is read in to entry
- */
-static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
-	return 0;
-}
-
-static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
-
-	return 0;
-}
-
-static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_eem_64b_entry *entry,
-			       uint32_t index,
-			       enum tf_em_table_type table_type,
-			       enum tf_dir dir)
-{
-	int rc;
-	struct tf_eem_64b_entry table_entry;
-
-	rc = tf_em_read_entry(tbl_scope_cb,
-			      &table_entry,
-			      TF_EM_KEY_RECORD_SIZE,
-			      index,
-			      table_type,
-			      dir);
-
-	if (rc != 0)
-		return -EINVAL;
-
-	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
-		if (entry != NULL) {
-			if (memcmp(&table_entry,
-				   entry,
-				   TF_EM_KEY_RECORD_SIZE) == 0)
-				return -EEXIST;
-		} else {
-			return -EEXIST;
-		}
-
-		return -EBUSY;
-	}
-
-	return 0;
-}
-
-static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t *in_key,
-				    struct tf_eem_64b_entry *key_entry)
+static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+				   uint8_t	       *in_key,
+				   struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
 
-	if (result->word1 & TF_LKUP_RECORD_ACT_REC_INT_MASK)
+	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
 		key_entry->hdr.pointer = result->pointer;
 	else
 		key_entry->hdr.pointer = result->pointer;
 
 	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-}
-
-/* tf_em_select_inject_table
- *
- * Returns:
- * 0 - Key does not exist in either table and can be inserted
- *		at "index" in table "table".
- * EEXIST  - Key does exist in table at "index" in table "table".
- * TF_ERR     - Something went horribly wrong.
- */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
-					  enum tf_dir dir,
-					  struct tf_eem_64b_entry *entry,
-					  uint32_t key0_hash,
-					  uint32_t key1_hash,
-					  uint32_t *index,
-					  enum tf_em_table_type *table)
-{
-	int key0_entry;
-	int key1_entry;
-
-	/*
-	 * Check KEY0 table.
-	 */
-	key0_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key0_hash,
-					 TF_KEY0_TABLE,
-					 dir);
 
-	/*
-	 * Check KEY1 table.
-	 */
-	key1_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key1_hash,
-					 TF_KEY1_TABLE,
-					 dir);
-
-	if (key0_entry == -EEXIST) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return -EEXIST;
-	} else if (key1_entry == -EEXIST) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return -EEXIST;
-	} else if (key0_entry == 0) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return 0;
-	} else if (key1_entry == 0) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return 0;
-	}
-
-	return -EINVAL;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
 }
 
 /** insert EEM entry API
@@ -368,20 +59,24 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms)
+static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			       struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
 	uint32_t	   key0_hash;
 	uint32_t	   key1_hash;
 	uint32_t	   key0_index;
 	uint32_t	   key1_index;
-	struct tf_eem_64b_entry key_entry;
+	struct cfa_p4_eem_64b_entry key_entry;
 	uint32_t	   index;
-	enum tf_em_table_type table_type;
+	enum hcapi_cfa_em_table_type table_type;
 	uint32_t	   gfid;
-	int		   num_of_entry;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
 
 	/* Get mask to use on hash */
 	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
@@ -389,72 +84,84 @@ int tf_insert_eem_entry(struct tf_session *session,
 	if (!mask)
 		return -EINVAL;
 
-	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
 
-	key0_hash = tf_em_lkup_get_crc32_hash(session,
-					      &parms->key[num_of_entry] - 1,
-					      parms->dir);
-	key0_index = key0_hash & mask;
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
 
-	key1_hash =
-	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
-				       parms->key);
+	key0_index = key0_hash & mask;
 	key1_index = key1_hash & mask;
 
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
 	/*
 	 * Use the "result" arg to populate all of the key entry then
 	 * store the byte swapped "raw" entry in a local copy ready
 	 * for insertion in to the table.
 	 */
-	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
 				((uint8_t *)parms->key),
 				&key_entry);
 
 	/*
-	 * Find which table to use
+	 * Try to add to the Key0 table; if that fails, fall back to
+	 * the Key1 table.
 	 */
-	if (tf_em_select_inject_table(tbl_scope_cb,
-				      parms->dir,
-				      &key_entry,
-				      key0_index,
-				      key1_index,
-				      &index,
-				      &table_type) == 0) {
-		if (table_type == TF_KEY0_TABLE) {
-			TF_SET_GFID(gfid,
-				    key0_index,
-				    TF_KEY0_TABLE);
-		} else {
-			TF_SET_GFID(gfid,
-				    key1_index,
-				    TF_KEY1_TABLE);
-		}
-
-		/*
-		 * Inject
-		 */
-		if (tf_em_write_entry(tbl_scope_cb,
-				      &key_entry,
-				      TF_EM_KEY_RECORD_SIZE,
-				      index,
-				      table_type,
-				      parms->dir) == 0) {
-			TF_SET_FLOW_ID(parms->flow_id,
-				       gfid,
-				       TF_GFID_TABLE_EXTERNAL,
-				       parms->dir);
-			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-						     0,
-						     0,
-						     0,
-						     index,
-						     0,
-						     table_type);
-			return 0;
-		}
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
 	}
 
-	return -EINVAL;
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
 }
 
 /**
@@ -463,8 +170,8 @@ int tf_insert_eem_entry(struct tf_session *session,
  *  returns:
  *     0 - Success
  */
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms)
+static int tf_insert_em_internal_entry(struct tf                       *tfp,
+				       struct tf_insert_em_entry_parms *parms)
 {
 	int       rc;
 	uint32_t  gfid;
@@ -494,7 +201,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	TFP_DRV_LOG(INFO,
+	PMD_DRV_LOG(ERR,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
@@ -527,8 +234,8 @@ int tf_insert_em_internal_entry(struct tf *tfp,
  * 0
  * -EINVAL
  */
-int tf_delete_em_internal_entry(struct tf *tfp,
-				struct tf_delete_em_entry_parms *parms)
+static int tf_delete_em_internal_entry(struct tf                       *tfp,
+				       struct tf_delete_em_entry_parms *parms)
 {
 	int rc;
 	struct tf_session *session =
@@ -558,46 +265,96 @@ int tf_delete_em_internal_entry(struct tf *tfp,
  *   0
  *   TF_NO_EM_MATCH - entry not found
  */
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms)
+static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_session	   *session;
-	struct tf_tbl_scope_cb	   *tbl_scope_cb;
-	enum tf_em_table_type hash_type;
+	enum hcapi_cfa_em_table_type hash_type;
 	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
 
-	if (parms == NULL)
+	if (parms->flow_handle == 0)
 		return -EINVAL;
 
-	session = (struct tf_session *)tfp->session->core_data;
-	if (session == NULL)
-		return -EINVAL;
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
 
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
+							  TF_KEY0_TABLE :
+							  TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (!rc)
+		return rc;
 
-	if (parms->flow_handle == 0)
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
+	}
 
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+	/* Process the EM entry per Table Scope type */
+	if (parms->mem == TF_MEM_EXTERNAL)
+		/* External EEM */
+		return tf_insert_eem_entry
+			(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp, parms);
 
-	if (tf_em_entry_exists(tbl_scope_cb,
-			       NULL,
-			       index,
-			       hash_type,
-			       parms->dir) == -EEXIST) {
-		tf_em_write_entry(tbl_scope_cb,
-				  &zero_key_entry,
-				  TF_EM_KEY_RECORD_SIZE,
-				  index,
-				  hash_type,
-				  parms->dir);
+	return -EINVAL;
+}
 
-		return 0;
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
 	}
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		return tf_delete_em_internal_entry(tfp, parms);
 
 	return -EINVAL;
 }
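
Illustrative sketch (not part of the patch) of the hash handling introduced above: hcapi_cfa_key_hash() returns a single 64-bit value whose upper and lower halves, masked by the table size, become the KEY0 and KEY1 indices.

#include <stdint.h>

/* Split the 64b hash into the two candidate table indices. */
static void example_split_hash(uint64_t big_hash, uint32_t mask,
			       uint32_t *key0_index, uint32_t *key1_index)
{
	uint32_t key0_hash = (uint32_t)(big_hash >> 32);
	uint32_t key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);

	*key0_index = key0_hash & mask;	/* tried first (KEY0 table) */
	*key1_index = key1_hash & mask;	/* fallback (KEY1 table) */
}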
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index c1805df73..2262ae7cc 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,13 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+#define SUPPORT_CFA_HW_P4 1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+#define SUPPORT_CFA_HW_ALL 0
+
+#include "hcapi/hcapi_cfa_defs.h"
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -26,56 +33,15 @@
 #define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
 #define TF_EM_INTERNAL_ENTRY_MASK  0x3
 
-/** EEM Entry header
- *
- */
-struct tf_eem_entry_hdr {
-	uint32_t pointer;
-	uint32_t word1;  /*
-			  * The header is made up of two words,
-			  * this is the first word. This field has multiple
-			  * subfields, there is no suitable single name for
-			  * it so just going with word1.
-			  */
-#define TF_LKUP_RECORD_VALID_SHIFT 31
-#define TF_LKUP_RECORD_VALID_MASK 0x80000000
-#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
-#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
-#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
-#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
-#define TF_LKUP_RECORD_RESERVED_SHIFT 17
-#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
-#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
-#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
-#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
-#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
-#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
-#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
-#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
-#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
-};
-
-/** EEM Entry
- *  Each EEM entry is 512-bit (64-bytes)
- */
-struct tf_eem_64b_entry {
-	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
-	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
-};
-
 /** EM Entry
  *  Each EM entry is 512-bit (64-bytes) but ordered differently to
  *  EEM.
  */
 struct tf_em_64b_entry {
 	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
+	struct cfa_p4_eem_entry_hdr hdr;
 	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
 /**
@@ -127,22 +93,14 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
 					  uint32_t tbl_scope_id);
 
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms);
-
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms);
-
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms);
-
-int tf_delete_em_internal_entry(struct tf                       *tfp,
-				struct tf_delete_em_entry_parms *parms);
-
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
-			   enum tf_em_table_type table_type);
+			   enum hcapi_cfa_em_table_type table_type);
+
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
 
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
 #endif /* _TF_EM_H_ */
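
Illustrative caller sketch (not part of the patch) for the two entry points declared above. The caller only selects the memory type; table-scope lookup and the EEM/EM split happen inside tf_em_insert_entry().

#include <stdbool.h>
#include "tf_em.h"

/* Hypothetical wrapper: pick internal vs. external EM and insert. */
static int example_em_add(struct tf *tfp,
			  struct tf_insert_em_entry_parms *parms,
			  bool use_external)
{
	parms->mem = use_external ? TF_MEM_EXTERNAL : TF_MEM_INTERNAL;

	/* Dispatches to tf_insert_eem_entry() or
	 * tf_insert_em_internal_entry() based on parms->mem.
	 */
	return tf_em_insert_entry(tfp, parms);
}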
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index e08a96f23..60274eb35 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -183,6 +183,10 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 	tfp_free(buf->va_addr);
 }
 
+/**
+ * NEW HWRM direct messages
+ */
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -1259,8 +1263,9 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
 	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
-		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.strength =
+		(em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
@@ -1436,22 +1441,20 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 }
 
 int
-tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *params)
+tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *params)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_bulk_input req = { 0 };
-	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_tbl_type_bulk_get_input req = { 0 };
+	struct tf_tbl_type_bulk_get_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	int data_size = 0;
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16((params->dir) |
-		((params->clear_on_read) ?
-		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.flags = tfp_cpu_to_le_16(params->dir);
 	req.type = tfp_cpu_to_le_32(params->type);
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
@@ -1462,7 +1465,7 @@ tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 	MSG_PREP(parms,
 		 TF_KONG_MB,
 		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 HWRM_TFT_TBL_TYPE_BULK_GET,
 		 req,
 		 resp);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 06f52ef00..1dad2b9fb 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -338,7 +338,7 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  * Returns:
  *  0 on Success else internal Truflow error
  */
-int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 9b7f5a069..b7b445102 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -23,29 +23,27 @@
 					    * IDs
 					    */
 #define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        256      /*  Number slices per row in WC
-					    * TCAM. A slices is a WC TCAM entry.
-					    */
+#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
 #define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
 #define TF_NUM_METER             1024      /* < Number of meter instances */
 #define TF_NUM_MIRROR               2      /* < Number of mirror instances */
 #define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
 
-/* Wh+/Brd2 specific HW resources */
+/* Wh+/SR specific HW resources */
 #define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
 					    * entries
 					    */
 
-/* Brd2/Brd4 specific HW resources */
+/* SR/SR2 specific HW resources */
 #define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
 
 
-/* Brd3, Brd4 common HW resources */
+/* Thor, SR2 common HW resources */
 #define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
 					    * templates
 					    */
 
-/* Brd4 specific HW resources */
+/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
 #define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
 #define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
@@ -149,10 +147,11 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-#define TF_RSVD_MIRROR_RX                         1
+/* Not yet fully supported in the infrastructure */
+#define TF_RSVD_MIRROR_RX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         1
+#define TF_RSVD_MIRROR_TX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
@@ -501,13 +500,13 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_METER_INST,
 	TF_RESC_TYPE_HW_MIRROR,
 	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/Brd2 specific HW resources */
+	/* Wh+/SR specific HW resources */
 	TF_RESC_TYPE_HW_SP_TCAM,
-	/* Brd2/Brd4 specific HW resources */
+	/* SR/SR2 specific HW resources */
 	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Brd3, Brd4 common HW resources */
+	/* Thor, SR2 common HW resources */
 	TF_RESC_TYPE_HW_FKB,
-	/* Brd4 specific HW resources */
+	/* SR2 specific HW resources */
 	TF_RESC_TYPE_HW_TBL_SCOPE,
 	TF_RESC_TYPE_HW_EPOCH0,
 	TF_RESC_TYPE_HW_EPOCH1,
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 2264704d2..b6fe2f1ad 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -14,6 +14,7 @@
 #include "tf_resources.h"
 #include "tf_msg.h"
 #include "bnxt.h"
+#include "tfp.h"
 
 /**
  * Internal macro to perform HW resource allocation check between what
@@ -329,13 +330,13 @@ tf_rm_print_hw_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors HW\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_hw_2_str(i),
 				    hw_query->hw_query[i].max,
 				    tf_rm_rsvd_hw_value(dir, i));
@@ -359,13 +360,13 @@ tf_rm_print_sram_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_sram_2_str(i),
 				    sram_query->sram_query[i].max,
 				    tf_rm_rsvd_sram_value(dir, i));
@@ -1700,7 +1701,7 @@ tf_rm_hw_alloc_validate(enum tf_dir dir,
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed id:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1727,7 +1728,7 @@ tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed idx:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1820,19 +1821,22 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, HW QCAPS validation failed,"
+			" error_flag:0x%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
 		goto cleanup;
 	}
@@ -1845,9 +1849,10 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1857,15 +1862,17 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW Resource validation failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
 
@@ -1903,19 +1910,22 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed,"
+			" error_flag:%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
 		goto cleanup;
 	}
@@ -1931,9 +1941,10 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 					    sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1943,15 +1954,18 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed,"
+			    " rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
 
@@ -2177,7 +2191,7 @@ tf_rm_hw_to_flush(struct tf_session *tfs,
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
 	} else {
-		PMD_DRV_LOG(ERR, "%s: TBL_SCOPE free_cnt:%d, entries:%d\n",
+		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
 			    tf_dir_2_str(dir),
 			    free_cnt,
 			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
@@ -2538,8 +2552,8 @@ tf_rm_log_hw_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_hw_2_str(i));
 	}
@@ -2564,8 +2578,8 @@ tf_rm_log_sram_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_sram_2_str(i));
 	}
@@ -2777,9 +2791,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering HW resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering HW resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
@@ -2789,9 +2804,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, HW flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2805,9 +2821,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering SRAM resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
@@ -2818,9 +2835,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, HW flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2828,18 +2846,20 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, HW free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, HW free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 
 		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, SRAM free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, SRAM free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 	}
 
@@ -2890,14 +2910,14 @@ tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Tcam type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "%s:, Tcam type lookup failed, type:%d\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type lookup failed, type:%d\n",
 			    tf_dir_2_str(dir),
 			    type);
 		return rc;
@@ -3057,15 +3077,15 @@ tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type lookup failed, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type lookup failed, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 35a7cfab5..a68335304 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "stack.h"
 #include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
@@ -53,14 +54,14 @@
  *   Pointer to the page table to free
  */
 static void
-tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
 {
 	uint32_t i;
 
 	for (i = 0; i < tp->pg_count; i++) {
 		if (!tp->pg_va_tbl[i]) {
-			PMD_DRV_LOG(WARNING,
-				    "No map for page %d table %016" PRIu64 "\n",
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
 				    i,
 				    (uint64_t)(uintptr_t)tp);
 			continue;
@@ -84,15 +85,14 @@ tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
  *   Pointer to the EM table to free
  */
 static void
-tf_em_free_page_table(struct tf_em_table *tbl)
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int i;
 
 	for (i = 0; i < tbl->num_lvl; i++) {
 		tp = &tbl->pg_tbl[i];
-
-		PMD_DRV_LOG(INFO,
+		TFP_DRV_LOG(INFO,
 			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
 			   TF_EM_PAGE_SIZE,
 			    i,
@@ -124,7 +124,7 @@ tf_em_free_page_table(struct tf_em_table *tbl)
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
 		   uint32_t pg_count,
 		   uint32_t pg_size)
 {
@@ -183,9 +183,9 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_page_table(struct tf_em_table *tbl)
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int rc = 0;
 	int i;
 	uint32_t j;
@@ -197,14 +197,15 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
 					tbl->page_cnt[i],
 					TF_EM_PAGE_SIZE);
 		if (rc) {
-			PMD_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d\n",
-				i);
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
 			goto cleanup;
 		}
 
 		for (j = 0; j < tp->pg_count; j++) {
-			PMD_DRV_LOG(INFO,
+			TFP_DRV_LOG(INFO,
 				"EEM: Allocated page table: size %u lvl %d cnt"
 				" %u VA:%p PA:%p\n",
 				TF_EM_PAGE_SIZE,
@@ -234,8 +235,8 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
  *   Flag controlling if the page table is last
  */
 static void
-tf_em_link_page_table(struct tf_em_page_tbl *tp,
-		      struct tf_em_page_tbl *tp_next,
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
 		      bool set_pte_last)
 {
 	uint64_t *pg_pa = tp_next->pg_pa_tbl;
@@ -270,10 +271,10 @@ tf_em_link_page_table(struct tf_em_page_tbl *tp,
  *   Pointer to EM page table
  */
 static void
-tf_em_setup_page_table(struct tf_em_table *tbl)
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
 	bool set_pte_last = 0;
 	int i;
 
@@ -415,7 +416,7 @@ tf_em_size_page_tbls(int max_lvl,
  *   - ENOMEM - Out of memory
  */
 static int
-tf_em_size_table(struct tf_em_table *tbl)
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
 {
 	uint64_t num_data_pages;
 	uint32_t *page_cnt;
@@ -456,11 +457,10 @@ tf_em_size_table(struct tf_em_table *tbl)
 					  tbl->num_entries,
 					  &num_data_pages);
 	if (max_lvl < 0) {
-		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		PMD_DRV_LOG(WARNING,
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
 			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type,
-			    (uint64_t)num_entries * tbl->entry_size,
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
 			    TF_EM_PAGE_SIZE);
 		return -ENOMEM;
 	}
@@ -474,8 +474,8 @@ tf_em_size_table(struct tf_em_table *tbl)
 	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
 				page_cnt);
 
-	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
 		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
@@ -504,8 +504,9 @@ tf_em_ctx_unreg(struct tf *tfp,
 		struct tf_tbl_scope_cb *tbl_scope_cb,
 		int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 
 	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
@@ -539,8 +540,9 @@ tf_em_ctx_reg(struct tf *tfp,
 	      struct tf_tbl_scope_cb *tbl_scope_cb,
 	      int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int rc = 0;
 	int i;
 
@@ -601,7 +603,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 					TF_MEGABYTE) / (key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
 				    "%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
@@ -613,7 +615,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
 				    "%u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -625,7 +627,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx flows "
 				    "requested:%u max:%u\n",
 				    parms->rx_num_flows_in_k * TF_KILOBYTE,
@@ -642,7 +644,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx requested: %u\n",
 				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -658,7 +660,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			(key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Insufficient memory requested:%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
@@ -670,7 +672,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -682,7 +684,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx flows "
 				    "requested:%u max:%u\n",
 				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
@@ -696,7 +698,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -705,7 +707,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 	if (parms->rx_num_flows_in_k != 0 &&
 	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Rx key size required: %u\n",
 			    (parms->rx_max_key_sz_in_bits));
 		return -EINVAL;
@@ -713,7 +715,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 	if (parms->tx_num_flows_in_k != 0 &&
 	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Tx key size required: %u\n",
 			    (parms->tx_max_key_sz_in_bits));
 		return -EINVAL;
@@ -795,11 +797,10 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -817,9 +818,9 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -833,11 +834,11 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Set failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -891,9 +892,9 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -907,11 +908,11 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Get failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -932,8 +933,8 @@ tf_get_tbl_entry_internal(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 static int
-tf_get_bulk_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc;
 	int id;
@@ -975,7 +976,7 @@ tf_get_bulk_tbl_entry_internal(struct tf *tfp,
 	}
 
 	/* Get the entry */
-	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1006,10 +1007,9 @@ static int
 tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
 			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Alloc with search not supported\n",
-		    parms->dir);
-
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Alloc with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1032,9 +1032,9 @@ static int
 tf_free_tbl_entry_shadow(struct tf_session *tfs,
 			 struct tf_free_tbl_entry_parms *parms)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Free with search not supported\n",
-		    parms->dir);
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Free with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1074,8 +1074,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	parms.alignment = 0;
 
 	if (tfp_calloc(&parms) != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
-			    dir, strerror(-ENOMEM));
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
 		return -ENOMEM;
 	}
 
@@ -1084,8 +1084,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	rc = stack_init(num_entries, parms.mem_va, pool);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1101,13 +1101,13 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	for (i = 0; i < num_entries; i++) {
 		rc = stack_push(pool, j);
 		if (rc != 0) {
-			PMD_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
 				    tf_dir_2_str(dir), strerror(-rc));
 			goto cleanup;
 		}
 
 		if (j < 0) {
-			PMD_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
+			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
 				    dir, j);
 			goto cleanup;
 		}
@@ -1116,8 +1116,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 	return 0;
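
Illustrative sketch (not part of the patch) of the pool-seeding pattern used above: initialize the stack over caller-allocated memory, push one offset per entry, then require the stack to be full. The stack helper signatures are assumed from their use in this function; entry_sz and start_offset stand in for the geometry the patch derives from the table scope.

#include <errno.h>
#include <stdint.h>
#include "stack.h"

static int example_seed_pool(struct stack *pool,
			     uint32_t *items,
			     uint32_t num_entries,
			     uint32_t start_offset,
			     uint32_t entry_sz)
{
	uint32_t i;
	int rc = stack_init(num_entries, items, pool);

	if (rc != 0)
		return rc;

	for (i = 0; i < num_entries; i++) {
		rc = stack_push(pool, start_offset + i * entry_sz);
		if (rc != 0)
			return rc;
	}

	/* A full stack means every entry is free for allocation. */
	return stack_is_full(pool) ? 0 : -EINVAL;
}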
@@ -1168,18 +1168,7 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1188,9 +1177,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-					"%s, table scope not allocated\n",
-					tf_dir_2_str(parms->dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1200,9 +1189,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type);
 		return rc;
 	}
@@ -1233,18 +1222,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	struct bitalloc *session_pool;
 	struct tf_session *tfs;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1254,11 +1232,10 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1276,9 +1253,9 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	if (id == -1) {
 		free_cnt = ba_free_count(session_pool);
 
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d, free:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d, free:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   free_cnt);
 		return -ENOMEM;
@@ -1323,18 +1300,7 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1343,9 +1309,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1355,9 +1321,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_push(pool, index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 	}
@@ -1386,18 +1352,7 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	struct tf_session *tfs;
 	uint32_t index;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1408,9 +1363,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1439,9 +1394,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	/* Check if element was indeed allocated */
 	id = ba_inuse_free(session_pool, index);
 	if (id == -1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -ENOMEM;
@@ -1485,8 +1440,10 @@ tf_free_eem_tbl_scope_cb(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 parms->tbl_scope_id);
 
-	if (tbl_scope_cb == NULL)
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
 		return -EINVAL;
+	}
 
 	/* Free Table control block */
 	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
@@ -1516,23 +1473,17 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 	int rc;
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_table *em_tables;
+	struct hcapi_cfa_em_table *em_tables;
 	int index;
 	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 
-	/* check parameters */
-	if (parms == NULL || tfp->session == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
-
 	session = (struct tf_session *)tfp->session->core_data;
 
 	/* Get Table Scope control block from the session pool */
 	index = ba_alloc(session->tbl_scope_pool_rx);
 	if (index == -1) {
-		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
 			    "Control Block\n");
 		return -ENOMEM;
 	}
@@ -1547,8 +1498,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				     dir,
 				     &tbl_scope_cb->em_caps[dir]);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"EEM: Unable to query for EEM capability\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 	}
@@ -1565,8 +1518,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 */
 		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 
@@ -1580,8 +1535,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"TBL: Unable to configure EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1590,8 +1547,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
 
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1604,9 +1563,9 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				    em_tables[TF_RECORD_TABLE].num_entries,
 				    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "%d TBL: Unable to allocate idx pools %s\n",
-				    dir,
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup_full;
 		}
@@ -1634,13 +1593,12 @@ tf_set_tbl_entry(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct tf_session *session;
 
-	if (tfp == NULL || parms == NULL || parms->data == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 
@@ -1654,9 +1612,9 @@ tf_set_tbl_entry(struct tf *tfp,
 		tbl_scope_id = parms->tbl_scope_id;
 
 		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Table scope not allocated\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Table scope not allocated\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1665,18 +1623,21 @@ tf_set_tbl_entry(struct tf *tfp,
 		 */
 		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
 
-		if (tbl_scope_cb == NULL)
-			return -EINVAL;
+		if (tbl_scope_cb == NULL) {
+			TFP_DRV_LOG(ERR,
+				    "%s, table scope error\n",
+				    tf_dir_2_str(parms->dir));
+			return -EINVAL;
+		}
 
 		/* External table, implicitly the Action table */
-		base_addr = tf_em_get_table_page(tbl_scope_cb,
-						 parms->dir,
-						 offset,
-						 TF_RECORD_TABLE);
+		base_addr = (void *)(uintptr_t)
+		hcapi_get_table_page(&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE], offset);
+
 		if (base_addr == NULL) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Base address lookup failed\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Base address lookup failed\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1688,11 +1649,11 @@ tf_set_tbl_entry(struct tf *tfp,
 		/* Internal table type processing */
 		rc = tf_set_tbl_entry_internal(tfp, parms);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Set failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Set failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 		}
 	}
 
@@ -1706,31 +1667,24 @@ tf_get_tbl_entry(struct tf *tfp,
 {
 	int rc = 0;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	if (parms->type == TF_TBL_TYPE_EXT) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, External table type not supported\n",
-			    parms->dir);
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
 
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
 		rc = tf_get_tbl_entry_internal(tfp, parms);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Get failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 	}
 
 	return rc;
@@ -1738,8 +1692,8 @@ tf_get_tbl_entry(struct tf *tfp,
 
 /* API defined in tf_core.h */
 int
-tf_get_bulk_tbl_entry(struct tf *tfp,
-		 struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc = 0;
 
@@ -1754,7 +1708,7 @@ tf_get_bulk_tbl_entry(struct tf *tfp,
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
-		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
 		if (rc)
 			TFP_DRV_LOG(ERR,
 				    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1773,11 +1727,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	rc = tf_alloc_eem_tbl_scope(tfp, parms);
 
@@ -1791,11 +1741,7 @@ tf_free_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	/* free table scope and all associated resources */
 	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
@@ -1813,11 +1759,7 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	/*
 	 * No shadow copy support for external tables, allocate and return
 	 */
@@ -1827,13 +1769,6 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1849,9 +1784,9 @@ tf_alloc_tbl_entry(struct tf *tfp,
 
 	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 
 	return rc;
 }
@@ -1866,11 +1801,8 @@ tf_free_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
 	/*
 	 * No shadow of external tables so just free the entry
 	 */
@@ -1880,13 +1812,6 @@ tf_free_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1903,16 +1828,16 @@ tf_free_tbl_entry(struct tf *tfp,
 	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
 
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 	return rc;
 }
 
 
 static void
-tf_dump_link_page_table(struct tf_em_page_tbl *tp,
-			struct tf_em_page_tbl *tp_next)
+tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+			struct hcapi_cfa_em_page_tbl *tp_next)
 {
 	uint64_t *pg_va;
 	uint32_t i;
@@ -1951,9 +1876,9 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 {
 	struct tf_session      *session;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_page_tbl *tp;
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 	int j;
 	int dir;
@@ -1967,7 +1892,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
+		TFP_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
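
For reference, the TF_CHECK_PARMS_SESSION() macro used in the hunks above
consolidates the parameter and session checks that were previously open
coded in each function. Its actual definition lives in tf_common.h and is
not part of this diff; a minimal sketch consistent with the checks it
replaces (layout assumed, illustrative only) would be:

	/* Illustrative sketch only; see tf_common.h for the real macro */
	#define TF_CHECK_PARMS_SESSION(tfp, parms) do {			\
		if ((tfp) == NULL || (parms) == NULL) {			\
			TFP_DRV_LOG(ERR, "Invalid parameters\n");	\
			return -EINVAL;					\
		}							\
		if ((tfp)->session == NULL ||				\
		    (tfp)->session->core_data == NULL) {		\
			TFP_DRV_LOG(ERR,				\
				    "%s, Session info invalid\n",	\
				    tf_dir_2_str((parms)->dir));	\
			return -EINVAL;					\
		}							\
	} while (0)
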
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index ee8a14665..b17557345 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -13,45 +13,6 @@
 
 struct tf_session;
 
-enum tf_pg_tbl_lvl {
-	TF_PT_LVL_0,
-	TF_PT_LVL_1,
-	TF_PT_LVL_2,
-	TF_PT_LVL_MAX
-};
-
-enum tf_em_table_type {
-	TF_KEY0_TABLE,
-	TF_KEY1_TABLE,
-	TF_RECORD_TABLE,
-	TF_EFC_TABLE,
-	TF_MAX_TABLE
-};
-
-struct tf_em_page_tbl {
-	uint32_t	pg_count;
-	uint32_t	pg_size;
-	void		**pg_va_tbl;
-	uint64_t	*pg_pa_tbl;
-};
-
-struct tf_em_table {
-	int				type;
-	uint32_t			num_entries;
-	uint16_t			ctx_id;
-	uint32_t			entry_size;
-	int				num_lvl;
-	uint32_t			page_cnt[TF_PT_LVL_MAX];
-	uint64_t			num_data_pages;
-	void				*l0_addr;
-	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
-};
-
-struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[TF_MAX_TABLE];
-};
-
 /** table scope control block content */
 struct tf_em_caps {
 	uint32_t flags;
@@ -74,18 +35,14 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps          em_caps[TF_DIR_MAX];
 	struct stack               ext_act_pool[TF_DIR_MAX];
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/**
- * Hardware Page sizes supported for EEM:
- *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- *
- * Round-down other page sizes to the lower hardware page
- * size supported.
+/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ * Round-down other page sizes to the lower hardware page size supported.
  */
 #define TF_EM_PAGE_SIZE_4K 12
 #define TF_EM_PAGE_SIZE_8K 13
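
The defines above express each supported EEM page size as its power-of-two
shift (4K is 1 << 12, 8K is 1 << 13, and so on up to 1G), with unsupported
sizes rounded down to the next lower supported size. A small helper of the
following shape (illustrative only, not part of this patch) captures the
intended mapping:

	/* Map a byte count to the EEM page shift, rounding down to the
	 * nearest supported hardware page size. Returns -EINVAL if the
	 * request is smaller than the 4K minimum.
	 */
	static int tf_em_size_to_page_shift(uint32_t bytes)
	{
		static const int shifts[] = { 30, 22, 21, 20, 18, 16, 13, 12 };
		unsigned int i;

		for (i = 0; i < RTE_DIM(shifts); i++) {
			if (bytes >= (1UL << shifts[i]))
				return shifts[i]; /* e.g. 4096 -> 12 */
		}
		return -EINVAL;
	}
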
-- 
2.21.1 (Apple Git-122.3)


* [dpdk-dev] [PATCH v3 17/51] net/bnxt: implement support for TCAM access
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (15 preceding siblings ...)
  2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 18/51] net/bnxt: multiple device implementation Ajit Khaparde
                           ` (33 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

Implement the TCAM alloc, free, bind, and unbind functions.
Update tf_core, tf_msg, and the device ops accordingly.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c      | 258 ++++++++++-----------
 drivers/net/bnxt/tf_core/tf_device.h    |  14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |  25 ++-
 drivers/net/bnxt/tf_core/tf_msg.c       |  31 +--
 drivers/net/bnxt/tf_core/tf_msg.h       |   4 +-
 drivers/net/bnxt/tf_core/tf_tcam.c      | 285 +++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tcam.h      |  66 ++++--
 7 files changed, 480 insertions(+), 203 deletions(-)
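
After this change each of the TCAM entry APIs follows the same shape:
resolve the session, resolve its bound device, verify the device provides
the operation, then translate the public parms into the internal
tf_tcam_*_parms and dispatch. A simplified sketch of that flow for the
alloc path (error logging trimmed, mirroring the code below):

	struct tf_session *tfs;
	struct tf_dev_info *dev;
	struct tf_tcam_alloc_parms aparms = { 0 };

	/* Resolve the session and the device bound to it */
	rc = tf_session_get_session(tfp, &tfs);
	if (rc)
		return rc;
	rc = tf_session_get_device(tfs, &dev);
	if (rc)
		return rc;

	/* Bail out if this device does not implement the operation */
	if (dev->ops->tf_dev_alloc_tcam == NULL)
		return -EOPNOTSUPP;

	/* Translate the public parms and call the device handler */
	aparms.dir = parms->dir;
	aparms.type = parms->tcam_tbl_type;
	aparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
	aparms.priority = parms->priority;
	rc = dev->ops->tf_dev_alloc_tcam(tfp, &aparms);
	if (rc)
		return rc;
	parms->idx = aparms.idx;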

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 648d0d1bd..29522c66e 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,43 +19,6 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
-			       enum tf_device_type device,
-			       uint16_t key_sz_in_bits,
-			       uint16_t *num_slice_per_row)
-{
-	uint16_t key_bytes;
-	uint16_t slice_sz = 0;
-
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
-#define CFA_P4_WC_TCAM_SLICE_SIZE     12
-
-	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
-		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
-		if (device == TF_DEVICE_TYPE_WH) {
-			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
-			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
-		} else {
-			TFP_DRV_LOG(ERR,
-				    "Unsupported device type %d\n",
-				    device);
-			return -ENOTSUP;
-		}
-
-		if (key_bytes > *num_slice_per_row * slice_sz) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Key size %d is not supported\n",
-				    tf_tcam_tbl_2_str(tcam_tbl_type),
-				    key_bytes);
-			return -ENOTSUP;
-		}
-	} else { /* for other type of tcam */
-		*num_slice_per_row = 1;
-	}
-
-	return 0;
-}
-
 /**
  * Create EM Tbl pool of memory indexes.
  *
@@ -918,49 +881,56 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_alloc_parms aparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_alloc_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+	aparms.dir = parms->dir;
+	aparms.type = parms->tcam_tbl_type;
+	aparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	aparms.priority = parms->priority;
+	rc = dev->ops->tf_dev_alloc_tcam(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+			    strerror(-rc));
+		return rc;
 	}
 
-	index *= num_slice_per_row;
+	parms->idx = aparms.idx;
 
-	parms->idx = index;
 	return 0;
 }
 
@@ -969,55 +939,60 @@ tf_set_tcam_entry(struct tf *tfp,
 		  struct tf_set_tcam_entry_parms *parms)
 {
 	int rc;
-	int id;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_set_parms sparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_set_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	/* Verify that the entry has been previously allocated */
-	index = parms->idx / num_slice_per_row;
+	sparms.dir = parms->dir;
+	sparms.type = parms->tcam_tbl_type;
+	sparms.idx = parms->idx;
+	sparms.key = parms->key;
+	sparms.mask = parms->mask;
+	sparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	sparms.result = parms->result;
+	sparms.result_size = TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	rc = dev->ops->tf_dev_set_tcam(tfp, &sparms);
+	if (rc) {
 		TFP_DRV_LOG(ERR,
-		   "%s: %s: Invalid or not allocated index, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-		   parms->idx);
-		return -EINVAL;
+			    "%s: TCAM set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
-	rc = tf_msg_tcam_entry_set(tfp, parms);
-
-	return rc;
+	return 0;
 }
 
 int
@@ -1033,59 +1008,52 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row = 1;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_free_parms fparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 0,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = parms->idx / num_slice_per_row;
-
-	rc = ba_inuse(session_pool, index);
-	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+	if (dev->ops->tf_dev_free_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    index);
-		return -EINVAL;
+			    strerror(-rc));
+		return rc;
 	}
 
-	ba_free(session_pool, index);
-
-	rc = tf_msg_tcam_entry_free(tfp, parms);
+	fparms.dir = parms->dir;
+	fparms.type = parms->tcam_tbl_type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 1501b20d9..32d9a5442 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -116,8 +116,11 @@ struct tf_dev_ops {
 	 * [in] tfp
 	 *   Pointer to TF handle
 	 *
-	 * [out] slice_size
-	 *   Pointer to slice size the device supports
+	 * [in] type
+	 *   TCAM table type
+	 *
+	 * [in] key_sz
+	 *   Key size
 	 *
 	 * [out] num_slices_per_row
 	 *   Pointer to number of slices per row the device supports
@@ -126,9 +129,10 @@ struct tf_dev_ops {
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
-					 uint16_t *slice_size,
-					 uint16_t *num_slices_per_row);
+	int (*tf_dev_get_tcam_slice_info)(struct tf *tfp,
+					  enum tf_tcam_tbl_type type,
+					  uint16_t key_sz,
+					  uint16_t *num_slices_per_row);
 
 	/**
 	 * Allocation of an identifier element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index f4bd95f1c..77fb693dd 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -56,18 +56,21 @@ tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
  *   - (-EINVAL) on failure.
  */
 static int
-tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
-			     uint16_t *slice_size,
-			     uint16_t *num_slices_per_row)
+tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
+			      enum tf_tcam_tbl_type type,
+			      uint16_t key_sz,
+			      uint16_t *num_slices_per_row)
 {
-#define CFA_P4_WC_TCAM_SLICE_SIZE       12
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
 
-	if (slice_size == NULL || num_slices_per_row == NULL)
-		return -EINVAL;
-
-	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
-	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+	if (type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
+			return -ENOTSUP;
+	} else { /* for other type of tcam */
+		*num_slices_per_row = 1;
+	}
 
 	return 0;
 }
@@ -77,7 +80,7 @@ tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
  */
 const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
-	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 60274eb35..b50e1d48c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -9,6 +9,7 @@
 #include <string.h>
 
 #include "tf_msg_common.h"
+#include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
 #include "tf_session.h"
@@ -1480,27 +1481,19 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 int
 tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_set_tcam_entry_parms *parms)
+		      struct tf_tcam_set_parms *parms)
 {
 	int rc;
 	struct tfp_send_msg_parms mparms = { 0 };
 	struct hwrm_tf_tcam_set_input req = { 0 };
 	struct hwrm_tf_tcam_set_output resp = { 0 };
-	uint16_t key_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
-	uint16_t result_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
 	if (rc != 0)
 		return rc;
 
@@ -1508,11 +1501,11 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	if (parms->dir == TF_DIR_TX)
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	req.key_size = key_bytes;
-	req.mask_offset = key_bytes;
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
 	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * key_bytes;
-	req.result_size = result_bytes;
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
 	data_size = 2 * req.key_size + req.result_size;
 
 	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
@@ -1530,9 +1523,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 			   sizeof(buf.pa_addr));
 	}
 
-	tfp_memcpy(&data[0], parms->key, key_bytes);
-	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
-	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1554,7 +1547,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 int
 tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_free_tcam_entry_parms *in_parms)
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
 	struct hwrm_tf_tcam_free_input req =  { 0 };
@@ -1562,7 +1555,7 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
 	if (rc != 0)
 		return rc;
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1dad2b9fb..a3e0f7bba 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -247,7 +247,7 @@ int tf_msg_em_op(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_set(struct tf *tfp,
-			  struct tf_set_tcam_entry_parms *parms);
+			  struct tf_tcam_set_parms *parms);
 
 /**
  * Sends tcam entry 'free' to the Firmware.
@@ -262,7 +262,7 @@ int tf_msg_tcam_entry_set(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_free(struct tf *tfp,
-			   struct tf_free_tcam_entry_parms *parms);
+			   struct tf_tcam_free_parms *parms);
 
 /**
  * Sends Set message of a Table Type element to the firmware.
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 3ad99dd0d..b9dba5323 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -3,16 +3,24 @@
  * All rights reserved.
  */
 
+#include <string.h>
 #include <rte_common.h>
 
 #include "tf_tcam.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_rm_new.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_session.h"
+#include "tf_msg.h"
 
 struct tf;
 
 /**
  * TCAM DBs.
  */
-/* static void *tcam_db[TF_DIR_MAX]; */
+static void *tcam_db[TF_DIR_MAX];
 
 /**
  * TCAM Shadow DBs
@@ -22,7 +30,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -33,19 +41,131 @@ int
 tf_tcam_bind(struct tf *tfp __rte_unused,
 	     struct tf_tcam_cfg_parms *parms __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
+		db_cfg.rm_db = tcam_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: TCAM DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_tcam_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized. Done silently to
+	 * allow for creation cleanup.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tcam_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tcam_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
-tf_tcam_alloc(struct tf *tfp __rte_unused,
-	      struct tf_tcam_alloc_parms *parms __rte_unused)
+tf_tcam_alloc(struct tf *tfp,
+	      struct tf_tcam_alloc_parms *parms)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Allocate requested element */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = (uint32_t *)&parms->idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	parms->idx *= num_slice_per_row;
+
 	return 0;
 }
 
@@ -53,6 +173,92 @@ int
 tf_tcam_free(struct tf *tfp __rte_unused,
 	     struct tf_tcam_free_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  0,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tcam_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx / num_slice_per_row;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	rc = tf_msg_tcam_entry_free(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
@@ -67,6 +273,77 @@ int
 tf_tcam_set(struct tf *tfp __rte_unused,
 	    struct tf_tcam_set_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry is not allocated, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	rc = tf_msg_tcam_entry_set(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d set failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 68c25eb1b..67c3bcb49 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -50,10 +50,18 @@ struct tf_tcam_alloc_parms {
 	 * [in] Type of the allocation
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint16_t idx;
 };
 
 /**
@@ -71,7 +79,7 @@ struct tf_tcam_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t idx;
+	uint16_t idx;
 	/**
 	 * [out] Reference count after free, only valid if session has been
 	 * created with shadow_copy.
@@ -90,7 +98,7 @@ struct tf_tcam_alloc_search_parms {
 	/**
 	 * [in] TCAM table type
 	 */
-	enum tf_tcam_tbl_type tcam_tbl_type;
+	enum tf_tcam_tbl_type type;
 	/**
 	 * [in] Enable search for matching entry
 	 */
@@ -100,9 +108,9 @@ struct tf_tcam_alloc_search_parms {
 	 */
 	uint8_t *key;
 	/**
-	 * [in] key size in bits (if search)
+	 * [in] key size (if search)
 	 */
-	uint16_t key_sz_in_bits;
+	uint16_t key_size;
 	/**
 	 * [in] Mask data to match on (if search)
 	 */
@@ -139,17 +147,29 @@ struct tf_tcam_set_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [in] Entry data
+	 * [in] Entry index to write to
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [in] Entry size
+	 * [in] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to write to
+	 * [in] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [in] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
@@ -165,17 +185,29 @@ struct tf_tcam_get_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [out] Entry data
+	 * [in] Entry index to read
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [out] Entry size
+	 * [out] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to read
+	 * [out] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [out] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [out] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [out] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
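
With tf_msg_tcam_entry_set() now taking the internal tf_tcam_set_parms
shown above, a caller such as tf_tcam_set() fills the structure with byte
sized key/mask/result buffers before sending the HWRM message. An abridged
sketch (buffer variables are placeholders; sizes are in bytes and word
aligned, as in tf_set_tcam_entry()):

	struct tf_tcam_set_parms sparms = { 0 };

	sparms.dir = TF_DIR_RX;
	sparms.type = TF_TCAM_TBL_TYPE_WC_TCAM;
	sparms.idx = idx;                 /* entry previously allocated */
	sparms.key = key;                 /* key bytes */
	sparms.mask = mask;               /* mask bytes, same length as key */
	sparms.key_size = key_size;       /* bytes, word aligned */
	sparms.result = result;           /* result record bytes */
	sparms.result_size = result_size; /* bytes, word aligned */

	rc = tf_msg_tcam_entry_set(tfp, &sparms);
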
-- 
2.21.1 (Apple Git-122.3)


* [dpdk-dev] [PATCH v3 18/51] net/bnxt: multiple device implementation
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (16 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
                           ` (32 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

Implement the Identifier, Table Type, and Resource Manager modules.
Integrate the Resource Manager with HCAPI.
Update the session open/close handling accordingly.
Move to direct messages for the qcaps and resv requests.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c       | 751 ++++++++---------------
 drivers/net/bnxt/tf_core/tf_core.h       |  97 ++-
 drivers/net/bnxt/tf_core/tf_device.c     |  10 +-
 drivers/net/bnxt/tf_core/tf_device.h     |   1 +
 drivers/net/bnxt/tf_core/tf_device_p4.c  |  26 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |  29 +-
 drivers/net/bnxt/tf_core/tf_identifier.h |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  45 +-
 drivers/net/bnxt/tf_core/tf_msg.h        |   1 +
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 225 +++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  11 +-
 drivers/net/bnxt/tf_core/tf_session.c    |  28 +-
 drivers/net/bnxt/tf_core/tf_session.h    |   2 +-
 drivers/net/bnxt/tf_core/tf_tbl.c        | 611 +-----------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c   | 252 +++++++-
 drivers/net/bnxt/tf_core/tf_tbl_type.h   |   2 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |  12 +-
 drivers/net/bnxt/tf_core/tf_util.h       |  45 +-
 drivers/net/bnxt/tf_core/tfp.c           |   4 +-
 19 files changed, 880 insertions(+), 1276 deletions(-)
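
From an application's point of view the public API is unchanged; the former
*_new variants simply become the only implementations and route through the
session, device and module layers introduced here. An abridged usage sketch
of the identifier path (the control channel name and tfp handle are
hypothetical; field names are taken from the parms structures):

	struct tf_open_session_parms oparms = { 0 };
	struct tf_alloc_identifier_parms iparms = { 0 };
	struct tf_free_identifier_parms fparms = { 0 };
	int rc;

	/* Open a session; the PCI address string seeds the session id */
	snprintf(oparms.ctrl_chan_name, TF_SESSION_NAME_MAX,
		 "%s", "0000:0a:00.0");
	oparms.device_type = TF_DEVICE_TYPE_WH;
	rc = tf_open_session(tfp, &oparms);
	if (rc)
		return rc;

	/* Allocate an identifier through the new identifier module */
	iparms.dir = TF_DIR_RX;
	iparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	rc = tf_alloc_identifier(tfp, &iparms);

	/* ... use iparms.id ... */

	if (rc == 0) {
		fparms.dir = iparms.dir;
		fparms.ident_type = iparms.ident_type;
		fparms.id = iparms.id;
		tf_free_identifier(tfp, &fparms);
	}

	tf_close_session(tfp);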

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 29522c66e..3e23d0513 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,284 +19,15 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- * [in] num_entries
- *   number of entries to write
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i, j;
-	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
-			    strerror(-ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Fill pool with indexes
-	 */
-	j = num_entries - 1;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-		j--;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
-
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- *
- * Return:
- */
-static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
-{
-	struct stack *pool = &session->em_pool[dir];
-	uint32_t *ptr;
-
-	ptr = stack_items(pool);
-
-	tfp_free(ptr);
-}
-
 int
-tf_open_session(struct tf                    *tfp,
+tf_open_session(struct tf *tfp,
 		struct tf_open_session_parms *parms)
-{
-	int rc;
-	struct tf_session *session;
-	struct tfp_calloc_parms alloc_parms;
-	unsigned int domain, bus, slot, device;
-	uint8_t fw_session_id;
-	int dir;
-
-	TF_CHECK_PARMS(tfp, parms);
-
-	/* Filter out any non-supported device types on the Core
-	 * side. It is assumed that the Firmware will be supported if
-	 * firmware open session succeeds.
-	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH) {
-		TFP_DRV_LOG(ERR,
-			    "Unsupported device type %d\n",
-			    parms->device_type);
-		return -ENOTSUP;
-	}
-
-	/* Build the beginning of session_id */
-	rc = sscanf(parms->ctrl_chan_name,
-		    "%x:%x:%x.%d",
-		    &domain,
-		    &bus,
-		    &slot,
-		    &device);
-	if (rc != 4) {
-		TFP_DRV_LOG(ERR,
-			    "Failed to scan device ctrl_chan_name\n");
-		return -EINVAL;
-	}
-
-	/* open FW session and get a new session_id */
-	rc = tf_msg_session_open(tfp,
-				 parms->ctrl_chan_name,
-				 &fw_session_id);
-	if (rc) {
-		/* Log error */
-		if (rc == -EEXIST)
-			TFP_DRV_LOG(ERR,
-				    "Session is already open, rc:%s\n",
-				    strerror(-rc));
-		else
-			TFP_DRV_LOG(ERR,
-				    "Open message send failed, rc:%s\n",
-				    strerror(-rc));
-
-		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
-		return rc;
-	}
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session_info);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
-
-	/* Allocate core data for the session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session->core_data = alloc_parms.mem_va;
-
-	session = (struct tf_session *)tfp->session->core_data;
-	tfp_memcpy(session->ctrl_chan_name,
-		   parms->ctrl_chan_name,
-		   TF_SESSION_NAME_MAX);
-
-	/* Initialize Session */
-	session->dev = NULL;
-	tf_rm_init(tfp);
-
-	/* Construct the Session ID */
-	session->session_id.internal.domain = domain;
-	session->session_id.internal.bus = bus;
-	session->session_id.internal.device = device;
-	session->session_id.internal.fw_session_id = fw_session_id;
-
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Shadow DB configuration */
-	if (parms->shadow_copy) {
-		/* Ignore shadow_copy setting */
-		session->shadow_copy = 0;/* parms->shadow_copy; */
-#if (TF_SHADOW == 1)
-		rc = tf_rm_shadow_db_init(tfs);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%s",
-				    strerror(-rc));
-		/* Add additional processing */
-#endif /* TF_SHADOW */
-	}
-
-	/* Adjust the Session with what firmware allowed us to get */
-	rc = tf_rm_allocate_validate(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Rm allocate validate failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Initialize EM pool */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session,
-				       (enum tf_dir)dir,
-				       TF_SESSION_EM_POOL_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EM Pool initialization failed\n");
-			goto cleanup_close;
-		}
-	}
-
-	session->ref_count++;
-
-	/* Return session ID */
-	parms->session_id = session->session_id;
-
-	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    parms->session_id.internal.domain,
-		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
-
-	return 0;
-
- cleanup:
-	tfp_free(tfp->session->core_data);
-	tfp_free(tfp->session);
-	tfp->session = NULL;
-	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
-}
-
-int
-tf_open_session_new(struct tf *tfp,
-		    struct tf_open_session_parms *parms)
 {
 	int rc;
 	unsigned int domain, bus, slot, device;
 	struct tf_session_open_session_parms oparms;
 
-	TF_CHECK_PARMS(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
@@ -347,33 +78,8 @@ tf_open_session_new(struct tf *tfp,
 }
 
 int
-tf_attach_session(struct tf *tfp __rte_unused,
-		  struct tf_attach_session_parms *parms __rte_unused)
-{
-#if (TF_SHARED == 1)
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/* - Open the shared memory for the attach_chan_name
-	 * - Point to the shared session for this Device instance
-	 * - Check that session is valid
-	 * - Attach to the firmware so it can record there is more
-	 *   than one client of the session.
-	 */
-
-	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_attach(tfp,
-					   parms->ctrl_chan_name,
-					   parms->session_id);
-	}
-#endif /* TF_SHARED */
-	return -1;
-}
-
-int
-tf_attach_session_new(struct tf *tfp,
-		      struct tf_attach_session_parms *parms)
+tf_attach_session(struct tf *tfp,
+		  struct tf_attach_session_parms *parms)
 {
 	int rc;
 	unsigned int domain, bus, slot, device;
@@ -436,65 +142,6 @@ tf_attach_session_new(struct tf *tfp,
 
 int
 tf_close_session(struct tf *tfp)
-{
-	int rc;
-	int rc_close = 0;
-	struct tf_session *tfs;
-	union tf_session_id session_id;
-	int dir;
-
-	TF_CHECK_TFP_SESSION(tfp);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Cleanup if we're last user of the session */
-	if (tfs->ref_count == 1) {
-		/* Cleanup any outstanding resources */
-		rc_close = tf_rm_close(tfp);
-	}
-
-	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Message send failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		/* Update the ref_count */
-		tfs->ref_count--;
-	}
-
-	session_id = tfs->session_id;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Free EM pool */
-		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, (enum tf_dir)dir);
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
-
-	TFP_DRV_LOG(INFO,
-		    "Session closed, session_id:%d\n",
-		    session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    session_id.internal.domain,
-		    session_id.internal.bus,
-		    session_id.internal.device,
-		    session_id.internal.fw_session_id);
-
-	return rc_close;
-}
-
-int
-tf_close_session_new(struct tf *tfp)
 {
 	int rc;
 	struct tf_session_close_session_parms cparms = { 0 };
@@ -620,76 +267,9 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
-int tf_alloc_identifier(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	int id;
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	if (rc) {
-		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	id = ba_alloc(session_pool);
-
-	if (id == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		return -ENOMEM;
-	}
-	parms->id = id;
-	return 0;
-}
-
 int
-tf_alloc_identifier_new(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
+tf_alloc_identifier(struct tf *tfp,
+		    struct tf_alloc_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -732,7 +312,7 @@ tf_alloc_identifier_new(struct tf *tfp,
 	}
 
 	aparms.dir = parms->dir;
-	aparms.ident_type = parms->ident_type;
+	aparms.type = parms->ident_type;
 	aparms.id = &id;
 	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
 	if (rc) {
@@ -748,79 +328,9 @@ tf_alloc_identifier_new(struct tf *tfp,
 	return 0;
 }
 
-int tf_free_identifier(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	int rc;
-	int ba_rc;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: %s Identifier pool access failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	ba_rc = ba_inuse(session_pool, (int)parms->id);
-
-	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    parms->id);
-		return -EINVAL;
-	}
-
-	ba_free(session_pool, (int)parms->id);
-
-	return 0;
-}
-
 int
-tf_free_identifier_new(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
+tf_free_identifier(struct tf *tfp,
+		   struct tf_free_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -862,12 +372,12 @@ tf_free_identifier_new(struct tf *tfp,
 	}
 
 	fparms.dir = parms->dir;
-	fparms.ident_type = parms->ident_type;
+	fparms.type = parms->ident_type;
 	fparms.id = parms->id;
 	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Identifier allocation failed, rc:%s\n",
+			    "%s: Identifier free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
@@ -1057,3 +567,242 @@ tf_free_tcam_entry(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_alloc_parms aparms;
+	uint32_t idx;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_tbl_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.type = parms->type;
+	aparms.idx = &idx;
+	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_tbl_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.type = parms->type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_set_parms sparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&sparms, 0, sizeof(struct tf_tbl_set_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.data = parms->data;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_parms gparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&gparms, 0, sizeof(struct tf_tbl_get_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.data = parms->data;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_get_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
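
The wrappers above replace the legacy pool based implementations with thin dispatchers into dev->ops. A minimal caller-side sketch of the resulting flow; TF_DIR_RX and the ACT_STATS_64 type are illustrative choices only, and `tf` is assumed to be an already opened session handle:

	struct tf_alloc_tbl_entry_parms ap = { 0 };
	struct tf_set_tbl_entry_parms sp = { 0 };
	struct tf_free_tbl_entry_parms fp = { 0 };
	uint64_t ctr = 0;
	int rc;

	/* Allocate an index table entry of the requested type */
	ap.dir = TF_DIR_RX;
	ap.type = TF_TBL_TYPE_ACT_STATS_64;
	rc = tf_alloc_tbl_entry(&tf, &ap);
	if (rc)
		return rc;

	/* Program the allocated index */
	sp.dir = TF_DIR_RX;
	sp.type = TF_TBL_TYPE_ACT_STATS_64;
	sp.data = (uint8_t *)&ctr;
	sp.data_sz_in_bytes = sizeof(ctr);
	sp.idx = ap.idx;
	rc = tf_set_tbl_entry(&tf, &sp);

	/* Release the index when no longer needed */
	fp.dir = TF_DIR_RX;
	fp.type = TF_TBL_TYPE_ACT_STATS_64;
	fp.idx = ap.idx;
	rc = tf_free_tbl_entry(&tf, &fp);

Each wrapper looks up the session and device first and rejects the call with -EOPNOTSUPP when the corresponding device op is not populated.
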
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index bb456bba7..a7a7bd38a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -384,34 +384,87 @@ struct tf {
 	struct tf_session_info *session;
 };
 
+/**
+ * Identifier resource definition
+ */
+struct tf_identifier_resources {
+	/**
+	 * Array of TF Identifiers where each entry is expected to be
+	 * set to the requested resource number of that specific type.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t cnt[TF_IDENT_TYPE_MAX];
+};
+
+/**
+ * Table type resource definition
+ */
+struct tf_tbl_resources {
+	/**
+	 * Array of TF Table types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tbl_type.
+	 */
+	uint16_t cnt[TF_TBL_TYPE_MAX];
+};
+
+/**
+ * TCAM type resource definition
+ */
+struct tf_tcam_resources {
+	/**
+	 * Array of TF TCAM types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t cnt[TF_TCAM_TBL_TYPE_MAX];
+};
+
+/**
+ * EM type resource definition
+ */
+struct tf_em_resources {
+	/**
+	 * Array of TF EM table types where each entry is expected to
+	 * be set to the requested resource number of that specific
+	 * type. The index used is tf_em_tbl_type.
+	 */
+	uint16_t cnt[TF_EM_TBL_TYPE_MAX];
+};
+
 /**
  * tf_session_resources parameter definition.
  */
 struct tf_session_resources {
-	/** [in] Requested Identifier Resources
+	/**
+	 * [in] Requested Identifier Resources
 	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
+	 * Number of identifier resources requested for the
+	 * session.
 	 */
-	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested Index Table resource counts
+	struct tf_identifier_resources ident_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested Index Table resource counts
 	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
+	 * The number of index table resources requested for the
+	 * session.
 	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested TCAM Table resource counts
+	struct tf_tbl_resources tbl_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested TCAM Table resource counts
 	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
+	 * The number of TCAM table resources requested for the
+	 * session.
 	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested EM resource counts
+	struct tf_tcam_resources tcam_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested EM resource counts
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * The number of internal EM table resources requested for the
+	 * session.
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+	struct tf_em_resources em_cnt[TF_DIR_MAX];
 };
 
 /**
@@ -497,9 +550,6 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
-int tf_open_session_new(struct tf *tfp,
-			struct tf_open_session_parms *parms);
-
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -565,8 +615,6 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
-int tf_attach_session_new(struct tf *tfp,
-			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -576,7 +624,6 @@ int tf_attach_session_new(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
-int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -631,8 +678,6 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
-int tf_alloc_identifier_new(struct tf *tfp,
-			    struct tf_alloc_identifier_parms *parms);
 
 /**
  * free identifier resource
@@ -645,8 +690,6 @@ int tf_alloc_identifier_new(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
-int tf_free_identifier_new(struct tf *tfp,
-			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
@@ -1277,7 +1320,7 @@ struct tf_bulk_get_tbl_entry_parms {
  * provided data buffer is too small for the data type requested.
  */
 int tf_bulk_get_tbl_entry(struct tf *tfp,
-		     struct tf_bulk_get_tbl_entry_parms *parms);
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 /**
  * @page exact_match Exact Match Table
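
To illustrate the restructured counts, a hedged sketch of how a caller could fill the new per-direction requests before opening a session; the direction enumerators and count values are examples only, and the resources member of the open parms is assumed from the session open path further down in this patch:

	struct tf tfp = { 0 };
	struct tf_open_session_parms oparms = { 0 };
	struct tf_session_resources *res = &oparms.resources;
	int rc;

	/* ctrl channel name, device type, etc. omitted from the sketch */

	/* Counts are now grouped per direction and indexed by type */
	res->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
	res->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
	res->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 64;
	res->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 64;

	rc = tf_open_session(&tfp, &oparms);

Compared to the old two dimensional arrays, the wrapper structs keep the type indexed arrays but make the per direction dimension explicit at the top level.
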
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 4c46cadc6..b474e8c25 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -43,6 +43,14 @@ dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Initial function initialization */
+	dev_handle->ops = &tf_dev_ops_p4_init;
+
 	/* Initialize the modules */
 
 	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
@@ -78,7 +86,7 @@ dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
-	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 32d9a5442..c31bf2357 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -407,6 +407,7 @@ struct tf_dev_ops {
 /**
  * Supported device operation structures
  */
+extern const struct tf_dev_ops tf_dev_ops_p4_init;
 extern const struct tf_dev_ops tf_dev_ops_p4;
 
 #endif /* _TF_DEVICE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 77fb693dd..9e332c594 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -75,6 +75,26 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 	return 0;
 }
 
+/**
+ * Truflow P4 device specific functions used during the initial bind
+ * phase, before the remaining device operations are available.
+ */
+const struct tf_dev_ops tf_dev_ops_p4_init = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
+	.tf_dev_alloc_ident = NULL,
+	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_tbl = NULL,
+	.tf_dev_alloc_search_tbl = NULL,
+	.tf_dev_set_tbl = NULL,
+	.tf_dev_get_tbl = NULL,
+	.tf_dev_alloc_tcam = NULL,
+	.tf_dev_free_tcam = NULL,
+	.tf_dev_alloc_search_tcam = NULL,
+	.tf_dev_set_tcam = NULL,
+	.tf_dev_get_tcam = NULL,
+};
+
 /**
  * Truflow P4 device specific functions
  */
@@ -85,14 +105,14 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
-	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
-	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
-	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_get_tcam = NULL,
 	.tf_dev_insert_em_entry = tf_em_insert_entry,
 	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
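
During dev_bind_p4 the handle starts out on tf_dev_ops_p4_init, so only the capability queries are usable until the module binds complete and the full tf_dev_ops_p4 table is installed. A minimal sketch of the guard this two stage binding relies on, the same NULL check pattern used by the tf_core wrappers in this patch (the TCAM set op is just an example here):

	if (dev->ops->tf_dev_set_tcam == NULL) {
		rc = -EOPNOTSUPP;
		TFP_DRV_LOG(ERR,
			    "%s: Operation not supported, rc:%s\n",
			    tf_dir_2_str(parms->dir),
			    strerror(-rc));
		return -EOPNOTSUPP;
	}

Any API reached while the init table is active, or any op deliberately left NULL such as the search variants above, is rejected cleanly instead of dereferencing a NULL pointer.
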
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index e89f9768b..ee07a6aea 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -45,19 +45,22 @@ tf_ident_bind(struct tf *tfp,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
-		db_cfg.rm_db = ident_db[i];
+		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
+		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "%s: Identifier DB creation failed\n",
 				    tf_dir_2_str(i));
+
 			return rc;
 		}
 	}
 
 	init = 1;
 
+	printf("Identifier - initialized\n");
+
 	return 0;
 }
 
@@ -73,8 +76,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 	/* Bail if nothing has been initialized done silent as to
 	 * allow for creation cleanup.
 	 */
-	if (!init)
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Identifier DBs created\n");
 		return -EINVAL;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
@@ -96,6 +102,7 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 	       struct tf_ident_alloc_parms *parms)
 {
 	int rc;
+	uint32_t id;
 	struct tf_rm_allocate_parms aparms = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -109,17 +116,19 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 
 	/* Allocate requested element */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
-	aparms.index = (uint32_t *)&parms->id;
+	aparms.db_index = parms->type;
+	aparms.index = &id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Failed allocate, type:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type);
+			    parms->type);
 		return rc;
 	}
 
+	*parms->id = id;
+
 	return 0;
 }
 
@@ -143,7 +152,7 @@ tf_ident_free(struct tf *tfp __rte_unused,
 
 	/* Check if element is in use */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
+	aparms.db_index = parms->type;
 	aparms.index = parms->id;
 	aparms.allocated = &allocated;
 	rc = tf_rm_is_allocated(&aparms);
@@ -154,21 +163,21 @@ tf_ident_free(struct tf *tfp __rte_unused,
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
 
 	/* Free requested element */
 	fparms.rm_db = ident_db[parms->dir];
-	fparms.db_index = parms->ident_type;
+	fparms.db_index = parms->type;
 	fparms.index = parms->id;
 	rc = tf_rm_free(&fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Free failed, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index 1c5319b5e..6e36c525f 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -43,7 +43,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [out] Identifier allocated
 	 */
@@ -61,7 +61,7 @@ struct tf_ident_free_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [in] ID to free
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index b50e1d48c..a2e3840f0 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -12,6 +12,7 @@
 #include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
+#include "tf_common.h"
 #include "tf_session.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -935,13 +936,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	struct tf_rm_resc_req_entry *data;
 	int dma_size;
 
-	if (size == 0 || query == NULL || resv_strategy == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Resource QCAPS parameter error, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-EINVAL));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS3(tfp, query, resv_strategy);
 
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
@@ -962,7 +957,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.qcaps_size = size;
-	req.qcaps_addr = qcaps_buf.pa_addr;
+	req.qcaps_addr = tfp_cpu_to_le_64(qcaps_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
 	parms.req_data = (uint32_t *)&req;
@@ -980,18 +975,29 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: QCAPS message error, rc:%s\n",
+			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+
+	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_cpu_to_le_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
+
+		printf("type: %d(0x%x) %d %d\n",
+		       query[i].type,
+		       query[i].type,
+		       query[i].min,
+		       query[i].max);
+
 	}
 
 	*resv_strategy = resp.flags &
@@ -1021,6 +1027,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	struct tf_rm_resc_entry *resv_data;
 	int dma_size;
 
+	TF_CHECK_PARMS3(tfp, request, resv);
+
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1053,8 +1061,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
 	}
 
-	req.req_addr = req_buf.pa_addr;
-	req.resp_addr = resv_buf.pa_addr;
+	req.req_addr = tfp_cpu_to_le_64(req_buf.pa_addr);
+	req.resc_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
 	parms.req_data = (uint32_t *)&req;
@@ -1072,18 +1080,28 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Alloc message error, rc:%s\n",
+			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("\nRESV\n");
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
 		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
 		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
 		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+
+		printf("%d type: %d(0x%x) %d %d\n",
+		       i,
+		       resv[i].type,
+		       resv[i].type,
+		       resv[i].start,
+		       resv[i].stride);
 	}
 
 	tf_msg_free_dma_buf(&req_buf);
@@ -1460,7 +1478,8 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
 
-	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	data_size = params->num_entries * params->entry_sz_in_bytes;
+
 	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
 
 	MSG_PREP(parms,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index a3e0f7bba..fb635f6dc 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_rm_new.h"
+#include "tf_tcam.h"
 
 struct tf;
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 7cadb231f..6abf79aa1 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -10,6 +10,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_rm_new.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
 #include "tf_device.h"
@@ -65,6 +66,46 @@ struct tf_rm_new_db {
 	struct tf_rm_element *db;
 };
 
+/**
+ * Count the number of HCAPI reservations requested for a module.
+ *
+ * Walks the module DB configuration and counts the HCAPI managed
+ * elements that carry a non-zero reservation request; non HCAPI
+ * elements and elements with no request are skipped.
+ *
+ * [in] cfg
+ *   Pointer to the DB configuration
+ *
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
+ *
+ * [in] count
+ *   Number of DB configuration elements
+ *
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
+ *
+ * Returns:
+ *   Void, the resulting count is returned through valid_count.
+ */
+static void
+tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
+{
+	int i;
+	uint16_t cnt = 0;
+
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
+	}
+
+	*valid_count = cnt;
+}
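
A small worked example of the counting, with hypothetical values not taken from the patch: three elements configured as HCAPI, HCAPI and PRIVATE with reservation requests of 4, 0 and 7 yield a single request entry for firmware, while the DB later still tracks all three elements.

	/* Hypothetical illustration only */
	struct tf_rm_element_cfg cfg[] = {
		{ .cfg_type = TF_RM_ELEM_CFG_HCAPI },
		{ .cfg_type = TF_RM_ELEM_CFG_HCAPI },
		{ .cfg_type = TF_RM_ELEM_CFG_PRIVATE },
	};
	uint16_t alloc_cnt[] = { 4, 0, 7 };
	uint16_t hcapi_items;

	tf_rm_count_hcapi_reservations(cfg, alloc_cnt, 3, &hcapi_items);
	/* hcapi_items == 1: element 1 requested nothing and element 2
	 * is not HCAPI managed, so neither contributes a request entry.
	 */
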
 
 /**
  * Resource Manager Adjust of base index definitions.
@@ -132,6 +173,7 @@ tf_rm_create_db(struct tf *tfp,
 {
 	int rc;
 	int i;
+	int j;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	uint16_t max_types;
@@ -143,6 +185,9 @@ tf_rm_create_db(struct tf *tfp,
 	struct tf_rm_new_db *rm_db;
 	struct tf_rm_element *db;
 	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -177,10 +222,19 @@ tf_rm_create_db(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/* Process capabilities against db requirements */
+	/* Process capabilities against the DB requirements. A DB can
+	 * hold elements that are not HCAPI managed; those are left out
+	 * of the request message while the DB still holds them all to
+	 * give a fast lookup. Entries with no requested allocation are
+	 * also left out of the request.
+	 */
+	tf_rm_count_hcapi_reservations(parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
 
 	/* Alloc request, alignment already set */
-	cparms.nitems = parms->num_elements;
+	cparms.nitems = (size_t)hcapi_items;
 	cparms.size = sizeof(struct tf_rm_resc_req_entry);
 	rc = tfp_calloc(&cparms);
 	if (rc)
@@ -195,15 +249,24 @@ tf_rm_create_db(struct tf *tfp,
 	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
 
 	/* Build the request */
-	for (i = 0; i < parms->num_elements; i++) {
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
 		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			req[i].type = parms->cfg[i].hcapi_type;
-			/* Check that we can get the full amount allocated */
-			if (parms->alloc_num[i] <=
+			/* Only perform reservation for entries that
+			 * have been requested.
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
 			    query[parms->cfg[i].hcapi_type].max) {
-				req[i].min = parms->alloc_num[i];
-				req[i].max = parms->alloc_num[i];
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
 			} else {
 				TFP_DRV_LOG(ERR,
 					    "%s: Resource failure, type:%d\n",
@@ -211,19 +274,16 @@ tf_rm_create_db(struct tf *tfp,
 					    parms->cfg[i].hcapi_type);
 				TFP_DRV_LOG(ERR,
 					"req:%d, avail:%d\n",
-					parms->alloc_num[i],
+					parms->alloc_cnt[i],
 					query[parms->cfg[i].hcapi_type].max);
 				return -EINVAL;
 			}
-		} else {
-			/* Skip the element */
-			req[i].type = CFA_RESOURCE_TYPE_INVALID;
 		}
 	}
 
 	rc = tf_msg_session_resc_alloc(tfp,
 				       parms->dir,
-				       parms->num_elements,
+				       hcapi_items,
 				       req,
 				       resv);
 	if (rc)
@@ -246,42 +306,74 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
 	db = rm_db->db;
-	for (i = 0; i < parms->num_elements; i++) {
-		/* If allocation failed for a single entry the DB
-		 * creation is considered a failure.
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+
+		/* Skip any non HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
+
+		/* If the element didn't request an allocation no need
+		 * to create a pool nor verify if we got a reservation.
 		 */
-		if (parms->alloc_num[i] != resv[i].stride) {
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element had requested an allocation and that
+		 * allocation was a success (full amount) then
+		 * allocate the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
 				    "%s: Alloc failed, type:%d\n",
 				    tf_dir_2_str(parms->dir),
-				    i);
+				    db[i].cfg_type);
 			TFP_DRV_LOG(ERR,
 				    "req:%d, alloc:%d\n",
-				    parms->alloc_num[i],
-				    resv[i].stride);
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
 			goto fail;
 		}
-
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-		db[i].alloc.entry.start = resv[i].start;
-		db[i].alloc.entry.stride = resv[i].stride;
-
-		/* Create pool */
-		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
-			     sizeof(struct bitalloc));
-		/* Alloc request, alignment already set */
-		cparms.nitems = pool_size;
-		cparms.size = sizeof(struct bitalloc);
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-		db[i].pool = (struct bitalloc *)cparms.mem_va;
 	}
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
-	parms->rm_db = (void *)rm_db;
+	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
@@ -307,13 +399,15 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 	int i;
 	struct tf_rm_new_db *rm_db;
 
+	TF_CHECK_PARMS1(parms);
+
 	/* Traverse the DB and clear each pool.
 	 * NOTE:
 	 *   Firmware is not cleared. It will be cleared on close only.
 	 */
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db->pool);
+		tfp_free((void *)rm_db->db[i].pool);
 
 	tfp_free((void *)parms->rm_db);
 
@@ -325,11 +419,11 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc = 0;
 	int id;
+	uint32_t index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -339,6 +433,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		TFP_DRV_LOG(ERR,
@@ -353,15 +458,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 				TF_RM_ADJUST_ADD_BASE,
 				parms->db_index,
 				id,
-				parms->index);
+				&index);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc adjust of base index failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -1;
+		return -EINVAL;
 	}
 
+	*parms->index = index;
+
 	return rc;
 }
 
@@ -373,8 +480,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -384,6 +490,17 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -409,8 +526,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -420,6 +536,17 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -442,8 +569,7 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -465,8 +591,7 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 6d8234ddc..ebf38c411 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -135,13 +135,16 @@ struct tf_rm_create_db_parms {
 	 */
 	struct tf_rm_element_cfg *cfg;
 	/**
-	 * Allocation number array. Array size is num_elements.
+	 * Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
 	 */
-	uint16_t *alloc_num;
+	uint16_t *alloc_cnt;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *rm_db;
+	void **rm_db;
 };
 
 /**
@@ -382,7 +385,7 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
  * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI type.
+ * requested HCAPI RM type.
  *
  * [in] parms
  *   Pointer to get hcapi parameters
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 1917f8100..3a602618c 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -95,21 +95,11 @@ tf_session_open_session(struct tf *tfp,
 		      parms->open_cfg->device_type,
 		      session->shadow_copy,
 		      &parms->open_cfg->resources,
-		      session->dev);
+		      &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
 
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
 	session->ref_count++;
 
 	return 0;
@@ -119,10 +109,6 @@ tf_session_open_session(struct tf *tfp,
 	tfp_free(tfp->session);
 	tfp->session = NULL;
 	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
 }
 
 int
@@ -231,17 +217,7 @@ int
 tf_session_get_device(struct tf_session *tfs,
 		      struct tf_dev_info **tfd)
 {
-	int rc;
-
-	if (tfs->dev == NULL) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR,
-			    "Device not created, rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	*tfd = tfs->dev;
+	*tfd = &tfs->dev;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 92792518b..705bb0955 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -97,7 +97,7 @@ struct tf_session {
 	uint8_t ref_count;
 
 	/** Device handle */
-	struct tf_dev_info *dev;
+	struct tf_dev_info dev;
 
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index a68335304..e594f0248 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -761,163 +761,6 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 	return 0;
 }
 
-/**
- * Internal function to set a Table Entry. Supports all internal Table Types
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_set_tbl_entry_internal(struct tf *tfp,
-			  struct tf_set_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
 /**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
@@ -1145,266 +988,6 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
-/**
- * Allocate External Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_external(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	uint32_t index;
-	struct tf_session *tfs;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	/* Allocate an element
-	 */
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type);
-		return rc;
-	}
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Allocate Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	int free_cnt;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	id = ba_alloc(session_pool);
-	if (id == -1) {
-		free_cnt = ba_free_count(session_pool);
-
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d, free:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   free_cnt);
-		return -ENOMEM;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_ADD_BASE,
-			    id,
-			    &index);
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Free External Tbl entry to the session pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry freed
- *
- * - Failure, entry not successfully freed for these reasons
- *  -ENOMEM
- *  -EOPNOTSUPP
- *  -EINVAL
- */
-static int
-tf_free_tbl_entry_pool_external(struct tf *tfp,
-				struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_session *tfs;
-	uint32_t index;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	index = parms->idx;
-
-	rc = stack_push(pool, index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, consistency error, stack full, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-	}
-	return rc;
-}
-
-/**
- * Free Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_free_tbl_entry_pool_internal(struct tf *tfp,
-		       struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	int id;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	uint32_t index;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Check if element was indeed allocated */
-	id = ba_inuse_free(session_pool, index);
-	if (id == -1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -ENOMEM;
-	}
-
-	return rc;
-}
-
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
 tbl_scope_cb_find(struct tf_session *session,
@@ -1584,113 +1167,7 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 	return -EINVAL;
 }
 
-/* API defined in tf_core.h */
-int
-tf_set_tbl_entry(struct tf *tfp,
-		 struct tf_set_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		void *base_addr;
-		uint32_t offset = parms->idx;
-		uint32_t tbl_scope_id;
-
-		session = (struct tf_session *)(tfp->session->core_data);
-
-		tbl_scope_id = parms->tbl_scope_id;
-
-		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			TFP_DRV_LOG(ERR,
-				    "%s, Table scope not allocated\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		/* Get the table scope control block associated with the
-		 * external pool
-		 */
-		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
-
-		if (tbl_scope_cb == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, table scope error\n",
-				    tf_dir_2_str(parms->dir));
-				return -EINVAL;
-		}
-
-		/* External table, implicitly the Action table */
-		base_addr = (void *)(uintptr_t)
-		hcapi_get_table_page(&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE], offset);
-
-		if (base_addr == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Base address lookup failed\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		offset %= TF_EM_PAGE_SIZE;
-		rte_memcpy((char *)base_addr + offset,
-			   parms->data,
-			   parms->data_sz_in_bytes);
-	} else {
-		/* Internal table type processing */
-		rc = tf_set_tbl_entry_internal(tfp, parms);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Set failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-		}
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_get_tbl_entry(struct tf *tfp,
-		 struct tf_get_tbl_entry_parms *parms)
-{
-	int rc = 0;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
-		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
-			    tf_dir_2_str(parms->dir));
-
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_get_tbl_entry_internal(tfp, parms);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s, Get failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
+/* API defined in tf_core.h */
 int
 tf_bulk_get_tbl_entry(struct tf *tfp,
 		 struct tf_bulk_get_tbl_entry_parms *parms)
@@ -1749,92 +1226,6 @@ tf_free_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_entry(struct tf *tfp,
-		   struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	/*
-	 * No shadow copy support for external tables, allocate and return
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
-		/* Entry found and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_entry(struct tf *tfp,
-		  struct tf_free_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/*
-	 * No shadow of external tables so just free the entry
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_free_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_free_tbl_entry_shadow(tfs, parms);
-		/* Entry free'ed and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
-
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-	return rc;
-}
-
-
 static void
 tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
 			struct hcapi_cfa_em_page_tbl *tp_next)
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index b79706f97..51f8f0740 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -6,13 +6,18 @@
 #include <rte_common.h>
 
 #include "tf_tbl_type.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Table DBs.
  */
-/* static void *tbl_db[TF_DIR_MAX]; */
+static void *tbl_db[TF_DIR_MAX];
 
 /**
  * Table Shadow DBs
@@ -22,7 +27,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -30,29 +35,164 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_bind(struct tf *tfp __rte_unused,
-	    struct tf_tbl_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
 	return 0;
 }
 
 int
 tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; this allows unbind to
+	 * be used for creation cleanup.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Table DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms __rte_unused)
+	     struct tf_tbl_alloc_parms *parms)
 {
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
 	return 0;
 }
 
 int
 tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms __rte_unused)
+	    struct tf_tbl_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
 	return 0;
 }
 
@@ -64,15 +204,107 @@ tf_tbl_alloc_search(struct tf *tfp __rte_unused,
 }
 
 int
-tf_tbl_set(struct tf *tfp __rte_unused,
-	   struct tf_tbl_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	return 0;
 }
 
 int
-tf_tbl_get(struct tf *tfp __rte_unused,
-	   struct tf_tbl_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index 11f2aa333..3474489a6 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -55,7 +55,7 @@ struct tf_tbl_alloc_parms {
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint32_t *idx;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b9dba5323..e0fac31f2 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -38,8 +38,8 @@ static uint8_t init;
 /* static uint8_t shadow_init; */
 
 int
-tf_tcam_bind(struct tf *tfp __rte_unused,
-	     struct tf_tcam_cfg_parms *parms __rte_unused)
+tf_tcam_bind(struct tf *tfp,
+	     struct tf_tcam_cfg_parms *parms)
 {
 	int rc;
 	int i;
@@ -59,8 +59,8 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
-		db_cfg.rm_db = tcam_db[i];
+		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
+		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
@@ -72,11 +72,13 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 
 	init = 1;
 
+	printf("TCAM - initialized\n");
+
 	return 0;
 }
 
 int
-tf_tcam_unbind(struct tf *tfp __rte_unused)
+tf_tcam_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index 4099629ea..ad8edaf30 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -10,32 +10,57 @@
 
 /**
  * Helper function converting direction to text string
+ *
+ * [in] dir
+ *   Receive or transmit direction identifier
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the direction
  */
-const char
-*tf_dir_2_str(enum tf_dir dir);
+const char *tf_dir_2_str(enum tf_dir dir);
 
 /**
  * Helper function converting identifier to text string
+ *
+ * [in] id_type
+ *   Identifier type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the identifier
  */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
+const char *tf_ident_2_str(enum tf_identifier_type id_type);
 
 /**
  * Helper function converting tcam type to text string
+ *
+ * [in] tcam_type
+ *   TCAM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the tcam
  */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+const char *tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
 
 /**
  * Helper function converting tbl type to text string
+ *
+ * [in] tbl_type
+ *   Table type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the table type
  */
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
 
 /**
  * Helper function converting em tbl type to text string
+ *
+ * [in] em_type
+ *   EM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the EM type
  */
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
 #endif /* _TF_UTIL_H_ */
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 3bce3ade1..69d1c9a1f 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -102,13 +102,13 @@ tfp_calloc(struct tfp_calloc_parms *parms)
 				    (parms->nitems * parms->size),
 				    parms->alignment);
 	if (parms->mem_va == NULL) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_va\n");
 		return -ENOMEM;
 	}
 
 	parms->mem_pa = (void *)((uintptr_t)rte_mem_virt2iova(parms->mem_va));
 	if (parms->mem_pa == (void *)((uintptr_t)RTE_BAD_IOVA)) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_pa\n");
 		return -ENOMEM;
 	}
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 19/51] net/bnxt: update identifier with remap support
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (17 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 18/51] net/bnxt: multiple device implementation Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
                           ` (31 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add the Identifier L2 CTXT Remap to the P4 device and update
  cfa_resource_types.h so the new resource type is supported.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 110 ++++++++++--------
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   2 +-
 2 files changed, 60 insertions(+), 52 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 11e8892f4..058d8cc88 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -20,46 +20,48 @@
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_REMAP   0x1UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x2UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x3UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x4UL
 /* Wildcard TCAM Profile Id */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x5UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x6UL
 /* Meter Profile */
-#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x7UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+#define CFA_RESOURCE_TYPE_P59_METER           0x8UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x9UL
 /* Source Properties TCAM */
-#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0xaUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xbUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xcUL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
-#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
 /* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
 /* Link Aggrigation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
@@ -105,30 +107,32 @@
 #define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
 #define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
@@ -176,26 +180,28 @@
 #define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
 #define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
@@ -243,24 +249,26 @@
 #define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
 #define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 5cd02b298..235d81f96 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,7 +12,7 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 20/51] net/bnxt: update RM with residual checker
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (18 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
                           ` (30 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add a residual checker to the TF Host RM as well as new RM APIs. On
  close it scans the DB and checks for any remaining elements. If any
  are found they are logged and a FW message is sent so FW can scrub
  that specific type of resource (a sketch of this flow follows the
  list below).
- Update the bind of each module so it records its module type.
- Add additional type-to-string (2_str) util functions.
- Fix the device naming to comply with TF conventions.
- Update the device unbind order to ensure TCAMs get flushed first.
- Update the close functionality such that the session gets closed
  after the device is unbound.
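
A minimal sketch of the intended flow, not taken verbatim from this
patch: the helper name module_unbind_example is hypothetical, while
tf_rm_free_db(), tf_rm_check_residuals(), tf_rm_log_residuals() and
tf_msg_session_resc_flush() are the routines touched by this change.
Error handling is trimmed for brevity.

/* Hypothetical caller: how a support module's unbind hands its RM DB
 * back and thereby triggers the residual scan and FW flush.
 */
static int
module_unbind_example(struct tf *tfp, enum tf_dir dir, void *rm_db)
{
	struct tf_rm_free_db_parms fparms = { 0 };

	fparms.dir = dir;
	fparms.rm_db = rm_db;

	/* Inside tf_rm_free_db():
	 *  - tf_rm_check_residuals() scans the DB and builds a reduced
	 *    reservation array holding only the leaked entries,
	 *  - tf_rm_log_residuals() reports each leaked type and count,
	 *  - tf_msg_session_resc_flush() sends the array to FW so the
	 *    firmware can scrub those resources before the pools are
	 *    freed.
	 */
	return tf_rm_free_db(tfp, &fparms);
}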

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device.c     |  53 +++--
 drivers/net/bnxt/tf_core/tf_device.h     |  25 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   1 -
 drivers/net/bnxt/tf_core/tf_identifier.c |  10 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  67 +++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |   7 +
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 287 +++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  45 +++-
 drivers/net/bnxt/tf_core/tf_session.c    |  58 +++--
 drivers/net/bnxt/tf_core/tf_tbl_type.c   |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.h       |   4 +
 drivers/net/bnxt/tf_core/tf_util.c       |  55 ++++-
 drivers/net/bnxt/tf_core/tf_util.h       |  32 +++
 14 files changed, 561 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index b474e8c25..441d0c678 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -10,7 +10,7 @@
 struct tf;
 
 /* Forward declarations */
-static int dev_unbind_p4(struct tf *tfp);
+static int tf_dev_unbind_p4(struct tf *tfp);
 
 /**
  * Device specific bind function, WH+
@@ -32,10 +32,10 @@ static int dev_unbind_p4(struct tf *tfp);
  *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp,
-	    bool shadow_copy,
-	    struct tf_session_resources *resources,
-	    struct tf_dev_info *dev_handle)
+tf_dev_bind_p4(struct tf *tfp,
+	       bool shadow_copy,
+	       struct tf_session_resources *resources,
+	       struct tf_dev_info *dev_handle)
 {
 	int rc;
 	int frc;
@@ -93,7 +93,7 @@ dev_bind_p4(struct tf *tfp,
 
  fail:
 	/* Cleanup of already created modules */
-	frc = dev_unbind_p4(tfp);
+	frc = tf_dev_unbind_p4(tfp);
 	if (frc)
 		return frc;
 
@@ -111,7 +111,7 @@ dev_bind_p4(struct tf *tfp,
  *   - (-EINVAL) on failure.
  */
 static int
-dev_unbind_p4(struct tf *tfp)
+tf_dev_unbind_p4(struct tf *tfp)
 {
 	int rc = 0;
 	bool fail = false;
@@ -119,25 +119,28 @@ dev_unbind_p4(struct tf *tfp)
 	/* Unbind all the support modules. As this is only done on
 	 * close we only report errors as everything has to be cleaned
 	 * up regardless.
+	 *
+	 * In case of residuals TCAMs are cleaned up first as to
+	 * invalidate the pipeline in a clean manner.
 	 */
-	rc = tf_ident_unbind(tfp);
+	rc = tf_tcam_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Identifier\n");
+			    "Device unbind failed, TCAM\n");
 		fail = true;
 	}
 
-	rc = tf_tbl_unbind(tfp);
+	rc = tf_ident_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Table Type\n");
+			    "Device unbind failed, Identifier\n");
 		fail = true;
 	}
 
-	rc = tf_tcam_unbind(tfp);
+	rc = tf_tbl_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, TCAM\n");
+			    "Device unbind failed, Table Type\n");
 		fail = true;
 	}
 
@@ -148,18 +151,18 @@ dev_unbind_p4(struct tf *tfp)
 }
 
 int
-dev_bind(struct tf *tfp __rte_unused,
-	 enum tf_device_type type,
-	 bool shadow_copy,
-	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_handle)
+tf_dev_bind(struct tf *tfp __rte_unused,
+	    enum tf_device_type type,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_bind_p4(tfp,
-				   shadow_copy,
-				   resources,
-				   dev_handle);
+		return tf_dev_bind_p4(tfp,
+				      shadow_copy,
+				      resources,
+				      dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
@@ -168,12 +171,12 @@ dev_bind(struct tf *tfp __rte_unused,
 }
 
 int
-dev_unbind(struct tf *tfp,
-	   struct tf_dev_info *dev_handle)
+tf_dev_unbind(struct tf *tfp,
+	      struct tf_dev_info *dev_handle)
 {
 	switch (dev_handle->type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_unbind_p4(tfp);
+		return tf_dev_unbind_p4(tfp);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c31bf2357..c8feac55d 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -14,6 +14,17 @@
 struct tf;
 struct tf_session;
 
+/**
+ *
+ */
+enum tf_device_module_type {
+	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	TF_DEVICE_MODULE_TYPE_TABLE,
+	TF_DEVICE_MODULE_TYPE_TCAM,
+	TF_DEVICE_MODULE_TYPE_EM,
+	TF_DEVICE_MODULE_TYPE_MAX
+};
+
 /**
  * The Device module provides a general device template. A supported
  * device type should implement one or more of the listed function
@@ -60,11 +71,11 @@ struct tf_dev_info {
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_bind(struct tf *tfp,
-	     enum tf_device_type type,
-	     bool shadow_copy,
-	     struct tf_session_resources *resources,
-	     struct tf_dev_info *dev_handle);
+int tf_dev_bind(struct tf *tfp,
+		enum tf_device_type type,
+		bool shadow_copy,
+		struct tf_session_resources *resources,
+		struct tf_dev_info *dev_handle);
 
 /**
  * Device release handles cleanup of the device specific information.
@@ -80,8 +91,8 @@ int dev_bind(struct tf *tfp,
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_unbind(struct tf *tfp,
-	       struct tf_dev_info *dev_handle);
+int tf_dev_unbind(struct tf *tfp,
+		  struct tf_dev_info *dev_handle);
 
 /**
  * Truflow device specific function hooks structure
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 235d81f96..411e21637 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -77,5 +77,4 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
-
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index ee07a6aea..b197bb271 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -39,12 +39,12 @@ tf_ident_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_IDENTIFIER;
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
 		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
@@ -86,8 +86,10 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 		fparms.dir = i;
 		fparms.rm_db = ident_db[i];
 		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "rm free failed on unbind\n");
+		}
 
 		ident_db[i] = NULL;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index a2e3840f0..c015b0ce2 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1110,6 +1110,69 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	return rc;
 }
 
+int
+tf_msg_session_resc_flush(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_flush_input req = { 0 };
+	struct hwrm_tf_session_resc_flush_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	TF_CHECK_PARMS2(tfp, resv);
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.flush_size = size;
+
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv_data[i].type = tfp_cpu_to_le_32(resv[i].type);
+		resv_data[i].start = tfp_cpu_to_le_16(resv[i].start);
+		resv_data[i].stride = tfp_cpu_to_le_16(resv[i].stride);
+	}
+
+	req.flush_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_FLUSH;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1512,9 +1575,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
-	if (rc != 0)
-		return rc;
+	req.type = parms->type;
 
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index fb635f6dc..1ff1044e8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -181,6 +181,13 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 			      struct tf_rm_resc_req_entry *request,
 			      struct tf_rm_resc_entry *resv);
 
+/**
+ * Sends session resource flush request to TF Firmware
+ */
+int tf_msg_session_resc_flush(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_entry *resv);
 /**
  * Sends EM internal insert request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 6abf79aa1..02b4b5c8f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -60,6 +60,11 @@ struct tf_rm_new_db {
 	 */
 	enum tf_dir dir;
 
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+
 	/**
 	 * The DB consists of an array of elements
 	 */
@@ -167,6 +172,178 @@ tf_rm_adjust_index(struct tf_rm_element *db,
 	return rc;
 }
 
+/**
+ * Logs an array of found residual entries to the console.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] type
+ *   Type of Device Module
+ *
+ * [in] count
+ *   Number of entries in the residual array
+ *
+ * [in] residuals
+ *   Pointer to an array of residual entries. Array is index same as
+ *   the DB in which this function is used. Each entry holds residual
+ *   value for that entry.
+ */
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
+{
+	int i;
+
+	/* Walk the residual array and log the types that wasn't
+	 * cleaned up to the console.
+	 */
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
+	}
+}
+
+/**
+ * Performs a check of the passed in DB for any lingering elements. If
+ * a resource type was found to not have been cleaned up by the caller
+ * then its residual values are recorded, logged and passed back in an
+ * allocate reservation array that the caller can pass to the FW for
+ * cleanup.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
+ *
+ * [in/out] resv
+ *   Pointer Pointer to a reservation array. The reservation array is
+ *   allocated after the residual scan and holds any found residual
+ *   entries. Thus it can be smaller than the DB that the check was
+ *   performed on. Array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating if residual was present in the
+ *   DB
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
+{
+	int rc;
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
+	}
+
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			return rc;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
+		}
+		*resv_size = found;
+	}
+
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
+
 int
 tf_rm_create_db(struct tf *tfp,
 		struct tf_rm_create_db_parms *parms)
@@ -373,6 +550,7 @@ tf_rm_create_db(struct tf *tfp,
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
@@ -392,20 +570,69 @@ tf_rm_create_db(struct tf *tfp,
 }
 
 int
-tf_rm_free_db(struct tf *tfp __rte_unused,
+tf_rm_free_db(struct tf *tfp,
 	      struct tf_rm_free_db_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
 	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
 
-	TF_CHECK_PARMS1(parms);
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	/* Traverse the DB and clear each pool.
-	 * NOTE:
-	 *   Firmware is not cleared. It will be cleared on close only.
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will cleanup each of
+	 * its support modules, i.e. Identifier, thus we're ending up
+	 * here to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in the 'cleanup checking' the DB is checked for any
+	 * remaining elements and logged if found to be the case.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previous allocated elements to the RM and then close the
+	 * HCAPI RM registration. That then saves several 'free' msgs
+	 * from being required.
 	 */
+
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
+	 */
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to cleanup so we can only
+		 * log that FW failed.
+		 */
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
+	}
+
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -417,7 +644,7 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 int
 tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int id;
 	uint32_t index;
 	struct tf_rm_new_db *rm_db;
@@ -446,11 +673,12 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
+		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
 			    "%s: Allocation failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -ENOMEM;
+		return rc;
 	}
 
 	/* Adjust for any non zero start value */
@@ -475,7 +703,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 int
 tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -521,7 +749,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 int
 tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -565,7 +793,6 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 int
 tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -579,15 +806,16 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
-	parms->info = &rm_db->db[parms->db_index].alloc;
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
-	return rc;
+	return 0;
 }
 
 int
 tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -603,5 +831,36 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
 
+	return 0;
+}
+
+int
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
+{
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail silently (no logging), if the pool is not valid there
+	 * was no elements allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
+	}
+
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
 	return rc;
+
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index ebf38c411..a40296ed2 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -8,6 +8,7 @@
 
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
 
@@ -57,9 +58,9 @@ struct tf_rm_new_entry {
 enum tf_rm_elem_cfg_type {
 	/** No configuration */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled' */
+	/** HCAPI 'controlled', uses a Pool for internal storage */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled' */
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
 	TF_RM_ELEM_CFG_PRIVATE,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
@@ -123,7 +124,11 @@ struct tf_rm_alloc_info {
  */
 struct tf_rm_create_db_parms {
 	/**
-	 * [in] Receive or transmit direction
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
 	 */
 	enum tf_dir dir;
 	/**
@@ -263,6 +268,25 @@ struct tf_rm_get_hcapi_parms {
 	uint16_t *hcapi_type;
 };
 
+/**
+ * Get InUse count parameters for single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
 /**
  * @page rm Resource Manager
  *
@@ -279,6 +303,8 @@ struct tf_rm_get_hcapi_parms {
  * @ref tf_rm_get_info
  *
  * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
 
 /**
@@ -396,4 +422,17 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
+/**
+ * Performs a lookup in the Resource Manager DB and retrives the
+ * requested HCAPI RM type inuse count.
+ *
+ * [in] parms
+ *   Pointer to get inuse parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
+
 #endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 3a602618c..b08d06306 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -91,11 +91,11 @@ tf_session_open_session(struct tf *tfp,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
-	rc = dev_bind(tfp,
-		      parms->open_cfg->device_type,
-		      session->shadow_copy,
-		      &parms->open_cfg->resources,
-		      &session->dev);
+	rc = tf_dev_bind(tfp,
+			 parms->open_cfg->device_type,
+			 session->shadow_copy,
+			 &parms->open_cfg->resources,
+			 &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
@@ -151,6 +151,8 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	tfs->ref_count--;
+
 	/* Record the session we're closing so the caller knows the
 	 * details.
 	 */
@@ -164,6 +166,32 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	if (tfs->ref_count > 0) {
+		/* In case we're attached only the session client gets
+		 * closed.
+		 */
+		rc = tf_msg_session_close(tfp);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "FW Session close failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		return 0;
+	}
+
+	/* Final cleanup as we're last user of the session */
+
+	/* Unbind the device */
+	rc = tf_dev_unbind(tfp, tfd);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
 	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
@@ -173,23 +201,9 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	tfs->ref_count--;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Unbind the device */
-		rc = dev_unbind(tfp, tfd);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Device unbind failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index 51f8f0740..bdf7d2089 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -51,11 +51,12 @@ tf_tbl_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
 		db_cfg.rm_db = &tbl_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index e0fac31f2..2f4441de8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -54,11 +54,12 @@ tf_tcam_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
 		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 67c3bcb49..5090dfd9f 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -146,6 +146,10 @@ struct tf_tcam_set_parms {
 	 * [in] Type of object to set
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
 	/**
 	 * [in] Entry index to write to
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index a9010543d..16c43eb67 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -7,8 +7,8 @@
 
 #include "tf_util.h"
 
-const char
-*tf_dir_2_str(enum tf_dir dir)
+const char *
+tf_dir_2_str(enum tf_dir dir)
 {
 	switch (dir) {
 	case TF_DIR_RX:
@@ -20,8 +20,8 @@ const char
 	}
 }
 
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
+const char *
+tf_ident_2_str(enum tf_identifier_type id_type)
 {
 	switch (id_type) {
 	case TF_IDENT_TYPE_L2_CTXT:
@@ -39,8 +39,8 @@ const char
 	}
 }
 
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+const char *
+tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
 {
 	switch (tcam_type) {
 	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
@@ -60,8 +60,8 @@ const char
 	}
 }
 
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+const char *
+tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 {
 	switch (tbl_type) {
 	case TF_TBL_TYPE_FULL_ACT_RECORD:
@@ -131,8 +131,8 @@ const char
 	}
 }
 
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+const char *
+tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
 {
 	switch (em_type) {
 	case TF_EM_TBL_TYPE_EM_RECORD:
@@ -143,3 +143,38 @@ const char
 		return "Invalid EM type";
 	}
 }
+
+const char *
+tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
+				    uint16_t mod_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return tf_ident_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return tf_tcam_tbl_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return tf_em_tbl_type_2_str(mod_type);
+	default:
+		return "Invalid Device Module type";
+	}
+}
+
+const char *
+tf_device_module_type_2_str(enum tf_device_module_type dm_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return "Identifer";
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return "Table";
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return "TCAM";
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return "EM";
+	default:
+		return "Invalid Device Module type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index ad8edaf30..c97e2a66a 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -7,6 +7,7 @@
 #define _TF_UTIL_H_
 
 #include "tf_core.h"
+#include "tf_device.h"
 
 /**
  * Helper function converting direction to text string
@@ -63,4 +64,35 @@ const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
  */
 const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
+/**
+ * Helper function converting device module type and module type to
+ * text string.
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * [in] mod_type
+ *   Module specific type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the EM type
+ */
+const char *tf_device_module_type_subtype_2_str
+					(enum tf_device_module_type dm_type,
+					 uint16_t mod_type);
+
+/**
+ * Helper function converting device module type to text string
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * [in] mod_type
+ *   Module specific type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the EM type
+ */
+const char *tf_device_module_type_2_str(enum tf_device_module_type dm_type);
+
 #endif /* _TF_UTIL_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 21/51] net/bnxt: support two level priority for TCAMs
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (19 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
                           ` (29 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

Allow TCAM indexes to be allocated from the top or the bottom of the
table. If the priority is set to 0, allocate from the lowest TCAM
indexes, i.e. from the top. For any other value, allocate from the
highest TCAM indexes, i.e. from the bottom.
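
A minimal usage sketch, assuming the tf_rm_allocate_parms layout
introduced in this series; the wrapper name tcam_alloc_example is
hypothetical and error handling is omitted.

/* Hypothetical wrapper: priority == 0 takes the lowest free index
 * (ba_alloc, top of the TCAM); any non-zero priority takes the
 * highest free index (ba_alloc_reverse, bottom of the TCAM).
 */
static int
tcam_alloc_example(void *rm_db, uint16_t db_index, uint32_t priority,
		   uint32_t *idx)
{
	struct tf_rm_allocate_parms aparms = { 0 };

	aparms.rm_db = rm_db;
	aparms.db_index = db_index;
	aparms.priority = priority;   /* 0 = top, !0 = bottom */
	aparms.index = idx;

	return tf_rm_allocate(&aparms);
}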

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/bitalloc.c  | 107 +++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/bitalloc.h  |   5 ++
 drivers/net/bnxt/tf_core/tf_rm_new.c |   9 ++-
 drivers/net/bnxt/tf_core/tf_rm_new.h |   8 ++
 drivers/net/bnxt/tf_core/tf_tcam.c   |   1 +
 5 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
index fb4df9a19..918cabf19 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.c
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -7,6 +7,40 @@
 
 #define BITALLOC_MAX_LEVELS 6
 
+
+/* Finds the last bit set plus 1, equivalent to gcc __builtin_fls */
+static int
+ba_fls(bitalloc_word_t v)
+{
+	int c = 32;
+
+	if (!v)
+		return 0;
+
+	if (!(v & 0xFFFF0000u)) {
+		v <<= 16;
+		c -= 16;
+	}
+	if (!(v & 0xFF000000u)) {
+		v <<= 8;
+		c -= 8;
+	}
+	if (!(v & 0xF0000000u)) {
+		v <<= 4;
+		c -= 4;
+	}
+	if (!(v & 0xC0000000u)) {
+		v <<= 2;
+		c -= 2;
+	}
+	if (!(v & 0x80000000u)) {
+		v <<= 1;
+		c -= 1;
+	}
+
+	return c;
+}
+
 /* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */
 static int
 ba_ffs(bitalloc_word_t v)
@@ -120,6 +154,79 @@ ba_alloc(struct bitalloc *pool)
 	return ba_alloc_helper(pool, 0, 1, 32, 0, &clear);
 }
 
+/**
+ * Help function to alloc entry from highest available index
+ *
+ * Searching the pool from highest index for the empty entry.
+ *
+ * [in] pool
+ *   Pointer to the resource pool
+ *
+ * [in] offset
+ *   Offset of the storage in the pool
+ *
+ * [in] words
+ *   Number of words in this level
+ *
+ * [in] size
+ *   Number of entries in this level
+ *
+ * [in] index
+ *   Index of words that has the entry
+ *
+ * [in] clear
+ *   Indicate if a bit needs to be clear due to the entry is allocated
+ *
+ * Returns:
+ *     0 - Success
+ *    -1 - Failure
+ */
+static int
+ba_alloc_reverse_helper(struct bitalloc *pool,
+			int offset,
+			int words,
+			unsigned int size,
+			int index,
+			int *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int loc = ba_fls(storage[index]);
+	int r;
+
+	if (loc == 0)
+		return -1;
+
+	loc--;
+
+	if (pool->size > size) {
+		r = ba_alloc_reverse_helper(pool,
+					    offset + words + 1,
+					    storage[words],
+					    size * 32,
+					    index * 32 + loc,
+					    clear);
+	} else {
+		r = index * 32 + loc;
+		*clear = 1;
+		pool->free_count--;
+	}
+
+	if (*clear) {
+		storage[index] &= ~(1 << loc);
+		*clear = (storage[index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc_reverse(struct bitalloc *pool)
+{
+	int clear = 0;
+
+	return ba_alloc_reverse_helper(pool, 0, 1, 32, 0, &clear);
+}
+
 static int
 ba_alloc_index_helper(struct bitalloc *pool,
 		      int              offset,
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
index 563c8531a..2825bb37e 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.h
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -72,6 +72,11 @@ int ba_init(struct bitalloc *pool, int size);
 int ba_alloc(struct bitalloc *pool);
 int ba_alloc_index(struct bitalloc *pool, int index);
 
+/**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc_reverse(struct bitalloc *pool);
+
 /**
  * Query a particular index in a pool to check if its in use.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 02b4b5c8f..de8f11955 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -671,7 +671,14 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 		return rc;
 	}
 
-	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	/*
+	 * priority  0: allocate from top of the tcam i.e. high
+	 * priority !0: allocate index from bottom i.e lowest
+	 */
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index a40296ed2..5cb68892a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -185,6 +185,14 @@ struct tf_rm_allocate_parms {
 	 * i.e. Full Action Record offsets.
 	 */
 	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the prority of the entry
+	 * priority  0: allocate from top of the tcam (from index 0
+	 *              or lowest available index)
+	 * priority !0: allocate from bottom of the tcam (from highest
+	 *              available index)
+	 */
+	uint32_t priority;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 2f4441de8..260fb15a6 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -157,6 +157,7 @@ tf_tcam_alloc(struct tf *tfp,
 	/* Allocate requested element */
 	aparms.rm_db = tcam_db[parms->dir];
 	aparms.db_index = parms->type;
+	aparms.priority = parms->priority;
 	aparms.index = (uint32_t *)&parms->idx;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 22/51] net/bnxt: support EM and TCAM lookup with table scope
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (20 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 23/51] net/bnxt: update table get to use new design Ajit Khaparde
                           ` (28 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Add support for table scope within the EM module.
- Add support for host and system memory.
- Update TCAM set/free.
- Replace the TF device type with the HCAPI RM type.
- Update TCAM set and free to use the HCAPI RM type (see the sketch
  after this list).
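
A minimal sketch of the idea, not the patch itself: the helper
tcam_set_resolve_hcapi_type and the per-direction tcam_db[] lookup are
illustrative, while tf_rm_get_hcapi_type() and the hcapi_type member
of tf_tcam_set_parms come from this series. The resolved HCAPI RM type
is what the FW message carries instead of the TF device type.

/* Hypothetical helper: resolve the HCAPI RM type for a TCAM table
 * type from the per-direction RM DB and stash it in the set parms.
 */
static int
tcam_set_resolve_hcapi_type(enum tf_dir dir,
			    struct tf_tcam_set_parms *parms)
{
	struct tf_rm_get_hcapi_parms hparms = { 0 };
	uint16_t hcapi_type;
	int rc;

	hparms.rm_db = tcam_db[dir];      /* RM DB created at bind time */
	hparms.db_index = parms->type;    /* TCAM table type == DB index */
	hparms.hcapi_type = &hcapi_type;

	rc = tf_rm_get_hcapi_type(&hparms);
	if (rc)
		return rc;

	parms->hcapi_type = hcapi_type;   /* forwarded in the FW message */
	return 0;
}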

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |    5 +-
 drivers/net/bnxt/tf_core/Makefile             |    5 +-
 drivers/net/bnxt/tf_core/cfa_resource_types.h |    8 +-
 drivers/net/bnxt/tf_core/hwrm_tf.h            |  864 +-----------
 drivers/net/bnxt/tf_core/tf_core.c            |  100 +-
 drivers/net/bnxt/tf_core/tf_device.c          |   50 +-
 drivers/net/bnxt/tf_core/tf_device.h          |   86 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |   14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   20 +-
 drivers/net/bnxt/tf_core/tf_em.c              |  360 -----
 drivers/net/bnxt/tf_core/tf_em.h              |  310 +++-
 drivers/net/bnxt/tf_core/tf_em_common.c       |  281 ++++
 drivers/net/bnxt/tf_core/tf_em_common.h       |  107 ++
 drivers/net/bnxt/tf_core/tf_em_host.c         | 1146 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_em_internal.c     |  312 +++++
 drivers/net/bnxt/tf_core/tf_em_system.c       |  118 ++
 drivers/net/bnxt/tf_core/tf_msg.c             | 1248 ++++-------------
 drivers/net/bnxt/tf_core/tf_msg.h             |  233 +--
 drivers/net/bnxt/tf_core/tf_rm.c              |   89 +-
 drivers/net/bnxt/tf_core/tf_rm_new.c          |   40 +-
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1134 ---------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |   39 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |   25 +-
 drivers/net/bnxt/tf_core/tf_tcam.h            |    4 +
 drivers/net/bnxt/tf_core/tf_util.c            |    4 +-
 25 files changed, 3030 insertions(+), 3572 deletions(-)
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 33e6ebd66..35038dc8b 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -28,7 +28,10 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_msg.c',
 	'tf_core/rand.c',
 	'tf_core/stack.c',
-	'tf_core/tf_em.c',
+        'tf_core/tf_em_common.c',
+        'tf_core/tf_em_host.c',
+        'tf_core/tf_em_internal.c',
+        'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 5ed32f12a..f186741e4 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -12,8 +12,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 058d8cc88..6e79facec 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -202,7 +202,9 @@
 #define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
 #define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
@@ -269,7 +271,9 @@
 #define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
 #define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 1e78296c6..26836e488 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -13,20 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_SESSION_ATTACH = 712,
-	HWRM_TFT_SESSION_HW_RESC_QCAPS = 721,
-	HWRM_TFT_SESSION_HW_RESC_ALLOC = 722,
-	HWRM_TFT_SESSION_HW_RESC_FREE = 723,
-	HWRM_TFT_SESSION_HW_RESC_FLUSH = 724,
-	HWRM_TFT_SESSION_SRAM_RESC_QCAPS = 725,
-	HWRM_TFT_SESSION_SRAM_RESC_ALLOC = 726,
-	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
-	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
-	HWRM_TFT_TBL_SCOPE_CFG = 731,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
-	HWRM_TFT_TBL_TYPE_SET = 823,
-	HWRM_TFT_TBL_TYPE_GET = 824,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
@@ -66,858 +54,8 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
-struct tf_session_attach_input;
-struct tf_session_hw_resc_qcaps_input;
-struct tf_session_hw_resc_qcaps_output;
-struct tf_session_hw_resc_alloc_input;
-struct tf_session_hw_resc_alloc_output;
-struct tf_session_hw_resc_free_input;
-struct tf_session_hw_resc_flush_input;
-struct tf_session_sram_resc_qcaps_input;
-struct tf_session_sram_resc_qcaps_output;
-struct tf_session_sram_resc_alloc_input;
-struct tf_session_sram_resc_alloc_output;
-struct tf_session_sram_resc_free_input;
-struct tf_session_sram_resc_flush_input;
-struct tf_tbl_type_set_input;
-struct tf_tbl_type_get_input;
-struct tf_tbl_type_get_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
-/* Input params for session attach */
-typedef struct tf_session_attach_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* Session Name */
-	char				 session_name[TF_SESSION_NAME_MAX];
-} tf_session_attach_input_t, *ptf_session_attach_input_t;
-
-/* Input params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_hw_resc_qcaps_input_t, *ptf_session_hw_resc_qcaps_input_t;
-
-/* Output params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_output {
-	/* Control Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Unused */
-	uint8_t			  unused[4];
-	/* Minimum guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_min;
-	/* Maximum non-guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_max;
-	/* Minimum guaranteed number of profile functions */
-	uint16_t			 prof_func_min;
-	/* Maximum non-guaranteed number of profile functions */
-	uint16_t			 prof_func_max;
-	/* Minimum guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_min;
-	/* Maximum non-guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_max;
-	/* Minimum guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_min;
-	/* Maximum non-guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_max;
-	/* Minimum guaranteed number of EM records entries */
-	uint16_t			 em_record_entries_min;
-	/* Maximum non-guaranteed number of EM record entries */
-	uint16_t			 em_record_entries_max;
-	/* Minimum guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_min;
-	/* Maximum non-guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_max;
-	/* Minimum guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_min;
-	/* Maximum non-guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_max;
-	/* Minimum guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_min;
-	/* Maximum non-guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_max;
-	/* Minimum guaranteed number of meter instances */
-	uint16_t			 meter_inst_min;
-	/* Maximum non-guaranteed number of meter instances */
-	uint16_t			 meter_inst_max;
-	/* Minimum guaranteed number of mirrors */
-	uint16_t			 mirrors_min;
-	/* Maximum non-guaranteed number of mirrors */
-	uint16_t			 mirrors_max;
-	/* Minimum guaranteed number of UPAR */
-	uint16_t			 upar_min;
-	/* Maximum non-guaranteed number of UPAR */
-	uint16_t			 upar_max;
-	/* Minimum guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_min;
-	/* Maximum non-guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_max;
-	/* Minimum guaranteed number of L2 Functions */
-	uint16_t			 l2_func_min;
-	/* Maximum non-guaranteed number of L2 Functions */
-	uint16_t			 l2_func_max;
-	/* Minimum guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_min;
-	/* Maximum non-guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_max;
-	/* Minimum guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_min;
-	/* Maximum non-guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_max;
-	/* Minimum guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_min;
-	/* Maximum non-guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_max;
-	/* Minimum guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_min;
-	/* Maximum non-guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_max;
-	/* Minimum guaranteed number of metadata */
-	uint16_t			 metadata_min;
-	/* Maximum non-guaranteed number of metadata */
-	uint16_t			 metadata_max;
-	/* Minimum guaranteed number of CT states */
-	uint16_t			 ct_state_min;
-	/* Maximum non-guaranteed number of CT states */
-	uint16_t			 ct_state_max;
-	/* Minimum guaranteed number of range profiles */
-	uint16_t			 range_prof_min;
-	/* Maximum non-guaranteed number range profiles */
-	uint16_t			 range_prof_max;
-	/* Minimum guaranteed number of range entries */
-	uint16_t			 range_entries_min;
-	/* Maximum non-guaranteed number of range entries */
-	uint16_t			 range_entries_max;
-	/* Minimum guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_min;
-	/* Maximum non-guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_max;
-} tf_session_hw_resc_qcaps_output_t, *ptf_session_hw_resc_qcaps_output_t;
-
-/* Input params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of L2 CTX TCAM entries to be allocated */
-	uint16_t			 num_l2_ctx_tcam_entries;
-	/* Number of profile functions to be allocated */
-	uint16_t			 num_prof_func_entries;
-	/* Number of profile TCAM entries to be allocated */
-	uint16_t			 num_prof_tcam_entries;
-	/* Number of EM profile ids to be allocated */
-	uint16_t			 num_em_prof_id;
-	/* Number of EM records entries to be allocated */
-	uint16_t			 num_em_record_entries;
-	/* Number of WC profiles ids to be allocated */
-	uint16_t			 num_wc_tcam_prof_id;
-	/* Number of WC TCAM entries to be allocated */
-	uint16_t			 num_wc_tcam_entries;
-	/* Number of meter profiles to be allocated */
-	uint16_t			 num_meter_profiles;
-	/* Number of meter instances to be allocated */
-	uint16_t			 num_meter_inst;
-	/* Number of mirrors to be allocated */
-	uint16_t			 num_mirrors;
-	/* Number of UPAR to be allocated */
-	uint16_t			 num_upar;
-	/* Number of SP TCAM entries to be allocated */
-	uint16_t			 num_sp_tcam_entries;
-	/* Number of L2 functions to be allocated */
-	uint16_t			 num_l2_func;
-	/* Number of flexible key templates to be allocated */
-	uint16_t			 num_flex_key_templ;
-	/* Number of table scopes to be allocated */
-	uint16_t			 num_tbl_scope;
-	/* Number of epoch0 entries to be allocated */
-	uint16_t			 num_epoch0_entries;
-	/* Number of epoch1 entries to be allocated */
-	uint16_t			 num_epoch1_entries;
-	/* Number of metadata to be allocated */
-	uint16_t			 num_metadata;
-	/* Number of CT states to be allocated */
-	uint16_t			 num_ct_state;
-	/* Number of range profiles to be allocated */
-	uint16_t			 num_range_prof;
-	/* Number of range Entries to be allocated */
-	uint16_t			 num_range_entries;
-	/* Number of LAG table entries to be allocated */
-	uint16_t			 num_lag_tbl_entries;
-} tf_session_hw_resc_alloc_input_t, *ptf_session_hw_resc_alloc_input_t;
-
-/* Output params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_output {
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_alloc_output_t, *ptf_session_hw_resc_alloc_output_t;
-
-/* Input params for session resource HW free */
-typedef struct tf_session_hw_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_free_input_t, *ptf_session_hw_resc_free_input_t;
-
-/* Input params for session resource HW flush */
-typedef struct tf_session_hw_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_flush_input_t, *ptf_session_hw_resc_flush_input_t;
-
-/* Input params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_sram_resc_qcaps_input_t, *ptf_session_sram_resc_qcaps_input_t;
-
-/* Output params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_output {
-	/* Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Minimum guaranteed number of Full Action */
-	uint16_t			 full_action_min;
-	/* Maximum non-guaranteed number of Full Action */
-	uint16_t			 full_action_max;
-	/* Minimum guaranteed number of MCG */
-	uint16_t			 mcg_min;
-	/* Maximum non-guaranteed number of MCG */
-	uint16_t			 mcg_max;
-	/* Minimum guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_min;
-	/* Maximum non-guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_max;
-	/* Minimum guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_min;
-	/* Maximum non-guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_max;
-	/* Minimum guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_min;
-	/* Maximum non-guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_max;
-	/* Minimum guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_min;
-	/* Maximum non-guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_max;
-	/* Minimum guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_max;
-	/* Minimum guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_max;
-	/* Minimum guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_min;
-	/* Maximum non-guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_max;
-	/* Minimum guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_min;
-	/* Maximum non-guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_max;
-	/* Minimum guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_min;
-	/* Maximum non-guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_max;
-	/* Minimum guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_min;
-	/* Maximum non-guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_max;
-	/* Minimum guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_min;
-	/* Maximum non-guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_max;
-} tf_session_sram_resc_qcaps_output_t, *ptf_session_sram_resc_qcaps_output_t;
-
-/* Input params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_input {
-	/* FW Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of full action SRAM entries to be allocated */
-	uint16_t			 num_full_action;
-	/* Number of multicast groups to be allocated */
-	uint16_t			 num_mcg;
-	/* Number of Encap 8B entries to be allocated */
-	uint16_t			 num_encap_8b;
-	/* Number of Encap 16B entries to be allocated */
-	uint16_t			 num_encap_16b;
-	/* Number of Encap 64B entries to be allocated */
-	uint16_t			 num_encap_64b;
-	/* Number of SP SMAC entries to be allocated */
-	uint16_t			 num_sp_smac;
-	/* Number of SP SMAC IPv4 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv4;
-	/* Number of SP SMAC IPv6 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv6;
-	/* Number of Counter 64B entries to be allocated */
-	uint16_t			 num_counter_64b;
-	/* Number of NAT source ports to be allocated */
-	uint16_t			 num_nat_sport;
-	/* Number of NAT destination ports to be allocated */
-	uint16_t			 num_nat_dport;
-	/* Number of NAT source iPV4 addresses to be allocated */
-	uint16_t			 num_nat_s_ipv4;
-	/* Number of NAT destination IPV4 addresses to be allocated */
-	uint16_t			 num_nat_d_ipv4;
-} tf_session_sram_resc_alloc_input_t, *ptf_session_sram_resc_alloc_input_t;
-
-/* Output params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_output {
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_alloc_output_t, *ptf_session_sram_resc_alloc_output_t;
-
-/* Input params for session resource SRAM free */
-typedef struct tf_session_sram_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_free_input_t, *ptf_session_sram_resc_free_input_t;
-
-/* Input params for session resource SRAM flush */
-typedef struct tf_session_sram_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
-
-/* Input params for table type set */
-typedef struct tf_tbl_type_set_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Size of the data to set in bytes */
-	uint16_t			 size;
-	/* Data to set */
-	uint8_t			  data[TF_BULK_SEND];
-	/* Index to set */
-	uint32_t			 index;
-} tf_tbl_type_set_input_t, *ptf_tbl_type_set_input_t;
-
-/* Input params for table type get */
-typedef struct tf_tbl_type_get_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Index to get */
-	uint32_t			 index;
-} tf_tbl_type_get_input_t, *ptf_tbl_type_get_input_t;
-
-/* Output params for table type get */
-typedef struct tf_tbl_type_get_output {
-	/* Size of the data read in bytes */
-	uint16_t			 size;
-	/* Data read */
-	uint8_t			  data[TF_BULK_RECV];
-} tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 3e23d0513..8b3e15c8a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -208,7 +208,15 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL &&
+		dev->ops->tf_dev_insert_ext_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL &&
+		dev->ops->tf_dev_insert_int_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM insert failed, rc:%s\n",
@@ -217,7 +225,7 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	return -EINVAL;
+	return 0;
 }
 
 /** Delete EM hash entry API
@@ -255,7 +263,13 @@ int tf_delete_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		rc = dev->ops->tf_dev_delete_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		rc = dev->ops->tf_dev_delete_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM delete failed, rc:%s\n",
@@ -806,3 +820,83 @@ tf_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl_scope != NULL) {
+		rc = dev->ops->tf_dev_alloc_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Alloc table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl_scope) {
+		rc = dev->ops->tf_dev_free_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Free table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 441d0c678..20b0c5948 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,6 +6,7 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
+#include "tf_em.h"
 
 struct tf;
 
@@ -42,10 +43,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_ident_cfg_parms ident_cfg;
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
-
-	dev_handle->type = TF_DEVICE_TYPE_WH;
-	/* Initial function initialization */
-	dev_handle->ops = &tf_dev_ops_p4_init;
+	struct tf_em_cfg_parms em_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -86,6 +84,36 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * EEM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_ext_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
+
+	rc = tf_em_ext_common_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EEM initialization failure\n");
+		goto fail;
+	}
+
+	/*
+	 * EM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_int_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = 0; /* Not used by EM */
+
+	rc = tf_em_int_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EM initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -144,6 +172,20 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_em_ext_common_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EEM\n");
+		fail = true;
+	}
+
+	rc = tf_em_int_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EM\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c8feac55d..2712d1039 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -15,12 +15,24 @@ struct tf;
 struct tf_session;
 
 /**
- *
+ * Device module types
  */
 enum tf_device_module_type {
+	/**
+	 * Identifier module
+	 */
 	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	/**
+	 * Table type module
+	 */
 	TF_DEVICE_MODULE_TYPE_TABLE,
+	/**
+	 * TCAM module
+	 */
 	TF_DEVICE_MODULE_TYPE_TCAM,
+	/**
+	 * EM module
+	 */
 	TF_DEVICE_MODULE_TYPE_EM,
 	TF_DEVICE_MODULE_TYPE_MAX
 };
@@ -395,8 +407,8 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_insert_em_entry)(struct tf *tfp,
-				      struct tf_insert_em_entry_parms *parms);
+	int (*tf_dev_insert_int_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
 
 	/**
 	 * Delete EM hash entry API
@@ -411,8 +423,72 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_delete_em_entry)(struct tf *tfp,
-				      struct tf_delete_em_entry_parms *parms);
+	int (*tf_dev_delete_int_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Insert EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_ext_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM delete parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_ext_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Allocate EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope alloc parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_alloc_tbl_scope)(struct tf *tfp,
+				      struct tf_alloc_tbl_scope_parms *parms);
+
+	/**
+	 * Free EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope free parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
+				     struct tf_free_tbl_scope_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9e332c594..127c655a6 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -93,6 +93,12 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = NULL,
 	.tf_dev_get_tcam = NULL,
+	.tf_dev_insert_int_em_entry = NULL,
+	.tf_dev_delete_int_em_entry = NULL,
+	.tf_dev_insert_ext_em_entry = NULL,
+	.tf_dev_delete_ext_em_entry = NULL,
+	.tf_dev_alloc_tbl_scope = NULL,
+	.tf_dev_free_tbl_scope = NULL,
 };
 
 /**
@@ -113,6 +119,10 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = NULL,
-	.tf_dev_insert_em_entry = tf_em_insert_entry,
-	.tf_dev_delete_em_entry = tf_em_delete_entry,
+	.tf_dev_insert_int_em_entry = tf_em_insert_int_entry,
+	.tf_dev_delete_int_em_entry = tf_em_delete_int_entry,
+	.tf_dev_insert_ext_em_entry = tf_em_insert_ext_entry,
+	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
+	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
+	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 411e21637..da6dd65a3 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -36,13 +36,12 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
@@ -77,4 +76,17 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
+
+struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
+	/* CFA_RESOURCE_TYPE_P4_EM_REC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+};
+
+struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_REC },
+	/* CFA_RESOURCE_TYPE_P4_TBL_SCOPE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
deleted file mode 100644
index fcbbd7eca..000000000
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ /dev/null
@@ -1,360 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-#include <rte_common.h>
-#include <rte_errno.h>
-#include <rte_log.h>
-
-#include "tf_core.h"
-#include "tf_em.h"
-#include "tf_msg.h"
-#include "tfp.h"
-#include "lookup3.h"
-#include "tf_ext_flow_handle.h"
-
-#include "bnxt.h"
-
-
-static uint32_t tf_em_get_key_mask(int num_entries)
-{
-	uint32_t mask = num_entries - 1;
-
-	if (num_entries & 0x7FFF)
-		return 0;
-
-	if (num_entries > (128 * 1024 * 1024))
-		return 0;
-
-	return mask;
-}
-
-static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
-				   uint8_t	       *in_key,
-				   struct cfa_p4_eem_64b_entry *key_entry)
-{
-	key_entry->hdr.word1 = result->word1;
-
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
-
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
-#endif
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
-			       struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t	   mask;
-	uint32_t	   key0_hash;
-	uint32_t	   key1_hash;
-	uint32_t	   key0_index;
-	uint32_t	   key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t	   index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t	   gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/**
- * Insert EM internal entry API
- *
- *  returns:
- *     0 - Success
- */
-static int tf_insert_em_internal_entry(struct tf                       *tfp,
-				       struct tf_insert_em_entry_parms *parms)
-{
-	int       rc;
-	uint32_t  gfid;
-	uint16_t  rptr_index = 0;
-	uint8_t   rptr_entry = 0;
-	uint8_t   num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-	uint32_t index;
-
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
-		return rc;
-	}
-
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
-	rc = tf_msg_insert_em_internal_entry(tfp,
-					     parms,
-					     &rptr_index,
-					     &rptr_entry,
-					     &num_of_entries);
-	if (rc != 0)
-		return -1;
-
-	PMD_DRV_LOG(ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
-		   rptr_index,
-		   rptr_entry,
-		   num_of_entries);
-
-	TF_SET_GFID(gfid,
-		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
-		     rptr_entry),
-		    0); /* N/A for internal table */
-
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_INTERNAL,
-		       parms->dir);
-
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     num_of_entries,
-				     0,
-				     0,
-				     rptr_index,
-				     rptr_entry,
-				     0);
-	return 0;
-}
-
-/** Delete EM internal entry API
- *
- * returns:
- * 0
- * -EINVAL
- */
-static int tf_delete_em_internal_entry(struct tf                       *tfp,
-				       struct tf_delete_em_entry_parms *parms)
-{
-	int rc;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-
-	rc = tf_msg_delete_em_entry(tfp, parms);
-
-	/* Return resource to pool */
-	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
-
-	return rc;
-}
-
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
-							  TF_KEY0_TABLE :
-							  TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL)
-		/* External EEM */
-		return tf_insert_eem_entry
-			(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
-
-	return -EINVAL;
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		return tf_delete_em_internal_entry(tfp, parms);
-
-	return -EINVAL;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 2262ae7cc..cf799c200 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
@@ -19,6 +20,9 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+#define TF_EM_MAX_MASK 0x7FFF
+#define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
+
 /*
  * Used to build GFID:
  *
@@ -44,6 +48,47 @@ struct tf_em_64b_entry {
 	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
+/** EEM Memory Type
+ *
+ */
+enum tf_mem_type {
+	TF_EEM_MEM_TYPE_INVALID,
+	TF_EEM_MEM_TYPE_HOST,
+	TF_EEM_MEM_TYPE_SYSTEM
+};
+
+/**
+ * tf_em_cfg_parms definition
+ */
+struct tf_em_cfg_parms {
+	/**
+	 * [in] Num entries in resource config
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Resource config
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+	/**
+	 * [in] Memory type.
+	 */
+	enum tf_mem_type mem_type;
+};
+
+/**
+ * @page table Table
+ *
+ * @ref tf_alloc_eem_tbl_scope
+ *
+ * @ref tf_free_eem_tbl_scope_cb
+ *
+ * @ref tbl_scope_cb_find
+ */
+
 /**
  * Allocates EEM Table scope
  *
@@ -78,29 +123,258 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 			     struct tf_free_tbl_scope_parms *parms);
 
 /**
- * Function to search for table scope control block structure
- * with specified table scope ID.
+ * Insert record into internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_int_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_int_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
+
+/**
+ * Insert record into external EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external EEM table
  *
- * [in] session
- *   Session to use for the search of the table scope control block
- * [in] tbl_scope_id
- *   Table scope ID to search for
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
  *
  * Returns:
- *  Pointer to the found table scope control block struct or NULL if
- *  table scope control block struct not found
+ *   0       - Success
+ *   -EINVAL - Parameter error
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+int tf_em_delete_ext_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
 
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum hcapi_cfa_em_table_type table_type);
+/**
+ * Insert record into external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_sys_entry(struct tf *tfp,
+			       struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_ext_sys_entry(struct tf *tfp,
+			       struct tf_delete_em_entry_parms *parms);
 
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms);
+/**
+ * Bind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_bind(struct tf *tfp,
+		   struct tf_em_cfg_parms *parms);
 
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms);
+/**
+ * Unbind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_unbind(struct tf *tfp);
+
+/**
+ * Common bind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_bind(struct tf *tfp,
+			  struct tf_em_cfg_parms *parms);
+
+/**
+ * Common unbind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_unbind(struct tf *tfp);
+
+/**
+ * Alloc for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_alloc(struct tf *tfp,
+			 struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_free(struct tf *tfp,
+			struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Alloc for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_alloc(struct tf *tfp,
+			 struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_free(struct tf *tfp,
+			struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common free for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_free(struct tf *tfp,
+			  struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common alloc for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_alloc(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
new file mode 100644
index 000000000..ba6aa7ac1
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -0,0 +1,281 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_device.h"
+#include "tf_ext_flow_handle.h"
+#include "cfa_resource_types.h"
+
+#include "bnxt.h"
+
+
+/**
+ * EM DBs.
+ */
+void *eem_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Host or system
+ */
+static enum tf_mem_type mem_type;
+
+/* API defined in tf_em_common.h */
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+	struct tf_rm_is_allocated_parms parms;
+	int allocated;
+
+	/* Check that id is valid */
+	parms.rm_db = eem_db[TF_DIR_RX];
+	parms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.allocated = &allocated;
+
+	i = tf_rm_is_allocated(&parms);
+
+	if (i < 0 || !allocated)
+		return NULL;
+
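+	/* The id is backed by an RM allocation; map it to the session's
+	 * table scope control block by linear search.
+	 */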
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+int
+tf_create_tbl_pool_external(enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i;
+	int32_t j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the  malloced memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
+
+	/* Fill the pool with entry byte offsets in reverse order so that
+	 * the lowest offset is popped first.
+	 */
+	j = (num_entries - 1) * entry_sz_bytes;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+				    tf_dir_2_str(dir), strerror(-rc));
+			goto cleanup;
+		}
+
+		if (j < 0) {
+			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
+				    dir, j);
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ */
+void
+tf_destroy_tbl_pool_external(enum tf_dir dir,
+			     struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_act_pool_mem =
+		tbl_scope_cb->ext_act_pool_mem[dir];
+
+	tfp_free(ext_act_pool_mem);
+}
+
+uint32_t
+tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
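+	/* The EEM table size is expected to be a power of two between
+	 * 32K and 128M entries; e.g. 32K entries yields a mask of
+	 * 0x7FFF. Reject anything else so callers can index the
+	 * KEY0/KEY1 tables with a simple hash & mask.
+	 */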
+	if (num_entries & TF_EM_MAX_MASK)
+		return 0;
+
+	if (num_entries > TF_EM_MAX_ENTRY)
+		return 0;
+
+	return mask;
+}
+
+void
+tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+		       uint8_t *in_key,
+		       struct cfa_p4_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	/* The record pointer is copied as-is whether the action record
+	 * is internal or external.
+	 */
+	key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
+}
+
+int
+tf_em_ext_common_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "EM Ext DB already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+		db_cfg.rm_db = &eem_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	mem_type = parms->mem_type;
+	init = 1;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized so that unbind can be
+	 * called safely during creation cleanup.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = eem_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		eem_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_alloc(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
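+	/* Dispatch on the memory type selected at bind time to either
+	 * the host-memory or system-memory EEM implementation.
+	 */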
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_alloc(tfp, parms);
+	else
+		return tf_em_ext_system_alloc(tfp, parms);
+}
+
+int
+tf_em_ext_common_free(struct tf *tfp,
+		      struct tf_free_tbl_scope_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_free(tfp, parms);
+	else
+		return tf_em_ext_system_free(tfp, parms);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
new file mode 100644
index 000000000..45699a7c3
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_COMMON_H_
+#define _TF_EM_COMMON_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *   table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+/**
+ * Create and initialize a stack to use for action entries
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_id
+ *   Table scope ID
+ * [in] num_entries
+ *   Number of EEM entries
+ * [in] entry_sz_bytes
+ *   Size of the entry
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ *   -EINVAL - Failure
+ */
+int tf_create_tbl_pool_external(enum tf_dir dir,
+				struct tf_tbl_scope_cb *tbl_scope_cb,
+				uint32_t num_entries,
+				uint32_t entry_sz_bytes);
+
+/**
+ * Delete and cleanup action record allocation stack
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_id
+ *   Table scope ID
+ *
+ */
+void tf_destroy_tbl_pool_external(enum tf_dir dir,
+				  struct tf_tbl_scope_cb *tbl_scope_cb);
+
+/**
+ * Get hash mask for current EEM table size
+ *
+ * [in] num_entries
+ *   Number of EEM entries
+ *
+ * Returns:
+ *   Hash mask on success, 0 if the entry count is not supported
+ */
+uint32_t tf_em_get_key_mask(int num_entries);
+
+/**
+ * Populate key_entry
+ *
+ * [in] result
+ *   Entry data
+ * [in] in_key
+ *   Key data
+ * [out] key_entry
+ *   Completed key record
+ */
+void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+			    uint8_t	       *in_key,
+			    struct cfa_p4_eem_64b_entry *key_entry);
+
+/**
+ * Find base page address for offset into specified table type
+ *
+ * [in] tbl_scope_cb
+ *   Table scope
+ * [in] dir
+ *   Direction
+ * [in] offset
+ *   Offset into the table
+ * [in] table_type
+ *   Table type
+ *
+ * Returns:
+ *   NULL                              - Failure
+ *   Void pointer to page base address - Success
+ */
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum hcapi_cfa_em_table_type table_type);
+
+#endif /* _TF_EM_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
new file mode 100644
index 000000000..8be39afdd
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -0,0 +1,1146 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+#define PTU_PTE_VALID          0x1UL
+#define PTU_PTE_LAST           0x2UL
+#define PTU_PTE_NEXT_TO_LAST   0x4UL
+
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
+
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
+/**
+ * EM DBs.
+ */
+extern void *eem_db[TF_DIR_MAX];
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
+				    i,
+				    (uint64_t)(uintptr_t)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+		TFP_DRV_LOG(INFO,
+			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			   TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocation of page tables
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			TFP_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(void *)(uintptr_t)tp->pg_va_tbl[j],
+				(void *)(uintptr_t)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
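+	/* Each page at this level is filled with the little-endian
+	 * physical addresses of the next level's pages, OR'd with the
+	 * PTU valid bit. When the next level is the data level
+	 * (set_pte_last), the final two entries are additionally tagged
+	 * NEXT_TO_LAST and LAST so hardware knows where the chain ends.
+	 */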
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
+
+/**
+ * Setup a EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
+	bool set_pte_last = 0;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = 1;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
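+	/* Walk up the page-table levels until one level's reach covers
+	 * the total data size. For example, with 4KB pages and 8-byte
+	 * pointers, L0 covers 4KB, L1 covers 512 * 4KB = 2MB and L2
+	 * covers 512 * 512 * 4KB = 1GB of data pages.
+	 */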
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   -EINVAL - Parameter error
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int rc;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
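+		/* For each populated table: size the page-table levels,
+		 * allocate and link the backing pages, then register the
+		 * level-0 page with firmware to obtain a context id.
+		 */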
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
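+		/* Each flow consumes one KEY0 and one KEY1 record plus
+		 * one action record, so the memory budget divided by
+		 * roughly (2 * key size + action size) in bytes gives
+		 * the flow count, which is rounded up to a supported
+		 * power of two below.
+		 */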
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Rx requested: "
+				    "%u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					(TF_KILOBYTE * TF_KILOBYTE)) /
+			(key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->rx_num_flows_in_k != 0 &&
+	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if (parms->tx_num_flows_in_k != 0 &&
+	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	return 0;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
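+	/* The 64-bit key hash provides two independent 32-bit hashes;
+	 * masking each yields a candidate bucket in the KEY0 and KEY1
+	 * tables respectively, giving every entry two possible homes.
+	 */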
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
+							  TF_KEY0_TABLE :
+							  TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (!rc)
+		return rc;
+
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb =
+	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
+			  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb =
+	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
+			  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_host_alloc(struct tf *tfp,
+		     struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *em_tables;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+
+	session = (struct tf_session *)tfp->session->core_data;
+
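+	/* Host EEM allocation sequence: reserve a table scope id from
+	 * the RM, query per direction EEM capabilities, size the
+	 * KEY0/KEY1/record tables, register the page tables with
+	 * firmware, enable EEM and finally seed the external action
+	 * offset pool.
+	 */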
+	/* Get Table Scope control block from the session pool */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
+	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
+				   parms->hw_flow_cache_flush_timer,
+				   dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(dir,
+					    tbl_scope_cb,
+					    em_tables[TF_RECORD_TABLE].num_entries,
+					    em_tables[TF_RECORD_TABLE].entry_size);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_host_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_host_free(struct tf *tfp,
+		    struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
+		return -EINVAL;
+	}
+
+	/* Free Table control block */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	/* free table scope locks */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
new file mode 100644
index 000000000..9be91ad5d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -0,0 +1,312 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/**
+ * EM DBs.
+ */
+static void *em_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   number of entries to write
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	rc = tfp_calloc(&parms);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill the pool with entry indexes in descending order so that
+	 * index 0 is popped first.
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   Direction
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	if (ptr != NULL)
+		tfp_free(ptr);
+}
+
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success
+ */
+int
+tf_em_insert_int_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	int rc;
+	uint32_t gfid;
+	uint16_t rptr_index = 0;
+	uint8_t rptr_entry = 0;
+	uint8_t num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc) {
+		PMD_DRV_LOG
+		  (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc)
+		return -1;
+
+	PMD_DRV_LOG
+		  (ERR,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
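+	/* Pack the firmware-returned record pointer (index plus
+	 * sub-entry) into the GFID and flow handle so the entry can be
+	 * referenced later, e.g. on delete.
+	 */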
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     (uint32_t)num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0
+ * -EINVAL
+ */
+int
+tf_em_delete_int_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
+
+int
+tf_em_int_bind(struct tf *tfp,
+	       struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "EM Int DB already initialized\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		tf_create_em_pool(session,
+				  i,
+				  TF_SESSION_EM_POOL_SIZE);
+	}
+
+	/*
+	 * It is not clear that this DB creation is needed; leaving it
+	 * in place until resolved.
+	 */
+	if (parms->num_elements) {
+		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+
+		for (i = 0; i < TF_DIR_MAX; i++) {
+			db_cfg.dir = i;
+			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+			db_cfg.rm_db = &em_db[i];
+			rc = tf_rm_create_db(tfp, &db_cfg);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: EM DB creation failed\n",
+					    tf_dir_2_str(i));
+
+				return rc;
+			}
+		}
+	}
+
+	init = 1;
+	return 0;
+}
+
+int
+tf_em_int_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized so that unbind can be
+	 * called safely during creation cleanup.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		tf_free_em_pool(session, i);
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = em_db[i];
+		if (em_db[i] != NULL) {
+			rc = tf_rm_free_db(tfp, &fparms);
+			if (rc)
+				return rc;
+		}
+
+		em_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
new file mode 100644
index 000000000..ee18a0c70
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_insert_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_delete_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_sys_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry
+		(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_sys_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
+		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_em_ext_system_free(struct tf *tfp __rte_unused,
+		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c015b0ce2..d8b80bc84 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,82 +18,6 @@
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
-/**
- * Endian converts min and max values from the HW response to the query
- */
-#define TF_HW_RESP_TO_QUERY(query, index, response, element) do {            \
-	(query)->hw_query[index].min =                                       \
-		tfp_le_to_cpu_16(response. element ## _min);                 \
-	(query)->hw_query[index].max =                                       \
-		tfp_le_to_cpu_16(response. element ## _max);                 \
-} while (0)
-
-/**
- * Endian converts the number of entries from the alloc to the request
- */
-#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element)                   \
-	(request. num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do {            \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(hw_entry[index].start);                     \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(hw_entry[index].stride);                    \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do {         \
-	hw_entry[index].start =                                              \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	hw_entry[index].stride =                                             \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
-/**
- * Endian converts min and max values from the SRAM response to the
- * query
- */
-#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do {          \
-	(query)->sram_query[index].min =                                     \
-		tfp_le_to_cpu_16(response.element ## _min);                  \
-	(query)->sram_query[index].max =                                     \
-		tfp_le_to_cpu_16(response.element ## _max);                  \
-} while (0)
-
-/**
- * Endian converts the number of entries from the action (alloc) to
- * the request
- */
-#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element)                \
-	(request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do {        \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(sram_entry[index].start);                   \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(sram_entry[index].stride);                  \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do {     \
-	sram_entry[index].start =                                            \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	sram_entry[index].stride =                                           \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -107,39 +31,6 @@ struct tf_msg_dma_buf {
 	uint64_t pa_addr;
 };
 
-static int
-tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
-		   uint32_t *hwrm_type)
-{
-	int rc = 0;
-
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_WC_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	return rc;
-}
-
 /**
  * Allocates a DMA buffer that can be used for message transfer.
  *
@@ -185,13 +76,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 	tfp_free(buf->va_addr);
 }
 
-/**
- * NEW HWRM direct messages
- */
+/* HWRM Direct messages */
 
-/**
- * Sends session open request to TF Firmware
- */
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
@@ -222,9 +108,6 @@ tf_msg_session_open(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends session attach request to TF Firmware
- */
 int
 tf_msg_session_attach(struct tf *tfp __rte_unused,
 		      char *ctrl_chan_name __rte_unused,
@@ -233,9 +116,6 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 	return -1;
 }
 
-/**
- * Sends session close request to TF Firmware
- */
 int
 tf_msg_session_close(struct tf *tfp)
 {
@@ -261,14 +141,11 @@ tf_msg_session_close(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session query config request to TF Firmware
- */
 int
 tf_msg_session_qcfg(struct tf *tfp)
 {
 	int rc;
-	struct hwrm_tf_session_qcfg_input  req = { 0 };
+	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
@@ -289,636 +166,6 @@ tf_msg_session_qcfg(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session HW resource query capability request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_query *query)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_qcaps_input req = { 0 };
-	struct tf_session_hw_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(query, 0, sizeof(*query));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST,
-			    resp, meter_inst);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_alloc(struct tf *tfp __rte_unused,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_alloc *hw_alloc __rte_unused,
-			     struct tf_rm_entry *hw_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_alloc_input req = { 0 };
-	struct tf_session_hw_resc_alloc_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			   l2_ctx_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			   prof_func_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			   prof_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			   em_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req,
-			   em_record_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			   wc_tcam_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req,
-			   wc_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req,
-			   meter_profiles);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req,
-			   meter_inst);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req,
-			   mirrors);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req,
-			   upar);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req,
-			   sp_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req,
-			   l2_func);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req,
-			   flex_key_templ);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			   tbl_scope);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req,
-			   epoch0_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req,
-			   epoch1_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req,
-			   metadata);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req,
-			   ct_state);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			   range_prof);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			   range_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			   lag_tbl_entries);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp,
-			    meter_inst);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_free(struct tf *tfp,
-			    enum tf_dir dir,
-			    struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_HW_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_flush(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_HW_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_qcaps(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_query *query __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_qcaps_input req = { 0 };
-	struct tf_session_sram_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_FULL_ACTION, resp,
-			      full_action);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, resp,
-			      sp_smac_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, resp,
-			      sp_smac_ipv6);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_alloc(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_alloc *sram_alloc __rte_unused,
-			       struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_alloc_input req = { 0 };
-	struct tf_session_sram_resc_alloc_output resp;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(&resp, 0, sizeof(resp));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			     full_action);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_MCG, req,
-			     mcg);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			     encap_8b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			     encap_16b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			     encap_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			     sp_smac);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			     req, sp_smac_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			     req, sp_smac_ipv6);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_COUNTER_64B,
-			     req, counter_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			     nat_sport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			     nat_dport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			     nat_s_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			     nat_d_ipv4);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION,
-			      resp, full_action);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			      resp, sp_smac_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			      resp, sp_smac_ipv6);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_free(struct tf *tfp __rte_unused,
-			      enum tf_dir dir,
-			      struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_SRAM_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_flush(struct tf *tfp,
-			       enum tf_dir dir,
-			       struct tf_rm_entry *sram_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_SRAM_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
 int
 tf_msg_session_resc_qcaps(struct tf *tfp,
 			  enum tf_dir dir,
@@ -973,7 +220,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -981,14 +228,14 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
 	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
-		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
@@ -1078,7 +325,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -1087,14 +334,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	}
 
 	printf("\nRESV\n");
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
-		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
-		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
-		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+		resv[i].type = tfp_le_to_cpu_32(resv_data[i].type);
+		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
+		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
@@ -1173,24 +420,112 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM mem register request to Firmware
- */
-int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id)
+int
+tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
 {
 	int rc;
-	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
-	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_insert_input req = { 0 };
+	struct hwrm_tf_em_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.strength = (em_result->hdr.word1 &
+			CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+			CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return 0;
+}
+
+int
+tf_msg_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_delete_input req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return 0;
+}
+
+int
+tf_msg_em_mem_rgtr(struct tf *tfp,
+		   int page_lvl,
+		   int page_size,
+		   uint64_t dma_addr,
+		   uint16_t *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
 
-	req.page_level = page_lvl;
-	req.page_size = page_size;
-	req.page_dir = tfp_cpu_to_le_64(dma_addr);
-
 	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
@@ -1208,11 +543,9 @@ int tf_msg_em_mem_rgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM mem unregister request to Firmware
- */
-int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t  *ctx_id)
+int
+tf_msg_em_mem_unrgtr(struct tf *tfp,
+		     uint16_t *ctx_id)
 {
 	int rc;
 	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
@@ -1233,12 +566,10 @@ int tf_msg_em_mem_unrgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM qcaps request to Firmware
- */
-int tf_msg_em_qcaps(struct tf *tfp,
-		    int dir,
-		    struct tf_em_caps *em_caps)
+int
+tf_msg_em_qcaps(struct tf *tfp,
+		int dir,
+		struct tf_em_caps *em_caps)
 {
 	int rc;
 	struct hwrm_tf_ext_em_qcaps_input  req = {0};
@@ -1273,17 +604,15 @@ int tf_msg_em_qcaps(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM config request to Firmware
- */
-int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t   num_entries,
-		  uint16_t   key0_ctx_id,
-		  uint16_t   key1_ctx_id,
-		  uint16_t   record_ctx_id,
-		  uint16_t   efc_ctx_id,
-		  uint8_t    flush_interval,
-		  int        dir)
+int
+tf_msg_em_cfg(struct tf *tfp,
+	      uint32_t num_entries,
+	      uint16_t key0_ctx_id,
+	      uint16_t key1_ctx_id,
+	      uint16_t record_ctx_id,
+	      uint16_t efc_ctx_id,
+	      uint8_t flush_interval,
+	      int dir)
 {
 	int rc;
 	struct hwrm_tf_ext_em_cfg_input  req = {0};
@@ -1317,42 +646,23 @@ int tf_msg_em_cfg(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM internal insert request to Firmware
- */
-int tf_msg_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *em_parms,
-				uint16_t *rptr_index,
-				uint8_t *rptr_entry,
-				uint8_t *num_of_entries)
+int
+tf_msg_em_op(struct tf *tfp,
+	     int dir,
+	     uint16_t op)
 {
-	int                         rc;
-	struct tfp_send_msg_parms        parms = { 0 };
-	struct hwrm_tf_em_insert_input   req = { 0 };
-	struct hwrm_tf_em_insert_output  resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_em_64b_entry *em_result =
-		(struct tf_em_64b_entry *)em_parms->em_record;
+	int rc;
+	struct hwrm_tf_ext_em_op_input req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	tfp_memcpy(req.em_key,
-		   em_parms->key,
-		   ((em_parms->key_sz_in_bits + 7) / 8));
-
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength =
-		(em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
-		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
-	req.em_key_bitlen = em_parms->key_sz_in_bits;
-	req.action_ptr = em_result->hdr.pointer;
-	req.em_record_idx = *rptr_index;
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
 
-	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1361,75 +671,86 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &parms);
-	if (rc)
-		return rc;
-
-	*rptr_entry = resp.rptr_entry;
-	*rptr_index = resp.rptr_index;
-	*num_of_entries = resp.num_of_entries;
-
-	return 0;
+	return rc;
 }
 
-/**
- * Sends EM delete insert request to Firmware
- */
-int tf_msg_delete_em_entry(struct tf *tfp,
-			   struct tf_delete_em_entry_parms *em_parms)
+int
+tf_msg_tcam_entry_set(struct tf *tfp,
+		      struct tf_tcam_set_parms *parms)
 {
-	int                             rc;
-	struct tfp_send_msg_parms       parms = { 0 };
-	struct hwrm_tf_em_delete_input  req = { 0 };
-	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	int rc;
+	struct tfp_send_msg_parms mparms = { 0 };
+	struct hwrm_tf_tcam_set_input req = { 0 };
+	struct hwrm_tf_tcam_set_output resp = { 0 };
+	struct tf_msg_dma_buf buf = { 0 };
+	uint8_t *data = NULL;
+	int data_size = 0;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.type = parms->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(parms->idx);
+	if (parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
+	/* Result follows after key and mask, thus multiply by 2 */
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
+	data_size = 2 * req.key_size + req.result_size;
 
-	parms.tf_type = HWRM_TF_EM_DELETE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
+	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
+		/* use pci buffer */
+		data = &req.dev_data[0];
+	} else {
+		/* use dma buffer */
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
+		data = buf.va_addr;
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
+	}
+
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
+
+	mparms.tf_type = HWRM_TF_TCAM_SET;
+	mparms.req_data = (uint32_t *)&req;
+	mparms.req_size = sizeof(req);
+	mparms.resp_data = (uint32_t *)&resp;
+	mparms.resp_size = sizeof(resp);
+	mparms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp,
-				 &parms);
+				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
-	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+cleanup:
+	tf_msg_free_dma_buf(&buf);
 
-	return 0;
+	return rc;
 }
 
-/**
- * Sends EM operation request to Firmware
- */
-int tf_msg_em_op(struct tf *tfp,
-		 int dir,
-		 uint16_t op)
+int
+tf_msg_tcam_entry_free(struct tf *tfp,
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input req = {0};
-	struct hwrm_tf_ext_em_op_output resp = {0};
-	uint32_t flags;
+	struct hwrm_tf_tcam_free_input req = { 0 };
+	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
 
-	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_32(flags);
-	req.op = tfp_cpu_to_le_16(op);
+	req.type = in_parms->hcapi_type;
+	req.count = 1;
+	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
+	if (in_parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
 
-	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.tf_type = HWRM_TF_TCAM_FREE;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1444,21 +765,32 @@ int tf_msg_em_op(struct tf *tfp,
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_set_input req = { 0 };
+	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_set_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
 	req.index = tfp_cpu_to_le_32(index);
 
@@ -1466,13 +798,15 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 		   data,
 		   size);
 
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_TBL_TYPE_SET,
-			 req);
+	parms.tf_type = HWRM_TF_TBL_TYPE_SET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1482,32 +816,43 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 int
 tf_msg_get_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_get_input req = { 0 };
+	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_input req = { 0 };
-	struct tf_tbl_type_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_TBL_TYPE_GET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1522,6 +867,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/* HWRM Tunneled messages */
+
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  struct tf_bulk_get_tbl_entry_parms *params)
@@ -1562,96 +909,3 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
-
-int
-tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_tcam_set_parms *parms)
-{
-	int rc;
-	struct tfp_send_msg_parms mparms = { 0 };
-	struct hwrm_tf_tcam_set_input req = { 0 };
-	struct hwrm_tf_tcam_set_output resp = { 0 };
-	struct tf_msg_dma_buf buf = { 0 };
-	uint8_t *data = NULL;
-	int data_size = 0;
-
-	req.type = parms->type;
-
-	req.idx = tfp_cpu_to_le_16(parms->idx);
-	if (parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
-
-	req.key_size = parms->key_size;
-	req.mask_offset = parms->key_size;
-	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * parms->key_size;
-	req.result_size = parms->result_size;
-	data_size = 2 * req.key_size + req.result_size;
-
-	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
-		/* use pci buffer */
-		data = &req.dev_data[0];
-	} else {
-		/* use dma buffer */
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_alloc_dma_buf(&buf, data_size);
-		if (rc)
-			goto cleanup;
-		data = buf.va_addr;
-		tfp_memcpy(&req.dev_data[0],
-			   &buf.pa_addr,
-			   sizeof(buf.pa_addr));
-	}
-
-	tfp_memcpy(&data[0], parms->key, parms->key_size);
-	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
-	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
-
-	mparms.tf_type = HWRM_TF_TCAM_SET;
-	mparms.req_data = (uint32_t *)&req;
-	mparms.req_size = sizeof(req);
-	mparms.resp_data = (uint32_t *)&resp;
-	mparms.resp_size = sizeof(resp);
-	mparms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &mparms);
-	if (rc)
-		goto cleanup;
-
-cleanup:
-	tf_msg_free_dma_buf(&buf);
-
-	return rc;
-}
-
-int
-tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_tcam_free_parms *in_parms)
-{
-	int rc;
-	struct hwrm_tf_tcam_free_input req =  { 0 };
-	struct hwrm_tf_tcam_free_output resp = { 0 };
-	struct tfp_send_msg_parms parms = { 0 };
-
-	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
-	if (rc != 0)
-		return rc;
-
-	req.count = 1;
-	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
-	if (in_parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
-
-	parms.tf_type = HWRM_TF_TCAM_FREE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &parms);
-	return rc;
-}
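
For reference, the new tf_msg_tcam_entry_set() above packs key, mask and
result back to back in a single buffer and switches from the inline PCI
buffer to a DMA buffer once the payload grows too large. The standalone
sketch below only illustrates that offset math; it is not driver code, and
SKETCH_PCI_BUF_SIZE_MAX is a placeholder standing in for the real
TF_PCI_BUF_SIZE_MAX value.

#include <stdio.h>

/* Placeholder for the driver's TF_PCI_BUF_SIZE_MAX; the real value is
 * defined in the TruFlow headers and may differ.
 */
#define SKETCH_PCI_BUF_SIZE_MAX 88

/* Mirrors the offset computation in tf_msg_tcam_entry_set(): the mask
 * starts right after the key and the result after key plus mask.
 */
static void sketch_tcam_layout(unsigned int key_size, unsigned int result_size)
{
	unsigned int mask_offset = key_size;          /* mask follows the key */
	unsigned int result_offset = 2 * key_size;    /* result follows mask  */
	unsigned int data_size = 2 * key_size + result_size;

	printf("key@0 mask@%u result@%u total %u bytes -> %s\n",
	       mask_offset, result_offset, data_size,
	       data_size <= SKETCH_PCI_BUF_SIZE_MAX ?
	       "inline dev_data" : "DMA buffer");
}

int main(void)
{
	sketch_tcam_layout(16, 8);   /* small entry, fits the inline buffer */
	sketch_tcam_layout(64, 32);  /* large entry, needs the DMA buffer   */
	return 0;
}
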
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1ff1044e8..8e276d4c0 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -16,6 +16,8 @@
 
 struct tf;
 
+/* HWRM Direct messages */
+
 /**
  * Sends session open request to Firmware
  *
@@ -29,7 +31,7 @@ struct tf;
  *   Pointer to the fw_session_id that is allocated on firmware side
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
@@ -46,7 +48,7 @@ int tf_msg_session_open(struct tf *tfp,
  *   time of session open
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_attach(struct tf *tfp,
 			  char *ctrl_channel_name,
@@ -59,73 +61,21 @@ int tf_msg_session_attach(struct tf *tfp,
  *   Pointer to session handle
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_close(struct tf *tfp);
 
 /**
  * Sends session query config request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_qcfg(struct tf *tfp);
 
-/**
- * Sends session HW resource query capability request to TF Firmware
- */
-int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_query *hw_query);
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int tf_msg_session_hw_resc_alloc(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_alloc *hw_alloc,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int tf_msg_session_hw_resc_free(struct tf *tfp,
-				enum tf_dir dir,
-				struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int tf_msg_session_hw_resc_flush(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int tf_msg_session_sram_resc_qcaps(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_query *sram_query);
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int tf_msg_session_sram_resc_alloc(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_alloc *sram_alloc,
-				   struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int tf_msg_session_sram_resc_free(struct tf *tfp,
-				  enum tf_dir dir,
-				  struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int tf_msg_session_sram_resc_flush(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_entry *sram_entry);
-
 /**
  * Sends session HW resource query capability request to TF Firmware
  *
@@ -183,6 +133,21 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 
 /**
  * Sends session resource flush request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the resv array
+ *
+ * [in] resv
+ *   Pointer to an array of reserved elements that need to be flushed
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_resc_flush(struct tf *tfp,
 			      enum tf_dir dir,
@@ -190,6 +155,24 @@ int tf_msg_session_resc_flush(struct tf *tfp,
 			      struct tf_rm_resc_entry *resv);
 /**
  * Sends EM internal insert request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to em insert parameter list
+ *
+ * [in/out] rptr_index
+ *   Record pointer index; updated from the firmware response
+ *
+ * [out] rptr_entry
+ *   Record pointer entry returned by firmware
+ *
+ * [out] num_of_entries
+ *   Number of entries consumed by the insert
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    struct tf_insert_em_entry_parms *params,
@@ -198,26 +181,75 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    uint8_t *num_of_entries);
 /**
  * Sends EM internal delete request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] em_parms
+ *   Pointer to em delete parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms);
+
 /**
  * Sends EM mem register request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] page_lvl
+ *   Page level
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] dma_addr
+ *   DMA Address for the memory page
+ *
+ * [out] ctx_id
+ *   Pointer to the context id returned by firmware
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id);
+		       int page_lvl,
+		       int page_size,
+		       uint64_t dma_addr,
+		       uint16_t *ctx_id);
 
 /**
  * Sends EM mem unregister request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctx_id
+ *   Context id
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t     *ctx_id);
+			 uint16_t *ctx_id);
 
 /**
  * Sends EM qcaps request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] em_caps
+ *   Pointer to EM capabilities
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_qcaps(struct tf *tfp,
 		    int dir,
@@ -225,22 +257,63 @@ int tf_msg_em_qcaps(struct tf *tfp,
 
 /**
  * Sends EM config request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] num_entries
+ *   EM Table, key 0, number of entries to configure
+ *
+ * [in] key0_ctx_id
+ *   EM Table, Key 0 context id
+ *
+ * [in] key1_ctx_id
+ *   EM Table, Key 1 context id
+ *
+ * [in] record_ctx_id
+ *   EM Table, Record context id
+ *
+ * [in] efc_ctx_id
+ *   EM Table, EFC Table context id
+ *
+ * [in] flush_interval
+ *   Flush pending HW cached flows every 1/10th of value set in
+ *   seconds; both idle and active flows are flushed from the HW
+ *   cache. If set to 0, this feature will be disabled.
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t      num_entries,
-		  uint16_t      key0_ctx_id,
-		  uint16_t      key1_ctx_id,
-		  uint16_t      record_ctx_id,
-		  uint16_t      efc_ctx_id,
-		  uint8_t       flush_interval,
-		  int           dir);
+		  uint32_t num_entries,
+		  uint16_t key0_ctx_id,
+		  uint16_t key1_ctx_id,
+		  uint16_t record_ctx_id,
+		  uint16_t efc_ctx_id,
+		  uint8_t flush_interval,
+		  int dir);
 
 /**
  * Sends EM operation request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] op
+ *   CFA Operator
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op);
+		 int dir,
+		 uint16_t op);
 
 /**
  * Sends tcam entry 'set' to the Firmware.
@@ -281,7 +354,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to set
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to set
  *
  * [in] size
@@ -298,7 +371,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  */
 int tf_msg_set_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
@@ -312,7 +385,7 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to get
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to get
  *
  * [in] size
@@ -329,11 +402,13 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  */
 int tf_msg_get_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
 
+/* HWRM Tunneled messages */
+
 /**
  * Sends bulk get message of a Table Type element to the firmware.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index b6fe2f1ad..e0a84e64d 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -1818,16 +1818,8 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_entries = tfs->resc.tx.hw_entry;
 
 	/* Query for Session HW Resources */
-	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
 
+	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1846,16 +1838,6 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
 
 	/* Allocate Session HW Resources */
-	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform HW allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -1906,17 +1888,7 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	else
 		sram_entries = tfs->resc.tx.sram_entry;
 
-	/* Query for Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
+	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1934,20 +1906,6 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
 		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
 
-	/* Allocate Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_alloc(tfp,
-					    dir,
-					    &sram_alloc,
-					    sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform SRAM allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -2798,17 +2756,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
-			rc = tf_msg_session_hw_resc_flush(tfp,
-							  i,
-							  hw_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
 		}
 
 		/* Check for any not previously freed SRAM resources
@@ -2828,38 +2775,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
-
-			rc = tf_msg_session_sram_resc_flush(tfp,
-							    i,
-							    sram_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
-		}
-
-		rc = tf_msg_session_hw_resc_free(tfp, i, hw_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, HW free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
-		}
-
-		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, SRAM free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
 		}
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index de8f11955..2d9be654a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -95,7 +95,9 @@ struct tf_rm_new_db {
  *   - EOPNOTSUPP - Operation not supported
  */
 static void
-tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
 			       uint16_t *reservations,
 			       uint16_t count,
 			       uint16_t *valid_count)
@@ -107,6 +109,26 @@ tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
 		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
 		    reservations[i] > 0)
 			cnt++;
+
+		/* Only log a message if a type reservation is attempted
+		 * but not supported. We ignore the EM module as it uses a
+		 * split configuration array and would thus fail this
+		 * type of check.
+		 */
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
+			TFP_DRV_LOG(ERR,
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+		}
 	}
 
 	*valid_count = cnt;
@@ -405,7 +427,9 @@ tf_rm_create_db(struct tf *tfp,
 	 * the DB holds them all as to give a fast lookup. We can also
 	 * remove entries where there are no request for elements.
 	 */
-	tf_rm_count_hcapi_reservations(parms->cfg,
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
 				       parms->alloc_cnt,
 				       parms->num_elements,
 				       &hcapi_items);
@@ -507,6 +531,11 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
 			/* Create pool */
 			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
 				     sizeof(struct bitalloc));
@@ -548,11 +577,16 @@ tf_rm_create_db(struct tf *tfp,
 		}
 	}
 
-	rm_db->num_entries = i;
+	rm_db->num_entries = parms->num_elements;
 	rm_db->dir = parms->dir;
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index e594f0248..d7f5de4c4 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -26,741 +26,6 @@
 #include "stack.h"
 #include "tf_common.h"
 
-#define PTU_PTE_VALID          0x1UL
-#define PTU_PTE_LAST           0x2UL
-#define PTU_PTE_NEXT_TO_LAST   0x4UL
-
-/* Number of pointers per page_size */
-#define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
-
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define	TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
-/**
- * Function to free a page table
- *
- * [in] tp
- *   Pointer to the page table to free
- */
-static void
-tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
-{
-	uint32_t i;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		if (!tp->pg_va_tbl[i]) {
-			TFP_DRV_LOG(WARNING,
-				    "No mapping for page: %d table: %016" PRIu64 "\n",
-				    i,
-				    (uint64_t)(uintptr_t)tp);
-			continue;
-		}
-
-		tfp_free(tp->pg_va_tbl[i]);
-		tp->pg_va_tbl[i] = NULL;
-	}
-
-	tp->pg_count = 0;
-	tfp_free(tp->pg_va_tbl);
-	tp->pg_va_tbl = NULL;
-	tfp_free(tp->pg_pa_tbl);
-	tp->pg_pa_tbl = NULL;
-}
-
-/**
- * Function to free an EM table
- *
- * [in] tbl
- *   Pointer to the EM table to free
- */
-static void
-tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-		TFP_DRV_LOG(INFO,
-			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
-			   TF_EM_PAGE_SIZE,
-			    i,
-			    tp->pg_count);
-
-		tf_em_free_pg_tbl(tp);
-	}
-
-	tbl->l0_addr = NULL;
-	tbl->l0_dma_addr = 0;
-	tbl->num_lvl = 0;
-	tbl->num_data_pages = 0;
-}
-
-/**
- * Allocation of page tables
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] pg_count
- *   Page count to allocate
- *
- * [in] pg_size
- *   Size of each page
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
-		   uint32_t pg_count,
-		   uint32_t pg_size)
-{
-	uint32_t i;
-	struct tfp_calloc_parms parms;
-
-	parms.nitems = pg_count;
-	parms.size = sizeof(void *);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0)
-		return -ENOMEM;
-
-	tp->pg_va_tbl = parms.mem_va;
-
-	if (tfp_calloc(&parms) != 0) {
-		tfp_free(tp->pg_va_tbl);
-		return -ENOMEM;
-	}
-
-	tp->pg_pa_tbl = parms.mem_va;
-
-	tp->pg_count = 0;
-	tp->pg_size = pg_size;
-
-	for (i = 0; i < pg_count; i++) {
-		parms.nitems = 1;
-		parms.size = pg_size;
-		parms.alignment = TF_EM_PAGE_ALIGNMENT;
-
-		if (tfp_calloc(&parms) != 0)
-			goto cleanup;
-
-		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
-		tp->pg_va_tbl[i] = parms.mem_va;
-
-		memset(tp->pg_va_tbl[i], 0, pg_size);
-		tp->pg_count++;
-	}
-
-	return 0;
-
-cleanup:
-	tf_em_free_pg_tbl(tp);
-	return -ENOMEM;
-}
-
-/**
- * Allocates EM page tables
- *
- * [in] tbl
- *   Table to allocate pages for
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int rc = 0;
-	int i;
-	uint32_t j;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-
-		rc = tf_em_alloc_pg_tbl(tp,
-					tbl->page_cnt[i],
-					TF_EM_PAGE_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d, rc:%s\n",
-				i,
-				strerror(-rc));
-			goto cleanup;
-		}
-
-		for (j = 0; j < tp->pg_count; j++) {
-			TFP_DRV_LOG(INFO,
-				"EEM: Allocated page table: size %u lvl %d cnt"
-				" %u VA:%p PA:%p\n",
-				TF_EM_PAGE_SIZE,
-				i,
-				tp->pg_count,
-				(uint32_t *)tp->pg_va_tbl[j],
-				(uint32_t *)(uintptr_t)tp->pg_pa_tbl[j]);
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_free_page_table(tbl);
-	return rc;
-}
-
-/**
- * Links EM page tables
- *
- * [in] tp
- *   Pointer to page table
- *
- * [in] tp_next
- *   Pointer to the next page table
- *
- * [in] set_pte_last
- *   Flag controlling if the page table is last
- */
-static void
-tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-		      struct hcapi_cfa_em_page_tbl *tp_next,
-		      bool set_pte_last)
-{
-	uint64_t *pg_pa = tp_next->pg_pa_tbl;
-	uint64_t *pg_va;
-	uint64_t valid;
-	uint32_t k = 0;
-	uint32_t i;
-	uint32_t j;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			if (k == tp_next->pg_count - 2 && set_pte_last)
-				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
-			else if (k == tp_next->pg_count - 1 && set_pte_last)
-				valid = PTU_PTE_LAST | PTU_PTE_VALID;
-			else
-				valid = PTU_PTE_VALID;
-
-			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
-			if (++k >= tp_next->pg_count)
-				return;
-		}
-	}
-}
-
-/**
- * Setup a EM page table
- *
- * [in] tbl
- *   Pointer to EM page table
- */
-static void
-tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_page_tbl *tp;
-	bool set_pte_last = 0;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl - 1; i++) {
-		tp = &tbl->pg_tbl[i];
-		tp_next = &tbl->pg_tbl[i + 1];
-		if (i == tbl->num_lvl - 2)
-			set_pte_last = 1;
-		tf_em_link_page_table(tp, tp_next, set_pte_last);
-	}
-
-	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
-}
-
-/**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
-/**
- * Unregisters EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- */
-static void
-tf_em_ctx_unreg(struct tf *tfp,
-		struct tf_tbl_scope_cb *tbl_scope_cb,
-		int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp =
-		&tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
-			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
-			tf_em_free_page_table(tbl);
-		}
-	}
-}
-
-/**
- * Registers EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of Memory
- */
-static int
-tf_em_ctx_reg(struct tf *tfp,
-	      struct tf_tbl_scope_cb *tbl_scope_cb,
-	      int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp =
-		&tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int rc = 0;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
-
-			if (rc)
-				goto cleanup;
-
-			rc = tf_em_alloc_page_table(tbl);
-			if (rc)
-				goto cleanup;
-
-			tf_em_setup_page_table(tbl);
-			rc = tf_msg_em_mem_rgtr(tfp,
-						tbl->num_lvl - 1,
-						TF_EM_PAGE_SIZE_ENUM,
-						tbl->l0_dma_addr,
-						&tbl->ctx_id);
-			if (rc)
-				goto cleanup;
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	return rc;
-}
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->rx_num_flows_in_k != 0 &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if (parms->tx_num_flows_in_k != 0 &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries =
-		0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries =
-		0;
-
-	return 0;
-}
-
 /**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
@@ -883,289 +148,6 @@ tf_free_tbl_entry_shadow(struct tf_session *tfs,
 }
 #endif /* TF_SHADOW */
 
-/**
- * Create External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- * [in] num_entries
- *   number of entries to write
- * [in] entry_sz_bytes
- *   size of each entry
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_tbl_pool_external(enum tf_dir dir,
-			    struct tf_tbl_scope_cb *tbl_scope_cb,
-			    uint32_t num_entries,
-			    uint32_t entry_sz_bytes)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i;
-	int32_t j;
-	int rc = 0;
-	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
-			    tf_dir_2_str(dir), strerror(ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Save the  malloced memory address so that it can
-	 * be freed when the table scope is freed.
-	 */
-	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
-
-	/* Fill pool with indexes in reverse
-	 */
-	j = (num_entries - 1) * entry_sz_bytes;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-				    tf_dir_2_str(dir), strerror(-rc));
-			goto cleanup;
-		}
-
-		if (j < 0) {
-			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
-				    dir, j);
-			goto cleanup;
-		}
-		j -= entry_sz_bytes;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
-
-/**
- * Destroy External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- *
- */
-static void
-tf_destroy_tbl_pool_external(enum tf_dir dir,
-			     struct tf_tbl_scope_cb *tbl_scope_cb)
-{
-	uint32_t *ext_act_pool_mem =
-		tbl_scope_cb->ext_act_pool_mem[dir];
-
-	tfp_free(ext_act_pool_mem);
-}
-
-/* API defined in tf_em.h */
-struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
-{
-	int i;
-
-	/* Check that id is valid */
-	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
-	if (i < 0)
-		return NULL;
-
-	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
-	}
-
-	return NULL;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			 struct tf_free_tbl_scope_parms *parms)
-{
-	int rc = 0;
-	enum tf_dir  dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Table scope error\n");
-		return -EINVAL;
-	}
-
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-
-	/* free table scope locks */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/* Free associated external pools
-		 */
-		tf_destroy_tbl_pool_external(dir,
-					     tbl_scope_cb);
-		tf_msg_em_op(tfp,
-			     dir,
-			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
-
-		/* free table scope and all associated resources */
-		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	}
-
-	return rc;
-}
-
-/* API defined in tf_em.h */
-int
-tf_alloc_eem_tbl_scope(struct tf *tfp,
-		       struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-	enum tf_dir dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_table *em_tables;
-	int index;
-	struct tf_session *session;
-	struct tf_free_tbl_scope_parms free_parms;
-
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* Get Table Scope control block from the session pool */
-	index = ba_alloc(session->tbl_scope_pool_rx);
-	if (index == -1) {
-		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
-			    "Control Block\n");
-		return -ENOMEM;
-	}
-
-	tbl_scope_cb = &session->tbl_scopes[index];
-	tbl_scope_cb->index = index;
-	tbl_scope_cb->tbl_scope_id = index;
-	parms->tbl_scope_id = index;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_msg_em_qcaps(tfp,
-				     dir,
-				     &tbl_scope_cb->em_caps[dir]);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to query for EEM capability,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-	}
-
-	/*
-	 * Validate and setup table sizes
-	 */
-	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
-		goto cleanup;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/*
-		 * Allocate tables and signal configuration to FW
-		 */
-		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-
-		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
-		rc = tf_msg_em_cfg(tfp,
-				   em_tables[TF_KEY0_TABLE].num_entries,
-				   em_tables[TF_KEY0_TABLE].ctx_id,
-				   em_tables[TF_KEY1_TABLE].ctx_id,
-				   em_tables[TF_RECORD_TABLE].ctx_id,
-				   em_tables[TF_EFC_TABLE].ctx_id,
-				   parms->hw_flow_cache_flush_timer,
-				   dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "TBL: Unable to configure EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		rc = tf_msg_em_op(tfp,
-				  dir,
-				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
-
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		/* Allocate the pool of offsets of the external memory.
-		 * Initially, this is a single fixed size pool for all external
-		 * actions related to a single table scope.
-		 */
-		rc = tf_create_tbl_pool_external(dir,
-				    tbl_scope_cb,
-				    em_tables[TF_RECORD_TABLE].num_entries,
-				    em_tables[TF_RECORD_TABLE].entry_size);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s TBL: Unable to allocate idx pools %s\n",
-				    tf_dir_2_str(dir),
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-	}
-
-	return 0;
-
-cleanup_full:
-	free_parms.tbl_scope_id = index;
-	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
-	return -EINVAL;
-
-cleanup:
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-	return -EINVAL;
-}
 
  /* API defined in tf_core.h */
 int
@@ -1196,119 +178,3 @@ tf_bulk_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
-
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_scope(struct tf *tfp,
-		   struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	rc = tf_alloc_eem_tbl_scope(tfp, parms);
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_scope(struct tf *tfp,
-		  struct tf_free_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	/* free table scope and all associated resources */
-	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
-
-	return rc;
-}
-
-static void
-tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-			struct hcapi_cfa_em_page_tbl *tp_next)
-{
-	uint64_t *pg_va;
-	uint32_t i;
-	uint32_t j;
-	uint32_t k = 0;
-
-	printf("pg_count:%d pg_size:0x%x\n",
-	       tp->pg_count,
-	       tp->pg_size);
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-		printf("\t%p\n", (void *)pg_va);
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
-			if (((pg_va[j] & 0x7) ==
-			     tfp_cpu_to_le_64(PTU_PTE_LAST |
-					      PTU_PTE_VALID)))
-				return;
-
-			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
-				printf("** Invalid entry **\n");
-				return;
-			}
-
-			if (++k >= tp_next->pg_count) {
-				printf("** Shouldn't get here **\n");
-				return;
-			}
-		}
-	}
-}
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
-{
-	struct tf_session      *session;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_page_tbl *tp;
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-	int j;
-	int dir;
-
-	printf("called %s\n", __func__);
-
-	/* find session struct */
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* find control block for table scope */
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-
-		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
-			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
-			printf
-	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
-			       j,
-			       tbl->type,
-			       tbl->num_entries,
-			       tbl->entry_size,
-			       tbl->num_lvl);
-			if (tbl->pg_tbl[0].pg_va_tbl &&
-			    tbl->pg_tbl[0].pg_pa_tbl)
-				printf("%p %p\n",
-			       tbl->pg_tbl[0].pg_va_tbl[0],
-			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
-			for (i = 0; i < tbl->num_lvl - 1; i++) {
-				printf("Level:%d\n", i);
-				tp = &tbl->pg_tbl[i];
-				tp_next = &tbl->pg_tbl[i + 1];
-				tf_dump_link_page_table(tp, tp_next);
-			}
-			printf("\n");
-		}
-	}
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index bdf7d2089..2f5af6060 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -209,8 +209,10 @@ tf_tbl_set(struct tf *tfp,
 	   struct tf_tbl_set_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
 	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -240,9 +242,22 @@ tf_tbl_set(struct tf *tfp,
 	}
 
 	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	rc = tf_msg_set_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
@@ -262,8 +277,10 @@ tf_tbl_get(struct tf *tfp,
 	   struct tf_tbl_get_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
+	uint16_t hcapi_type;
 	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -292,10 +309,24 @@ tf_tbl_get(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Get the entry */
 	rc = tf_msg_get_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 260fb15a6..a1761ad56 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -53,7 +53,6 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	db_cfg.num_elements = parms->num_elements;
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -174,14 +173,15 @@ tf_tcam_alloc(struct tf *tfp,
 }
 
 int
-tf_tcam_free(struct tf *tfp __rte_unused,
-	     struct tf_tcam_free_parms *parms __rte_unused)
+tf_tcam_free(struct tf *tfp,
+	     struct tf_tcam_free_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -253,6 +253,15 @@ tf_tcam_free(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
@@ -281,6 +290,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -338,6 +348,15 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 5090dfd9f..ee5bacc09 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -76,6 +76,10 @@ struct tf_tcam_free_parms {
 	 * [in] Type of the allocation type
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
 	/**
 	 * [in] Index to free
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 16c43eb67..5472a9aac 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -152,9 +152,9 @@ tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
 	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
 		return tf_ident_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_TABLE:
-		return tf_tcam_tbl_2_str(mod_type);
-	case TF_DEVICE_MODULE_TYPE_TCAM:
 		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tcam_tbl_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_EM:
 		return tf_em_tbl_type_2_str(mod_type);
 	default:
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 23/51] net/bnxt: update table get to use new design
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (21 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
                           ` (27 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Move bulk table get implementation to new Tbl Module design.
- Update messages for bulk table get
- Retrieve specified table element using bulk mechanism (see the usage
  sketch below)
- Remove deprecated resource definitions
- Update device type configuration for P4.
- Update RM DB HCAPI count check and fix EM internal and host
  code such that EM DBs can be created correctly.
- Update error logging to be info on unbind in the different modules.
- Move RTE RSVD out of tf_resources.h
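
A minimal caller-side sketch of the new bulk retrieval path, assuming a
caller-owned DMA buffer and an illustrative internal table type; only the
parameter names visible in the diff below (dir, type, starting_idx,
num_entries, entry_sz_in_bytes, physical_mem_addr) and the dispatch through
dev->ops->tf_dev_get_bulk_tbl come from the patch. The helper name, the
counter table type and the buffer handling are illustrative assumptions.

#include <stdint.h>
#include <string.h>

#include "tf_core.h"

/* Read num_entries consecutive internal table entries starting at
 * start_idx. The firmware DMAs the raw entries into the buffer whose
 * IOVA is dma_addr (buffer allocation is left to the caller).
 */
static int
read_table_bulk(struct tf *tfp, uint64_t dma_addr,
		uint32_t start_idx, uint16_t num_entries)
{
	struct tf_bulk_get_tbl_entry_parms parms;

	memset(&parms, 0, sizeof(parms));
	parms.dir = TF_DIR_RX;
	/* Any internal table type works; TF_TBL_TYPE_EXT is rejected as
	 * unsupported. TF_TBL_TYPE_ACT_STATS_64 is only an assumed
	 * example here (64b counter records).
	 */
	parms.type = TF_TBL_TYPE_ACT_STATS_64;
	parms.starting_idx = start_idx;
	parms.num_entries = num_entries;
	parms.entry_sz_in_bytes = sizeof(uint64_t);
	parms.physical_mem_addr = dma_addr;

	/* tf_bulk_get_tbl_entry() dispatches to
	 * dev->ops->tf_dev_get_bulk_tbl, i.e. tf_tbl_bulk_get() in the
	 * new Tbl module.
	 */
	return tf_bulk_get_tbl_entry(tfp, &parms);
}

Note that the external (TF_TBL_TYPE_EXT) table type is intentionally kept
off this path; the new tf_bulk_get_tbl_entry() returns -EOPNOTSUPP for it.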

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h      |  250 ++
 drivers/net/bnxt/hcapi/hcapi_cfa.h        |    2 +
 drivers/net/bnxt/meson.build              |    3 +-
 drivers/net/bnxt/tf_core/Makefile         |    2 -
 drivers/net/bnxt/tf_core/tf_common.h      |   55 +-
 drivers/net/bnxt/tf_core/tf_core.c        |   86 +-
 drivers/net/bnxt/tf_core/tf_device.h      |   24 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c   |    4 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h   |    5 +-
 drivers/net/bnxt/tf_core/tf_em.h          |   88 +-
 drivers/net/bnxt/tf_core/tf_em_common.c   |   29 +-
 drivers/net/bnxt/tf_core/tf_em_internal.c |   59 +-
 drivers/net/bnxt/tf_core/tf_identifier.c  |   14 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |   31 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |    8 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  529 ---
 drivers/net/bnxt/tf_core/tf_rm.c          | 3695 ++++-----------------
 drivers/net/bnxt/tf_core/tf_rm.h          |  539 +--
 drivers/net/bnxt/tf_core/tf_rm_new.c      |  907 -----
 drivers/net/bnxt/tf_core/tf_rm_new.h      |  446 ---
 drivers/net/bnxt/tf_core/tf_session.h     |  214 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  478 ++-
 drivers/net/bnxt/tf_core/tf_tbl.h         |  436 ++-
 drivers/net/bnxt/tf_core/tf_tbl_type.c    |  342 --
 drivers/net/bnxt/tf_core/tf_tbl_type.h    |  318 --
 drivers/net/bnxt/tf_core/tf_tcam.c        |   15 +-
 26 files changed, 2337 insertions(+), 6242 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
new file mode 100644
index 000000000..c30e4f49c
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_tbl.h
+ *
+ * Description: header for SWE based on Truflow
+ *
+ * Date:  12/16/19 17:18:12
+ *
+ * Note:  This file was originally generated by tflib_decode.py.
+ *        Remainder is hand coded due to lack of availability of xml for
+ *        additional tables at this time (EEM Record and union fields)
+ *
+ **/
+#ifndef _CFA_P40_TBL_H_
+#define _CFA_P40_TBL_H_
+
+#include "cfa_p40_hw.h"
+
+#include "hcapi_cfa_defs.h"
+
+const struct hcapi_cfa_field cfa_p40_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_act_veb_tcam_layout[] = {
+	{CFA_P40_ACT_VEB_TCAM_VALID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_MAC_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_OVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_IVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_lkup_tcam_record_mem_layout[] = {
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_ctxt_remap_mem_layout[] = {
+	{CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS},
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
+	{CFA_P40_EEM_KEY_TBL_VALID_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
+
+};
+#endif /* _CFA_P40_TBL_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index f60af4e56..7a67493bd 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -243,6 +243,8 @@ int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
 			       struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
+			     struct hcapi_cfa_data *mirror);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 35038dc8b..7f3ec6204 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -41,10 +41,9 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
 	'tf_core/tf_shadow_tcam.c',
-	'tf_core/tf_tbl_type.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm_new.c',
+	'tf_core/tf_rm.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index f186741e4..9ba60e1c2 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -23,10 +23,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index ec3bca835..b982203db 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -6,52 +6,11 @@
 #ifndef _TF_COMMON_H_
 #define _TF_COMMON_H_
 
-/* Helper to check the parms */
-#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "%s: session error\n", \
-				    tf_dir_2_str((parms)->dir)); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_TFP_SESSION(tfp) do { \
-		if ((tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
+/* Helpers to perform parameter checks */
 
+/**
+ * Checks 1 parameter against NULL.
+ */
 #define TF_CHECK_PARMS1(parms) do {					\
 		if ((parms) == NULL) {					\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -59,6 +18,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 2 parameters against NULL.
+ */
 #define TF_CHECK_PARMS2(parms1, parms2) do {				\
 		if ((parms1) == NULL || (parms2) == NULL) {		\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -66,6 +28,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 3 parameters against NULL.
+ */
 #define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
 		if ((parms1) == NULL ||					\
 		    (parms2) == NULL ||					\
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8b3e15c8a..8727900c4 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -186,7 +186,7 @@ int tf_insert_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -241,7 +241,7 @@ int tf_delete_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -523,7 +523,7 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 	return -EOPNOTSUPP;
 }
 
@@ -821,7 +821,80 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
+int
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_bulk_parms bparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&bparms, 0, sizeof(struct tf_tbl_get_bulk_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+
+		return rc;
+	}
+
+	/* Internal table type processing */
+
+	if (dev->ops->tf_dev_get_bulk_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	bparms.dir = parms->dir;
+	bparms.type = parms->type;
+	bparms.starting_idx = parms->starting_idx;
+	bparms.num_entries = parms->num_entries;
+	bparms.entry_sz_in_bytes = parms->entry_sz_in_bytes;
+	bparms.physical_mem_addr = parms->physical_mem_addr;
+	rc = dev->ops->tf_dev_get_bulk_tbl(tfp, &bparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get bulk failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
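As a rough usage sketch (not part of this patch), a caller could drive the
new bulk get as below. The wrapper name is hypothetical, and the DMA-able
buffer whose IOVA is passed in physical_mem_addr is assumed to be allocated
by the caller; only internal table types are accepted, per the check above.

static int
example_bulk_read(struct tf *tfp, enum tf_dir dir, enum tf_tbl_type type,
		  uint32_t start_idx, uint16_t num, uint16_t entry_sz,
		  uint64_t buf_iova)
{
	struct tf_bulk_get_tbl_entry_parms parms;

	memset(&parms, 0, sizeof(parms));
	parms.dir = dir;
	parms.type = type;		/* must not be TF_TBL_TYPE_EXT */
	parms.starting_idx = start_idx;
	parms.num_entries = num;
	parms.entry_sz_in_bytes = entry_sz;
	parms.physical_mem_addr = buf_iova;

	/* On success firmware fills num * entry_sz bytes at buf_iova. */
	return tf_bulk_get_tbl_entry(tfp, &parms);
}
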
 int
 tf_alloc_tbl_scope(struct tf *tfp,
 		   struct tf_alloc_tbl_scope_parms *parms)
@@ -830,7 +903,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -861,7 +934,6 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
 int
 tf_free_tbl_scope(struct tf *tfp,
 		  struct tf_free_tbl_scope_parms *parms)
@@ -870,7 +942,7 @@ tf_free_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 2712d1039..93f3627d4 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -8,7 +8,7 @@
 
 #include "tf_core.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -293,7 +293,27 @@ struct tf_dev_ops {
 	 *   - (-EINVAL) on failure.
 	 */
 	int (*tf_dev_get_tbl)(struct tf *tfp,
-			       struct tf_tbl_get_parms *parms);
+			      struct tf_tbl_get_parms *parms);
+
+	/**
+	 * Retrieves the specified table type elements using the 'bulk'
+	 * mechanism.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get bulk parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_bulk_tbl)(struct tf *tfp,
+				   struct tf_tbl_get_bulk_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 127c655a6..e3526672f 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -8,7 +8,7 @@
 
 #include "tf_device.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
 
@@ -88,6 +88,7 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
+	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
 	.tf_dev_free_tcam = NULL,
 	.tf_dev_alloc_search_tcam = NULL,
@@ -114,6 +115,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
+	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = NULL,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index da6dd65a3..473e4eae5 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -9,7 +9,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_core.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -41,8 +41,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index cf799c200..6bfcbd59e 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -23,6 +23,56 @@
 #define TF_EM_MAX_MASK 0x7FFF
 #define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
 
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Other page sizes are rounded down to the next lower supported
+ * hardware page size.
+ */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
+
+#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+
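To make the selection above concrete: with the default BNXT_TF_PAGE_SIZE of
TF_EM_PAGE_SIZE_2M, TF_EM_PAGE_SHIFT is 21, so TF_EM_PAGE_SIZE and
TF_EM_PAGE_ALIGNMENT both evaluate to 1 << 21 (2 MiB), and
TF_EM_PAGE_SIZE_ENUM picks the matching HWRM page-size encoding reported to
firmware. A small illustrative helper (not part of this patch, assuming the
driver's usual includes) for sizing a backing store in such pages:

/* Number of hardware pages needed to back num_entries records of
 * entry_size bytes, rounded up to the configured EM page size.
 */
static uint32_t
example_eem_num_pages(uint32_t num_entries, uint32_t entry_size)
{
	uint64_t bytes = (uint64_t)num_entries * entry_size;

	return (uint32_t)((bytes + TF_EM_PAGE_SIZE - 1) / TF_EM_PAGE_SIZE);
}
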
 /*
  * Used to build GFID:
  *
@@ -80,13 +130,43 @@ struct tf_em_cfg_parms {
 };
 
 /**
- * @page table Table
+ * @page em EM
  *
  * @ref tf_alloc_eem_tbl_scope
  *
  * @ref tf_free_eem_tbl_scope_cb
  *
- * @ref tbl_scope_cb_find
+ * @ref tf_em_insert_int_entry
+ *
+ * @ref tf_em_delete_int_entry
+ *
+ * @ref tf_em_insert_ext_entry
+ *
+ * @ref tf_em_delete_ext_entry
+ *
+ * @ref tf_em_insert_ext_sys_entry
+ *
+ * @ref tf_em_delete_ext_sys_entry
+ *
+ * @ref tf_em_int_bind
+ *
+ * @ref tf_em_int_unbind
+ *
+ * @ref tf_em_ext_common_bind
+ *
+ * @ref tf_em_ext_common_unbind
+ *
+ * @ref tf_em_ext_host_alloc
+ *
+ * @ref tf_em_ext_host_free
+ *
+ * @ref tf_em_ext_system_alloc
+ *
+ * @ref tf_em_ext_system_free
+ *
+ * @ref tf_em_ext_common_free
+ *
+ * @ref tf_em_ext_common_alloc
  */
 
 /**
@@ -328,7 +408,7 @@ int tf_em_ext_host_free(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+			   struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using system memory
@@ -344,7 +424,7 @@ int tf_em_ext_system_alloc(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
+			  struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index ba6aa7ac1..d0d80daeb 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -194,12 +194,13 @@ tf_em_ext_common_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Ext DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -210,19 +211,29 @@ tf_em_ext_common_bind(struct tf *tfp,
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
 		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Only build an EM Ext DB (holding Table Scopes) for
+		 * this direction if EEM support was actually requested.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_TBL_SCOPE] == 0)
+			continue;
+
 		db_cfg.rm_db = &eem_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
-				    "%s: EM DB creation failed\n",
+				    "%s: EM Ext DB creation failed\n",
 				    tf_dir_2_str(i));
 
 			return rc;
 		}
+		db_exists = 1;
 	}
 
-	mem_type = parms->mem_type;
-	init = 1;
+	if (db_exists) {
+		mem_type = parms->mem_type;
+		init = 1;
+	}
 
 	return 0;
 }
@@ -236,13 +247,11 @@ tf_em_ext_common_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Ext DBs created\n");
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
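
The net effect of the db_exists handling above is that the module only marks
itself initialized when at least one directional DB was actually created, and
unbind then degrades to a silent no-op when nothing was built. A distilled,
stand-alone sketch of that pattern (illustrative only, not driver code):

static int module_bound;		/* plays the role of 'init' */

static int example_bind(int rx_cnt, int tx_cnt)
{
	if (rx_cnt == 0 && tx_cnt == 0)
		return 0;	/* nothing requested, stay unbound */

	/* ... create the per-direction DBs here ... */
	module_bound = 1;
	return 0;
}

static int example_unbind(void)
{
	if (!module_bound)
		return 0;	/* silent no-op simplifies cleanup paths */

	/* ... free the per-direction DBs here ... */
	module_bound = 0;
	return 0;
}
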
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 9be91ad5d..1c514747d 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -225,12 +225,13 @@ tf_em_int_bind(struct tf *tfp,
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 	struct tf_session *session;
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Int DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -242,31 +243,35 @@ tf_em_int_bind(struct tf *tfp,
 				  TF_SESSION_EM_POOL_SIZE);
 	}
 
-	/*
-	 * I'm not sure that this code is needed.
-	 * leaving for now until resolved
-	 */
-	if (parms->num_elements) {
-		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
-
-		for (i = 0; i < TF_DIR_MAX; i++) {
-			db_cfg.dir = i;
-			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
-			db_cfg.rm_db = &em_db[i];
-			rc = tf_rm_create_db(tfp, &db_cfg);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: EM DB creation failed\n",
-					    tf_dir_2_str(i));
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
-				return rc;
-			}
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Only build an EM Int DB for this direction if
+		 * internal EM records were actually requested.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
+			continue;
+
+		db_cfg.rm_db = &em_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM Int DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
 		}
+		db_exists = 1;
 	}
 
-	init = 1;
+	if (db_exists)
+		init = 1;
+
 	return 0;
 }
 
@@ -280,13 +285,11 @@ tf_em_int_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Int DBs created\n");
+		return 0;
 	}
 
 	session = (struct tf_session *)tfp->session->core_data;
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index b197bb271..211371081 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -7,7 +7,7 @@
 
 #include "tf_identifier.h"
 #include "tf_common.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_util.h"
 #include "tfp.h"
 
@@ -35,7 +35,7 @@ tf_ident_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "Identifier DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -65,7 +65,7 @@ tf_ident_bind(struct tf *tfp,
 }
 
 int
-tf_ident_unbind(struct tf *tfp __rte_unused)
+tf_ident_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
@@ -73,13 +73,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
+		TFP_DRV_LOG(INFO,
 			    "No Identifier DBs created\n");
-		return -EINVAL;
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index d8b80bc84..02d8a4971 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -871,26 +871,41 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *params)
+			  enum tf_dir dir,
+			  uint16_t hcapi_type,
+			  uint32_t starting_idx,
+			  uint16_t num_entries,
+			  uint16_t entry_sz_in_bytes,
+			  uint64_t physical_mem_addr)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
 	int data_size = 0;
 
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(params->dir);
-	req.type = tfp_cpu_to_le_32(params->type);
-	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
-	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
+	req.start_index = tfp_cpu_to_le_32(starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
 
-	data_size = params->num_entries * params->entry_sz_in_bytes;
+	data_size = num_entries * entry_sz_in_bytes;
 
-	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+	req.host_addr = tfp_cpu_to_le_64(physical_mem_addr);
 
 	MSG_PREP(parms,
 		 TF_KONG_MB,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8e276d4c0..7432873d7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -11,7 +11,6 @@
 
 #include "tf_tbl.h"
 #include "tf_rm.h"
-#include "tf_rm_new.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -422,6 +421,11 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms);
+			      enum tf_dir dir,
+			      uint16_t hcapi_type,
+			      uint32_t starting_idx,
+			      uint16_t num_entries,
+			      uint16_t entry_sz_in_bytes,
+			      uint64_t physical_mem_addr);
 
 #endif  /* _TF_MSG_H_ */
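
For reference, a hypothetical caller of the reworked message API (not part of
this patch) now passes the HCAPI type and buffer address explicitly instead of
a tf_bulk_get_tbl_entry_parms structure; firmware returns
num_entries * entry_sz_in_bytes bytes of table data into the buffer at
physical_mem_addr:

static int
example_fw_bulk_read(struct tf *tfp, enum tf_dir dir, uint16_t hcapi_type,
		     uint32_t start_idx, uint16_t num, uint16_t entry_sz,
		     uint64_t buf_iova)
{
	/* hcapi_type would normally come from an RM DB lookup and
	 * buf_iova from a DMA-able allocation owned by the caller.
	 */
	return tf_msg_bulk_get_tbl_entry(tfp, dir, hcapi_type,
					 start_idx, num, entry_sz,
					 buf_iova);
}
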
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index b7b445102..4688514fc 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,535 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
-/* Common HW resources for all chip variants */
-#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
-					    * entries
-					    */
-#define TF_NUM_PROF_FUNC          128      /* < Number prof_func ID */
-#define TF_NUM_PROF_TCAM         1024      /* < Number entries in profile
-					    * TCAM
-					    */
-#define TF_NUM_EM_PROF_ID          64      /* < Number software EM Profile
-					    * IDs
-					    */
-#define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
-#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
-#define TF_NUM_METER             1024      /* < Number of meter instances */
-#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
-#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
-
-/* Wh+/SR specific HW resources */
-#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
-					    * entries
-					    */
-
-/* SR/SR2 specific HW resources */
-#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
-
-
-/* Thor, SR2 common HW resources */
-#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
-					    * templates
-					    */
-
-/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
-#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
-#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
-#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
-#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
-					    * States
-					    */
-#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
-#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
-#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
-
-/*
- * Common for the Reserved Resource defines below:
- *
- * - HW Resources
- *   For resources where a priority level plays a role, i.e. l2 ctx
- *   tcam entries, both a number of resources and a begin/end pair is
- *   required. The begin/end is used to assure TFLIB gets the correct
- *   priority setting for that resource.
- *
- *   For EM records there is no priority required thus a number of
- *   resources is sufficient.
- *
- *   Example, TCAM:
- *     64 L2 CTXT TCAM entries would in a max 1024 pool be entry
- *     0-63 as HW presents 0 as the highest priority entry.
- *
- * - SRAM Resources
- *   Handled as regular resources as there is no priority required.
- *
- * Common for these resources is that they are handled per direction,
- * rx/tx.
- */
-
-/* HW Resources */
-
-/* L2 CTX */
-#define TF_RSVD_L2_CTXT_TCAM_RX                   64
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_RX - 1)
-#define TF_RSVD_L2_CTXT_TCAM_TX                   960
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TX - 1)
-
-/* Profiler */
-#define TF_RSVD_PROF_FUNC_RX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
-#define TF_RSVD_PROF_FUNC_TX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
-
-#define TF_RSVD_PROF_TCAM_RX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
-#define TF_RSVD_PROF_TCAM_TX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
-
-/* EM Profiles IDs */
-#define TF_RSVD_EM_PROF_ID_RX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ then SR */
-#define TF_RSVD_EM_PROF_ID_TX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ then SR */
-
-/* EM Records */
-#define TF_RSVD_EM_REC_RX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
-#define TF_RSVD_EM_REC_TX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
-
-/* Wildcard */
-#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
-#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
-
-#define TF_RSVD_WC_TCAM_RX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_WC_TCAM_END_IDX_RX                63
-#define TF_RSVD_WC_TCAM_TX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_WC_TCAM_END_IDX_TX                63
-
-#define TF_RSVD_METER_PROF_RX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_PROF_END_IDX_RX             0
-#define TF_RSVD_METER_PROF_TX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_PROF_END_IDX_TX             0
-
-#define TF_RSVD_METER_INST_RX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_INST_END_IDX_RX             0
-#define TF_RSVD_METER_INST_TX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_INST_END_IDX_TX             0
-
-/* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
-#define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
-#define TF_RSVD_MIRROR_END_IDX_TX                 0
-
-/* UPAR */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_UPAR_RX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
-#define TF_RSVD_UPAR_END_IDX_RX                   0
-#define TF_RSVD_UPAR_TX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
-#define TF_RSVD_UPAR_END_IDX_TX                   0
-
-/* Source Properties */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SP_TCAM_RX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_SP_TCAM_END_IDX_RX                0
-#define TF_RSVD_SP_TCAM_TX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_SP_TCAM_END_IDX_TX                0
-
-/* L2 Func */
-#define TF_RSVD_L2_FUNC_RX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
-#define TF_RSVD_L2_FUNC_END_IDX_RX                0
-#define TF_RSVD_L2_FUNC_TX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
-#define TF_RSVD_L2_FUNC_END_IDX_TX                0
-
-/* FKB */
-#define TF_RSVD_FKB_RX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
-#define TF_RSVD_FKB_END_IDX_RX                    0
-#define TF_RSVD_FKB_TX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
-#define TF_RSVD_FKB_END_IDX_TX                    0
-
-/* TBL Scope */
-#define TF_RSVD_TBL_SCOPE_RX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
-#define TF_RSVD_TBL_SCOPE_TX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
-
-/* EPOCH0 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH0_RX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH0_END_IDX_RX                 0
-#define TF_RSVD_EPOCH0_TX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH0_END_IDX_TX                 0
-
-/* EPOCH1 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH1_RX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH1_END_IDX_RX                 0
-#define TF_RSVD_EPOCH1_TX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH1_END_IDX_TX                 0
-
-/* METADATA */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_METADATA_RX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
-#define TF_RSVD_METADATA_END_IDX_RX               0
-#define TF_RSVD_METADATA_TX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
-#define TF_RSVD_METADATA_END_IDX_TX               0
-
-/* CT_STATE */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_CT_STATE_RX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
-#define TF_RSVD_CT_STATE_END_IDX_RX               0
-#define TF_RSVD_CT_STATE_TX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
-#define TF_RSVD_CT_STATE_END_IDX_TX               0
-
-/* RANGE_PROF */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_PROF_RX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
-#define TF_RSVD_RANGE_PROF_TX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
-
-/* RANGE_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_ENTRY_RX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
-#define TF_RSVD_RANGE_ENTRY_TX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
-
-/* LAG_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_LAG_ENTRY_RX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
-#define TF_RSVD_LAG_ENTRY_TX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
-
-
-/* SRAM - Resources
- * Limited to the types that CFA provides.
- */
-#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
-
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SRAM_MCG_RX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
-/* Multicast Group on TX is not supported */
-#define TF_RSVD_SRAM_MCG_TX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
-
-/* First encap of 8B RX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
-/* First encap of 8B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
-
-#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
-/* First encap of 16B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
-
-/* Encap of 64B on RX is not supported */
-#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
-/* First encap of 64B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_SP_SMAC_RX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
-#define TF_RSVD_SRAM_SP_SMAC_TX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
-
-/* SRAM SP IPV4 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
-
-/* SRAM SP IPV6 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
-/* Not yet supported fully in infra */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
-
-#define TF_RSVD_SRAM_COUNTER_64B_RX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_COUNTER_64B_TX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
-
-#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
-
-#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
-
-/* HW Resource Pool names */
-
-#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
-#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
-#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
-
-#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
-#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
-#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
-
-#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
-#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
-#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
-
-#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
-#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
-#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
-
-#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
-
-#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
-#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
-#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
-
-#define TF_METER_PROF_POOL_NAME           meter_prof_pool
-#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
-#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
-
-#define TF_METER_INST_POOL_NAME           meter_inst_pool
-#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
-#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
-
-#define TF_MIRROR_POOL_NAME               mirror_pool
-#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
-#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
-
-#define TF_UPAR_POOL_NAME                 upar_pool
-#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
-#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
-
-#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
-#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
-#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
-
-#define TF_FKB_POOL_NAME                  fkb_pool
-#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
-#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
-
-#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
-#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
-#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
-
-#define TF_L2_FUNC_POOL_NAME              l2_func_pool
-#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
-#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
-
-#define TF_EPOCH0_POOL_NAME               epoch0_pool
-#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
-#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
-
-#define TF_EPOCH1_POOL_NAME               epoch1_pool
-#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
-#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
-
-#define TF_METADATA_POOL_NAME             metadata_pool
-#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
-#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
-
-#define TF_CT_STATE_POOL_NAME             ct_state_pool
-#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
-#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
-
-#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
-#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
-#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
-
-#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
-#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
-#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
-
-#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
-#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
-#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
-
-/* SRAM Resource Pool names */
-#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
-#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
-#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
-
-#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
-#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
-#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
-
-#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
-#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
-#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
-
-#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
-#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
-#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
-
-#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
-#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
-#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
-
-#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
-#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
-#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
-
-#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
-#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
-#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
-
-#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
-#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
-#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
-
-#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
-#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
-#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
-
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
-
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
-
-/* Sw Resource Pool Names */
-
-#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
-#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
-#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
-
-
-/** HW Resource types
- */
-enum tf_resource_type_hw {
-	/* Common HW resources for all chip variants */
-	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-	TF_RESC_TYPE_HW_PROF_FUNC,
-	TF_RESC_TYPE_HW_PROF_TCAM,
-	TF_RESC_TYPE_HW_EM_PROF_ID,
-	TF_RESC_TYPE_HW_EM_REC,
-	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-	TF_RESC_TYPE_HW_WC_TCAM,
-	TF_RESC_TYPE_HW_METER_PROF,
-	TF_RESC_TYPE_HW_METER_INST,
-	TF_RESC_TYPE_HW_MIRROR,
-	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/SR specific HW resources */
-	TF_RESC_TYPE_HW_SP_TCAM,
-	/* SR/SR2 specific HW resources */
-	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Thor, SR2 common HW resources */
-	TF_RESC_TYPE_HW_FKB,
-	/* SR2 specific HW resources */
-	TF_RESC_TYPE_HW_TBL_SCOPE,
-	TF_RESC_TYPE_HW_EPOCH0,
-	TF_RESC_TYPE_HW_EPOCH1,
-	TF_RESC_TYPE_HW_METADATA,
-	TF_RESC_TYPE_HW_CT_STATE,
-	TF_RESC_TYPE_HW_RANGE_PROF,
-	TF_RESC_TYPE_HW_RANGE_ENTRY,
-	TF_RESC_TYPE_HW_LAG_ENTRY,
-	TF_RESC_TYPE_HW_MAX
-};
-
-/** HW Resource types
- */
-enum tf_resource_type_sram {
-	TF_RESC_TYPE_SRAM_FULL_ACTION,
-	TF_RESC_TYPE_SRAM_MCG,
-	TF_RESC_TYPE_SRAM_ENCAP_8B,
-	TF_RESC_TYPE_SRAM_ENCAP_16B,
-	TF_RESC_TYPE_SRAM_ENCAP_64B,
-	TF_RESC_TYPE_SRAM_SP_SMAC,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-	TF_RESC_TYPE_SRAM_COUNTER_64B,
-	TF_RESC_TYPE_SRAM_NAT_SPORT,
-	TF_RESC_TYPE_SRAM_NAT_DPORT,
-	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-	TF_RESC_TYPE_SRAM_MAX
-};
 
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0a84e64d..e0469b653 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -7,3171 +7,916 @@
 
 #include <rte_common.h>
 
+#include <cfa_resource_types.h>
+
 #include "tf_rm.h"
-#include "tf_core.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
-#include "tf_resources.h"
-#include "tf_msg.h"
-#include "bnxt.h"
+#include "tf_device.h"
 #include "tfp.h"
+#include "tf_msg.h"
 
 /**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
- *   enum tf_dir               dir         - Direction to process
- *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
- *                                           in the hw query structure
- *   define                    def_value   - Define value to check against
- *   uint32_t                 *eflag       - Result of the check
- */
-#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
-	if ((dir) == TF_DIR_RX) {					      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	} else {							      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	}								      \
-} while (0)
-
-/**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
- *   enum tf_dir                dir         - Direction to process
- *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
- *                                            in the hw query structure
- *   define                     def_value   - Define value to check against
- *   uint32_t                  *eflag       - Result of the check
- */
-#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
-	if ((dir) == TF_DIR_RX) {					       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	} else {							       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	}								       \
-} while (0)
-
-/**
- * Internal macro to convert a reserved resource define name to be
- * direction specific.
- *
- * Parameters:
- *   enum tf_dir    dir         - Direction to process
- *   string         type        - Type name to append RX or TX to
- *   string         dtype       - Direction specific type
- *
- *
+ * Generic RM Element data type that an RM DB is built upon.
  */
-#define TF_RESC_RSVD(dir, type, dtype) do {	\
-		if ((dir) == TF_DIR_RX)		\
-			(dtype) = type ## _RX;	\
-		else				\
-			(dtype) = type ## _TX;	\
-	} while (0)
-
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
-{
-	switch (hw_type) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		return "L2 ctxt tcam";
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		return "Profile Func";
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		return "Profile tcam";
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		return "EM profile id";
-	case TF_RESC_TYPE_HW_EM_REC:
-		return "EM record";
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		return "WC tcam profile id";
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		return "WC tcam";
-	case TF_RESC_TYPE_HW_METER_PROF:
-		return "Meter profile";
-	case TF_RESC_TYPE_HW_METER_INST:
-		return "Meter instance";
-	case TF_RESC_TYPE_HW_MIRROR:
-		return "Mirror";
-	case TF_RESC_TYPE_HW_UPAR:
-		return "UPAR";
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		return "Source properties tcam";
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		return "L2 Function";
-	case TF_RESC_TYPE_HW_FKB:
-		return "FKB";
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		return "Table scope";
-	case TF_RESC_TYPE_HW_EPOCH0:
-		return "EPOCH0";
-	case TF_RESC_TYPE_HW_EPOCH1:
-		return "EPOCH1";
-	case TF_RESC_TYPE_HW_METADATA:
-		return "Metadata";
-	case TF_RESC_TYPE_HW_CT_STATE:
-		return "Connection tracking state";
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		return "Range profile";
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		return "Range entry";
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		return "LAG";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
-{
-	switch (sram_type) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		return "Full action";
-	case TF_RESC_TYPE_SRAM_MCG:
-		return "MCG";
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		return "Encap 8B";
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		return "Encap 16B";
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		return "Encap 64B";
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		return "Source properties SMAC";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		return "Source properties SMAC IPv4";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		return "Source properties IPv6";
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		return "Counter 64B";
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		return "NAT source port";
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		return "NAT destination port";
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		return "NAT source IPv4";
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		return "NAT destination IPv4";
-	default:
-		return "Invalid identifier";
-	}
-}
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
 
-/**
- * Helper function to perform a HW HCAPI resource type lookup against
- * the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
- */
-static int
-tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
-{
-	uint32_t value = -EOPNOTSUPP;
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
 
-	switch (index) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_REC:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_INST:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
-		break;
-	case TF_RESC_TYPE_HW_MIRROR:
-		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
-		break;
-	case TF_RESC_TYPE_HW_UPAR:
-		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
-		break;
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_FKB:
-		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
-		break;
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH0:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH1:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
-		break;
-	case TF_RESC_TYPE_HW_METADATA:
-		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
-		break;
-	case TF_RESC_TYPE_HW_CT_STATE:
-		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
-		break;
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
-		break;
-	default:
-		break;
-	}
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
 
-	return value;
-}
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
 
 /**
- * Helper function to perform a SRAM HCAPI resource type lookup
- * against the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
+ * TF RM DB definition
  */
-static int
-tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
-{
-	uint32_t value = -EOPNOTSUPP;
-
-	switch (index) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
-		break;
-	case TF_RESC_TYPE_SRAM_MCG:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
-		break;
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
-		break;
-	default:
-		break;
-	}
-
-	return value;
-}
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
 
-/**
- * Helper function to print all the HW resource qcaps errors reported
- * in the error_flag.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] error_flag
- *   Pointer to the hw error flags created at time of the query check
- */
-static void
-tf_rm_print_hw_qcaps_error(enum tf_dir dir,
-			   struct tf_rm_hw_query *hw_query,
-			   uint32_t *error_flag)
-{
-	int i;
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
 
-	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
 
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_hw_2_str(i),
-				    hw_query->hw_query[i].max,
-				    tf_rm_rsvd_hw_value(dir, i));
-	}
-}
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
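
To make the element layout concrete, a helper along these lines (purely
illustrative, not part of this patch, assuming the types above) could map a
DB handle plus element index to the HCAPI resource type the firmware expects:

static int
example_elem_hcapi_type(struct tf_rm_new_db *rm_db,
			uint16_t subtype,
			uint16_t *hcapi_type)
{
	struct tf_rm_element *elem;

	if (rm_db == NULL || hcapi_type == NULL ||
	    subtype >= rm_db->num_entries)
		return -EINVAL;

	elem = &rm_db->db[subtype];
	if (elem->cfg_type == TF_RM_ELEM_CFG_NULL)
		return -EOPNOTSUPP;	/* not valid for this device */

	*hcapi_type = elem->hcapi_type;
	return 0;
}
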
 
 /**
- * Helper function to print all the SRAM resource qcaps errors
- * reported in the error_flag.
+ * Adjust an index according to the allocation information.
  *
- * [in] dir
- *   Receive or transmit direction
+ * All resources are controlled in a 0-based pool. Some resources, by
+ * design, are not 0-based, e.g. Full Action Records (SRAM); thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] error_flag
- *   Pointer to the sram error flags created at time of the query check
- */
-static void
-tf_rm_print_sram_qcaps_error(enum tf_dir dir,
-			     struct tf_rm_sram_query *sram_query,
-			     uint32_t *error_flag)
-{
-	int i;
-
-	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_sram_2_str(i),
-				    sram_query->sram_query[i].max,
-				    tf_rm_rsvd_sram_value(dir, i));
-	}
-}
-
-/**
- * Performs a HW resource check between what firmware capability
- * reports and what the core expects is available.
+ * [in] cfg
+ *   Pointer to the DB configuration
  *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
  *
- * [in] query
- *   Pointer to HW Query data structure. Query holds what the firmware
- *   offers of the HW resources.
+ * [in] count
+ *   Number of DB configuration elements
  *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
  *
  * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
-			    enum tf_dir dir,
-			    uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-			     TF_RSVD_L2_CTXT_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_FUNC,
-			     TF_RSVD_PROF_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_TCAM,
-			     TF_RSVD_PROF_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_PROF_ID,
-			     TF_RSVD_EM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_REC,
-			     TF_RSVD_EM_REC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-			     TF_RSVD_WC_TCAM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM,
-			     TF_RSVD_WC_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_PROF,
-			     TF_RSVD_METER_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_INST,
-			     TF_RSVD_METER_INST,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_MIRROR,
-			     TF_RSVD_MIRROR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_UPAR,
-			     TF_RSVD_UPAR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_SP_TCAM,
-			     TF_RSVD_SP_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_FUNC,
-			     TF_RSVD_L2_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_FKB,
-			     TF_RSVD_FKB,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_TBL_SCOPE,
-			     TF_RSVD_TBL_SCOPE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH0,
-			     TF_RSVD_EPOCH0,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH1,
-			     TF_RSVD_EPOCH1,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METADATA,
-			     TF_RSVD_METADATA,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_CT_STATE,
-			     TF_RSVD_CT_STATE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_PROF,
-			     TF_RSVD_RANGE_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_ENTRY,
-			     TF_RSVD_RANGE_ENTRY,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_LAG_ENTRY,
-			     TF_RSVD_LAG_ENTRY,
-			     error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Performs a SRAM resource check between what firmware capability
- * reports and what the core expects is available.
- *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
- *
- * [in] query
- *   Pointer to SRAM Query data structure. Query holds what the
- *   firmware offers of the SRAM resources.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
- *
- * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
-			      enum tf_dir dir,
-			      uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_FULL_ACTION,
-			       TF_RSVD_SRAM_FULL_ACTION,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_MCG,
-			       TF_RSVD_SRAM_MCG,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_8B,
-			       TF_RSVD_SRAM_ENCAP_8B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_16B,
-			       TF_RSVD_SRAM_ENCAP_16B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_64B,
-			       TF_RSVD_SRAM_ENCAP_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC,
-			       TF_RSVD_SRAM_SP_SMAC,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			       TF_RSVD_SRAM_SP_SMAC_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			       TF_RSVD_SRAM_SP_SMAC_IPV6,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_COUNTER_64B,
-			       TF_RSVD_SRAM_COUNTER_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_SPORT,
-			       TF_RSVD_SRAM_NAT_SPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_DPORT,
-			       TF_RSVD_SRAM_NAT_DPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-			       TF_RSVD_SRAM_NAT_S_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-			       TF_RSVD_SRAM_NAT_D_IPV4,
-			       error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Internal function to mark pool entries used.
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static void
-tf_rm_reserve_range(uint32_t count,
-		    uint32_t rsv_begin,
-		    uint32_t rsv_end,
-		    uint32_t max,
-		    struct bitalloc *pool)
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
 {
-	uint32_t i;
+	int i;
+	uint16_t cnt = 0;
 
-	/* If no resources has been requested we mark everything
-	 * 'used'
-	 */
-	if (count == 0)	{
-		for (i = 0; i < max; i++)
-			ba_alloc_index(pool, i);
-	} else {
-		/* Support 2 main modes
-		 * Reserved range starts from bottom up (with
-		 * pre-reserved value or not)
-		 * - begin = 0 to end xx
-		 * - begin = 1 to end xx
-		 *
-		 * Reserved range starts from top down
-		 * - begin = yy to end max
-		 */
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
 
-		/* Bottom up check, start from 0 */
-		if (rsv_begin == 0) {
-			for (i = rsv_end + 1; i < max; i++)
-				ba_alloc_index(pool, i);
-		}
-
-		/* Bottom up check, start from 1 or higher OR
-		 * Top Down
+		/* Only log msg if a type is attempted reserved and
+		 * not supported. We ignore EM module as its using a
+		 * split configuration array thus it would fail for
+		 * this type of check.
 		 */
-		if (rsv_begin >= 1) {
-			/* Allocate from 0 until start */
-			for (i = 0; i < rsv_begin; i++)
-				ba_alloc_index(pool, i);
-
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
-}
-
-/**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
- */
-static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
-	uint32_t end = 0;
-
-	/* l2 ctxt rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
-
-	/* l2 ctxt tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the profile tcam and profile func
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
-	uint32_t end = 0;
-
-	/* profile func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
-
-	/* profile func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_PROF_TCAM;
-
-	/* profile tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
-
-	/* profile tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the em profile id allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_em_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
-	uint32_t end = 0;
-
-	/* em prof id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
-
-	/* em prof id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the wildcard tcam and profile id
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_wc(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
-	uint32_t end = 0;
-
-	/* wc profile id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
-
-	/* wc profile id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_WC_TCAM;
-
-	/* wc tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_RX);
-
-	/* wc tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the meter resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_meter(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
-	uint32_t end = 0;
-
-	/* meter profiles rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_RX);
-
-	/* meter profiles tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_METER_INST;
-
-	/* meter rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_RX);
-
-	/* meter tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the mirror resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_mirror(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
-	uint32_t end = 0;
-
-	/* mirror rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_RX);
-
-	/* mirror tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the upar resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_upar(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_UPAR;
-	uint32_t end = 0;
-
-	/* upar rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_RX);
-
-	/* upar tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp tcam resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
-	uint32_t end = 0;
-
-	/* sp tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_RX);
-
-	/* sp tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 func resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
-	uint32_t end = 0;
-
-	/* l2 func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
-
-	/* l2 func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the fkb resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_fkb(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_FKB;
-	uint32_t end = 0;
-
-	/* fkb rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_RX);
-
-	/* fkb tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the tbld scope resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
-	uint32_t end = 0;
-
-	/* tbl scope rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
-
-	/* tbl scope tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 epoch resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_epoch(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
-	uint32_t end = 0;
-
-	/* epoch0 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_RX);
-
-	/* epoch0 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_EPOCH1;
-
-	/* epoch1 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_RX);
-
-	/* epoch1 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the metadata resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_metadata(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METADATA;
-	uint32_t end = 0;
-
-	/* metadata rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_RX);
-
-	/* metadata tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the ct state resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_ct_state(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
-	uint32_t end = 0;
-
-	/* ct state rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_RX);
-
-	/* ct state tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the range resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_range(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
-	uint32_t end = 0;
-
-	/* range profile rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
-
-	/* range profile tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
-
-	/* range entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
-
-	/* range entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the lag resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_lag_entry(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
-	uint32_t end = 0;
-
-	/* lag entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
-
-	/* lag entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the full action resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
-	uint16_t end = 0;
-
-	/* full action rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_RX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
-
-	/* full action tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_TX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the multicast group resources
- * allocated that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
-	uint16_t end = 0;
-
-	/* multicast group rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_MCG_RX,
-			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
-
-	/* Multicast Group on TX is not supported */
-}
-
-/**
- * Internal function to mark all the encap resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_encap(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
-	uint16_t end = 0;
-
-	/* encap 8b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_RX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
-
-	/* encap 8b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_TX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
-
-	/* encap 16b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_RX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
-
-	/* encap 16b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_TX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
-
-	/* Encap 64B not supported on RX */
-
-	/* Encap 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_64B_TX,
-			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_sp(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
-	uint16_t end = 0;
-
-	/* sp smac rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_RX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
-
-	/* sp smac tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_TX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-
-	/* SP SMAC IPv4 not supported on RX */
-
-	/* sp smac ipv4 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-
-	/* SP SMAC IPv6 not supported on RX */
-
-	/* sp smac ipv6 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the stat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_stats(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
-	uint16_t end = 0;
-
-	/* counter 64b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_RX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
-
-	/* counter 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_TX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the nat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_nat(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
-	uint16_t end = 0;
-
-	/* nat source port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_RX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
-
-	/* nat source port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_TX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
-
-	/* nat destination port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_RX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
-
-	/* nat destination port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_TX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-
-	/* nat source port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
-
-	/* nat source ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-
-	/* nat destination port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
-
-	/* nat destination ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
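+	/* Count the elements that are HCAPI managed and have a
+	 * non-zero reservation request.
+	 */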
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
-}
-
-/**
- * Internal function used to validate the HW allocated resources
- * against the requested values.
- */
-static int
-tf_rm_hw_alloc_validate(enum tf_dir dir,
-			struct tf_rm_hw_alloc *hw_alloc,
-			struct tf_rm_entry *hw_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				hw_alloc->hw_num[i],
-				hw_entry[i].stride);
-			error = -1;
-		}
-	}
-
-	return error;
-}
-
-/**
- * Internal function used to validate the SRAM allocated resources
- * against the requested values.
- */
-static int
-tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
-			  struct tf_rm_sram_alloc *sram_alloc,
-			  struct tf_rm_entry *sram_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				sram_alloc->sram_num[i],
-				sram_entry[i].stride);
-			error = -1;
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+
 		}
 	}
 
-	return error;
+	*valid_count = cnt;
 }
 
 /**
- * Internal function used to mark all the HW resources allocated that
- * Truflow does not own.
+ * Resource Manager base-index adjustment definitions.
  */
-static void
-tf_rm_reserve_hw(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_l2_ctxt(tfs);
-	tf_rm_rsvd_prof(tfs);
-	tf_rm_rsvd_em_prof(tfs);
-	tf_rm_rsvd_wc(tfs);
-	tf_rm_rsvd_mirror(tfs);
-	tf_rm_rsvd_meter(tfs);
-	tf_rm_rsvd_upar(tfs);
-	tf_rm_rsvd_sp_tcam(tfs);
-	tf_rm_rsvd_l2_func(tfs);
-	tf_rm_rsvd_fkb(tfs);
-	tf_rm_rsvd_tbl_scope(tfs);
-	tf_rm_rsvd_epoch(tfs);
-	tf_rm_rsvd_metadata(tfs);
-	tf_rm_rsvd_ct_state(tfs);
-	tf_rm_rsvd_range(tfs);
-	tf_rm_rsvd_lag_entry(tfs);
-}
-
-/**
- * Internal function used to mark all the SRAM resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_reserve_sram(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_sram_full_action(tfs);
-	tf_rm_rsvd_sram_mcg(tfs);
-	tf_rm_rsvd_sram_encap(tfs);
-	tf_rm_rsvd_sram_sp(tfs);
-	tf_rm_rsvd_sram_stats(tfs);
-	tf_rm_rsvd_sram_nat(tfs);
-}
-
-/**
- * Internal function used to allocate and validate all HW resources.
- */
-static int
-tf_rm_allocate_validate_hw(struct tf *tfp,
-			   enum tf_dir dir)
-{
-	int rc;
-	int i;
-	struct tf_rm_hw_query hw_query;
-	struct tf_rm_hw_alloc hw_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *hw_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		hw_entries = tfs->resc.rx.hw_entry;
-	else
-		hw_entries = tfs->resc.tx.hw_entry;
-
-	/* Query for Session HW Resources */
-
-	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
-	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed,"
-			"error_flag:0x%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
-		goto cleanup;
-	}
-
-	/* Post process HW capability */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
-		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
-
-	/* Allocate Session HW Resources */
-	/* Perform HW allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-
- cleanup:
-
-	return -1;
-}
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
 
 /**
- * Internal function used to allocate and validate all SRAM resources.
+ * Adjust an index according to the allocation information.
  *
- * [in] tfp
- *   Pointer to TF handle
+ * All resources are controlled in a 0 based pool. Some resources, by
+ * design, are not 0 based, e.g. Full Action Records (SRAM); thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
  *
  * Returns:
- *   0  - Success
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_allocate_validate_sram(struct tf *tfp,
-			     enum tf_dir dir)
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
 {
-	int rc;
-	int i;
-	struct tf_rm_sram_query sram_query;
-	struct tf_rm_sram_alloc sram_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *sram_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
-
-	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed,"
-			"error_flag:%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
-	}
+	int rc = 0;
+	uint32_t base_index;
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	base_index = db[db_index].alloc.entry.start;
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed,"
-			    " rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
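+	/* Convert between the 0-based pool index and the HW index by
+	 * removing or adding the element's allocation base.
+	 */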
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
 	}
 
-	return 0;
-
- cleanup:
-
-	return -1;
+	return rc;
 }
 
 /**
- * Helper function used to prune a HW resource array to only hold
- * elements that needs to be flushed.
- *
- * [in] tfs
- *   Session handle
+ * Logs an array of found residual entries to the console.
  *
  * [in] dir
  *   Receive or transmit direction
  *
- * [in] hw_entries
- *   Master HW Resource database
+ * [in] type
+ *   Type of Device Module
  *
- * [in/out] flush_entries
- *   Pruned HW Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master HW
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in] count
+ *   Number of entries in the residual array
  *
- * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ * [in] residuals
+ *   Pointer to an array of residual entries. The array is indexed the
+ *   same as the DB in which this function is used. Each entry holds
+ *   the residual value for that entry.
  */
-static int
-tf_rm_hw_to_flush(struct tf_session *tfs,
-		  enum tf_dir dir,
-		  struct tf_rm_entry *hw_entries,
-		  struct tf_rm_entry *flush_entries)
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
 {
-	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
+	int i;
 
-	/* Check all the hw resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
+	/* Walk the residual array and log to the console the types
+	 * that were not cleaned up.
 	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_CTXT_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_INST_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_MIRROR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_UPAR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SP_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_FKB_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
-		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_TBL_SCOPE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
-	} else {
-		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
-			    tf_dir_2_str(dir),
-			    free_cnt,
-			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH0_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH1_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METADATA_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_CT_STATE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_LAG_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
 	}
-
-	return flush_rc;
 }
 
 /**
- * Helper function used to prune a SRAM resource array to only hold
- * elements that needs to be flushed.
+ * Performs a check of the passed in DB for any lingering elements. If
+ * a resource type was found to not have been cleaned up by the caller
+ * then its residual values are recorded, logged and passed back in an
+ * allocated reservation array that the caller can pass to the FW for
+ * cleanup.
  *
- * [in] tfs
- *   Session handle
- *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
  *
- * [in] hw_entries
- *   Master SRAM Resource data base
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
  *
- * [in/out] flush_entries
- *   Pruned SRAM Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master SRAM
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in/out] resv
+ *   Pointer to a reservation array pointer. The reservation array is
+ *   allocated after the residual scan and holds any found residual
+ *   entries. Thus it can be smaller than the DB that the check was
+ *   performed on. Array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating if residual was present in the
+ *   DB
  *
  * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_sram_to_flush(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    struct tf_rm_entry *sram_entries,
-		    struct tf_rm_entry *flush_entries)
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
 {
 	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
-
-	/* Check all the sram resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
-	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_FULL_ACTION_POOL_NAME,
-			rc);
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
-	} else {
-		flush_rc = 1;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
 	}
 
-	/* Only pools for RX direction */
-	if (dir == TF_DIR_RX) {
-		TF_RM_GET_POOLS_RX(tfs, &pool,
-				   TF_SRAM_MCG_POOL_NAME);
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
 		if (rc)
 			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
-		} else {
-			flush_rc = 1;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
 		}
-	} else {
-		/* Always prune TX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		*resv_size = found;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_8B_POOL_NAME,
-			rc);
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
+
+int
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
+{
+	int rc;
+	int i;
+	int j;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_16B_POOL_NAME,
-			rc);
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-	}
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_SP_SMAC_POOL_NAME,
-			rc);
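+	/* Allocate the qcaps query array sized to the device's maximum
+	 * number of resource types.
+	 */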
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
-	}
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against the DB requirements. A DB can
+	 * hold elements that are not HCAPI controlled; those are
+	 * dropped from the request message to keep it small, while the
+	 * DB still holds them all to give a fast lookup. Entries with
+	 * no requested allocation are also dropped from the request.
+	 */
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
+
+	/* Handle the case where a DB create request ends up being
+	 * empty. Rare, but it is possible that no resources are
+	 * required for a given direction; treat it as an error.
+	 */
+	if (hcapi_items == 0) {
+		TFP_DRV_LOG(ERR,
+			"%s: DB create request for Zero elements, DB Type:%s\n",
+			tf_dir_2_str(parms->dir),
+			tf_device_module_type_2_str(parms->type));
+
+		parms->rm_db = NULL;
+		return -ENOMEM;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_STATS_64B_POOL_NAME,
-			rc);
+	/* Alloc request, alignment already set */
+	cparms.nitems = (size_t)hcapi_items;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_SPORT_POOL_NAME,
-			rc);
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
-	} else {
-		flush_rc = 1;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		/* Skip any non-HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			/* Only perform reservation for entries that
+			 * have been requested
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that the full requested amount is
+			 * available per the qcaps result.
+			 */
+			if (parms->alloc_cnt[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_cnt[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		}
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_DPORT_POOL_NAME,
-			rc);
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       hcapi_items,
+				       req,
+				       resv);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_S_IPV4_POOL_NAME,
-			rc);
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db = (void *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_D_IPV4_POOL_NAME,
-			rc);
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
-	return flush_rc;
-}
+	db = rm_db->db;
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
 
-/**
- * Helper function used to generate an error log for the HW types that
- * needs to be flushed. The types should have been cleaned up ahead of
- * invoking tf_close_session.
- *
- * [in] hw_entries
- *   HW Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_hw_flush(enum tf_dir dir,
-		   struct tf_rm_entry *hw_entries)
-{
-	int i;
+		/* Skip any non-HCAPI types as they were not included
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
 
-	/* Walk the hw flush array and log the types that wasn't
-	 * cleaned up.
-	 */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entries[i].stride != 0)
+		/* If the element didn't request an allocation there is
+		 * no need to create a pool or verify the reservation.
+		 */
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element requested an allocation and the full
+		 * amount was reserved then create the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_hw_2_str(i));
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    db[i].cfg_type);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
+			goto fail;
+		}
 	}
+
+	rm_db->num_entries = parms->num_elements;
+	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
+	*parms->rm_db = (void *)rm_db;
+
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
+	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
 
-/**
- * Helper function used to generate an error log for the SRAM types
- * that needs to be flushed. The types should have been cleaned up
- * ahead of invoking tf_close_session.
- *
- * [in] sram_entries
- *   SRAM Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_sram_flush(enum tf_dir dir,
-		     struct tf_rm_entry *sram_entries)
+int
+tf_rm_free_db(struct tf *tfp,
+	      struct tf_rm_free_db_parms *parms)
 {
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind cleans up each of its
+	 * support modules, e.g. Identifier, which is how we end up
+	 * here closing the DB.
+	 *
+	 * On TF Session close it is assumed that the session has
+	 * already cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in the 'cleanup checking' the DB is checked for
+	 * any remaining elements and any that are found are logged.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previously allocated elements to the RM and then close the
+	 * HCAPI RM registration, which saves several 'free' msgs from
+	 * being required.
+	 */
 
-	/* Walk the sram flush array and log the types that wasn't
-	 * cleaned up.
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
 	 */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entries[i].stride != 0)
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to clean up, so we can only
+		 * log that FW failed.
+		 */
+		if (rc)
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_sram_2_str(i));
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
 	}
-}
 
-void
-tf_rm_init(struct tf *tfp __rte_unused)
-{
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db[i].pool);
 
-	/* This version is host specific and should be checked against
-	 * when attaching as there is no guarantee that a secondary
-	 * would run from same image version.
-	 */
-	tfs->ver.major = TF_SESSION_VER_MAJOR;
-	tfs->ver.minor = TF_SESSION_VER_MINOR;
-	tfs->ver.update = TF_SESSION_VER_UPDATE;
-
-	tfs->session_id.id = 0;
-	tfs->ref_count = 0;
-
-	/* Initialization of Table Scopes */
-	/* ll_init(&tfs->tbl_scope_ll); */
-
-	/* Initialization of HW and SRAM resource DB */
-	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
-
-	/* Initialization of HW Resource Pools */
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records should not be controlled by way of a pool */
-
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Initialization of SRAM Resource Pools
-	 * These pools are set to the TFLIB defined MAX sizes not
-	 * AFM's HW max as to limit the memory consumption
-	 */
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		TF_RSVD_SRAM_FULL_ACTION_RX);
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		TF_RSVD_SRAM_FULL_ACTION_TX);
-	/* Only Multicast Group on RX is supported */
-	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
-		TF_RSVD_SRAM_MCG_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_8B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_8B_TX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_16B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_16B_TX);
-	/* Only Encap 64B on TX is supported */
-	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_64B_TX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		TF_RSVD_SRAM_SP_SMAC_RX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_TX);
-	/* Only SP SMAC IPv4 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-	/* Only SP SMAC IPv6 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
-		TF_RSVD_SRAM_COUNTER_64B_RX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_COUNTER_64B_TX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_SPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_SPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_DPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_DPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_S_IPV4_TX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/* Initialization of pools local to TF Core */
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
 
 int
-tf_rm_allocate_validate(struct tf *tfp)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc;
-	int i;
+	int id;
+	uint32_t index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		rc = tf_rm_allocate_validate_hw(tfp, i);
-		if (rc)
-			return rc;
-		rc = tf_rm_allocate_validate_sram(tfp, i);
-		if (rc)
-			return rc;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	/* With both HW and SRAM allocated and validated we can
-	 * 'scrub' the reservation on the pools.
+	/*
+	 * priority  0: allocate from the top of the TCAM, i.e. the
+	 *              lowest available index (highest TCAM priority)
+	 * priority !0: allocate from the bottom of the TCAM, i.e. the
+	 *              highest available index
 	 */
-	tf_rm_reserve_hw(tfp);
-	tf_rm_reserve_sram(tfp);
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		rc = -ENOMEM;
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				&index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -EINVAL;
+	}
+
+	*parms->index = index;
 
 	return rc;
 }
 
 int
-tf_rm_close(struct tf *tfp)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
 	int rc;
-	int rc_close = 0;
-	int i;
-	struct tf_rm_entry *hw_entries;
-	struct tf_rm_entry *hw_flush_entries;
-	struct tf_rm_entry *sram_entries;
-	struct tf_rm_entry *sram_flush_entries;
-	struct tf_session *tfs __rte_unused =
-		(struct tf_session *)(tfp->session->core_data);
-
-	struct tf_rm_db flush_resc = tfs->resc;
-
-	/* On close it is assumed that the session has already cleaned
-	 * up all its resources, individually, while destroying its
-	 * flows. No checking is performed thus the behavior is as
-	 * follows.
-	 *
-	 * Session RM will signal FW to release session resources. FW
-	 * will perform invalidation of all the allocated entries
-	 * (assures any outstanding resources has been cleared, then
-	 * free the FW RM instance.
-	 *
-	 * Session will then be freed by tf_close_session() thus there
-	 * is no need to clean each resource pool as the whole session
-	 * is going away.
-	 */
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		if (i == TF_DIR_RX) {
-			hw_entries = tfs->resc.rx.hw_entry;
-			hw_flush_entries = flush_resc.rx.hw_entry;
-			sram_entries = tfs->resc.rx.sram_entry;
-			sram_flush_entries = flush_resc.rx.sram_entry;
-		} else {
-			hw_entries = tfs->resc.tx.hw_entry;
-			hw_flush_entries = flush_resc.tx.hw_entry;
-			sram_entries = tfs->resc.tx.sram_entry;
-			sram_flush_entries = flush_resc.tx.sram_entry;
-		}
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-		/* Check for any not previously freed HW resources and
-		 * flush if required.
-		 */
-		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering HW resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-			/* Log the entries to be flushed */
-			tf_rm_log_hw_flush(i, hw_flush_entries);
-		}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-		/* Check for any not previously freed SRAM resources
-		 * and flush if required.
-		 */
-		rc = tf_rm_sram_to_flush(tfs,
-					 i,
-					 sram_entries,
-					 sram_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
 
-			/* Log the entries to be flushed */
-			tf_rm_log_sram_flush(i, sram_flush_entries);
-		}
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	return rc_close;
-}
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
 
-#if (TF_SHADOW == 1)
-int
-tf_rm_shadow_db_init(struct tf_session *tfs)
-{
-	rc = 1;
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; direction matters and that is not available here */
+	if (rc)
+		return rc;
 
 	return rc;
 }
-#endif /* TF_SHADOW */
 
 int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	int rc;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_L2_CTXT_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_PROF_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_WC_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
 		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
 		return rc;
 	}
 
-	return 0;
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_FULL_ACTION_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_TX)
-			break;
-		TF_RM_GET_POOLS_RX(tfs, pool,
-				   TF_SRAM_MCG_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_8B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_16B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		/* No pools for RX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_SP_SMAC_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_STATS_64B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_SPORT_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_S_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_D_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_INST_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_MIRROR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_UPAR:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_UPAR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH0_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH1_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METADATA:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METADATA_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_CT_STATE_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_ENTRY_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_LAG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_LAG_ENTRY_POOL_NAME,
-				rc);
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-		break;
-	/* No bitalloc pools for these types */
-	case TF_TBL_TYPE_EXT:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	}
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
 	return 0;
 }
 
 int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
-		break;
-	case TF_TBL_TYPE_UPAR:
-		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
-		break;
-	case TF_TBL_TYPE_METADATA:
-		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
-		break;
-	case TF_TBL_TYPE_LAG:
-		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		*hcapi_type = -1;
-		rc = -EOPNOTSUPP;
-	}
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	return rc;
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return 0;
 }
 
 int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index)
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 {
-	int rc;
-	struct tf_rm_resc *resc;
-	uint32_t hcapi_type;
-	uint32_t base_index;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (dir == TF_DIR_RX)
-		resc = &tfs->resc.rx;
-	else if (dir == TF_DIR_TX)
-		resc = &tfs->resc.tx;
-	else
-		return -EOPNOTSUPP;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
-	if (rc)
-		return -1;
-
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-	case TF_TBL_TYPE_MCAST_GROUPS:
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-	case TF_TBL_TYPE_ACT_STATS_64:
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		base_index = resc->sram_entry[hcapi_type].start;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-	case TF_TBL_TYPE_METER_PROF:
-	case TF_TBL_TYPE_METER_INST:
-	case TF_TBL_TYPE_UPAR:
-	case TF_TBL_TYPE_EPOCH0:
-	case TF_TBL_TYPE_EPOCH1:
-	case TF_TBL_TYPE_METADATA:
-	case TF_TBL_TYPE_CT_STATE:
-	case TF_TBL_TYPE_RANGE_PROF:
-	case TF_TBL_TYPE_RANGE_ENTRY:
-	case TF_TBL_TYPE_LAG:
-		base_index = resc->hw_entry[hcapi_type].start;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		return -EOPNOTSUPP;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	switch (c_type) {
-	case TF_RM_CONVERT_RM_BASE:
-		*convert_index = index - base_index;
-		break;
-	case TF_RM_CONVERT_ADD_BASE:
-		*convert_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail silently (no logging); if the pool is not valid then
+	 * no elements were allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
 	}
 
-	return 0;
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
+	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 1a09f13a7..5cb68892a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -3,301 +3,444 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
-#include "tf_resources.h"
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
-struct tf_session;
 
-/* Internal macro to determine appropriate allocation pools based on
- * DIRECTION parm, also performs error checking for DIRECTION parm. The
- * SESSION_POOL and SESSION pointers are set appropriately upon
- * successful return (the GLOBAL_POOL is used to globally manage
- * resource allocation and the SESSION_POOL is used to track the
- * resources that have been allocated to the session)
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exist within the actual device
+ * and are controlled by the HCAPI Resource Manager running in the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
  *
- * parameters:
- *   struct tfp        *tfp
- *   enum tf_dir        direction
- *   struct bitalloc  **session_pool
- *   string             base_pool_name - used to form pointers to the
- *					 appropriate bit allocation
- *					 pools, both directions of the
- *					 session pools must have same
- *					 base name, for example if
- *					 POOL_NAME is feat_pool: - the
- *					 ptr's to the session pools
- *					 are feat_pool_rx feat_pool_tx
+ * The RM DB works with its initially allocated sizes, so dynamically
+ * growing a particular resource is not possible. If this capability
+ * later becomes a requirement then the MAX pool size of the chip
+ * needs to be added to the tf_rm_elem_info structure and several new
+ * APIs would need to be added to allow for growth of a single TF
+ * resource type.
  *
- *  int                  rc - return code
- *			      0 - Success
- *			     -1 - invalid DIRECTION parm
+ * The access functions do not check for NULL pointers as this is a
+ * support module, not called directly.
  */
-#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
-		(rc) = 0;						\
-		if ((direction) == TF_DIR_RX) {				\
-			*(session_pool) = (tfs)->pool_name ## _RX;	\
-		} else if ((direction) == TF_DIR_TX) {			\
-			*(session_pool) = (tfs)->pool_name ## _TX;	\
-		} else {						\
-			rc = -1;					\
-		}							\
-	} while (0)
 
-#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _RX)
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_new_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
 
-#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _TX)
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements the DB consists off, are to be
+ * configured at time of DB creation. The TF may present types to the
+ * ULP layer that is not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled', uses a Pool for internal storage */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
+	TF_RM_TYPE_MAX
+};
 
 /**
- * Resource query single entry
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
  */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
 };
 
 /**
- * Resource single entry
+ * RM Element configuration structure, used by the Device to specify
+ * how an individual TF type is configured in regard to the HCAPI RM
+ * type of the same kind.
  */
-struct tf_rm_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
 };
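+
+/*
+ * Illustrative sketch only (not part of this change): a device
+ * binding could describe two HCAPI controlled element types and one
+ * unsupported type with a cfg array such as the one below. The
+ * hcapi_type values (10 and 11) are placeholders, not real CFA
+ * resource type codes.
+ *
+ *   struct tf_rm_element_cfg example_cfg[] = {
+ *       { .cfg_type = TF_RM_ELEM_CFG_HCAPI, .hcapi_type = 10 },
+ *       { .cfg_type = TF_RM_ELEM_CFG_HCAPI, .hcapi_type = 11 },
+ *       { .cfg_type = TF_RM_ELEM_CFG_NULL,  .hcapi_type = 0 },
+ *   };
+ */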
 
 /**
- * Resource query array of HW entities
+ * Allocation information for a single element.
  */
-struct tf_rm_hw_query {
-	/** array of HW resource entries */
-	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_new_entry entry;
 };
 
 /**
- * Resource allocation array of HW entities
+ * Create RM DB parameters
  */
-struct tf_rm_hw_alloc {
-	/** array of HW resource entries */
-	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements.
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Resource allocation count array. The array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open. Array size is num_elements.
+	 */
+	uint16_t *alloc_cnt;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void **rm_db;
 };
 
 /**
- * Resource query array of SRAM entities
+ * Free RM DB parameters
  */
-struct tf_rm_sram_query {
-	/** array of SRAM resource entries */
-	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
 };
 
 /**
- * Resource allocation array of SRAM entities
+ * Allocate RM parameters for a single element
  */
-struct tf_rm_sram_alloc {
-	/** array of SRAM resource entries */
-	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to where the allocated index is returned, in
+	 * normalized form. Normalized means the index has been
+	 * adjusted for any non-zero base, e.g. Full Action Record
+	 * offsets.
+	 */
+	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the priority of the entry
+	 * priority  0: allocate from the top of the tcam (from index 0
+	 *              or the lowest available index)
+	 * priority !0: allocate from the bottom of the tcam (from the
+	 *              highest available index)
+	 */
+	uint32_t priority;
 };
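+
+/*
+ * Usage sketch (illustrative only): allocate one element from DB
+ * entry 0, taking the lowest available index. 'db_handle' stands in
+ * for a handle previously returned by tf_rm_create_db().
+ *
+ *   uint32_t idx;
+ *   struct tf_rm_allocate_parms ap = { 0 };
+ *
+ *   ap.rm_db = db_handle;
+ *   ap.db_index = 0;
+ *   ap.priority = 0;
+ *   ap.index = &idx;
+ *   rc = tf_rm_allocate(&ap);
+ */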
 
 /**
- * Resource Manager arrays for a single direction
+ * Free RM parameters for a single element
  */
-struct tf_rm_resc {
-	/** array of HW resource entries */
-	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
-	/** array of SRAM resource entries */
-	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t index;
 };
 
 /**
- * Resource Manager Database
+ * Is Allocated parameters for a single element
  */
-struct tf_rm_db {
-	struct tf_rm_resc rx;
-	struct tf_rm_resc tx;
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [out] Pointer to a flag that returns the allocation state
+	 */
+	int *allocated;
 };
 
 /**
- * Helper function used to convert HW HCAPI resource type to a string.
+ * Get Allocation information for a single element
  */
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
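+
+/*
+ * Illustrative sketch: retrieve the HCAPI reserved range (start and
+ * stride) for DB entry 0. 'db_handle' is assumed to be a handle
+ * returned by tf_rm_create_db().
+ *
+ *   struct tf_rm_alloc_info info;
+ *   struct tf_rm_get_alloc_info_parms gp = { 0 };
+ *
+ *   gp.rm_db = db_handle;
+ *   gp.db_index = 0;
+ *   gp.info = &info;
+ *   rc = tf_rm_get_info(&gp);
+ *   (info.entry.start and info.entry.stride now hold the range)
+ */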
 
 /**
- * Helper function used to convert SRAM HCAPI resource type to a string.
+ * Get HCAPI type parameters for a single element
  */
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
 
 /**
- * Initializes the Resource Manager and the associated database
- * entries for HW and SRAM resources. Must be called before any other
- * Resource Manager functions.
+ * Get InUse count parameters for a single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
+/**
+ * @page rm Resource Manager
  *
- * [in] tfp
- *   Pointer to TF handle
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
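+
+/*
+ * Typical lifecycle (illustrative sketch, not part of this change).
+ * A support module would drive the RM roughly as follows; error
+ * handling is omitted, 'tfp', 'cfg', 'alloc_cnt' and 'num_elem' are
+ * assumed to be supplied by the caller, and the module type constant
+ * is only an example:
+ *
+ *   void *db_handle;
+ *   struct tf_rm_create_db_parms cp = { 0 };
+ *   struct tf_rm_free_db_parms fp = { 0 };
+ *
+ *   cp.type = TF_DEVICE_MODULE_TYPE_IDENTIFIER;
+ *   cp.dir = TF_DIR_RX;
+ *   cp.num_elements = num_elem;
+ *   cp.cfg = cfg;
+ *   cp.alloc_cnt = alloc_cnt;
+ *   cp.rm_db = &db_handle;
+ *   rc = tf_rm_create_db(tfp, &cp);
+ *
+ *   (tf_rm_allocate() / tf_rm_free() calls during session lifetime)
+ *
+ *   fp.dir = TF_DIR_RX;
+ *   fp.rm_db = db_handle;
+ *   rc = tf_rm_free_db(tfp, &fp);
+ */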
-void tf_rm_init(struct tf *tfp);
 
 /**
- * Allocates and validates both HW and SRAM resources per the NVM
- * configuration. If any allocation fails all resources for the
- * session is deallocated.
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_rm_allocate_validate(struct tf *tfp);
+/*
+ * NOTE:
+ * - Fails on parameter check
+ * - Fails on DB creation, i.e. the alloc amount is not possible or
+ *   validation fails
+ * - Fails on DB creation if the DB already exists
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
 
 /**
- * Closes the Resource Manager and frees all allocated resources per
- * the associated database.
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
- *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
-int tf_rm_close(struct tf *tfp);
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
 
-#if (TF_SHADOW == 1)
 /**
- * Initializes Shadow DB of configuration elements
+ * Allocates a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to allocate parameters
  *
- * Returns:
- *  0  - Success
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
-int tf_rm_shadow_db_init(struct tf_session *tfs);
-#endif /* TF_SHADOW */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Tcam table type.
- *
- * Function will print error msg if tcam type is unsupported or lookup
- * failed.
+ * Frees a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to free parameters
  *
- * [in] type
- *   Type of the object
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool);
+int tf_rm_free(struct tf_rm_free_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Table type.
- *
- * Function will print error msg if table type is unsupported or
- * lookup failed.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] type
- *   Type of the object
+ * Performs an allocation verification check on a specified element.
  *
- * [in] dir
- *    Receive or transmit direction
+ * [in] parms
+ *   Pointer to is allocated parameters
  *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool);
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
 
 /**
- * Converts the TF Table Type to internal HCAPI_TYPE
- *
- * [in] type
- *   Type to be converted
+ * Retrieves an element's allocation information from the Resource
+ * Manager (RM) DB.
  *
- * [in/out] hcapi_type
- *   Converted type
+ * [in] parms
+ *   Pointer to get info parameters
  *
- * Returns:
- *  0           - Success will set the *hcapi_type
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type);
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
- * TF RM Convert of index methods.
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-enum tf_rm_convert_type {
-	/** Adds the base of the Session Pool to the index */
-	TF_RM_CONVERT_ADD_BASE,
-	/** Removes the Session Pool base from the index */
-	TF_RM_CONVERT_RM_BASE
-};
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
 /**
- * Provides conversion of the Table Type index in relation to the
- * Session Pool base.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in] type
- *   Type of the object
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * in-use count for the requested HCAPI RM type.
  *
- * [in] c_type
- *   Type of conversion to perform
+ * [in] parms
+ *   Pointer to get inuse parameters
  *
- * [in] index
- *   Index to be converted
- *
- * [in/out]  convert_index
- *   Pointer to the converted index
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index);
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
deleted file mode 100644
index 2d9be654a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ /dev/null
@@ -1,907 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-
-#include <rte_common.h>
-
-#include <cfa_resource_types.h>
-
-#include "tf_rm_new.h"
-#include "tf_common.h"
-#include "tf_util.h"
-#include "tf_session.h"
-#include "tf_device.h"
-#include "tfp.h"
-#include "tf_msg.h"
-
-/**
- * Generic RM Element data type that an RM DB is build upon.
- */
-struct tf_rm_element {
-	/**
-	 * RM Element configuration type. If Private then the
-	 * hcapi_type can be ignored. If Null then the element is not
-	 * valid for the device.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/**
-	 * HCAPI RM Type for the element.
-	 */
-	uint16_t hcapi_type;
-
-	/**
-	 * HCAPI RM allocated range information for the element.
-	 */
-	struct tf_rm_alloc_info alloc;
-
-	/**
-	 * Bit allocator pool for the element. Pool size is controlled
-	 * by the struct tf_session_resources at time of session creation.
-	 * Null indicates that the element is not used for the device.
-	 */
-	struct bitalloc *pool;
-};
-
-/**
- * TF RM DB definition
- */
-struct tf_rm_new_db {
-	/**
-	 * Number of elements in the DB
-	 */
-	uint16_t num_entries;
-
-	/**
-	 * Direction this DB controls.
-	 */
-	enum tf_dir dir;
-
-	/**
-	 * Module type, used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-
-	/**
-	 * The DB consists of an array of elements
-	 */
-	struct tf_rm_element *db;
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] cfg
- *   Pointer to the DB configuration
- *
- * [in] reservations
- *   Pointer to the allocation values associated with the module
- *
- * [in] count
- *   Number of DB configuration elements
- *
- * [out] valid_count
- *   Number of HCAPI entries with a reservation value greater than 0
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static void
-tf_rm_count_hcapi_reservations(enum tf_dir dir,
-			       enum tf_device_module_type type,
-			       struct tf_rm_element_cfg *cfg,
-			       uint16_t *reservations,
-			       uint16_t count,
-			       uint16_t *valid_count)
-{
-	int i;
-	uint16_t cnt = 0;
-
-	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
-		    reservations[i] > 0)
-			cnt++;
-
-		/* Only log msg if a type is attempted reserved and
-		 * not supported. We ignore EM module as its using a
-		 * split configuration array thus it would fail for
-		 * this type of check.
-		 */
-		if (type != TF_DEVICE_MODULE_TYPE_EM &&
-		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
-		    reservations[i] > 0) {
-			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-		}
-	}
-
-	*valid_count = cnt;
-}
-
-/**
- * Resource Manager Adjust of base index definitions.
- */
-enum tf_rm_adjust_type {
-	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
-	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [in] action
- *   Adjust action
- *
- * [in] db_index
- *   DB index for the element type
- *
- * [in] index
- *   Index to convert
- *
- * [out] adj_index
- *   Adjusted index
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_adjust_index(struct tf_rm_element *db,
-		   enum tf_rm_adjust_type action,
-		   uint32_t db_index,
-		   uint32_t index,
-		   uint32_t *adj_index)
-{
-	int rc = 0;
-	uint32_t base_index;
-
-	base_index = db[db_index].alloc.entry.start;
-
-	switch (action) {
-	case TF_RM_ADJUST_RM_BASE:
-		*adj_index = index - base_index;
-		break;
-	case TF_RM_ADJUST_ADD_BASE:
-		*adj_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	return rc;
-}
-
-/**
- * Logs an array of found residual entries to the console.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] type
- *   Type of Device Module
- *
- * [in] count
- *   Number of entries in the residual array
- *
- * [in] residuals
- *   Pointer to an array of residual entries. Array is index same as
- *   the DB in which this function is used. Each entry holds residual
- *   value for that entry.
- */
-static void
-tf_rm_log_residuals(enum tf_dir dir,
-		    enum tf_device_module_type type,
-		    uint16_t count,
-		    uint16_t *residuals)
-{
-	int i;
-
-	/* Walk the residual array and log to the console the types
-	 * that were not cleaned up.
-	 */
-	for (i = 0; i < count; i++) {
-		if (residuals[i] != 0)
-			TFP_DRV_LOG(ERR,
-				"%s, %s was not cleaned up, %d outstanding\n",
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i),
-				residuals[i]);
-	}
-}
-
-/**
- * Performs a check of the passed-in DB for any lingering elements. If
- * a resource type is found not to have been cleaned up by the caller
- * then its residual values are recorded, logged and passed back in an
- * allocated reservation array that the caller can pass to the FW for
- * cleanup.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [out] resv_size
- *   Pointer to the reservation size of the generated reservation
- *   array.
- *
- * [in/out] resv
- *   Pointer to a pointer to a reservation array. The reservation
- *   array is allocated after the residual scan and holds any found
- *   residual entries. Thus it can be smaller than the DB that the
- *   check was performed on. The array must be freed by the caller.
- *
- * [out] residuals_present
- *   Pointer to a bool flag indicating if residual was present in the
- *   DB
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
-		      uint16_t *resv_size,
-		      struct tf_rm_resc_entry **resv,
-		      bool *residuals_present)
-{
-	int rc;
-	int i;
-	int f;
-	uint16_t count;
-	uint16_t found;
-	uint16_t *residuals = NULL;
-	uint16_t hcapi_type;
-	struct tf_rm_get_inuse_count_parms iparms;
-	struct tf_rm_get_alloc_info_parms aparms;
-	struct tf_rm_get_hcapi_parms hparms;
-	struct tf_rm_alloc_info info;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_entry *local_resv = NULL;
-
-	/* Create array to hold the entries that have residuals */
-	cparms.nitems = rm_db->num_entries;
-	cparms.size = sizeof(uint16_t);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	residuals = (uint16_t *)cparms.mem_va;
-
-	/* Traverse the DB and collect any residual elements */
-	iparms.rm_db = rm_db;
-	iparms.count = &count;
-	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
-		iparms.db_index = i;
-		rc = tf_rm_get_inuse_count(&iparms);
-		/* Not a device supported entry, just skip */
-		if (rc == -ENOTSUP)
-			continue;
-		if (rc)
-			goto cleanup_residuals;
-
-		if (count) {
-			found++;
-			residuals[i] = count;
-			*residuals_present = true;
-		}
-	}
-
-	if (*residuals_present) {
-		/* Populate a reduced resv array with only the entries
-		 * that have residuals.
-		 */
-		cparms.nitems = found;
-		cparms.size = sizeof(struct tf_rm_resc_entry);
-		cparms.alignment = 0;
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-
-		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-		aparms.rm_db = rm_db;
-		hparms.rm_db = rm_db;
-		hparms.hcapi_type = &hcapi_type;
-		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
-			if (residuals[i] == 0)
-				continue;
-			aparms.db_index = i;
-			aparms.info = &info;
-			rc = tf_rm_get_info(&aparms);
-			if (rc)
-				goto cleanup_all;
-
-			hparms.db_index = i;
-			rc = tf_rm_get_hcapi_type(&hparms);
-			if (rc)
-				goto cleanup_all;
-
-			local_resv[f].type = hcapi_type;
-			local_resv[f].start = info.entry.start;
-			local_resv[f].stride = info.entry.stride;
-			f++;
-		}
-		*resv_size = found;
-	}
-
-	tf_rm_log_residuals(rm_db->dir,
-			    rm_db->type,
-			    rm_db->num_entries,
-			    residuals);
-
-	tfp_free((void *)residuals);
-	*resv = local_resv;
-
-	return 0;
-
- cleanup_all:
-	tfp_free((void *)local_resv);
-	*resv = NULL;
- cleanup_residuals:
-	tfp_free((void *)residuals);
-
-	return rc;
-}
-
-int
-tf_rm_create_db(struct tf *tfp,
-		struct tf_rm_create_db_parms *parms)
-{
-	int rc;
-	int i;
-	int j;
-	struct tf_session *tfs;
-	struct tf_dev_info *dev;
-	uint16_t max_types;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_req_entry *query;
-	enum tf_rm_resc_resv_strategy resv_strategy;
-	struct tf_rm_resc_req_entry *req;
-	struct tf_rm_resc_entry *resv;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_element *db;
-	uint32_t pool_size;
-	uint16_t hcapi_items;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
-	if (rc)
-		return rc;
-
-	/* Retrieve device information */
-	rc = tf_session_get_device(tfs, &dev);
-	if (rc)
-		return rc;
-
-	/* Need device max number of elements for the RM QCAPS */
-	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
-	if (rc)
-		return rc;
-
-	cparms.nitems = max_types;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Get Firmware Capabilities */
-	rc = tf_msg_session_resc_qcaps(tfp,
-				       parms->dir,
-				       max_types,
-				       query,
-				       &resv_strategy);
-	if (rc)
-		return rc;
-
-	/* Process capabilities against the DB requirements. However,
-	 * as a DB can hold elements that are not HCAPI controlled we
-	 * can reduce the request message content by leaving those out
-	 * of the request, while the DB still holds them all to provide
-	 * fast lookup. We can also drop entries for which no elements
-	 * were requested.
-	 */
-	tf_rm_count_hcapi_reservations(parms->dir,
-				       parms->type,
-				       parms->cfg,
-				       parms->alloc_cnt,
-				       parms->num_elements,
-				       &hcapi_items);
-
-	/* Alloc request, alignment already set */
-	cparms.nitems = (size_t)hcapi_items;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Alloc reservation, alignment and nitems already set */
-	cparms.size = sizeof(struct tf_rm_resc_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-	/* Build the request */
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			/* Only perform reservation for entries that
-			 * have been requested.
-			 */
-			if (parms->alloc_cnt[i] == 0)
-				continue;
-
-			/* Verify that we can get the full amount
-			 * allocated per the qcaps availability.
-			 */
-			if (parms->alloc_cnt[i] <=
-			    query[parms->cfg[i].hcapi_type].max) {
-				req[j].type = parms->cfg[i].hcapi_type;
-				req[j].min = parms->alloc_cnt[i];
-				req[j].max = parms->alloc_cnt[i];
-				j++;
-			} else {
-				TFP_DRV_LOG(ERR,
-					    "%s: Resource failure, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    parms->cfg[i].hcapi_type);
-				TFP_DRV_LOG(ERR,
-					"req:%d, avail:%d\n",
-					parms->alloc_cnt[i],
-					query[parms->cfg[i].hcapi_type].max);
-				return -EINVAL;
-			}
-		}
-	}
-
-	rc = tf_msg_session_resc_alloc(tfp,
-				       parms->dir,
-				       hcapi_items,
-				       req,
-				       resv);
-	if (rc)
-		return rc;
-
-	/* Build the RM DB per the request */
-	cparms.nitems = 1;
-	cparms.size = sizeof(struct tf_rm_new_db);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db = (void *)cparms.mem_va;
-
-	/* Build the DB within RM DB */
-	cparms.nitems = parms->num_elements;
-	cparms.size = sizeof(struct tf_rm_element);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
-
-	db = rm_db->db;
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-
-		/* Skip any non HCAPI types as we didn't include them
-		 * in the reservation request.
-		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
-			continue;
-
-		/* If the element didn't request an allocation there is
-		 * no need to create a pool nor verify the reservation.
-		 */
-		if (parms->alloc_cnt[i] == 0)
-			continue;
-
-		/* If the element had requested an allocation and that
-		 * allocation was a success (full amount) then
-		 * allocate the pool.
-		 */
-		if (parms->alloc_cnt[i] == resv[j].stride) {
-			db[i].alloc.entry.start = resv[j].start;
-			db[i].alloc.entry.stride = resv[j].stride;
-
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			j++;
-		} else {
-			/* Bail out as we want what we requested for
-			 * all elements, not any less.
-			 */
-			TFP_DRV_LOG(ERR,
-				    "%s: Alloc failed, type:%d\n",
-				    tf_dir_2_str(parms->dir),
-				    db[i].cfg_type);
-			TFP_DRV_LOG(ERR,
-				    "req:%d, alloc:%d\n",
-				    parms->alloc_cnt[i],
-				    resv[j].stride);
-			goto fail;
-		}
-	}
-
-	rm_db->num_entries = parms->num_elements;
-	rm_db->dir = parms->dir;
-	rm_db->type = parms->type;
-	*parms->rm_db = (void *)rm_db;
-
-	printf("%s: type:%d num_entries:%d\n",
-	       tf_dir_2_str(parms->dir),
-	       parms->type,
-	       i);
-
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-
-	return 0;
-
- fail:
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-	tfp_free((void *)db->pool);
-	tfp_free((void *)db);
-	tfp_free((void *)rm_db);
-	parms->rm_db = NULL;
-
-	return -EINVAL;
-}
-
-int
-tf_rm_free_db(struct tf *tfp,
-	      struct tf_rm_free_db_parms *parms)
-{
-	int rc;
-	int i;
-	uint16_t resv_size = 0;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_resc_entry *resv;
-	bool residuals_found = false;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	/* Device unbind happens when the TF Session is closed and the
-	 * session ref count is 0. Device unbind will clean up each of
-	 * its support modules, e.g. Identifier, which is how we end up
-	 * here to close the DB.
-	 *
-	 * On TF Session close it is assumed that the session has already
-	 * cleaned up all its resources, individually, while
-	 * destroying its flows.
-	 *
-	 * To assist in this 'cleanup checking' the DB is checked for any
-	 * remaining elements, which are logged if found.
-	 *
-	 * Any such elements will need to be 'cleared' ahead of
-	 * returning the resources to the HCAPI RM.
-	 *
-	 * RM will signal FW to flush the DB resources. FW will
-	 * perform the invalidation. TF Session close will return the
-	 * previously allocated elements to the RM and then close the
-	 * HCAPI RM registration. That saves several 'free' msgs
-	 * from being required.
-	 */
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-
-	/* Check for residuals that the client didn't clean up */
-	rc = tf_rm_check_residuals(rm_db,
-				   &resv_size,
-				   &resv,
-				   &residuals_found);
-	if (rc)
-		return rc;
-
-	/* Invalidate any residuals followed by a DB traversal for
-	 * pool cleanup.
-	 */
-	if (residuals_found) {
-		rc = tf_msg_session_resc_flush(tfp,
-					       parms->dir,
-					       resv_size,
-					       resv);
-		tfp_free((void *)resv);
-		/* On failure we still have to cleanup so we can only
-		 * log that FW failed.
-		 */
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s: Internal Flush error, module:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_device_module_type_2_str(rm_db->type));
-	}
-
-	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db[i].pool);
-
-	tfp_free((void *)parms->rm_db);
-
-	return rc;
-}
-
-int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/*
-	 * priority  0: allocate from the top of the TCAM, i.e. the
-	 *              lowest available index
-	 * priority !0: allocate from the bottom of the TCAM, i.e. the
-	 *              highest available index
-	 */
-	if (parms->priority)
-		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
-	else
-		id = ba_alloc(rm_db->db[parms->db_index].pool);
-	if (id == BA_FAIL) {
-		rc = -ENOMEM;
-		TFP_DRV_LOG(ERR,
-			    "%s: Allocation failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_ADD_BASE,
-				parms->db_index,
-				id,
-				&index);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Alloc adjust of base index failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return -EINVAL;
-	}
-
-	*parms->index = index;
-
-	return rc;
-}
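/*
 * Minimal usage sketch for tf_rm_allocate(), assuming rm_db and
 * db_index come from an earlier tf_rm_create_db() call. It allocates
 * one entry from each end of the pool to show the priority semantics
 * described above.
 */
static int tf_rm_allocate_example(void *rm_db, uint16_t db_index)
{
	int rc;
	uint32_t top_idx;
	uint32_t bottom_idx;
	struct tf_rm_allocate_parms aparms = { 0 };

	aparms.rm_db = rm_db;
	aparms.db_index = db_index;

	/* priority 0: lowest available index (top of the TCAM) */
	aparms.priority = 0;
	aparms.index = &top_idx;
	rc = tf_rm_allocate(&aparms);
	if (rc)
		return rc;

	/* priority !0: highest available index (bottom of the TCAM) */
	aparms.priority = 1;
	aparms.index = &bottom_idx;
	return tf_rm_allocate(&aparms);
}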
-
-int
-tf_rm_free(struct tf_rm_free_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
-	/* No logging; direction matters for the message and is not available here */
-	if (rc)
-		return rc;
-
-	return rc;
-}
-
-int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
-				     adj_index);
-
-	return rc;
-}
-
-int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	memcpy(parms->info,
-	       &rm_db->db[parms->db_index].alloc,
-	       sizeof(struct tf_rm_alloc_info));
-
-	return 0;
-}
-
-int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
-
-	return 0;
-}
-
-int
-tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
-{
-	int rc = 0;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail silently (no logging); if the pool is not valid there
-	 * were no elements allocated for it.
-	 */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		*parms->count = 0;
-		return 0;
-	}
-
-	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
-
-	return rc;
-
-}
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
deleted file mode 100644
index 5cb68892a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ /dev/null
@@ -1,446 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_RM_NEW_H_
-#define TF_RM_NEW_H_
-
-#include "tf_core.h"
-#include "bitalloc.h"
-#include "tf_device.h"
-
-struct tf;
-
-/**
- * The Resource Manager (RM) module provides basic DB handling for
- * internal resources. These resources exists within the actual device
- * and are controlled by the HCAPI Resource Manager running on the
- * firmware.
- *
- * The RM DBs are all intended to be indexed using TF types, therefore
- * a lookup requires no additional conversion. The DB configuration
- * specifies the TF Type to HCAPI Type mapping and it becomes the
- * responsibility of the DB initialization to handle this static
- * mapping.
- *
- * Accessor functions provide access to the DB, thus hiding the
- * implementation.
- *
- * The RM DB works on its initially allocated sizes, so the
- * capability of dynamically growing a particular resource is not
- * possible. If this capability later becomes a requirement then the
- * MAX pool size of the Chip needs to be added to the tf_rm_elem_info
- * structure and several new APIs would need to be added to allow for
- * growth of a single TF resource type.
- *
- * The access functions do not check for NULL pointers as this is a
- * support module, not called directly.
- */
-
-/**
- * Resource reservation single entry result. Used when accessing HCAPI
- * RM on the firmware.
- */
-struct tf_rm_new_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
-};
-
-/**
- * RM Element configuration enumeration. Used by the Device to
- * indicate how the RM elements the DB consists of are to be
- * configured at time of DB creation. The TF may present types to the
- * ULP layer that are not controlled by HCAPI within the Firmware.
- */
-enum tf_rm_elem_cfg_type {
-	/** No configuration */
-	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
-	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
-	/**
-	 * Shared element thus it belongs to a shared FW Session and
-	 * is not controlled by the Host.
-	 */
-	TF_RM_ELEM_CFG_SHARED,
-	TF_RM_TYPE_MAX
-};
-
-/**
- * RM Reservation strategy enumeration. Type of strategy comes from
- * the HCAPI RM QCAPS handshake.
- */
-enum tf_rm_resc_resv_strategy {
-	TF_RM_RESC_RESV_STATIC_PARTITION,
-	TF_RM_RESC_RESV_STRATEGY_1,
-	TF_RM_RESC_RESV_STRATEGY_2,
-	TF_RM_RESC_RESV_STRATEGY_3,
-	TF_RM_RESC_RESV_MAX
-};
-
-/**
- * RM Element configuration structure, used by the Device to configure
- * how an individual TF type is configured in regard to the HCAPI RM
- * of same type.
- */
-struct tf_rm_element_cfg {
-	/**
-	 * RM Element config controls how the DB for that element is
-	 * processed.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/* If a HCAPI to TF type conversion is required then TF type
-	 * can be added here.
-	 */
-
-	/**
-	 * HCAPI RM Type for the element. Used for TF to HCAPI type
-	 * conversion.
-	 */
-	uint16_t hcapi_type;
-};
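/*
 * Illustrative configuration sketch: a two element DB where element 0
 * is HCAPI controlled and element 1 is host private. The HCAPI type
 * value 0x10 is a hypothetical placeholder, not a real device
 * identifier.
 */
static struct tf_rm_element_cfg tf_rm_example_cfg[] = {
	{ .cfg_type = TF_RM_ELEM_CFG_HCAPI,   .hcapi_type = 0x10 },
	{ .cfg_type = TF_RM_ELEM_CFG_PRIVATE, .hcapi_type = 0 },
};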
-
-/**
- * Allocation information for a single element.
- */
-struct tf_rm_alloc_info {
-	/**
-	 * HCAPI RM allocated range information.
-	 *
-	 * NOTE:
-	 * In case of dynamic allocation support this would have
-	 * to be changed to linked list of tf_rm_entry instead.
-	 */
-	struct tf_rm_new_entry entry;
-};
-
-/**
- * Create RM DB parameters
- */
-struct tf_rm_create_db_parms {
-	/**
-	 * [in] Device module type. Used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-	/**
-	 * [in] Receive or transmit direction.
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Number of elements.
-	 */
-	uint16_t num_elements;
-	/**
-	 * [in] Parameter structure array. Array size is num_elements.
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * [in] Resource allocation count array. This array content
-	 * originates from the tf_session_resources that is passed in
-	 * on session open.
-	 * Array size is num_elements.
-	 */
-	uint16_t *alloc_cnt;
-	/**
-	 * [out] RM DB Handle
-	 */
-	void **rm_db;
-};
-
-/**
- * Free RM DB parameters
- */
-struct tf_rm_free_db_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-};
-
-/**
- * Allocate RM parameters for a single element
- */
-struct tf_rm_allocate_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the allocated index in normalized
-	 * form. Normalized means the index has been adjusted,
-	 * i.e. Full Action Record offsets.
-	 */
-	uint32_t *index;
-	/**
-	 * [in] Priority, indicates the priority of the entry
-	 * priority  0: allocate from top of the tcam (from index 0
-	 *              or lowest available index)
-	 * priority !0: allocate from bottom of the tcam (from highest
-	 *              available index)
-	 */
-	uint32_t priority;
-};
-
-/**
- * Free RM parameters for a single element
- */
-struct tf_rm_free_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint16_t index;
-};
-
-/**
- * Is Allocated parameters for a single element
- */
-struct tf_rm_is_allocated_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to check
-	 */
-	uint32_t index;
-	/**
-	 * [out] Pointer to a flag that indicates the state of the query
-	 */
-	int *allocated;
-};
-
-/**
- * Get Allocation information for a single element
- */
-struct tf_rm_get_alloc_info_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the requested allocation information for
-	 * the specified db_index
-	 */
-	struct tf_rm_alloc_info *info;
-};
-
-/**
- * Get HCAPI type parameters for a single element
- */
-struct tf_rm_get_hcapi_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the hcapi type for the specified db_index
-	 */
-	uint16_t *hcapi_type;
-};
-
-/**
- * Get InUse count parameters for single element
- */
-struct tf_rm_get_inuse_count_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the inuse count for the specified db_index
-	 */
-	uint16_t *count;
-};
-
-/**
- * @page rm Resource Manager
- *
- * @ref tf_rm_create_db
- *
- * @ref tf_rm_free_db
- *
- * @ref tf_rm_allocate
- *
- * @ref tf_rm_free
- *
- * @ref tf_rm_is_allocated
- *
- * @ref tf_rm_get_info
- *
- * @ref tf_rm_get_hcapi_type
- *
- * @ref tf_rm_get_inuse_count
- */
-
-/**
- * Creates and fills a Resource Manager (RM) DB with requested
- * elements. The DB is indexed per the parms structure.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to create parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- * - Fail on parameter check
- * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
- * - Fail on DB creation if DB already exist
- *
- * - Allocs local DB
- * - Does hcapi qcaps
- * - Does hcapi reservation
- * - Populates the pool with allocated elements
- * - Returns handle to the created DB
- */
-int tf_rm_create_db(struct tf *tfp,
-		    struct tf_rm_create_db_parms *parms);
-
-/**
- * Closes the Resource Manager (RM) DB and frees all allocated
- * resources per the associated database.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free_db(struct tf *tfp,
-		  struct tf_rm_free_db_parms *parms);
-
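/*
 * Minimal life-cycle sketch for the create and free APIs above,
 * assuming the cfg array, allocation counts and module type are
 * supplied by the device/module layer (as tf_tbl_bind() does for the
 * Table module).
 */
static int tf_rm_db_lifecycle_example(struct tf *tfp,
				      enum tf_dir dir,
				      struct tf_rm_element_cfg *cfg,
				      uint16_t *alloc_cnt,
				      uint16_t num_elements)
{
	int rc;
	void *rm_db;
	struct tf_rm_create_db_parms cparms = { 0 };
	struct tf_rm_free_db_parms fparms = { 0 };

	cparms.type = TF_DEVICE_MODULE_TYPE_TABLE;
	cparms.dir = dir;
	cparms.num_elements = num_elements;
	cparms.cfg = cfg;
	cparms.alloc_cnt = alloc_cnt;
	cparms.rm_db = &rm_db;
	rc = tf_rm_create_db(tfp, &cparms);
	if (rc)
		return rc;

	/* ... allocate and free elements against rm_db ... */

	fparms.dir = dir;
	fparms.rm_db = rm_db;
	return tf_rm_free_db(tfp, &fparms);
}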
-/**
- * Allocates a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to allocate parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- *   - (-ENOMEM) if pool is empty
- */
-int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
-
-/**
- * Frees a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free(struct tf_rm_free_parms *parms);
-
-/**
- * Performs an allocation verification check on a specified element.
- *
- * [in] parms
- *   Pointer to is allocated parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- *  - If pool is set to Chip MAX, then the query index must be checked
- *    against the allocated range and query index must be allocated as well.
- *  - If pool is allocated size only, then check if query index is allocated.
- */
-int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
-
-/**
- * Retrieves an element's allocation information from the Resource
- * Manager (RM) DB.
- *
- * [in] parms
- *   Pointer to get info parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrieves the
- * requested HCAPI RM type.
- *
- * [in] parms
- *   Pointer to get hcapi parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrieves the
- * requested HCAPI RM type inuse count.
- *
- * [in] parms
- *   Pointer to get inuse parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
-
-#endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 705bb0955..e4472ed7f 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -14,6 +14,7 @@
 #include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "tf_resources.h"
 #include "stack.h"
 
 /**
@@ -43,7 +44,8 @@
 #define TF_SESSION_EM_POOL_SIZE \
 	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
 
-/** Session
+/**
+ * Session
  *
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
@@ -99,216 +101,6 @@ struct tf_session {
 	/** Device handle */
 	struct tf_dev_info dev;
 
-	/** Session HW and SRAM resources */
-	struct tf_rm_db resc;
-
-	/* Session HW resource pools */
-
-	/** RX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** RX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	/** TX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-
-	/** RX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	/** TX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-
-	/** RX EM Profile ID Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	/** TX EM Key Pool */
-	/** TX EM Profile ID Pool */
-
-	/** RX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	/** TX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records are not controlled by way of a pool */
-
-	/** RX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	/** TX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-
-	/** RX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	/** TX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-
-	/** RX Meter Instance Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	/** TX Meter Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-
-	/** RX Mirror Configuration Pool */
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	/** TX Mirror Configuration Pool */
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-
-	/** RX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	/** TX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	/** RX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	/** TX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	/** RX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	/** TX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	/** RX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	/** TX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-
-	/** RX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	/** TX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-
-	/** RX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	/** TX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-
-	/** RX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-
-	/** RX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	/** TX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-
-	/** RX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	/** TX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-
-	/** RX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	/** TX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-
-	/** RX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	/** TX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-
-	/** RX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	/** TX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Session SRAM pools */
-
-	/** RX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		      TF_RSVD_SRAM_FULL_ACTION_RX);
-	/** TX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		      TF_RSVD_SRAM_FULL_ACTION_TX);
-
-	/** RX Multicast Group Pool, only RX is supported */
-	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
-		      TF_RSVD_SRAM_MCG_RX);
-
-	/** RX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_8B_RX);
-	/** TX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_8B_TX);
-
-	/** RX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_16B_RX);
-	/** TX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_16B_TX);
-
-	/** TX Encap 64B Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_64B_TX);
-
-	/** RX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		      TF_RSVD_SRAM_SP_SMAC_RX);
-	/** TX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_TX);
-
-	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-
-	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-
-	/** RX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_COUNTER_64B_RX);
-	/** TX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_COUNTER_64B_TX);
-
-	/** RX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_SPORT_RX);
-	/** TX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_SPORT_TX);
-
-	/** RX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_DPORT_RX);
-	/** TX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_DPORT_TX);
-
-	/** RX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	/** TX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
-
-	/** RX NAT Destination IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	/** TX NAT IPv4 Destination Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/**
-	 * Pools not allocated from HCAPI RM
-	 */
-
-	/** RX L2 Ctx Remap ID  Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 Ctx Remap ID Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** CRC32 seed table */
-#define TF_LKUP_SEED_MEM_SIZE 512
-	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
-
-	/** Lookup3 init values */
-	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d7f5de4c4..05e866dc6 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -5,175 +5,413 @@
 
 /* Truflow Table APIs and supporting code */
 
-#include <stdio.h>
-#include <string.h>
-#include <stdbool.h>
-#include <math.h>
-#include <sys/param.h>
 #include <rte_common.h>
-#include <rte_errno.h>
-#include "hsi_struct_def_dpdk.h"
 
-#include "tf_core.h"
+#include "tf_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
 #include "tf_util.h"
-#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
-#include "hwrm_tf.h"
-#include "bnxt.h"
-#include "tf_resources.h"
-#include "tf_rm.h"
-#include "stack.h"
-#include "tf_common.h"
+
+
+struct tf;
+
+/**
+ * Table DBs.
+ */
+static void *tbl_db[TF_DIR_MAX];
+
+/**
+ * Table Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
 
 /**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
+ * Shadow init flag, set on bind and cleared on unbind
  */
-static int
-tf_bulk_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms)
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table DB already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
+	return 0;
+}
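/*
 * Illustrative sketch of how a device layer might invoke the bind
 * above during session open. The cfg array and session resources are
 * assumed to come from the device configuration; names here are
 * placeholders.
 */
static int tf_tbl_bind_example(struct tf *tfp,
			       struct tf_rm_element_cfg *tbl_cfg,
			       uint16_t num_elements,
			       struct tf_session_resources *resources)
{
	struct tf_tbl_cfg_parms parms = { 0 };

	parms.num_elements = num_elements;
	parms.cfg = tbl_cfg;
	parms.shadow_copy = false;
	parms.resources = resources;

	return tf_tbl_bind(tfp, &parms);
}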
+
+int
+tf_tbl_unbind(struct tf *tfp)
 {
 	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms)
+{
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
 	if (rc)
 		return rc;
 
-	index = parms->starting_idx;
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
 
-	/*
-	 * Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->starting_idx,
-			    &index);
+			    parms->idx);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
+{
+	int rc;
+	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
 		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   parms->type,
-		   index);
+		   parms->idx);
 		return -EINVAL;
 	}
 
-	/* Get the entry */
-	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
+	/* Get the HCAPI type, then set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
 }
 
-#if (TF_SHADOW == 1)
-/**
- * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is incremented and
- * returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count incremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
-			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+int
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Alloc with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	int rc;
+	uint16_t hcapi_type;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	return -EOPNOTSUPP;
-}
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
-/**
- * Free Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is decremented and
- * new ref_count returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_free_tbl_entry_shadow(struct tf_session *tfs,
-			 struct tf_free_tbl_entry_parms *parms)
-{
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Free with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
-	return -EOPNOTSUPP;
-}
-#endif /* TF_SHADOW */
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
 
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Get the HCAPI type */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
 
- /* API defined in tf_core.h */
 int
-tf_bulk_get_tbl_entry(struct tf *tfp,
-		 struct tf_bulk_get_tbl_entry_parms *parms)
+tf_tbl_bulk_get(struct tf *tfp,
+		struct tf_tbl_get_bulk_parms *parms)
 {
-	int rc = 0;
+	int rc;
+	int i;
+	uint16_t hcapi_type;
+	uint32_t idx;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
+	if (!init) {
 		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
+			    "%s: No Table DBs created\n",
 			    tf_dir_2_str(parms->dir));
 
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
+		return -EINVAL;
+	}
+	/* Verify that the entries have been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.allocated = &allocated;
+	idx = parms->starting_idx;
+	for (i = 0; i < parms->num_entries; i++) {
+		aparms.index = idx;
+		rc = tf_rm_is_allocated(&aparms);
 		if (rc)
+			return rc;
+
+		if (!allocated) {
 			TFP_DRV_LOG(ERR,
-				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    strerror(-rc));
+				    idx);
+			return -EINVAL;
+		}
+		idx++;
+	}
+
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entries */
+	rc = tf_msg_bulk_get_tbl_entry(tfp,
+				       parms->dir,
+				       hcapi_type,
+				       parms->starting_idx,
+				       parms->num_entries,
+				       parms->entry_sz_in_bytes,
+				       parms->physical_mem_addr);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index b17557345..eb560ffa7 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -3,17 +3,21 @@
  * All rights reserved.
  */
 
-#ifndef _TF_TBL_H_
-#define _TF_TBL_H_
-
-#include <stdint.h>
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
 
 #include "tf_core.h"
 #include "stack.h"
 
-struct tf_session;
+struct tf;
+
+/**
+ * The Table module provides processing of Internal TF table types.
+ */
 
-/** table scope control block content */
+/**
+ * Table scope control block content
+ */
 struct tf_em_caps {
 	uint32_t flags;
 	uint32_t supported;
@@ -35,66 +39,364 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
-	struct tf_em_caps          em_caps[TF_DIR_MAX];
-	struct stack               ext_act_pool[TF_DIR_MAX];
-	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps em_caps[TF_DIR_MAX];
+	struct stack ext_act_pool[TF_DIR_MAX];
+	uint32_t *ext_act_pool_mem[TF_DIR_MAX];
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+};
+
+/**
+ * Table allocation parameters
+ */
+struct tf_tbl_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t *idx;
+};
+
+/**
+ * Table free parameters
+ */
+struct tf_tbl_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
- */
-#define TF_EM_PAGE_SIZE_4K 12
-#define TF_EM_PAGE_SIZE_8K 13
-#define TF_EM_PAGE_SIZE_64K 16
-#define TF_EM_PAGE_SIZE_256K 18
-#define TF_EM_PAGE_SIZE_1M 20
-#define TF_EM_PAGE_SIZE_2M 21
-#define TF_EM_PAGE_SIZE_4M 22
-#define TF_EM_PAGE_SIZE_1G 30
-
-/* Set page size */
-#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
-
-#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
-#else
-#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
-#endif
-
-#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
-#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
-
-/**
- * Initialize table pool structure to indicate
- * no table scope has been associated with the
- * external pool of indexes.
- *
- * [in] session
- */
-void
-tf_init_tbl_pool(struct tf_session *session);
-
-#endif /* _TF_TBL_H_ */
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table set parameters
+ */
+struct tf_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get parameters
+ */
+struct tf_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get bulk parameters
+ */
+struct tf_tbl_get_bulk_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [out] Host physical address, where the data
+	 * will be copied to by the firmware.
+	 * Use tfp_calloc() API and mem_pa
+	 * variable of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
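/*
 * Minimal sketch of preparing the DMA buffer for a bulk get, relying
 * on the tfp_calloc() mem_pa field mentioned above (its exact type is
 * assumed here). Error handling and buffer release are omitted.
 */
static int tf_tbl_bulk_get_buf_example(struct tf_tbl_get_bulk_parms *bparms,
				       uint16_t num_entries,
				       uint16_t entry_sz)
{
	int rc;
	struct tfp_calloc_parms cparms = { 0 };

	cparms.nitems = num_entries;
	cparms.size = entry_sz;
	cparms.alignment = 0;
	rc = tfp_calloc(&cparms);
	if (rc)
		return rc;

	bparms->num_entries = num_entries;
	bparms->entry_sz_in_bytes = entry_sz;
	/* mem_pa is assumed to carry the physical address of mem_va */
	bparms->physical_mem_addr = (uint64_t)(uintptr_t)cparms.mem_pa;

	return 0;
}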
+
+/**
+ * @page tbl Table
+ *
+ * @ref tf_tbl_bind
+ *
+ * @ref tf_tbl_unbind
+ *
+ * @ref tf_tbl_alloc
+ *
+ * @ref tf_tbl_free
+ *
+ * @ref tf_tbl_alloc_search
+ *
+ * @ref tf_tbl_set
+ *
+ * @ref tf_tbl_get
+ *
+ * @ref tf_tbl_bulk_get
+ */
+
+/**
+ * Initializes the Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table allocation parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the
+ * shadow DB is enabled it is searched first and, if found, the element
+ * refcount is decremented. If the refcount reaches 0 the entry is
+ * returned to the table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
+
+/**
+ * Retrieves bulk block of elements by sending a firmware request to
+ * get the elements.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get bulk parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bulk_get(struct tf *tfp,
+		    struct tf_tbl_get_bulk_parms *parms);
+
+#endif /* TF_TBL_TYPE_H */
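
For reference, the table module API above can be exercised roughly as
follows. This is a minimal, hypothetical sketch, not part of the patch:
it assumes an initialized struct tf *tfp and a table module already
bound via tf_tbl_bind(); the direction, the meter profile type and the
8-byte payload are purely illustrative.

	struct tf_tbl_alloc_parms aparms = { 0 };
	struct tf_tbl_set_parms sparms = { 0 };
	struct tf_tbl_free_parms fparms = { 0 };
	uint8_t data[8] = { 0 };	/* entry payload, size is type specific */
	uint32_t idx;
	int rc;

	/* Allocate an RX meter profile entry from the RM backed DB */
	aparms.dir = TF_DIR_RX;
	aparms.type = TF_TBL_TYPE_METER_PROF;
	aparms.idx = &idx;
	rc = tf_tbl_alloc(tfp, &aparms);
	if (rc)
		return rc;

	/* Program the allocated entry through firmware */
	sparms.dir = TF_DIR_RX;
	sparms.type = TF_TBL_TYPE_METER_PROF;
	sparms.data = data;
	sparms.data_sz_in_bytes = sizeof(data);
	sparms.idx = idx;
	rc = tf_tbl_set(tfp, &sparms);
	if (rc)
		return rc;

	/* Release the entry when it is no longer needed */
	fparms.dir = TF_DIR_RX;
	fparms.type = TF_TBL_TYPE_METER_PROF;
	fparms.idx = idx;
	rc = tf_tbl_free(tfp, &fparms);
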
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
deleted file mode 100644
index 2f5af6060..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ /dev/null
@@ -1,342 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <rte_common.h>
-
-#include "tf_tbl_type.h"
-#include "tf_common.h"
-#include "tf_rm_new.h"
-#include "tf_util.h"
-#include "tf_msg.h"
-#include "tfp.h"
-
-struct tf;
-
-/**
- * Table DBs.
- */
-static void *tbl_db[TF_DIR_MAX];
-
-/**
- * Table Shadow DBs
- */
-/* static void *shadow_tbl_db[TF_DIR_MAX]; */
-
-/**
- * Init flag, set on bind and cleared on unbind
- */
-static uint8_t init;
-
-/**
- * Shadow init flag, set on bind and cleared on unbind
- */
-/* static uint8_t shadow_init; */
-
-int
-tf_tbl_bind(struct tf *tfp,
-	    struct tf_tbl_cfg_parms *parms)
-{
-	int rc;
-	int i;
-	struct tf_rm_create_db_parms db_cfg = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (init) {
-		TFP_DRV_LOG(ERR,
-			    "Table already initialized\n");
-		return -EINVAL;
-	}
-
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.cfg = parms->cfg;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		db_cfg.dir = i;
-		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
-		db_cfg.rm_db = &tbl_db[i];
-		rc = tf_rm_create_db(tfp, &db_cfg);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Table DB creation failed\n",
-				    tf_dir_2_str(i));
-
-			return rc;
-		}
-	}
-
-	init = 1;
-
-	printf("Table Type - initialized\n");
-
-	return 0;
-}
-
-int
-tf_tbl_unbind(struct tf *tfp __rte_unused)
-{
-	int rc;
-	int i;
-	struct tf_rm_free_db_parms fparms = { 0 };
-
-	TF_CHECK_PARMS1(tfp);
-
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No Table DBs created\n");
-		return -EINVAL;
-	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		fparms.dir = i;
-		fparms.rm_db = tbl_db[i];
-		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
-
-		tbl_db[i] = NULL;
-	}
-
-	init = 0;
-
-	return 0;
-}
-
-int
-tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms)
-{
-	int rc;
-	uint32_t idx;
-	struct tf_rm_allocate_parms aparms = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Allocate requested element */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = &idx;
-	rc = tf_rm_allocate(&aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed allocate, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return rc;
-	}
-
-	*parms->idx = idx;
-
-	return 0;
-}
-
-int
-tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms)
-{
-	int rc;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_free_parms fparms = { 0 };
-	int allocated = 0;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Check if element is in use */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Entry already free, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	/* Free requested element */
-	fparms.rm_db = tbl_db[parms->dir];
-	fparms.db_index = parms->type;
-	fparms.index = parms->idx;
-	rc = tf_rm_free(&fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Free failed, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_alloc_search(struct tf *tfp __rte_unused,
-		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
-{
-	return 0;
-}
-
-int
-tf_tbl_set(struct tf *tfp,
-	   struct tf_tbl_set_parms *parms)
-{
-	int rc;
-	int allocated = 0;
-	uint16_t hcapi_type;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_get(struct tf *tfp,
-	   struct tf_tbl_get_parms *parms)
-{
-	int rc;
-	uint16_t hcapi_type;
-	int allocated = 0;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
deleted file mode 100644
index 3474489a6..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ /dev/null
@@ -1,318 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_TBL_TYPE_H_
-#define TF_TBL_TYPE_H_
-
-#include "tf_core.h"
-
-struct tf;
-
-/**
- * The Table module provides processing of Internal TF table types.
- */
-
-/**
- * Table configuration parameters
- */
-struct tf_tbl_cfg_parms {
-	/**
-	 * Number of table types in each of the configuration arrays
-	 */
-	uint16_t num_elements;
-	/**
-	 * Table Type element configuration array
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Shadow table type configuration array
-	 */
-	struct tf_shadow_tbl_cfg *shadow_cfg;
-	/**
-	 * Boolean controlling the request shadow copy.
-	 */
-	bool shadow_copy;
-	/**
-	 * Session resource allocations
-	 */
-	struct tf_session_resources *resources;
-};
-
-/**
- * Table allocation parameters
- */
-struct tf_tbl_alloc_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t *idx;
-};
-
-/**
- * Table free parameters
- */
-struct tf_tbl_free_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation type
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t idx;
-	/**
-	 * [out] Reference count after free, only valid if session has been
-	 * created with shadow_copy.
-	 */
-	uint16_t ref_cnt;
-};
-
-/**
- * Table allocate search parameters
- */
-struct tf_tbl_alloc_search_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
-	 */
-	uint32_t tbl_scope_id;
-	/**
-	 * [in] Enable search for matching entry. If the table type is
-	 * internal the shadow copy will be searched before
-	 * alloc. Session must be configured with shadow copy enabled.
-	 */
-	uint8_t search_enable;
-	/**
-	 * [in] Result data to search for (if search_enable)
-	 */
-	uint8_t *result;
-	/**
-	 * [in] Result data size in bytes (if search_enable)
-	 */
-	uint16_t result_sz_in_bytes;
-	/**
-	 * [out] If search_enable, set if matching entry found
-	 */
-	uint8_t hit;
-	/**
-	 * [out] Current ref count after allocation (if search_enable)
-	 */
-	uint16_t ref_cnt;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table set parameters
- */
-struct tf_tbl_set_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to set
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [in] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to write to
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table get parameters
- */
-struct tf_tbl_get_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to get
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [out] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to read
-	 */
-	uint32_t idx;
-};
-
-/**
- * @page tbl Table
- *
- * @ref tf_tbl_bind
- *
- * @ref tf_tbl_unbind
- *
- * @ref tf_tbl_alloc
- *
- * @ref tf_tbl_free
- *
- * @ref tf_tbl_alloc_search
- *
- * @ref tf_tbl_set
- *
- * @ref tf_tbl_get
- */
-
-/**
- * Initializes the Table module with the requested DBs. Must be
- * invoked as the first thing before any of the access functions.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table configuration parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_bind(struct tf *tfp,
-		struct tf_tbl_cfg_parms *parms);
-
-/**
- * Cleans up the private DBs and releases all the data.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_unbind(struct tf *tfp);
-
-/**
- * Allocates the requested table type from the internal RM DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table allocation parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc(struct tf *tfp,
-		 struct tf_tbl_alloc_parms *parms);
-
-/**
- * Free's the requested table type and returns it to the DB. If shadow
- * DB is enabled its searched first and if found the element refcount
- * is decremented. If refcount goes to 0 then its returned to the
- * table type DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_free(struct tf *tfp,
-		struct tf_tbl_free_parms *parms);
-
-/**
- * Supported if Shadow DB is configured. Searches the Shadow DB for
- * any matching element. If found the refcount in the shadow DB is
- * updated accordingly. If not found a new element is allocated and
- * installed into the shadow DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc_search(struct tf *tfp,
-			struct tf_tbl_alloc_search_parms *parms);
-
-/**
- * Configures the requested element by sending a firmware request which
- * then installs it into the device internal structures.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_set(struct tf *tfp,
-	       struct tf_tbl_set_parms *parms);
-
-/**
- * Retrieves the requested element by sending a firmware request to get
- * the element.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table get parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_get(struct tf *tfp,
-	       struct tf_tbl_get_parms *parms);
-
-#endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index a1761ad56..fc047f8f8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -9,7 +9,7 @@
 #include "tf_tcam.h"
 #include "tf_common.h"
 #include "tf_util.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_device.h"
 #include "tfp.h"
 #include "tf_session.h"
@@ -49,7 +49,7 @@ tf_tcam_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "TCAM already initialized\n");
+			    "TCAM DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -86,11 +86,12 @@ tf_tcam_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init)
-		return -EINVAL;
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No TCAM DBs created\n");
+		return 0;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 24/51] net/bnxt: update RM to support HCAPI only
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (22 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 23/51] net/bnxt: update table get to use new design Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 25/51] net/bnxt: remove table scope from session Ajit Khaparde
                           ` (26 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- The EM module needs to allocate only the EM records through HCAPI RM,
  while storage control stays outside of the RM DB.
- Add TF_RM_ELEM_CFG_HCAPI_BA for HCAPI controlled elements that keep a
  bit allocator pool in RM.
- Return an error from tf_tcam_bind when the number of reserved WC TCAM
  entries is odd.
- Remove em_pool from the session and make it local to the EM module.
- Use the RM provided start offset and size for the EM pool (see the
  sketch below).
- HCAPI returns an entry index instead of a row index for WC TCAM.
- Move resource type conversion into the HWRM set/free TCAM functions.
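
To illustrate the new pool layout: the EM pool is now filled with record
offsets in steps of TF_SESSION_EM_ENTRY_SIZE starting at the RM provided
offset, so indexes no longer need to be scaled on insert or delete. A
standalone sketch of the fill order (a plain array stands in for the
driver's stack_* helpers; the start, stride and entry size values are
illustrative):

	#include <stdio.h>
	#include <stdint.h>

	#define ENTRY_SIZE 4	/* stands in for TF_SESSION_EM_ENTRY_SIZE */

	int main(void)
	{
		uint32_t start = 64;	/* RM provided start offset (illustrative) */
		uint32_t num = 16;	/* RM provided stride, multiple of ENTRY_SIZE */
		uint32_t pool[16];
		uint32_t i, top = 0;
		uint32_t j = start + num - ENTRY_SIZE;

		/* Push offsets in descending order so that pops hand them
		 * out in ascending order, beginning at 'start'.
		 */
		for (i = 0; i < num / ENTRY_SIZE; i++) {
			pool[top++] = j;
			j -= ENTRY_SIZE;
		}

		printf("first offset handed out: %u\n", pool[top - 1]); /* 64 */
		return 0;
	}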

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   2 +
 drivers/net/bnxt/tf_core/tf_device_p4.h   |  54 ++++-----
 drivers/net/bnxt/tf_core/tf_em_internal.c | 131 ++++++++++++++--------
 drivers/net/bnxt/tf_core/tf_msg.c         |   6 +-
 drivers/net/bnxt/tf_core/tf_rm.c          |  81 ++++++-------
 drivers/net/bnxt/tf_core/tf_rm.h          |  14 ++-
 drivers/net/bnxt/tf_core/tf_session.h     |   5 -
 drivers/net/bnxt/tf_core/tf_tcam.c        |  21 ++++
 8 files changed, 190 insertions(+), 124 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index e3526672f..1eaf18212 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -68,6 +68,8 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
 		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
 			return -ENOTSUP;
+
+		*num_slices_per_row = 1;
 	} else { /* for other type of tcam */
 		*num_slices_per_row = 1;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 473e4eae5..8fae18012 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,19 +12,19 @@
 #include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
 	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_TCAM },
 	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
@@ -32,26 +32,26 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
 	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
 	/* CFA_RESOURCE_TYPE_P4_UPAR */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_EPOC */
@@ -79,7 +79,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EM_REC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
 };
 
 struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 1c514747d..3129fbe31 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -23,20 +23,28 @@
  */
 static void *em_db[TF_DIR_MAX];
 
+#define TF_EM_DB_EM_REC 0
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
 static uint8_t init;
 
+
+/**
+ * EM Pool
+ */
+static struct stack em_pool[TF_DIR_MAX];
+
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  * [in] num_entries
  *   number of entries to write
+ * [in] start
+ *   starting offset
  *
  * Return:
  *  0       - Success, entry allocated - no search support
@@ -44,54 +52,66 @@ static uint8_t init;
  *          - Failure, entry not allocated, out of resources
  */
 static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
+tf_create_em_pool(enum tf_dir dir,
+		  uint32_t num_entries,
+		  uint32_t start)
 {
 	struct tfp_calloc_parms parms;
 	uint32_t i, j;
 	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 
-	parms.nitems = num_entries;
+	/* Assumes that num_entries has been checked before we get here */
+	parms.nitems = num_entries / TF_SESSION_EM_ENTRY_SIZE;
 	parms.size = sizeof(uint32_t);
 	parms.alignment = 0;
 
 	rc = tfp_calloc(&parms);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool allocation failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+	rc = stack_init(num_entries / TF_SESSION_EM_ENTRY_SIZE,
+			(uint32_t *)parms.mem_va,
+			pool);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack init failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
 
 	/* Fill pool with indexes
 	 */
-	j = num_entries - 1;
+	j = start + num_entries - TF_SESSION_EM_ENTRY_SIZE;
 
-	for (i = 0; i < num_entries; i++) {
+	for (i = 0; i < (num_entries / TF_SESSION_EM_ENTRY_SIZE); i++) {
 		rc = stack_push(pool, j);
 		if (rc) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, EM pool stack push failure %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup;
 		}
-		j--;
+
+		j -= TF_SESSION_EM_ENTRY_SIZE;
 	}
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
@@ -105,18 +125,15 @@ tf_create_em_pool(struct tf_session *session,
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  *
  * Return:
  */
 static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
+tf_free_em_pool(enum tf_dir dir)
 {
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 	uint32_t *ptr;
 
 	ptr = stack_items(pool);
@@ -140,22 +157,19 @@ tf_em_insert_int_entry(struct tf *tfp,
 	uint16_t rptr_index = 0;
 	uint8_t rptr_entry = 0;
 	uint8_t num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 	uint32_t index;
 
 	rc = stack_pop(pool, &index);
 
 	if (rc) {
-		PMD_DRV_LOG
-		  (ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
+		PMD_DRV_LOG(ERR,
+			    "%s, EM entry index allocation failed\n",
+			    tf_dir_2_str(parms->dir));
 		return rc;
 	}
 
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rptr_index = index;
 	rc = tf_msg_insert_em_internal_entry(tfp,
 					     parms,
 					     &rptr_index,
@@ -166,8 +180,9 @@ tf_em_insert_int_entry(struct tf *tfp,
 
 	PMD_DRV_LOG
 		  (ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   "%s, Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   index,
 		   rptr_index,
 		   rptr_entry,
 		   num_of_entries);
@@ -204,15 +219,13 @@ tf_em_delete_int_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	int rc = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 
 	rc = tf_msg_delete_em_entry(tfp, parms);
 
 	/* Return resource to pool */
 	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+		stack_push(pool, parms->index);
 
 	return rc;
 }
@@ -224,8 +237,9 @@ tf_em_int_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
-	struct tf_session *session;
 	uint8_t db_exists = 0;
+	struct tf_rm_get_alloc_info_parms iparms;
+	struct tf_rm_alloc_info info;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -235,14 +249,6 @@ tf_em_int_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		tf_create_em_pool(session,
-				  i,
-				  TF_SESSION_EM_POOL_SIZE);
-	}
-
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -257,6 +263,18 @@ tf_em_int_bind(struct tf *tfp,
 		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
 			continue;
 
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] %
+		    TF_SESSION_EM_ENTRY_SIZE != 0) {
+			rc = -ENOMEM;
+			TFP_DRV_LOG(ERR,
+				    "%s, EM Allocation must be in blocks of %d, failure %s\n",
+				    tf_dir_2_str(i),
+				    TF_SESSION_EM_ENTRY_SIZE,
+				    strerror(-rc));
+
+			return rc;
+		}
+
 		db_cfg.rm_db = &em_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
@@ -272,6 +290,28 @@ tf_em_int_bind(struct tf *tfp,
 	if (db_exists)
 		init = 1;
 
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		iparms.rm_db = em_db[i];
+		iparms.db_index = TF_EM_DB_EM_REC;
+		iparms.info = &info;
+
+		rc = tf_rm_get_info(&iparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB get info failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+
+		rc = tf_create_em_pool(i,
+				       iparms.info->entry.stride,
+				       iparms.info->entry.start);
+		/* Logging handled in tf_create_em_pool */
+		if (rc)
+			return rc;
+	}
+
+
 	return 0;
 }
 
@@ -281,7 +321,6 @@ tf_em_int_unbind(struct tf *tfp)
 	int rc;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
-	struct tf_session *session;
 
 	TF_CHECK_PARMS1(tfp);
 
@@ -292,10 +331,8 @@ tf_em_int_unbind(struct tf *tfp)
 		return 0;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	for (i = 0; i < TF_DIR_MAX; i++)
-		tf_free_em_pool(session, i);
+		tf_free_em_pool(i);
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 02d8a4971..7fffb6baf 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -857,12 +857,12 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < size)
+	if (tfp_le_to_cpu_32(resp.size) != size)
 		return -EINVAL;
 
 	tfp_memcpy(data,
 		   &resp.data,
-		   resp.size);
+		   size);
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
@@ -919,7 +919,7 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < data_size)
+	if (tfp_le_to_cpu_32(resp.size) != data_size)
 		return -EINVAL;
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0469b653..e7af9eb84 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -106,7 +106,8 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 	uint16_t cnt = 0;
 
 	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		if ((cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		     cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) &&
 		    reservations[i] > 0)
 			cnt++;
 
@@ -467,7 +468,8 @@ tf_rm_create_db(struct tf *tfp,
 	/* Build the request */
 	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		    parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 			/* Only perform reservation for entries that
 			 * has been requested
 			 */
@@ -529,7 +531,8 @@ tf_rm_create_db(struct tf *tfp,
 		/* Skip any non HCAPI types as we didn't include them
 		 * in the reservation request.
 		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+		    parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 			continue;
 
 		/* If the element didn't request an allocation no need
@@ -551,29 +554,32 @@ tf_rm_create_db(struct tf *tfp,
 			       resv[j].start,
 			       resv[j].stride);
 
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
+			/* Only allocate BA pool if so requested */
+			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
+				/* Create pool */
+				pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+					     sizeof(struct bitalloc));
+				/* Alloc request, alignment already set */
+				cparms.nitems = pool_size;
+				cparms.size = sizeof(struct bitalloc);
+				rc = tfp_calloc(&cparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool alloc failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
+				db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+				rc = ba_init(db[i].pool, resv[j].stride);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool init failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
 			}
 			j++;
 		} else {
@@ -682,6 +688,9 @@ tf_rm_free_db(struct tf *tfp,
 				    tf_device_module_type_2_str(rm_db->type));
 	}
 
+	/* No need to check for configuration type, even if we do not
+	 * have a BA pool we just delete on a null ptr, no harm
+	 */
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -705,8 +714,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -770,8 +778,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -816,8 +823,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -857,9 +863,9 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	memcpy(parms->info,
@@ -880,9 +886,9 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
@@ -903,8 +909,7 @@ tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail silently (no logging), if the pool is not valid there
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5cb68892a..f44fcca70 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -56,12 +56,18 @@ struct tf_rm_new_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	/** No configuration */
+	/**
+	 * No configuration
+	 */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
+	/** HCAPI 'controlled', no RM storage thus the Device Module
+	 *  using the RM can choose to handle storage locally.
+	 */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
+	/** HCAPI 'controlled', uses a Bit Allocator Pool for internal
+	 *  storage in the RM.
+	 */
+	TF_RM_ELEM_CFG_HCAPI_BA,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
 	 * is not controlled by the Host.
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index e4472ed7f..ebee4db8c 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -103,11 +103,6 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
-
-	/**
-	 * EM Pools
-	 */
-	struct stack em_pool[TF_DIR_MAX];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index fc047f8f8..d5bb4eec1 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -43,6 +43,7 @@ tf_tcam_bind(struct tf *tfp,
 {
 	int rc;
 	int i;
+	struct tf_tcam_resources *tcam_cnt;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -53,6 +54,14 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	tcam_cnt = parms->resources->tcam_cnt;
+	if ((tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2) ||
+	    (tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2)) {
+		TFP_DRV_LOG(ERR,
+			    "Number of WC TCAM entries cannot be odd num\n");
+		return -EINVAL;
+	}
+
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -168,6 +177,18 @@ tf_tcam_alloc(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM &&
+	    (parms->idx % 2) != 0) {
+		rc = tf_rm_allocate(&aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Failed tcam, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type);
+			return rc;
+		}
+	}
+
 	parms->idx *= num_slice_per_row;
 
 	return 0;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 25/51] net/bnxt: remove table scope from session
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (23 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
                           ` (25 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Randy Schacher, Venkat Duvvuru

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Remove the table scope data from the session and hold it in the EEM
  module instead (see the sketch below).
- Complete the move of the table scope base and range to RM.
- Fix some error message strings.
- Fix the TCAM logging messages.
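
With the control blocks held in a module level array instead of the
session, looking up a table scope reduces to an RM validity check on the
id followed by a linear scan. A simplified, self-contained sketch of
that pattern (NUM_SCOPE and the omitted RM check stand in for
TF_NUM_TBL_SCOPE and tf_rm_is_allocated()):

	#include <stdint.h>
	#include <stddef.h>

	#define NUM_SCOPE 16	/* stands in for TF_NUM_TBL_SCOPE */

	struct scope_cb {
		uint32_t tbl_scope_id;
		/* ... per scope EEM context ... */
	};

	static struct scope_cb scopes[NUM_SCOPE];

	static struct scope_cb *
	scope_find(uint32_t tbl_scope_id)
	{
		int i;

		/* Real code first verifies with RM that the id was allocated */
		for (i = 0; i < NUM_SCOPE; i++) {
			if (scopes[i].tbl_scope_id == tbl_scope_id)
				return &scopes[i];
		}
		return NULL;	/* unknown table scope */
	}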

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c      |  2 +-
 drivers/net/bnxt/tf_core/tf_em.h        |  1 -
 drivers/net/bnxt/tf_core/tf_em_common.c | 16 +++++++----
 drivers/net/bnxt/tf_core/tf_em_common.h |  5 +---
 drivers/net/bnxt/tf_core/tf_em_host.c   | 38 ++++++++++---------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 12 +++-----
 drivers/net/bnxt/tf_core/tf_session.h   |  3 --
 drivers/net/bnxt/tf_core/tf_tcam.c      |  6 ++--
 8 files changed, 35 insertions(+), 48 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8727900c4..6410843f6 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -573,7 +573,7 @@ tf_free_tcam_entry(struct tf *tfp,
 	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: TCAM allocation failed, rc:%s\n",
+			    "%s: TCAM free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 6bfcbd59e..617b07587 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,7 +9,6 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
-#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index d0d80daeb..e31a63b46 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,6 +29,8 @@
  */
 void *eem_db[TF_DIR_MAX];
 
+#define TF_EEM_DB_TBL_SCOPE 1
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -39,10 +41,12 @@ static uint8_t init;
  */
 static enum tf_mem_type mem_type;
 
+/** Table scope array */
+struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
+tbl_scope_cb_find(uint32_t tbl_scope_id)
 {
 	int i;
 	struct tf_rm_is_allocated_parms parms;
@@ -50,8 +54,8 @@ tbl_scope_cb_find(struct tf_session *session,
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
@@ -60,8 +64,8 @@ tbl_scope_cb_find(struct tf_session *session,
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
+		if (tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &tbl_scopes[i];
 	}
 
 	return NULL;
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index 45699a7c3..bf01df9b8 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -14,8 +14,6 @@
  * Function to search for table scope control block structure
  * with specified table scope ID.
  *
- * [in] session
- *   Session to use for the search of the table scope control block
  * [in] tbl_scope_id
  *   Table scope ID to search for
  *
@@ -23,8 +21,7 @@
  *  Pointer to the found table scope control block struct or NULL if
  *   table scope control block struct not found
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+struct tf_tbl_scope_cb *tbl_scope_cb_find(uint32_t tbl_scope_id);
 
 /**
  * Create and initialize a stack to use for action entries
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 8be39afdd..543edb54a 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,6 +48,9 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
+#define TF_EEM_DB_TBL_SCOPE 1
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
 /**
  * Function to free a page table
@@ -934,14 +937,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_entry(struct tf *tfp,
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb =
-	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
-			  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -957,14 +958,12 @@ tf_em_insert_ext_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_entry(struct tf *tfp,
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb =
-	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
-			  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -981,16 +980,13 @@ tf_em_ext_host_alloc(struct tf *tfp,
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct hcapi_cfa_em_table *em_tables;
-	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 	struct tf_rm_allocate_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -999,8 +995,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 		return rc;
 	}
 
-	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
-	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
 	tbl_scope_cb->index = parms->tbl_scope_id;
 	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
 
@@ -1092,8 +1087,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
 }
@@ -1105,13 +1100,9 @@ tf_em_ext_host_free(struct tf *tfp,
 	int rc = 0;
 	enum tf_dir  dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
 	struct tf_rm_free_parms aparms = { 0 };
 
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Table scope error\n");
@@ -1120,8 +1111,8 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1142,5 +1133,6 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index ee18a0c70..6dd115470 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -63,14 +63,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_sys_entry(struct tf *tfp,
+tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -87,14 +85,12 @@ tf_em_insert_ext_sys_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_sys_entry(struct tf *tfp,
+tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index ebee4db8c..a303fde51 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -100,9 +100,6 @@ struct tf_session {
 
 	/** Device handle */
 	struct tf_dev_info dev;
-
-	/** Table scope array */
-	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index d5bb4eec1..b67159a54 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -287,7 +287,8 @@ tf_tcam_free(struct tf *tfp,
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
@@ -382,7 +383,8 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d set failed, rc:%s",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 26/51] net/bnxt: add external action alloc and free
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (24 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 25/51] net/bnxt: remove table scope from session Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
                           ` (24 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Link external action alloc and free to the new HCAPI interface and
  dispatch external table types separately in the core (see the sketch
  below).
- Add parameter range checking.
- Fix issues with the index allocation check.
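
In the core layer the table entry alloc/set/free paths now dispatch to
separate device ops for external vs. internal table types. A condensed
sketch of the pattern added to tf_alloc_tbl_entry (error logging trimmed
for brevity):

	if (parms->type == TF_TBL_TYPE_EXT) {
		if (dev->ops->tf_dev_alloc_ext_tbl == NULL)
			return -EOPNOTSUPP;
		rc = dev->ops->tf_dev_alloc_ext_tbl(tfp, &aparms);
	} else {
		if (dev->ops->tf_dev_alloc_tbl == NULL)
			return -EOPNOTSUPP;
		rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
	}
	if (rc)
		return rc;	/* allocation in the selected path failed */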

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c       | 163 ++++++++++++++++-------
 drivers/net/bnxt/tf_core/tf_core.h       |   4 -
 drivers/net/bnxt/tf_core/tf_device.h     |  58 ++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   6 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   2 -
 drivers/net/bnxt/tf_core/tf_em.h         |  95 +++++++++++++
 drivers/net/bnxt/tf_core/tf_em_common.c  | 120 ++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  80 ++++++++++-
 drivers/net/bnxt/tf_core/tf_em_system.c  |   6 +
 drivers/net/bnxt/tf_core/tf_identifier.c |   4 +-
 drivers/net/bnxt/tf_core/tf_rm.h         |   5 +
 drivers/net/bnxt/tf_core/tf_tbl.c        |  10 +-
 drivers/net/bnxt/tf_core/tf_tbl.h        |  12 ++
 drivers/net/bnxt/tf_core/tf_tcam.c       |   8 +-
 drivers/net/bnxt/tf_core/tf_util.c       |   4 -
 15 files changed, 499 insertions(+), 78 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6410843f6..45accb0ab 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -617,25 +617,48 @@ tf_alloc_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_alloc_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	aparms.dir = parms->dir;
 	aparms.type = parms->type;
 	aparms.idx = &idx;
-	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table allocation failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	aparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_alloc_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_ext_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: External table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+
+	} else {
+		if (dev->ops->tf_dev_alloc_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	parms->idx = idx;
@@ -677,25 +700,47 @@ tf_free_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_free_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	fparms.dir = parms->dir;
 	fparms.type = parms->type;
 	fparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table free failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	fparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_free_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_ext_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_free_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return 0;
@@ -735,27 +780,49 @@ tf_set_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_set_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	sparms.dir = parms->dir;
 	sparms.type = parms->type;
 	sparms.data = parms->data;
 	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
 	sparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table set failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	sparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_set_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_ext_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_set_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index a7a7bd38a..e898f19a0 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -211,10 +211,6 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
 	/** Wh+/SR Action _Modify L4 Dest Port */
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
 	/** Meter Profiles */
 	TF_TBL_TYPE_METER_PROF,
 	/** Meter Instance */
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 93f3627d4..58b7a4ab2 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -216,6 +216,26 @@ struct tf_dev_ops {
 	int (*tf_dev_alloc_tbl)(struct tf *tfp,
 				struct tf_tbl_alloc_parms *parms);
 
+	/**
+	 * Allocation of an external table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ext_tbl)(struct tf *tfp,
+				    struct tf_tbl_alloc_parms *parms);
+
 	/**
 	 * Free of a table type element.
 	 *
@@ -235,6 +255,25 @@ struct tf_dev_ops {
 	int (*tf_dev_free_tbl)(struct tf *tfp,
 			       struct tf_tbl_free_parms *parms);
 
+	/**
+	 * Free of an external table type element.
+	 *
+	 * This API frees a previously allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ext_tbl)(struct tf *tfp,
+				   struct tf_tbl_free_parms *parms);
+
 	/**
 	 * Searches for the specified table type element in a shadow DB.
 	 *
@@ -276,6 +315,25 @@ struct tf_dev_ops {
 	int (*tf_dev_set_tbl)(struct tf *tfp,
 			      struct tf_tbl_set_parms *parms);
 
+	/**
+	 * Sets the specified external table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_ext_tbl)(struct tf *tfp,
+				  struct tf_tbl_set_parms *parms);
+
 	/**
 	 * Retrieves the specified table type element.
 	 *
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 1eaf18212..9a3230787 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -85,10 +85,13 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = NULL,
 	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_ext_tbl = NULL,
 	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_ext_tbl = NULL,
 	.tf_dev_free_tbl = NULL,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
+	.tf_dev_set_ext_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
 	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
@@ -113,9 +116,12 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_alloc_ext_tbl = tf_tbl_ext_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 8fae18012..298e100f3 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -47,8 +47,6 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 617b07587..39a216341 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -456,4 +456,99 @@ int tf_em_ext_common_free(struct tf *tfp,
  */
 int tf_em_ext_common_alloc(struct tf *tfp,
 			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API dispatches the set to the host or system backed
+ * implementation based on the configured EEM memory type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element in host memory.
+ *
+ * This API sets the specified element data in the host memory
+ * backed (EEM) record table.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element in system memory.
+ *
+ * This API sets the specified element data for system memory
+ * backed EEM.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_system_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
+
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index e31a63b46..39a8412b3 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,8 +29,6 @@
  */
 void *eem_db[TF_DIR_MAX];
 
-#define TF_EEM_DB_TBL_SCOPE 1
-
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -54,13 +52,13 @@ tbl_scope_cb_find(uint32_t tbl_scope_id)
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
 
-	if (i < 0 || !allocated)
+	if (i < 0 || allocated != TF_RM_ALLOCATED_ENTRY_IN_USE)
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
@@ -158,6 +156,111 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type);
+		return rc;
+	}
+
+	*parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms)
+{
+	int rc = 0;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
 uint32_t
 tf_em_get_key_mask(int num_entries)
 {
@@ -273,6 +376,15 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_tbl_ext_host_set(tfp, parms);
+	else
+		return tf_tbl_ext_system_set(tfp, parms);
+}
+
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 543edb54a..d7c147a15 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,7 +48,6 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
-#define TF_EEM_DB_TBL_SCOPE 1
 
 extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
@@ -986,7 +985,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -1087,7 +1086,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
@@ -1111,7 +1110,7 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
@@ -1133,6 +1132,77 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
-	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
+	return rc;
+}
+
+/**
+ * Sets the specified external table type element in host memory.
+ *
+ * This API sets the specified element data in host memory.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 6dd115470..10768df03 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -112,3 +112,9 @@ tf_em_ext_system_free(struct tf *tfp __rte_unused,
 {
 	return 0;
 }
+
+int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
+			  struct tf_tbl_set_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 211371081..219839272 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -159,13 +159,13 @@ tf_ident_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->id);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index f44fcca70..fd044801f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -12,6 +12,11 @@
 
 struct tf;
 
+/** RM return codes */
+#define TF_RM_ALLOCATED_ENTRY_FREE        0
+#define TF_RM_ALLOCATED_ENTRY_IN_USE      1
+#define TF_RM_ALLOCATED_NO_ENTRY_FOUND   -1
+
 /**
  * The Resource Manager (RM) module provides basic DB handling for
  * internal resources. These resources exists within the actual device
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 05e866dc6..3a3277329 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -172,13 +172,13 @@ tf_tbl_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -233,7 +233,7 @@ tf_tbl_set(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -301,7 +301,7 @@ tf_tbl_get(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -374,7 +374,7 @@ tf_tbl_bulk_get(struct tf *tfp,
 		if (rc)
 			return rc;
 
-		if (!allocated) {
+		if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 			TFP_DRV_LOG(ERR,
 				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index eb560ffa7..2a10b47ce 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -83,6 +83,10 @@ struct tf_tbl_alloc_parms {
 	 * [in] Type of the allocation
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
@@ -101,6 +105,10 @@ struct tf_tbl_free_parms {
 	 * [in] Type of the allocation type
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [in] Index to free
 	 */
@@ -168,6 +176,10 @@ struct tf_tbl_set_parms {
 	 * [in] Type of object to set
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [in] Entry data
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b67159a54..b1092cd9d 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -252,13 +252,13 @@ tf_tcam_free(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -362,13 +362,13 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry is not allocated, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Convert TF type to HCAPI RM type */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 5472a9aac..85f6e25f4 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -92,10 +92,6 @@ tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 		return "NAT IPv4 Source";
 	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
 		return "NAT IPv4 Destination";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-		return "NAT IPv6 Source";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-		return "NAT IPv6 Destination";
 	case TF_TBL_TYPE_METER_PROF:
 		return "Meter Profile";
 	case TF_TBL_TYPE_METER_INST:
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 27/51] net/bnxt: align CFA resources with RM
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (25 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
                           ` (23 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher, Venkat Duvvuru

From: Randy Schacher <stuart.schacher@broadcom.com>

- Align HCAPI resource types with the Resource Manager
- Clean up unnecessary debug messages
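
A minimal, self-contained illustration (not the driver source) of the
compile-time gating this patch uses to silence the RM/QCAPS debug
printf()s: the dumps stay in the code but only compile in when
TF_RM_MSG_DEBUG is set to 1.

  #include <stdio.h>

  #define TF_RM_MSG_DEBUG 0  /* flip to 1 to re-enable the dumps */

  static void dump_resc_entry(int type, int start, int stride)
  {
  #if (TF_RM_MSG_DEBUG == 1)
      printf("type: %d(0x%x) %d %d\n", type, type, start, stride);
  #else
      (void)type;
      (void)start;
      (void)stride;
  #endif
  }

  int main(void)
  {
      dump_resc_entry(0x1c, 0, 64);  /* silent unless the define is 1 */
      return 0;
  }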

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 250 +++++++++---------
 drivers/net/bnxt/tf_core/tf_identifier.c      |   3 +-
 drivers/net/bnxt/tf_core/tf_msg.c             |  37 ++-
 drivers/net/bnxt/tf_core/tf_rm.c              |  21 +-
 drivers/net/bnxt/tf_core/tf_tbl.c             |   3 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |  28 +-
 6 files changed, 197 insertions(+), 145 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 6e79facec..6d6651fde 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -48,232 +48,246 @@
 #define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
 #define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
-/* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
+/* EPOCH 0 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH0          0xfUL
+/* EPOCH 1 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH1          0x10UL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x11UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x12UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x13UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x14UL
 /* Link Aggrigation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x15UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x16UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P58_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6         0x6UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT           0x7UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT           0x8UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4          0x9UL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
-/* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
-/* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4          0xaUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER               0xbUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE          0xcUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION         0xdUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION     0xeUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P58_EXT_FORMAT_0_ACTION 0xfUL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_1_ACTION     0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION     0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION     0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION     0x13UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_5_ACTION     0x14UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_6_ACTION     0x15UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM        0x16UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP       0x17UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC           0x18UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM           0x19UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID     0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P58_EM_REC              0x1cUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM             0x1dUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF          0x1eUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR              0x1fUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM             0x20UL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB              0x21UL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB              0x22UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
-#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM            0x23UL
+#define CFA_RESOURCE_TYPE_P58_LAST               CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P45_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P45_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P45_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM             0x21UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM            0x22UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE           0x23UL
+#define CFA_RESOURCE_TYPE_P45_LAST               CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P4_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P4_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P4_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM             0x21UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE           0x22UL
+#define CFA_RESOURCE_TYPE_P4_LAST               CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 219839272..2cc43b40f 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -59,7 +59,8 @@ tf_ident_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Identifier - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Identifier - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 7fffb6baf..659065de3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,6 +18,9 @@
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
+/* Logging defines */
+#define TF_RM_MSG_DEBUG  0
+
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -215,7 +218,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -225,31 +228,39 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nQCAPS\n");
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("type: %d(0x%x) %d %d\n",
 		       query[i].type,
 		       query[i].type,
 		       query[i].min,
 		       query[i].max);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	}
 
 	*resv_strategy = resp.flags &
 	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
 
+cleanup:
 	tf_msg_free_dma_buf(&qcaps_buf);
 
 	return rc;
@@ -293,8 +304,10 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	dma_size = size * sizeof(struct tf_rm_resc_entry);
 	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
-	if (rc)
+	if (rc) {
+		tf_msg_free_dma_buf(&req_buf);
 		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
@@ -320,7 +333,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -330,11 +343,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nRESV\n");
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
@@ -343,14 +359,17 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
 		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
 		       resv[i].type,
 		       resv[i].type,
 		       resv[i].start,
 		       resv[i].stride);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	}
 
+cleanup:
 	tf_msg_free_dma_buf(&req_buf);
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -412,8 +431,6 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	parms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp, &parms);
-	if (rc)
-		return rc;
 
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -434,7 +451,7 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
-	uint32_t flags;
+	uint16_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
@@ -480,7 +497,7 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
+	uint16_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
@@ -726,8 +743,6 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
-	if (rc)
-		goto cleanup;
 
 cleanup:
 	tf_msg_free_dma_buf(&buf);
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e7af9eb84..30313e2ea 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -17,6 +17,9 @@
 #include "tfp.h"
 #include "tf_msg.h"
 
+/* Logging defines */
+#define TF_RM_DEBUG  0
+
 /**
  * Generic RM Element data type that an RM DB is build upon.
  */
@@ -120,16 +123,11 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
 		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
+				"%s, %s, %s allocation of %d not supported\n",
 				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-
+				tf_device_module_type_subtype_2_str(type, i),
+				reservations[i]);
 		}
 	}
 
@@ -549,11 +547,6 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
 			/* Only allocate BA pool if so requested */
 			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 				/* Create pool */
@@ -603,10 +596,12 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+#if (TF_RM_DEBUG == 1)
 	printf("%s: type:%d num_entries:%d\n",
 	       tf_dir_2_str(parms->dir),
 	       parms->type,
 	       i);
+#endif /* (TF_RM_DEBUG == 1) */
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 3a3277329..7d4daaf2d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -74,7 +74,8 @@ tf_tbl_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Table Type - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Table Type - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b1092cd9d..1c48b5363 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -81,7 +81,8 @@ tf_tcam_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("TCAM - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "TCAM - initialized\n");
 
 	return 0;
 }
@@ -275,6 +276,31 @@ tf_tcam_free(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		int i;
+
+		for (i = -1; i < 3; i += 3) {
+			aparms.index += i;
+			rc = tf_rm_is_allocated(&aparms);
+			if (rc)
+				return rc;
+
+			if (allocated == TF_RM_ALLOCATED_ENTRY_IN_USE) {
+				/* Free requested element */
+				fparms.index = aparms.index;
+				rc = tf_rm_free(&fparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+						    "%s: Free failed, type:%d, index:%d\n",
+						    tf_dir_2_str(parms->dir),
+						    parms->type,
+						    fparms.index);
+					return rc;
+				}
+			}
+		}
+	}
+
 	/* Convert TF type to HCAPI RM type */
 	hparms.rm_db = tcam_db[parms->dir];
 	hparms.db_index = parms->type;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 28/51] net/bnxt: implement IF tables set and get
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (26 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
                           ` (22 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Implement set/get for PROF_SPIF_CTXT, LKUP_PF_DFLT_ARP,
  PROF_PF_ERR_ARP with tunneled HWRM messages
- Add IF table for PROF_PARIF_DFLT_ARP
- Fix the page size offset handling in the HCAPI code
- Fix the entry offset calculation
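
A standalone sketch of the offset fix in the HCAPI key code: a
record's byte offset into an EEM table is split into a page index
(offset / page_size), used to look up the page base address, and a
residual byte offset into that page (offset % page_size). The 2 MB
page size below is an example value only.

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SIZE (1u << 21)  /* 2 MB pages, example value */

  int main(void)
  {
      /* arbitrary record offset: page 5, 4 KB into the page */
      uint32_t record_offset = (5u * PAGE_SIZE) + 4096u;
      uint32_t page = record_offset / PAGE_SIZE;     /* page to map */
      uint32_t in_page = record_offset % PAGE_SIZE;  /* offset within page */

      printf("page=%u offset_in_page=%u\n", page, in_page);
      return 0;
  }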

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h     |  53 +++++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h  |  12 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c    |   8 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h    |  18 +-
 drivers/net/bnxt/meson.build             |   2 +-
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  63 +++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 116 +++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       | 104 ++++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  21 ++
 drivers/net/bnxt/tf_core/tf_device.h     |  39 ++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   5 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |  10 +
 drivers/net/bnxt/tf_core/tf_em_common.c  |   5 +-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  12 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |   3 +-
 drivers/net/bnxt/tf_core/tf_if_tbl.c     | 178 +++++++++++++++++
 drivers/net/bnxt/tf_core/tf_if_tbl.h     | 236 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 186 +++++++++++++++---
 drivers/net/bnxt/tf_core/tf_msg.h        |  30 +++
 drivers/net/bnxt/tf_core/tf_session.c    |  14 +-
 21 files changed, 1060 insertions(+), 57 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
index c30e4f49c..3243b3f2b 100644
--- a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -127,6 +127,11 @@ const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
 	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS},
 };
 
 const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
@@ -247,4 +252,52 @@ const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
 	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
 
 };
+
+const struct hcapi_cfa_field cfa_p40_mirror_tbl_layout[] = {
+	{CFA_P40_MIRROR_TBL_SP_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS,
+	 CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_COPY_BITPOS,
+	 CFA_P40_MIRROR_TBL_COPY_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_EN_BITPOS,
+	 CFA_P40_MIRROR_TBL_EN_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_AR_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS},
+};
+
+/* P45 Defines */
+
+const struct hcapi_cfa_field cfa_p45_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
 #endif /* _CFA_P40_TBL_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
index ea8d99d01..53a284887 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -35,10 +35,6 @@
 
 #define CFA_GLOBAL_CFG_DATA_SZ (100)
 
-#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
-#define SUPPORT_CFA_HW_ALL (1)
-#endif
-
 #include "hcapi_cfa_p4.h"
 #define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
 #define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
@@ -121,6 +117,8 @@ struct hcapi_cfa_layout {
 	const struct hcapi_cfa_field *field_array;
 	/** [out] number of HW field entries in the HW layout field array */
 	uint32_t array_sz;
+	/** [out] layout_id - layout id associated with the layout */
+	uint16_t layout_id;
 };
 
 /**
@@ -247,6 +245,8 @@ struct hcapi_cfa_key_tbl {
 	 *  applicable for newer chip
 	 */
 	uint8_t *base1;
+	/** [in] Page size for EEM tables */
+	uint32_t page_size;
 };
 
 /**
@@ -267,7 +267,7 @@ struct hcapi_cfa_key_obj {
 struct hcapi_cfa_key_data {
 	/** [in] For on-chip key table, it is the offset in unit of smallest
 	 *  key. For off-chip key table, it is the byte offset relative
-	 *  to the key record memory base.
+	 *  to the key record memory base and adjusted for page and entry size.
 	 */
 	uint32_t offset;
 	/** [in] HW key data buffer pointer */
@@ -668,5 +668,5 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 			struct hcapi_cfa_key_loc *key_loc);
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset);
+			      uint32_t page);
 #endif /* HCAPI_CFA_DEFS_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index 42b37da0f..a01bbdbbb 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -13,7 +13,6 @@
 #include "hcapi_cfa_defs.h"
 
 #define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
-#define TF_EM_PAGE_SIZE (1 << 21)
 uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
 uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
 bool hcapi_cfa_lkup_init;
@@ -199,10 +198,9 @@ static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
 
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset)
+			      uint32_t page)
 {
 	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
 	uint64_t addr;
 
 	if (mem == NULL)
@@ -362,7 +360,9 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 	op->hw.base_addr =
 		hcapi_get_table_page((struct hcapi_cfa_em_table *)
 				     key_tbl->base0,
-				     key_obj->offset);
+				     key_obj->offset / key_tbl->page_size);
+	/* Offset is adjusted to be the offset into the page */
+	key_obj->offset = key_obj->offset % key_tbl->page_size;
 
 	if (op->hw.base_addr == 0)
 		return -1;
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
index 0661d6363..c6113707f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -21,6 +21,10 @@ enum cfa_p4_tbl_id {
 	CFA_P4_TBL_WC_TCAM_REMAP,
 	CFA_P4_TBL_VEB_TCAM,
 	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT,
+	CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR,
+	CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR,
+	CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR,
 	CFA_P4_TBL_MAX
 };
 
@@ -333,17 +337,29 @@ enum cfa_p4_action_sram_entry_type {
 	 */
 
 	/** SRAM Action Record */
-	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FULL_ACTION,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_0_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_1_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_2_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_3_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_4_ACTION,
+
 	/** SRAM Action Encap 8 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
 	/** SRAM Action Encap 16 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
 	/** SRAM Action Encap 64 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_SRC,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_DEST,
+
 	/** SRAM Action Modify IPv4 Source */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
 	/** SRAM Action Modify IPv4 Destination */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+
 	/** SRAM Action Source Properties SMAC */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
 	/** SRAM Action Source Properties SMAC IPv4 */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 7f3ec6204..f25a9448d 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,7 +43,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_shadow_tcam.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm.c',
+	'tf_core/tf_if_tbl.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 9ba60e1c2..1924bef02 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -25,6 +25,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
@@ -33,3 +34,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 26836e488..32f152314 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -16,7 +16,9 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
+	HWRM_TFT_IF_TBL_SET = 827,
+	HWRM_TFT_IF_TBL_GET = 828,
+	TF_SUBTYPE_LAST = HWRM_TFT_IF_TBL_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -46,7 +48,17 @@ typedef enum tf_subtype {
 /* WC DMA Address Type */
 #define TF_DEV_DATA_TYPE_TF_WC_DMA_ADDR			0x30d0UL
 /* WC Entry */
-#define TF_DEV_DATA_TYPE_TF_WC_ENTRY			0x30d1UL
+#define TF_DEV_DATA_TYPE_TF_WC_ENTRY				0x30d1UL
+/* SPIF DFLT L2 CTXT Entry */
+#define TF_DEV_DATA_TYPE_SPIF_DFLT_L2_CTXT		0x3131UL
+/* PARIF DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_DFLT_ACT_REC		0x3132UL
+/* PARIF ERR DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_ERR_DFLT_ACT_REC		0x3133UL
+/* ILT Entry */
+#define TF_DEV_DATA_TYPE_ILT				0x3134UL
+/* VNIC SVIF entry */
+#define TF_DEV_DATA_TYPE_VNIC_SVIF			0x3135UL
 /* Action Data */
 #define TF_DEV_DATA_TYPE_TF_ACTION_DATA			0x3170UL
 #define TF_DEV_DATA_TYPE_LAST   TF_DEV_DATA_TYPE_TF_ACTION_DATA
@@ -56,6 +68,9 @@ typedef enum tf_subtype {
 
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
+struct tf_if_tbl_set_input;
+struct tf_if_tbl_get_input;
+struct tf_if_tbl_get_output;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
@@ -85,4 +100,48 @@ typedef struct tf_tbl_type_bulk_get_output {
 	uint16_t			 size;
 } tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
+/* Input params for if tbl set */
+typedef struct tf_if_tbl_set_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the operation applies to RX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the operation applies to TX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* index of table entry */
+	uint16_t			 idx;
+	/* size of the data to write to the table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* data to write into table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_set_input_t, *ptf_if_tbl_set_input_t;
+
+/* Input params for if tbl get */
+typedef struct tf_if_tbl_get_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* size of the data to get from the table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* index of table entry */
+	uint16_t			 idx;
+} tf_if_tbl_get_input_t, *ptf_if_tbl_get_input_t;
+
+/* output params for if tbl get */
+typedef struct tf_if_tbl_get_output {
+	/* Value read from table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_get_output_t, *ptf_if_tbl_get_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 45accb0ab..a980a2056 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -1039,3 +1039,119 @@ tf_free_tbl_scope(struct tf *tfp,
 
 	return rc;
 }
+
+int
+tf_set_if_tbl_entry(struct tf *tfp,
+		    struct tf_set_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_set_parms sparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.idx = parms->idx;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_set_if_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_get_if_tbl_entry(struct tf *tfp,
+		    struct tf_get_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_get_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.idx = parms->idx;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_get_if_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e898f19a0..e3d46bd45 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1556,4 +1556,108 @@ int tf_delete_em_entry(struct tf *tfp,
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
+/**
+ * @page if_tbl Interface Table Access
+ *
+ * @ref tf_set_if_tbl_entry
+ *
+ * @ref tf_get_if_tbl_entry
+ */
+/**
+ * Enumeration of TruFlow interface table types.
+ */
+enum tf_if_tbl_type {
+	/** Default Profile L2 Context Entry */
+	TF_IF_TBL_TYPE_PROF_SPIF_DFLT_L2_CTXT,
+	/** Default Profile TCAM/Lookup Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	/** Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	/** Default Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_LKUP_PARIF_DFLT_ACT_REC_PTR,
+	/** SR2 Ingress lookup table */
+	TF_IF_TBL_TYPE_ILT,
+	/** SR2 VNIC/SVIF Table */
+	TF_IF_TBL_TYPE_VNIC_SVIF,
+	TF_IF_TBL_TYPE_MAX
+};
+
+/**
+ * tf_set_if_tbl_entry parameter definition
+ */
+struct tf_set_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * set interface table entry
+ *
+ * Used to set an interface table entry. This API manages tables indexed
+ * by SVIF/SPIF/PARIF interfaces. In the current implementation only the
+ * value is set.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_if_tbl_entry(struct tf *tfp,
+			struct tf_set_if_tbl_entry_parms *parms);
+
+/**
+ * tf_get_if_tbl_entry parameter definition
+ */
+struct tf_get_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of table to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * get interface table entry
+ *
+ * Used to retrieve an interface table entry.
+ *
+ * Reads the interface table entry value
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_if_tbl_entry(struct tf *tfp,
+			struct tf_get_if_tbl_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
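
As a usage illustration of the two APIs declared above, a caller that programs an RX PARIF default action record pointer and reads it back might look like the sketch below; the function name, index and data value are assumptions of the example, not part of the patch set:

	/* Hypothetical caller of the new IF table APIs. */
	static int example_set_get_parif_arp(struct tf *tfp, uint32_t parif_idx)
	{
		uint32_t value[2] = { 0x1234, 0 };	/* example action record pointer */
		uint32_t readback[2] = { 0, 0 };
		struct tf_set_if_tbl_entry_parms sparms = { 0 };
		struct tf_get_if_tbl_entry_parms gparms = { 0 };
		int rc;

		/* Program the RX PARIF default action record pointer */
		sparms.dir = TF_DIR_RX;
		sparms.type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR;
		sparms.idx = parif_idx;
		sparms.data = value;
		sparms.data_sz_in_bytes = sizeof(value);
		rc = tf_set_if_tbl_entry(tfp, &sparms);
		if (rc)
			return rc;

		/* Read the same entry back */
		gparms.dir = TF_DIR_RX;
		gparms.type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR;
		gparms.idx = parif_idx;
		gparms.data = readback;
		gparms.data_sz_in_bytes = sizeof(readback);
		return tf_get_if_tbl_entry(tfp, &gparms);
	}
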
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 20b0c5948..a3073c826 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -44,6 +44,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
+	struct tf_if_tbl_cfg_parms if_tbl_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -114,6 +115,19 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * IF_TBL
+	 */
+	if_tbl_cfg.num_elements = TF_IF_TBL_TYPE_MAX;
+	if_tbl_cfg.cfg = tf_if_tbl_p4;
+	if_tbl_cfg.shadow_copy = shadow_copy;
+	rc = tf_if_tbl_bind(tfp, &if_tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "IF Table initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -186,6 +200,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_if_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, IF Table Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 58b7a4ab2..5a0943ad7 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl.h"
 #include "tf_tcam.h"
+#include "tf_if_tbl.h"
 
 struct tf;
 struct tf_session;
@@ -567,6 +568,44 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
 				     struct tf_free_tbl_scope_parms *parms);
+
+	/**
+	 * Sets the specified interface table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to interface table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_set_parms *parms);
+
+	/**
+	 * Retrieves the specified interface table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_get_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9a3230787..2dc34b853 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
+#include "tf_if_tbl.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -105,6 +106,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_delete_ext_em_entry = NULL,
 	.tf_dev_alloc_tbl_scope = NULL,
 	.tf_dev_free_tbl_scope = NULL,
+	.tf_dev_set_if_tbl = NULL,
+	.tf_dev_get_if_tbl = NULL,
 };
 
 /**
@@ -135,4 +138,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
 	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
+	.tf_dev_set_if_tbl = tf_if_tbl_set,
+	.tf_dev_get_if_tbl = tf_if_tbl_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 298e100f3..3b03a7c4e 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -10,6 +10,7 @@
 
 #include "tf_core.h"
 #include "tf_rm.h"
+#include "tf_if_tbl.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -86,4 +87,13 @@ struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 };
 
+struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 39a8412b3..23a7fc9c2 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -337,11 +337,10 @@ tf_em_ext_common_bind(struct tf *tfp,
 		db_exists = 1;
 	}
 
-	if (db_exists) {
-		mem_type = parms->mem_type;
+	if (db_exists)
 		init = 1;
-	}
 
+	mem_type = parms->mem_type;
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index d7c147a15..2626a59fe 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -831,7 +831,8 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	op.opcode = HCAPI_CFA_HWOPS_ADD;
 	key_tbl.base0 = (uint8_t *)
 		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = (uint8_t *)&key_entry;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -847,8 +848,7 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 		key_tbl.base0 = (uint8_t *)
 		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 
 		rc = hcapi_cfa_key_hw_op(&op,
 					 &key_tbl,
@@ -914,7 +914,8 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
 							  TF_KEY0_TABLE :
 							  TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = NULL;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -1195,7 +1196,8 @@ int tf_tbl_ext_host_set(struct tf *tfp,
 	op.opcode = HCAPI_CFA_HWOPS_PUT;
 	key_tbl.base0 =
 		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
 	key_obj.data = parms->data;
 	key_obj.size = parms->data_sz_in_bytes;
 
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 2cc43b40f..90aeaa468 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -68,7 +68,7 @@ tf_ident_bind(struct tf *tfp,
 int
 tf_ident_unbind(struct tf *tfp)
 {
-	int rc;
+	int rc = 0;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
 
@@ -89,7 +89,6 @@ tf_ident_unbind(struct tf *tfp)
 			TFP_DRV_LOG(ERR,
 				    "rm free failed on unbind\n");
 		}
-
 		ident_db[i] = NULL;
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.c b/drivers/net/bnxt/tf_core/tf_if_tbl.c
new file mode 100644
index 000000000..dc73ba2d0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.c
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_if_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+
+/**
+ * IF Table DBs.
+ */
+static void *if_tbl_db[TF_DIR_MAX];
+
+/**
+ * IF Table Shadow DBs
+ */
+/* static void *shadow_if_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+/**
+ * Convert a TF interface table type to its HCAPI/HWRM type.
+ *
+ * [in/out] parms
+ *   Pointer to the get HCAPI type parameters; on success the
+ *   hcapi_type referenced by parms is updated.
+ *
+ * Returns:
+ *    0        - Success
+ *   -ENOTSUP  - Type not supported
+ */
+static int
+tf_if_tbl_get_hcapi_type(struct tf_if_tbl_get_hcapi_parms *parms)
+{
+	struct tf_if_tbl_cfg *tbl_cfg;
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	tbl_cfg = (struct tf_if_tbl_cfg *)parms->tbl_db;
+	cfg_type = tbl_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_IF_TBL_CFG)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = tbl_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_if_tbl_bind(struct tf *tfp __rte_unused,
+	       struct tf_if_tbl_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "IF TBL DB already initialized\n");
+		return -EINVAL;
+	}
+
+	if_tbl_db[TF_DIR_RX] = parms->cfg;
+	if_tbl_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "IF Table - initialized\n");
+
+	return 0;
+}
+
+int
+tf_if_tbl_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	if_tbl_db[TF_DIR_RX] = NULL;
+	if_tbl_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_if_tbl_set(struct tf *tfp,
+	      struct tf_if_tbl_set_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	rc = tf_msg_set_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
+int
+tf_if_tbl_get(struct tf *tfp,
+	      struct tf_if_tbl_get_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	/* Get the entry */
+	rc = tf_msg_get_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
new file mode 100644
index 000000000..54d4c37f5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_IF_TBL_TYPE_H_
+#define TF_IF_TBL_TYPE_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/*
+ * This is the constant used to define invalid CFA
+ * types across all devices.
+ */
+#define CFA_IF_TBL_TYPE_INVALID 65535
+
+struct tf;
+
+/**
+ * The IF Table module provides processing of Internal TF interface table types.
+ */
+
+/**
+ * IF table configuration enumeration.
+ */
+enum tf_if_tbl_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_IF_TBL_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_IF_TBL_CFG,
+};
+
+/**
+ * IF table configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI type.
+ */
+struct tf_if_tbl_cfg {
+	/**
+	 * IF table config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_if_tbl_get_hcapi_parms {
+	/**
+	 * [in] IF Tbl DB Handle
+	 */
+	void *tbl_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_if_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_if_tbl_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_if_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+};
+
+/**
+ * IF Table set parameters
+ */
+struct tf_if_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * IF Table get parameters
+ */
+struct tf_if_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [out] Entry size
+	 * [in] Entry size
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page if_tbl Interface Table
+ *
+ * @ref tf_if_tbl_bind
+ *
+ * @ref tf_if_tbl_unbind
+ *
+ * @ref tf_if_tbl_set
+ *
+ * @ref tf_if_tbl_get
+ */
+/**
+ * Initializes the Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_bind(struct tf *tfp,
+		   struct tf_if_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_unbind(struct tf *tfp);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Interface Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_set(struct tf *tfp,
+		  struct tf_if_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_get(struct tf *tfp,
+		  struct tf_if_tbl_get_parms *parms);
+
+#endif /* TF_IF_TBL_TYPE_H_ */
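
Putting the module API above together, the expected call order from a device implementation is bind once with a per-device config array, then set/get as needed, then unbind at teardown. A minimal sketch follows (essentially what tf_dev_bind_p4() wires up with tf_if_tbl_p4; the example_* names are placeholders):

	/* Hypothetical device-side bind of the IF table module. */
	static struct tf_if_tbl_cfg example_if_tbl_cfg[TF_IF_TBL_TYPE_MAX] = {
		{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT },
		{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR },
		{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR },
		{ TF_IF_TBL_CFG, CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR },
		{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID },
		{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
	};

	static int example_if_tbl_module_init(struct tf *tfp)
	{
		struct tf_if_tbl_cfg_parms cfg_parms = { 0 };

		cfg_parms.num_elements = TF_IF_TBL_TYPE_MAX;
		cfg_parms.cfg = example_if_tbl_cfg;
		cfg_parms.shadow_copy = false;

		/* Both directions share the same config array */
		return tf_if_tbl_bind(tfp, &cfg_parms);
	}
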
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 659065de3..6600a14c8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -125,12 +125,19 @@ tf_msg_session_close(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_close_input req = { 0 };
 	struct hwrm_tf_session_close_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_CLOSE;
 	parms.req_data = (uint32_t *)&req;
@@ -150,12 +157,19 @@ tf_msg_session_qcfg(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_QCFG,
 	parms.req_data = (uint32_t *)&req;
@@ -448,13 +462,22 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_insert_input req = { 0 };
 	struct hwrm_tf_em_insert_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
 	uint16_t flags;
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	tfp_memcpy(req.em_key,
 		   em_parms->key,
 		   ((em_parms->key_sz_in_bits + 7) / 8));
@@ -498,11 +521,19 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
 	uint16_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
@@ -789,21 +820,19 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_set_input req = { 0 };
 	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
@@ -840,21 +869,19 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_get_input req = { 0 };
 	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
@@ -897,22 +924,20 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.start_index = tfp_cpu_to_le_32(starting_idx);
@@ -939,3 +964,102 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+int
+tf_msg_get_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_get_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_get_input_t req = { 0 };
+	tf_if_tbl_get_output_t resp;
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_32(params->data_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_IF_TBL_GET,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	if (parms.tf_resp_code != 0)
+		return tfp_le_to_cpu_32(parms.tf_resp_code);
+
+	tfp_memcpy(&params->data[0], resp.data, params->data_sz_in_bytes);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_set_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_set_input_t req = { 0 };
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_32(params->data_sz_in_bytes);
+	tfp_memcpy(&req.data[0], params->data, params->data_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_IF_TBL_SET,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 7432873d7..37f291016 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -428,4 +428,34 @@ int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			      uint16_t entry_sz_in_bytes,
 			      uint64_t physical_mem_addr);
 
+/**
+ * Sends a set message of an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table set parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_set_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_set_parms *params);
+
+/**
+ * Sends a get message of an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table get parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_get_parms *params);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index b08d06306..529ea5083 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -70,14 +70,24 @@ tf_session_open_session(struct tf *tfp,
 		goto cleanup;
 	}
 	tfp->session->core_data = cparms.mem_va;
+	session_id = &parms->open_cfg->session_id;
+
+	/* Update Session Info, which is what is visible to the caller */
+	tfp->session->ver.major = 0;
+	tfp->session->ver.minor = 0;
+	tfp->session->ver.update = 0;
 
-	/* Initialize Session and Device */
+	tfp->session->session_id.internal.domain = session_id->internal.domain;
+	tfp->session->session_id.internal.bus = session_id->internal.bus;
+	tfp->session->session_id.internal.device = session_id->internal.device;
+	tfp->session->session_id.internal.fw_session_id = fw_session_id;
+
+	/* Initialize Session and Device, which is private */
 	session = (struct tf_session *)tfp->session->core_data;
 	session->ver.major = 0;
 	session->ver.minor = 0;
 	session->ver.update = 0;
 
-	session_id = &parms->open_cfg->session_id;
 	session->session_id.internal.domain = session_id->internal.domain;
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v3 29/51] net/bnxt: add TF register and unregister
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (27 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
                           ` (21 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TF register/unregister support. The session now keeps a list of
  session clients to track the ctrl-channels/functions using it.
- Add supporting code to the tfp layer

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_core/Makefile     |   1 +
 drivers/net/bnxt/tf_core/ll.c         |  52 +++
 drivers/net/bnxt/tf_core/ll.h         |  46 +++
 drivers/net/bnxt/tf_core/tf_core.c    |  26 +-
 drivers/net/bnxt/tf_core/tf_core.h    | 105 +++--
 drivers/net/bnxt/tf_core/tf_msg.c     |  84 +++-
 drivers/net/bnxt/tf_core/tf_msg.h     |  42 +-
 drivers/net/bnxt/tf_core/tf_rm.c      |   2 +-
 drivers/net/bnxt/tf_core/tf_session.c | 569 ++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.h | 201 ++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.c     |   2 +
 drivers/net/bnxt/tf_core/tf_tcam.c    |   8 +-
 drivers/net/bnxt/tf_core/tfp.c        |  17 +
 drivers/net/bnxt/tf_core/tfp.h        |  15 +
 15 files changed, 1075 insertions(+), 96 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index f25a9448d..54564e02e 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -44,6 +44,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
+	'tf_core/ll.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 1924bef02..6210bc70e 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -8,6 +8,7 @@
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/ll.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/ll.c b/drivers/net/bnxt/tf_core/ll.c
new file mode 100644
index 000000000..6f58662f5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.c
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Functions */
+
+#include <stdio.h>
+#include "ll.h"
+
+/* init linked list */
+void ll_init(struct ll *ll)
+{
+	ll->head = NULL;
+	ll->tail = NULL;
+}
+
+/* insert entry in linked list */
+void ll_insert(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == NULL) {
+		ll->head = entry;
+		ll->tail = entry;
+		entry->next = NULL;
+		entry->prev = NULL;
+	} else {
+		entry->next = ll->head;
+		entry->prev = NULL;
+		entry->next->prev = entry;
+		ll->head = entry;
+	}
+}
+
+/* delete entry from linked list */
+void ll_delete(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == entry && ll->tail == entry) {
+		ll->head = NULL;
+		ll->tail = NULL;
+	} else if (ll->head == entry) {
+		ll->head = entry->next;
+		ll->head->prev = NULL;
+	} else if (ll->tail == entry) {
+		ll->tail = entry->prev;
+		ll->tail->next = NULL;
+	} else {
+		entry->prev->next = entry->next;
+		entry->next->prev = entry->prev;
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/ll.h b/drivers/net/bnxt/tf_core/ll.h
new file mode 100644
index 000000000..d70917850
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Header File */
+
+#ifndef _LL_H_
+#define _LL_H_
+
+/* linked list entry */
+struct ll_entry {
+	struct ll_entry *prev;
+	struct ll_entry *next;
+};
+
+/* linked list */
+struct ll {
+	struct ll_entry *head;
+	struct ll_entry *tail;
+};
+
+/**
+ * Linked list initialization.
+ *
+ * [in] ll, linked list to be initialized
+ */
+void ll_init(struct ll *ll);
+
+/**
+ * Linked list insert
+ *
+ * [in] ll, linked list where element is inserted
+ * [in] entry, entry to be added
+ */
+void ll_insert(struct ll *ll, struct ll_entry *entry);
+
+/**
+ * Linked list delete
+ *
+ * [in] ll, linked list where element is removed
+ * [in] entry, entry to be deleted
+ */
+void ll_delete(struct ll *ll, struct ll_entry *entry);
+
+#endif /* _LL_H_ */
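
To show how the helper above is intended to be embedded, the sketch below mirrors how the session tracks its clients: the container places struct ll_entry first so a list node can be cast back to its container. The example_client type and the cast-based walk are assumptions of this illustration:

	#include <stdint.h>
	#include "ll.h"

	/* Hypothetical list user; ll_entry placed first so the cast below is valid. */
	struct example_client {
		struct ll_entry ll_entry;
		uint8_t fw_fid;
	};

	static uint32_t example_count_clients(struct ll *list)
	{
		struct ll_entry *cur;
		uint32_t count = 0;

		for (cur = list->head; cur != NULL; cur = cur->next) {
			struct example_client *client =
				(struct example_client *)cur;
			(void)client;	/* inspect client state here */
			count++;
		}

		return count;
	}
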
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index a980a2056..489c461d1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -58,21 +58,20 @@ tf_open_session(struct tf *tfp,
 	parms->session_id.internal.device = device;
 	oparms.open_cfg = parms;
 
+	/* Session vs session client is decided in
+	 * tf_session_open_session()
+	 */
+	TFP_DRV_LOG(INFO, "TF_OPEN, %s\n", parms->ctrl_chan_name);
 	rc = tf_session_open_session(tfp, &oparms);
 	/* Logging handled by tf_session_open_session */
 	if (rc)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
+		    parms->session_id.internal.device);
 
 	return 0;
 }
@@ -152,6 +151,9 @@ tf_close_session(struct tf *tfp)
 
 	cparms.ref_count = &ref_count;
 	cparms.session_id = &session_id;
+	/* Session vs session client is decided in
+	 * tf_session_close_session()
+	 */
 	rc = tf_session_close_session(tfp,
 				      &cparms);
 	/* Logging handled by tf_session_close_session */
@@ -159,16 +161,10 @@ tf_close_session(struct tf *tfp)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Closed session, session_id:%d, ref_count:%d\n",
-		    cparms.session_id->id,
-		    *cparms.ref_count);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    cparms.session_id->internal.domain,
 		    cparms.session_id->internal.bus,
-		    cparms.session_id->internal.device,
-		    cparms.session_id->internal.fw_session_id);
+		    cparms.session_id->internal.device);
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e3d46bd45..fea222bee 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -72,7 +72,6 @@ enum tf_mem {
  * @ref tf_close_session
  */
 
-
 /**
  * Session Version defines
  *
@@ -113,6 +112,21 @@ union tf_session_id {
 	} internal;
 };
 
+/**
+ * Session Client Identifier
+ *
+ * Unique identifier for a client within a session. Session Client ID
+ * is constructed from the passed in session and a firmware allocated
+ * fw_session_client_id. Done by TruFlow on tf_open_session().
+ */
+union tf_session_client_id {
+	uint16_t id;
+	struct {
+		uint8_t fw_session_id;
+		uint8_t fw_session_client_id;
+	} internal;
+};
+
 /**
  * Session Version
  *
@@ -368,8 +382,8 @@ struct tf_session_info {
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
- * session info at that time. Additional 'opens' can be done using
- * same session_info by using tf_attach_session().
+ * session info at that time. A TruFlow Session can be used by more
+ * than one PF/VF by calling tf_open_session().
  *
  * It is expected that ULP allocates this memory as shared memory.
  *
@@ -506,36 +520,62 @@ struct tf_open_session_parms {
 	 * The session_id allows a session to be shared between devices.
 	 */
 	union tf_session_id session_id;
+	/**
+	 * [in/out] session_client_id
+	 *
+	 * Session_client_id is unique per client.
+	 *
+	 * Session_client_id is composed of the session_id and a
+	 * firmware allocated fw_session_client_id. The construction is
+	 * done by parsing the ctrl_chan_name together with allocation
+	 * of a fw_session_client_id during tf_open_session().
+	 *
+	 * A reference count will be incremented in the session on
+	 * which a client is created.
+	 *
+	 * A session can only be closed when a single Session
+	 * Client is left. Session Clients should be closed using
+	 * tf_close_session().
+	 */
+	union tf_session_client_id session_client_id;
 	/**
 	 * [in] device type
 	 *
-	 * Device type is passed, one of Wh+, SR, Thor, SR2
+	 * Device type for the session.
 	 */
 	enum tf_device_type device_type;
-	/** [in] resources
+	/**
+	 * [in] resources
 	 *
-	 * Resource allocation
+	 * Resource allocation for the session.
 	 */
 	struct tf_session_resources resources;
 };
 
 /**
- * Opens a new TruFlow management session.
+ * Opens a new TruFlow Session or session client.
+ *
+ * What gets created depends on the passed in tfp content. If the tfp
+ * does not have prior session data, a new session with an associated
+ * session client is created. If the tfp already has a session, only a
+ * session client is created. In both cases the session client uses the
+ * provided ctrl_chan_name.
  *
- * TruFlow will allocate session specific memory, shared memory, to
- * hold its session data. This data is private to TruFlow.
+ * In case of session creation TruFlow will allocate session specific
+ * memory, shared memory, to hold its session data. This data is
+ * private to TruFlow.
  *
- * Multiple PFs can share the same session. An association, refcount,
- * between session and PFs is maintained within TruFlow. Thus, a PF
- * can attach to an existing session, see tf_attach_session().
+ * No other TruFlow APIs will succeed unless this API is first called
+ * and succeeds.
  *
- * No other TruFlow APIs will succeed unless this API is first called and
- * succeeds.
+ * tf_open_session() returns a session id and a session client id that
+ * are used on all other TF APIs.
  *
- * tf_open_session() returns a session id that can be used on attach.
+ * A Session or session client can be closed using tf_close_session().
  *
  * [in] tfp
  *   Pointer to TF handle
+ *
  * [in] parms
  *   Pointer to open parameters
  *
@@ -546,6 +586,11 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+/**
+ * Experimental
+ *
+ * tf_attach_session parameters definition.
+ */
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -595,15 +640,18 @@ struct tf_attach_session_parms {
 };
 
 /**
- * Attaches to an existing session. Used when more than one PF wants
- * to share a single session. In that case all TruFlow management
- * traffic will be sent to the TruFlow firmware using the 'PF' that
- * did the attach not the session ctrl channel.
+ * Experimental
+ *
+ * Allows a 2nd application instance to attach to an existing
+ * session. Used when a session is to be shared between two processes.
  *
  * Attach will increment a ref count as to manage the shared session data.
  *
- * [in] tfp, pointer to TF handle
- * [in] parms, pointer to attach parameters
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to attach parameters
  *
  * Returns
  *   - (0) if successful.
@@ -613,9 +661,15 @@ int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
 
 /**
- * Closes an existing session. Cleans up all hardware and firmware
- * state associated with the TruFlow application session when the last
- * PF associated with the session results in refcount to be zero.
+ * Closes an existing session client or the session itself. The
+ * session client is closed by default; if the session reference
+ * count reaches 0 the session is closed as well.
+ *
+ * On session close all hardware and firmware state associated with
+ * the TruFlow application is cleaned up.
+ *
+ * The session client is extracted from the tfp. Thus tf_close_session()
+ * cannot close a session client on behalf of another function.
  *
  * Returns success or failure code.
  */
@@ -1056,9 +1110,10 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_set_tbl_entry
  *
  * @ref tf_get_tbl_entry
+ *
+ * @ref tf_bulk_get_tbl_entry
  */
 
-
 /**
  * tf_alloc_tbl_entry parameter definition
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 6600a14c8..8c2dff8ad 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -84,7 +84,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
-		    uint8_t *fw_session_id)
+		    uint8_t *fw_session_id,
+		    uint8_t *fw_session_client_id)
 {
 	int rc;
 	struct hwrm_tf_session_open_input req = { 0 };
@@ -106,7 +107,8 @@ tf_msg_session_open(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	*fw_session_id = resp.fw_session_id;
+	*fw_session_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
+	*fw_session_client_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
 
 	return rc;
 }
@@ -119,6 +121,84 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 	return -1;
 }
 
+int
+tf_msg_session_client_register(struct tf *tfp,
+			       char *ctrl_channel_name,
+			       uint8_t *fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_register_input req = { 0 };
+	struct hwrm_tf_session_register_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	tfp_memcpy(&req.session_client_name,
+		   ctrl_channel_name,
+		   TF_SESSION_NAME_MAX);
+
+	parms.tf_type = HWRM_TF_SESSION_REGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*fw_session_client_id =
+		(uint8_t)tfp_le_to_cpu_32(resp.fw_session_client_id);
+
+	return rc;
+}
+
+int
+tf_msg_session_client_unregister(struct tf *tfp,
+				 uint8_t fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_unregister_input req = { 0 };
+	struct hwrm_tf_session_unregister_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.fw_session_client_id = tfp_cpu_to_le_32(fw_session_client_id);
+
+	parms.tf_type = HWRM_TF_SESSION_UNREGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+
+	return rc;
+}
+
 int
 tf_msg_session_close(struct tf *tfp)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 37f291016..c02a5203c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -34,7 +34,8 @@ struct tf;
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
-			uint8_t *fw_session_id);
+			uint8_t *fw_session_id,
+			uint8_t *fw_session_client_id);
 
 /**
  * Sends session close request to Firmware
@@ -42,6 +43,9 @@ int tf_msg_session_open(struct tf *tfp,
  * [in] session
  *   Pointer to session handle
  *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
  * [in] fw_session_id
  *   Pointer to the fw_session_id that is assigned to the session at
  *   time of session open
@@ -53,6 +57,42 @@ int tf_msg_session_attach(struct tf *tfp,
 			  char *ctrl_channel_name,
 			  uint8_t tf_fw_session_id);
 
+/**
+ * Sends session client register request to Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
+ * [in/out] fw_session_client_id
+ *   Pointer to the fw_session_client_id that is allocated on firmware
+ *   side
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_register(struct tf *tfp,
+				   char *ctrl_channel_name,
+				   uint8_t *fw_session_client_id);
+
+/**
+ * Sends session client unregister request to Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [in/out] fw_session_client_id
+ *   Pointer to the fw_session_client_id that is allocated on firmware
+ *   side
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_unregister(struct tf *tfp,
+				     uint8_t fw_session_client_id);
+
 /**
  * Sends session close request to Firmware
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 30313e2ea..fdb87ecb8 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -389,7 +389,7 @@ tf_rm_create_db(struct tf *tfp,
 	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 529ea5083..3b355f64e 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -12,14 +12,49 @@
 #include "tf_msg.h"
 #include "tfp.h"
 
-int
-tf_session_open_session(struct tf *tfp,
-			struct tf_session_open_session_parms *parms)
+struct tf_session_client_create_parms {
+	/**
+	 * [in] Pointer to the control channel name string
+	 */
+	char *ctrl_chan_name;
+
+	/**
+	 * [out] Firmware Session Client ID
+	 */
+	union tf_session_client_id *session_client_id;
+};
+
+struct tf_session_client_destroy_parms {
+	/**
+	 * FW Session Client Identifier
+	 */
+	union tf_session_client_id session_client_id;
+};
+
+/**
+ * Creates a Session and the associated client.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session client create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if max session clients has been reached.
+ */
+static int
+tf_session_create(struct tf *tfp,
+		  struct tf_session_open_session_parms *parms)
 {
 	int rc;
 	struct tf_session *session = NULL;
+	struct tf_session_client *client;
 	struct tfp_calloc_parms cparms;
 	uint8_t fw_session_id;
+	uint8_t fw_session_client_id;
 	union tf_session_id *session_id;
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -27,7 +62,8 @@ tf_session_open_session(struct tf *tfp,
 	/* Open FW session and get a new session_id */
 	rc = tf_msg_session_open(tfp,
 				 parms->open_cfg->ctrl_chan_name,
-				 &fw_session_id);
+				 &fw_session_id,
+				 &fw_session_client_id);
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
@@ -92,15 +128,46 @@ tf_session_open_session(struct tf *tfp,
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
 	session->session_id.internal.fw_session_id = fw_session_id;
-	/* Return the allocated fw session id */
-	session_id->internal.fw_session_id = fw_session_id;
+	/* Return the allocated session id */
+	session_id->id = session->session_id.id;
 
 	session->shadow_copy = parms->open_cfg->shadow_copy;
 
-	tfp_memcpy(session->ctrl_chan_name,
+	/* Init session client list */
+	ll_init(&session->client_ll);
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	client->session_client_id.internal.fw_session_id = fw_session_id;
+	client->session_client_id.internal.fw_session_client_id =
+		fw_session_client_id;
+
+	tfp_memcpy(client->ctrl_chan_name,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
+	ll_insert(&session->client_ll, &client->ll_entry);
+	session->ref_count++;
+
 	rc = tf_dev_bind(tfp,
 			 parms->open_cfg->device_type,
 			 session->shadow_copy,
@@ -110,7 +177,7 @@ tf_session_open_session(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	session->ref_count++;
+	session->dev_init = true;
 
 	return 0;
 
@@ -121,6 +188,235 @@ tf_session_open_session(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Creates a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session client create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if max session clients has been reached.
+ */
+static int
+tf_session_client_create(struct tf *tfp,
+			 struct tf_session_client_create_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tf_session_client *client;
+	struct tfp_calloc_parms cparms;
+	union tf_session_client_id session_client_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &session);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	client = tf_session_find_session_client_by_name(session,
+							parms->ctrl_chan_name);
+	if (client) {
+		TFP_DRV_LOG(ERR,
+			    "Client %s, already registered with this session\n",
+			    parms->ctrl_chan_name);
+		return -EOPNOTSUPP;
+	}
+
+	rc = tf_msg_session_client_register
+		    (tfp,
+		    parms->ctrl_chan_name,
+		    &session_client_id.internal.fw_session_client_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to create client on session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	/* Build the Session Client ID by adding the fw_session_id */
+	rc = tf_session_get_fw_session_id
+			(tfp,
+			&session_client_id.internal.fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session Firmware id lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tfp_memcpy(client->ctrl_chan_name,
+		   parms->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	client->session_client_id.id = session_client_id.id;
+
+	ll_insert(&session->client_ll, &client->ll_entry);
+
+	session->ref_count++;
+
+	/* Build the return value */
+	parms->session_client_id->id = session_client_id.id;
+
+ cleanup:
+	/* TBD - Add code to unregister the newly created client from FW */
+
+	return rc;
+}
+
+
+/**
+ * Destroys a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session client destroy parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-EINVAL) error, client not owned by the session.
+ *   - (-EOPNOTSUPP) error, unable to destroy client as it's the last
+ *                 client. Please use tf_session_close() instead.
+ */
+static int
+tf_session_client_destroy(struct tf *tfp,
+			  struct tf_session_client_destroy_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_session_client *client;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Check session owns this client and that we're not the last client */
+	client = tf_session_get_session_client(tfs,
+					       parms->session_client_id);
+	if (client == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "Client %d, not found within this session\n",
+			    parms->session_client_id.id);
+		return -EINVAL;
+	}
+
+	/* If last client the request is rejected and cleanup should
+	 * be done by session close.
+	 */
+	if (tfs->ref_count == 1)
+		return -EOPNOTSUPP;
+
+	rc = tf_msg_session_client_unregister
+			(tfp,
+			parms->session_client_id.internal.fw_session_client_id);
+
+	/* Log error, but continue. If FW fails we do not really have
+	 * a way to fix this but the client would no longer be valid
+	 * thus we remove from the session.
+	 */
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Client destroy on FW Failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+
+	/* Decrement the session ref_count */
+	tfs->ref_count--;
+
+	tfp_free(client);
+
+	return rc;
+}
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session_client_create_parms scparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Decide if we're creating a new session or session client */
+	if (tfp->session == NULL) {
+		rc = tf_session_create(tfp, parms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to create session, ctrl_chan_name:%s, rc:%s\n",
+				    parms->open_cfg->ctrl_chan_name,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+		       "Session created, session_client_id:%d, session_id:%d\n",
+		       parms->open_cfg->session_client_id.id,
+		       parms->open_cfg->session_id.id);
+	} else {
+		scparms.ctrl_chan_name = parms->open_cfg->ctrl_chan_name;
+		scparms.session_client_id = &parms->open_cfg->session_client_id;
+
+		/* Create the new client and get it associated with
+		 * the session.
+		 */
+		rc = tf_session_client_create(tfp, &scparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+			      "Failed to create client on session %d, rc:%s\n",
+			      parms->open_cfg->session_id.id,
+			      strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Session Client:%d created on session:%d\n",
+			    parms->open_cfg->session_client_id.id,
+			    parms->open_cfg->session_id.id);
+	}
+
+	return 0;
+}
+
 int
 tf_session_attach_session(struct tf *tfp __rte_unused,
 			  struct tf_session_attach_session_parms *parms __rte_unused)
@@ -141,7 +437,10 @@ tf_session_close_session(struct tf *tfp,
 {
 	int rc;
 	struct tf_session *tfs = NULL;
+	struct tf_session_client *client;
 	struct tf_dev_info *tfd;
+	struct tf_session_client_destroy_parms scdparms;
+	uint16_t fid;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -161,7 +460,49 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	tfs->ref_count--;
+	/* Get the client, we need it independently of the closure
+	 * type (client or session closure).
+	 *
+	 * We find the client by way of the fid. Thus one cannot close
+	 * a client on behalf of someone else.
+	 */
+	rc = tfp_get_fid(tfp, &fid);
+	if (rc)
+		return rc;
+
+	client = tf_session_find_session_client_by_fid(tfs,
+						       fid);
+	/* In case of multiple clients we choose to close those first */
+	if (tfs->ref_count > 1) {
+		/* Linaro gcc can't static init this structure */
+		memset(&scdparms,
+		       0,
+		       sizeof(struct tf_session_client_destroy_parms));
+
+		scdparms.session_client_id = client->session_client_id;
+		/* Destroy requested client so it's no longer
+		 * registered with this session.
+		 */
+		rc = tf_session_client_destroy(tfp, &scdparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to unregister Client %d, rc:%s\n",
+				    client->session_client_id.id,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Closed session client, session_client_id:%d\n",
+			    client->session_client_id.id);
+
+		TFP_DRV_LOG(INFO,
+			    "session_id:%d, ref_count:%d\n",
+			    tfs->session_id.id,
+			    tfs->ref_count);
+
+		return 0;
+	}
 
 	/* Record the session we're closing so the caller knows the
 	 * details.
@@ -176,23 +517,6 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	if (tfs->ref_count > 0) {
-		/* In case we're attached only the session client gets
-		 * closed.
-		 */
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "FW Session close failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		return 0;
-	}
-
-	/* Final cleanup as we're last user of the session */
-
 	/* Unbind the device */
 	rc = tf_dev_unbind(tfp, tfd);
 	if (rc) {
@@ -202,7 +526,6 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
 		/* Log error */
@@ -211,6 +534,21 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
+	/* Final cleanup as we're last user of the session thus we
+	 * also delete the last client.
+	 */
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+	tfp_free(client);
+
+	tfs->ref_count--;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    tfs->session_id.id,
+		    tfs->ref_count);
+
+	tfs->dev_init = false;
+
 	tfp_free(tfp->session->core_data);
 	tfp_free(tfp->session);
 	tfp->session = NULL;
@@ -218,12 +556,31 @@ tf_session_close_session(struct tf *tfp,
 	return 0;
 }
 
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return true;
+	}
+
+	return false;
+}
+
 int
-tf_session_get_session(struct tf *tfp,
-		       struct tf_session **tfs)
+tf_session_get_session_internal(struct tf *tfp,
+				struct tf_session **tfs)
 {
-	int rc;
+	int rc = 0;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -234,7 +591,113 @@ tf_session_get_session(struct tf *tfp,
 
 	*tfs = (struct tf_session *)(tfp->session->core_data);
 
-	return 0;
+	return rc;
+}
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session **tfs)
+{
+	int rc;
+	uint16_t fw_fid;
+	bool supported = false;
+
+	rc = tf_session_get_session_internal(tfp,
+					     tfs);
+	/* Logging done by tf_session_get_session_internal */
+	if (rc)
+		return rc;
+
+	/* As session sharing among functions, aka 'individual clients',
+	 * is supported we have to ensure that the client is indeed
+	 * registered before we get deep into the TruFlow API stack.
+	 */
+	rc = tfp_get_fid(tfp, &fw_fid);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Internal FID lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	supported = tf_session_is_fid_supported(*tfs, fw_fid);
+	if (!supported) {
+		TFP_DRV_LOG
+			(ERR,
+			"Ctrl channel not registered with session, rc:%s\n",
+			strerror(-rc));
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->session_client_id.id == session_client_id.id)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL || ctrl_chan_name == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (strncmp(client->ctrl_chan_name,
+			    ctrl_chan_name,
+			    TF_SESSION_NAME_MAX) == 0)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return client;
+	}
+
+	return NULL;
 }
 
 int
@@ -253,6 +716,7 @@ tf_session_get_fw_session_id(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs = NULL;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -261,7 +725,15 @@ tf_session_get_fw_session_id(struct tf *tfp,
 		return rc;
 	}
 
-	rc = tf_session_get_session(tfp, &tfs);
+	if (fw_session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -269,3 +741,36 @@ tf_session_get_fw_session_id(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_session_get_session_id(struct tf *tfp,
+			  union tf_session_id *session_id)
+{
+	int rc;
+	struct tf_session *tfs;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*session_id = tfs->session_id;
+
+	return 0;
+}
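
The open path above now branches on whether tfp->session already exists: the
first open creates the session together with its first client, any later open
on the same handle only adds a client, and close tears the session down only
once the last client is gone. Below is a minimal, self-contained sketch of
that ref-count driven lifecycle; the names and the single global handle are
illustrative stand-ins, not the TruFlow API itself.

/* Standalone model of the session/client lifecycle sketched above.
 * All names here are illustrative; this is not the bnxt/TruFlow API.
 */
#include <stdio.h>
#include <stdlib.h>

struct model_session {
	int ref_count;          /* one count per registered client */
};

static struct model_session *g_session;

/* First open creates the session, every later open just adds a client. */
static void model_open(void)
{
	if (g_session == NULL) {
		g_session = calloc(1, sizeof(*g_session));
		if (g_session == NULL)
			return;
		g_session->ref_count = 1;
		printf("session created, ref_count=1\n");
	} else {
		g_session->ref_count++;
		printf("client added, ref_count=%d\n", g_session->ref_count);
	}
}

/* Close removes one client; only the last close frees the session. */
static void model_close(void)
{
	if (g_session == NULL)
		return;
	if (g_session->ref_count > 1) {
		g_session->ref_count--;
		printf("client closed, ref_count=%d\n", g_session->ref_count);
		return;
	}
	free(g_session);
	g_session = NULL;
	printf("last client closed, session destroyed\n");
}

int main(void)
{
	model_open();   /* creates the session            */
	model_open();   /* second opener becomes a client */
	model_close();  /* only that client goes away     */
	model_close();  /* session itself is torn down    */
	return 0;
}

Running the sketch prints one created, one client-added, one client-closed and
one session-destroyed line, mirroring the two-open/two-close sequence the
driver now supports.
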
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index a303fde51..aa7a27877 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -16,6 +16,7 @@
 #include "tf_tbl.h"
 #include "tf_resources.h"
 #include "stack.h"
+#include "ll.h"
 
 /**
  * The Session module provides session control support. A session is
@@ -29,7 +30,6 @@
 
 /** Session defines
  */
-#define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
 /**
@@ -50,7 +50,7 @@
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
  * allocations and (if so configured) any shadow copy of flow
- * information.
+ * information. It also holds info about Session Clients.
  *
  * Memory is assigned to the Truflow instance by way of
  * tf_open_session. Memory is allocated and owned by i.e. ULP.
@@ -65,17 +65,10 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Session ID, allocated by FW on tf_open_session() */
-	union tf_session_id session_id;
-
 	/**
-	 * String containing name of control channel interface to be
-	 * used for this session to communicate with firmware.
-	 *
-	 * ctrl_chan_name will be used as part of a name for any
-	 * shared memory allocation.
+	 * Session ID, allocated by FW on tf_open_session()
 	 */
-	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+	union tf_session_id session_id;
 
 	/**
 	 * Boolean controlling the use and availability of shadow
@@ -92,14 +85,67 @@ struct tf_session {
 
 	/**
 	 * Session Reference Count. To keep track of functions per
-	 * session the ref_count is incremented. There is also a
+	 * session the ref_count is updated. There is also a
 	 * parallel TruFlow Firmware ref_count in case the TruFlow
 	 * Core goes away without informing the Firmware.
 	 */
 	uint8_t ref_count;
 
-	/** Device handle */
+	/**
+	 * Session Reference Count for attached sessions. To keep
+	 * track of application sharing of a session the
+	 * ref_count_attach is updated.
+	 */
+	uint8_t ref_count_attach;
+
+	/**
+	 * Device handle
+	 */
 	struct tf_dev_info dev;
+	/**
+	 * Device init flag. False if Device is not fully initialized,
+	 * else true.
+	 */
+	bool dev_init;
+
+	/**
+	 * Linked list of clients registered for this session
+	 */
+	struct ll client_ll;
+};
+
+/**
+ * Session Client
+ *
+ * Shared memory for each of the Session Clients. A session can have
+ * one or more clients.
+ */
+struct tf_session_client {
+	/**
+	 * Linked list of clients
+	 */
+	struct ll_entry ll_entry; /* For inserting in link list, must be
+				   * first field of struct.
+				   */
+
+	/**
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/**
+	 * Firmware FID, learned at time of Session Client create.
+	 */
+	uint16_t fw_fid;
+
+	/**
+	 * Session Client ID, allocated by FW on tf_register_session()
+	 */
+	union tf_session_client_id session_client_id;
 };
 
 /**
@@ -126,7 +172,13 @@ struct tf_session_attach_session_parms {
  * Session close parameter definition
  */
 struct tf_session_close_session_parms {
+	/**
+	 * [out] Pointer to the session's reference count after the close
+	 */
 	uint8_t *ref_count;
+	/**
+	 * [out] Pointer to the session id of the session being closed
+	 */
 	union tf_session_id *session_id;
 };
 
@@ -139,11 +191,23 @@ struct tf_session_close_session_parms {
  *
  * @ref tf_session_close_session
  *
+ * @ref tf_session_is_fid_supported
+ *
+ * @ref tf_session_get_session_internal
+ *
  * @ref tf_session_get_session
  *
+ * @ref tf_session_get_session_client
+ *
+ * @ref tf_session_find_session_client_by_name
+ *
+ * @ref tf_session_find_session_client_by_fid
+ *
  * @ref tf_session_get_device
  *
  * @ref tf_session_get_fw_session_id
+ *
+ * @ref tf_session_get_session_id
  */
 
 /**
@@ -179,7 +243,8 @@ int tf_session_attach_session(struct tf *tfp,
 			      struct tf_session_attach_session_parms *parms);
 
 /**
- * Closes a previous created session.
+ * Closes a previously created session. Only possible if all previously
+ * registered clients have been unregistered first.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -189,13 +254,53 @@ int tf_session_attach_session(struct tf *tfp,
  *
  * Returns
  *   - (0) if successful.
+ *   - (-EUSERS) if clients are still registered with the session.
  *   - (-EINVAL) on failure.
  */
 int tf_session_close_session(struct tf *tfp,
 			     struct tf_session_close_session_parms *parms);
 
 /**
- * Looks up the private session information from the TF session info.
+ * Verifies that the fid is supported by the session. Used to ensure
+ * that a function, i.e. a client/control channel, is registered with
+ * the session.
+ *
+ * [in] tfs
+ *   Pointer to TF Session handle
+ *
+ * [in] fid
+ *   FID value to check
+ *
+ * Returns
+ *   - (true) if the fid is registered with the session
+ *   - (false) if the fid is not registered with the session.
+ */
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Does not perform a fid check against the registered
+ * clients. Should only be used when tf_session_get_session() has
+ * already been called at the TF API boundary.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer to a pointer to the session
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_internal(struct tf *tfp,
+				    struct tf_session **tfs);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Performs a fid check against the clients on the session.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -210,6 +315,53 @@ int tf_session_close_session(struct tf *tfp,
 int tf_session_get_session(struct tf *tfp,
 			   struct tf_session **tfs);
 
+/**
+ * Looks up client within the session.
+ *
+ * [in] tfs
+ *   Pointer to the session
+ *
+ * [in] session_client_id
+ *   Client id to look for within the session
+ *
+ * Returns
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id);
+
+/**
+ * Looks up client using name within the session.
+ *
+ * [in] tfs, pointer to the session
+ *
+ * [in] ctrl_chan_name, name of the client to look up in the session
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name);
+
+/**
+ * Looks up client using the fid.
+ *
+ * [in] tfs, pointer to the session
+ *
+ * [in] fid, fid of the client to find
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid);
+
 /**
  * Looks up the device information from the TF Session.
  *
@@ -227,8 +379,7 @@ int tf_session_get_device(struct tf_session *tfs,
 			  struct tf_dev_info **tfd);
 
 /**
- * Looks up the FW session id of the firmware connection for the
- * requested TF handle.
+ * Looks up the FW Session id of the requested TF handle.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -243,4 +394,20 @@ int tf_session_get_device(struct tf_session *tfs,
 int tf_session_get_fw_session_id(struct tf *tfp,
 				 uint8_t *fw_session_id);
 
+/**
+ * Looks up the Session id of the requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] session_id
+ *   Pointer to the session_id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_id(struct tf *tfp,
+			      union tf_session_id *session_id);
+
 #endif /* _TF_SESSION_H_ */
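
The client lookups added in tf_session.c walk the session's client list by
casting each ll_entry back to the enclosing tf_session_client, which is only
valid because ll_entry is the first member of the struct (as the comment above
notes). A small standalone sketch of that intrusive-list idiom follows; the
toy types and helpers are illustrative and are not the driver's ll.[ch]
implementation.

/* Toy intrusive linked list mirroring the ll_entry-first idiom used by
 * the client lookups; these types are illustrative, not the driver's
 * ll.[ch] implementation.
 */
#include <stdint.h>
#include <stdio.h>

struct toy_ll_entry {
	struct toy_ll_entry *next;
};

struct toy_ll {
	struct toy_ll_entry *head;
};

struct toy_client {
	struct toy_ll_entry ll_entry; /* must stay the first field so an
				       * entry pointer can be cast back
				       * to the enclosing client
				       */
	uint16_t fw_fid;
};

static void toy_ll_insert(struct toy_ll *list, struct toy_ll_entry *entry)
{
	entry->next = list->head;
	list->head = entry;
}

static struct toy_client *toy_find_by_fid(struct toy_ll *list, uint16_t fid)
{
	struct toy_ll_entry *c_entry;
	struct toy_client *client;

	for (c_entry = list->head; c_entry != NULL; c_entry = c_entry->next) {
		client = (struct toy_client *)c_entry;
		if (client->fw_fid == fid)
			return client;
	}

	return NULL;
}

int main(void)
{
	struct toy_ll list = { NULL };
	struct toy_client a = { .fw_fid = 1 };
	struct toy_client b = { .fw_fid = 2 };

	toy_ll_insert(&list, &a.ll_entry);
	toy_ll_insert(&list, &b.ll_entry);
	printf("fid 1 registered: %s\n",
	       toy_find_by_fid(&list, 1) ? "yes" : "no");

	return 0;
}
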
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 7d4daaf2d..2b4a7c561 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -269,6 +269,7 @@ tf_tbl_set(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -338,6 +339,7 @@ tf_tbl_get(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 1c48b5363..cbfaa94ee 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -138,7 +138,7 @@ tf_tcam_alloc(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -218,7 +218,7 @@ tf_tcam_free(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -319,6 +319,7 @@ tf_tcam_free(struct tf *tfp,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -353,7 +354,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -415,6 +416,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 69d1c9a1f..426a182a9 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -161,3 +161,20 @@ tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
 {
 	rte_spinlock_unlock(&parms->slock);
 }
+
+int
+tfp_get_fid(struct tf *tfp, uint16_t *fw_fid)
+{
+	struct bnxt *bp = NULL;
+
+	if (tfp == NULL || fw_fid == NULL)
+		return -EINVAL;
+
+	bp = container_of(tfp, struct bnxt, tfp);
+	if (bp == NULL)
+		return -EINVAL;
+
+	*fw_fid = bp->fw_fid;
+
+	return 0;
+}
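
tfp_get_fid() works because struct tf is embedded by value inside struct bnxt,
so container_of() can walk back from the TruFlow handle to the owning adapter
and read fw_fid. A self-contained toy illustration of that pattern, with
stand-in types rather than the real bnxt definitions:

/* Toy illustration of the container_of() pattern behind tfp_get_fid().
 * The structures are stand-ins, not the real bnxt definitions.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#ifndef container_of
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#endif

struct toy_tf {
	void *session;
};

struct toy_bnxt {
	uint16_t fw_fid;     /* function id owned by the adapter  */
	struct toy_tf tfp;   /* TruFlow handle embedded by value  */
};

/* Given only the embedded handle, recover the adapter and its fid. */
static int toy_get_fid(struct toy_tf *tfp, uint16_t *fw_fid)
{
	struct toy_bnxt *bp;

	if (tfp == NULL || fw_fid == NULL)
		return -1;

	bp = container_of(tfp, struct toy_bnxt, tfp);
	*fw_fid = bp->fw_fid;

	return 0;
}

int main(void)
{
	struct toy_bnxt bp = { .fw_fid = 0x2a };
	uint16_t fid;

	if (toy_get_fid(&bp.tfp, &fid) == 0)
		printf("fid = 0x%x\n", (unsigned int)fid);

	return 0;
}
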
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index fe49b6304..8789eba1f 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -238,4 +238,19 @@ int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
 #define tfp_bswap_32(val) rte_bswap32(val)
 #define tfp_bswap_64(val) rte_bswap64(val)
 
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
 #endif /* _TFP_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 30/51] net/bnxt: add global config set and get APIs
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (28 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
                           ` (20 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Add support to update global configuration for ACT_TECT
  and ACT_ABCR.
- Add support to allow Tunnel and Action global configuration.
- Remove the register read and write operations.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/hcapi_cfa.h       |   3 +
 drivers/net/bnxt/meson.build             |   1 +
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  54 +++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 137 ++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       |  77 +++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  20 +++
 drivers/net/bnxt/tf_core/tf_device.h     |  33 ++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   4 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   5 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c | 199 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_global_cfg.h | 170 +++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 109 ++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |  31 ++++
 14 files changed, 840 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h
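
For orientation, a hedged sketch of how a caller might exercise the new API
once the patch below is applied. The offset, the 4-byte buffer and the bit
being flipped are illustrative assumptions only, and a successfully opened
TruFlow handle is assumed.

/* Illustrative caller of the new global config API from tf_core.h.
 * The offset, the 4-byte buffer and the bit that is flipped are
 * assumptions made for the example; tfp is assumed to come from a
 * successful tf_open_session().
 */
#include <string.h>

#include "tf_core.h"

static int example_rmw_tunnel_cfg(struct tf *tfp)
{
	struct tf_global_cfg_parms parms;
	uint8_t cfg[4];
	int rc;

	memset(&parms, 0, sizeof(parms));
	parms.dir = TF_DIR_RX;
	parms.type = TF_TUNNEL_ENCAP;
	parms.offset = 0;                  /* illustrative offset          */
	parms.config = cfg;
	parms.config_sz_in_bytes = sizeof(cfg);

	/* Read the current configuration ... */
	rc = tf_get_global_cfg(tfp, &parms);
	if (rc)
		return rc;

	/* ... modify it locally ... */
	cfg[0] |= 0x1;                     /* example bit, not a real field */

	/* ... and write it back, i.e. read/modify/write. */
	return tf_set_global_cfg(tfp, &parms);
}

The read-modify-write flow above matches the set semantics documented for
tf_global_cfg_parms in the diff that follows.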

diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index 7a67493bd..3d895f088 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -245,6 +245,9 @@ int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
 			     struct hcapi_cfa_data *mirror);
+int hcapi_cfa_p4_global_cfg_hwop(struct hcapi_cfa_hwop *op,
+				 uint32_t type,
+				 struct hcapi_cfa_data *config);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 54564e02e..ace7353be 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -45,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
+	'tf_core/tf_global_cfg.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 6210bc70e..202db4150 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -27,6 +27,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_global_cfg.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
@@ -36,3 +37,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_global_cfg.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 32f152314..7ade9927a 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -13,8 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_REG_GET = 821,
-	HWRM_TFT_REG_SET = 822,
+	HWRM_TFT_GET_GLOBAL_CFG = 821,
+	HWRM_TFT_SET_GLOBAL_CFG = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	HWRM_TFT_IF_TBL_SET = 827,
 	HWRM_TFT_IF_TBL_GET = 828,
@@ -66,18 +66,66 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
+struct tf_set_global_cfg_input;
+struct tf_get_global_cfg_input;
+struct tf_get_global_cfg_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
 struct tf_if_tbl_set_input;
 struct tf_if_tbl_get_input;
 struct tf_if_tbl_get_output;
+/* Input params for global config set */
+typedef struct tf_set_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config type */
+	uint32_t			 type;
+	/* Offset of the type */
+	uint32_t			 offset;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+	/* Data to set */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_set_global_cfg_input_t, *ptf_set_global_cfg_input_t;
+
+/* Input params for global config get */
+typedef struct tf_get_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config to retrieve */
+	uint32_t			 type;
+	/* Offset to retrieve */
+	uint32_t			 offset;
+	/* Size of the data to get in bytes */
+	uint16_t			 size;
+} tf_get_global_cfg_input_t, *ptf_get_global_cfg_input_t;
+
+/* Output params for global config */
+typedef struct tf_get_global_cfg_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+	/* Data to get */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_get_global_cfg_output_t, *ptf_get_global_cfg_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
-	uint16_t			 flags;
+	uint32_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
 #define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 489c461d1..0f119b45f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_em.h"
 #include "tf_rm.h"
+#include "tf_global_cfg.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "bitalloc.h"
@@ -277,6 +278,142 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
+/** Get global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_get_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_get_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+/** Set global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_set_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_set_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_identifier(struct tf *tfp,
 		    struct tf_alloc_identifier_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index fea222bee..3f54ab16b 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1611,6 +1611,83 @@ int tf_delete_em_entry(struct tf *tfp,
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
+/**
+ * @page global Global Configuration
+ *
+ * @ref tf_set_global_cfg
+ *
+ * @ref tf_get_global_cfg
+ */
+/**
+ * Tunnel Encapsulation Offsets
+ */
+enum tf_tunnel_encap_offsets {
+	TF_TUNNEL_ENCAP_L2,
+	TF_TUNNEL_ENCAP_NAT,
+	TF_TUNNEL_ENCAP_MPLS,
+	TF_TUNNEL_ENCAP_VXLAN,
+	TF_TUNNEL_ENCAP_GENEVE,
+	TF_TUNNEL_ENCAP_NVGRE,
+	TF_TUNNEL_ENCAP_GRE,
+	TF_TUNNEL_ENCAP_FULL_GENERIC
+};
+/**
+ * Global Configuration Table Types
+ */
+enum tf_global_config_type {
+	TF_TUNNEL_ENCAP,  /**< Tunnel Encap Config(TECT) */
+	TF_ACTION_BLOCK,  /**< Action Block Config(ABCR) */
+	TF_GLOBAL_CFG_TYPE_MAX
+};
+
+/**
+ * tf_global_cfg parameter definition
+ */
+struct tf_global_cfg_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Offset within the config type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Value of the configuration
+	 * set - Read, Modify and Write
+	 * get - Read the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] Size of the config data in bytes
+	 */
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * Get global configuration
+ *
+ * Retrieve the configuration
+ *
+ * Returns success or failure code.
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
+/**
+ * Update the global configuration table
+ *
+ * Read, modify and write the value.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
 /**
  * @page if_tbl Interface Table Access
  *
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index a3073c826..ead958418 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -45,6 +45,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
 	struct tf_if_tbl_cfg_parms if_tbl_cfg;
+	struct tf_global_cfg_cfg_parms global_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -128,6 +129,18 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * GLOBAL_CFG
+	 */
+	global_cfg.num_elements = TF_GLOBAL_CFG_TYPE_MAX;
+	global_cfg.cfg = tf_global_cfg_p4;
+	rc = tf_global_cfg_bind(tfp, &global_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -207,6 +220,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_global_cfg_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Global Cfg Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 5a0943ad7..1740a271f 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 struct tf_session;
@@ -606,6 +607,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_if_tbl)(struct tf *tfp,
 				 struct tf_if_tbl_get_parms *parms);
+
+	/**
+	 * Update global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_set_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
+
+	/**
+	 * Get global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_get_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 2dc34b853..652608264 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -108,6 +108,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_free_tbl_scope = NULL,
 	.tf_dev_set_if_tbl = NULL,
 	.tf_dev_get_if_tbl = NULL,
+	.tf_dev_set_global_cfg = NULL,
+	.tf_dev_get_global_cfg = NULL,
 };
 
 /**
@@ -140,4 +142,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 	.tf_dev_set_if_tbl = tf_if_tbl_set,
 	.tf_dev_get_if_tbl = tf_if_tbl_get,
+	.tf_dev_set_global_cfg = tf_global_cfg_set,
+	.tf_dev_get_global_cfg = tf_global_cfg_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 3b03a7c4e..7fabb4ba8 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -11,6 +11,7 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -96,4 +97,8 @@ struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
 	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
 };
 
+struct tf_global_cfg_cfg tf_global_cfg_p4[TF_GLOBAL_CFG_TYPE_MAX] = {
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_TUNNEL_ENCAP },
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_ACTION_BLOCK },
+};
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.c b/drivers/net/bnxt/tf_core/tf_global_cfg.c
new file mode 100644
index 000000000..4ed4039db
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.c
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_global_cfg.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+/**
+ * Global Cfg DBs.
+ */
+static void *global_cfg_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_global_cfg_get_hcapi_parms {
+	/**
+	 * [in] Global Cfg DB Handle
+	 */
+	void *global_cfg_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Check global_cfg_type and return hwrm type.
+ *
+ * [in] global_cfg_type
+ *   Global Cfg type
+ *
+ * [out] hwrm_type
+ *   HWRM device data type
+ *
+ * Returns:
+ *    0          - Success
+ *   -ENOTSUP    - Type not supported
+ */
+static int
+tf_global_cfg_get_hcapi_type(struct tf_global_cfg_get_hcapi_parms *parms)
+{
+	struct tf_global_cfg_cfg *global_cfg;
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	global_cfg = (struct tf_global_cfg_cfg *)parms->global_cfg_db;
+	cfg_type = global_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_GLOBAL_CFG_CFG_HCAPI)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = global_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_global_cfg_bind(struct tf *tfp __rte_unused,
+		   struct tf_global_cfg_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg DB already initialized\n");
+		return -EINVAL;
+	}
+
+	global_cfg_db[TF_DIR_RX] = parms->cfg;
+	global_cfg_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "Global Cfg - initialized\n");
+
+	return 0;
+}
+
+int
+tf_global_cfg_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Global Cfg DBs created\n");
+		return 0;
+	}
+
+	global_cfg_db[TF_DIR_RX] = NULL;
+	global_cfg_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_global_cfg_set(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_msg_set_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
+int
+tf_global_cfg_get(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.h b/drivers/net/bnxt/tf_core/tf_global_cfg.h
new file mode 100644
index 000000000..5c73bb115
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_GLOBAL_CFG_H_
+#define TF_GLOBAL_CFG_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/**
+ * The global cfg module provides processing of global cfg types.
+ */
+
+struct tf;
+
+/**
+ * Global cfg configuration enumeration.
+ */
+enum tf_global_cfg_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_GLOBAL_CFG_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_GLOBAL_CFG_CFG_HCAPI,
+};
+
+/**
+ * Global cfg configuration structure, used by the Device to configure
+ * how an individual global cfg type is configured in regard to the HCAPI type.
+ */
+struct tf_global_cfg_cfg {
+	/**
+	 * Global cfg config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Global Cfg configuration parameters
+ */
+struct tf_global_cfg_cfg_parms {
+	/**
+	 * Number of table types in the configuration array
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_global_cfg_cfg *cfg;
+};
+
+/**
+ * global cfg parameters
+ */
+struct tf_dev_global_cfg_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Offset within the config type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Value of the configuration
+	 * set - Read, Modify and Write
+	 * get - Read the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] Size of the config data in bytes
+	 */
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * @page global cfg
+ *
+ * @ref tf_global_cfg_bind
+ *
+ * @ref tf_global_cfg_unbind
+ *
+ * @ref tf_global_cfg_set
+ *
+ * @ref tf_global_cfg_get
+ *
+ */
+/**
+ * Initializes the Global Cfg module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to Global Cfg configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_bind(struct tf *tfp,
+		   struct tf_global_cfg_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to Global Cfg configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_unbind(struct tf *tfp);
+
+/**
+ * Updates the global configuration table
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_set(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+/**
+ * Get global configuration
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_get(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+#endif /* TF_GLOBAL_CFG_H */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 8c2dff8ad..035c0948d 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -991,6 +991,111 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 /* HWRM Tunneled messages */
 
+int
+tf_msg_get_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_get_global_cfg_input_t req = { 0 };
+	tf_get_global_cfg_output_t resp = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+	uint16_t resp_size = 0;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_GET_GLOBAL_CFG,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	resp_size = tfp_le_to_cpu_16(resp.size);
+	if (resp_size < params->config_sz_in_bytes)
+		return -EINVAL;
+
+	if (params->config)
+		tfp_memcpy(params->config,
+			   resp.data,
+			   resp_size);
+	else
+		return -EFAULT;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_set_global_cfg_input_t req = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	tfp_memcpy(req.data, params->config,
+		   params->config_sz_in_bytes);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SET_GLOBAL_CFG,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  enum tf_dir dir,
@@ -1066,8 +1171,8 @@ tf_msg_get_if_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
-		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX);
 
 	/* Populate the request */
 	req.fw_session_id =
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index c02a5203c..195710eb8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_tcam.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 
@@ -448,6 +449,36 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 
 /* HWRM Tunneled messages */
 
+/**
+ * Sends global cfg read request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to read parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_get_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
+/**
+ * Sends global cfg update request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to write parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_set_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
 /**
  * Sends bulk get message of a Table Type element to the firmware.
  *
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 31/51] net/bnxt: add support for EEM System memory
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (29 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
                           ` (19 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Randy Schacher, Venkat Duvvuru

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Select EEM Host or System memory via config parameter
- Add EEM system memory support backed by kernel memory
- Depends on the DPDK changes that add support for the HWRM_OEM_CMD.
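
In short, the backing store is chosen at build time; a condensed sketch of the selection pattern, with the identifiers exactly as they appear in the tf_device.c and Makefile hunks below:

	/* EEM backing-store selection at device bind time (condensed sketch) */
	#ifdef TF_USE_SYSTEM_MEM	/* defined when CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM=y */
		em_cfg.mem_type = TF_EEM_MEM_TYPE_SYSTEM;	/* kernel memory, dmabuf mapped */
	#else
		em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;		/* host-allocated memory */
	#endif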

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 config/common_base                      |   1 +
 drivers/net/bnxt/Makefile               |   3 +
 drivers/net/bnxt/bnxt.h                 |   8 +
 drivers/net/bnxt/bnxt_hwrm.c            |  27 +
 drivers/net/bnxt/bnxt_hwrm.h            |   1 +
 drivers/net/bnxt/meson.build            |   2 +-
 drivers/net/bnxt/tf_core/Makefile       |   5 +-
 drivers/net/bnxt/tf_core/tf_core.c      |  13 +-
 drivers/net/bnxt/tf_core/tf_core.h      |   4 +-
 drivers/net/bnxt/tf_core/tf_device.c    |   5 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |   2 +-
 drivers/net/bnxt/tf_core/tf_em.h        | 113 +---
 drivers/net/bnxt/tf_core/tf_em_common.c | 683 ++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_common.h |  30 ++
 drivers/net/bnxt/tf_core/tf_em_host.c   | 689 +-----------------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 541 ++++++++++++++++---
 drivers/net/bnxt/tf_core/tf_if_tbl.h    |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c       |  24 +
 drivers/net/bnxt/tf_core/tf_tbl.h       |   7 +
 drivers/net/bnxt/tf_core/tfp.c          |  12 +
 drivers/net/bnxt/tf_core/tfp.h          |  15 +
 21 files changed, 1319 insertions(+), 870 deletions(-)
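
Background on the system-memory path, before the diffs: the EEM tables live in kernel memory, are exported to the PMD as dmabuf file descriptors through an ioctl on the bnxt LFC character device, and are then mmap'd one 4K page at a time. A hypothetical, condensed sketch of that flow using the request structures and ioctl definition added in tf_em_system.c below (it assumes the headers that file includes, and that lfc_fd, bus and devfn are obtained elsewhere in the driver):

	/* Illustrative only: export the EEM dmabufs and map the first Rx KEY0 page.
	 * The caller must check the mmap() result against MAP_FAILED.
	 */
	static void *example_map_first_key0_page(int lfc_fd, int bus, int devfn)
	{
		struct tf_em_bnxt_lfc_req req;
		struct tf_em_bnxt_lfc_dmabuf_fd fds;

		memset(&req, 0, sizeof(req));
		memset(&fds, 0, sizeof(fds));

		req.hdr.ver = 1;
		req.hdr.bus = bus;
		req.hdr.devfn = devfn;
		req.hdr.req_type = TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ;
		req.req.eem_dmabuf_export_req.std.version = 1;
		req.req.eem_dmabuf_export_req.flags = O_ACCMODE;
		req.req.eem_dmabuf_export_req.dma_fd = &fds;

		if (ioctl(lfc_fd, BNXT_LFC_REQ, &req) != 0)
			return MAP_FAILED;

		return mmap(NULL, TF_EM_PAGE_SIZE, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fds.fd[TF_DIR_RX][TF_KEY0_TABLE], 0);
	}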

diff --git a/config/common_base b/config/common_base
index fe30c515e..370a48f02 100644
--- a/config/common_base
+++ b/config/common_base
@@ -220,6 +220,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
 # Compile burst-oriented Broadcom BNXT PMD driver
 #
 CONFIG_RTE_LIBRTE_BNXT_PMD=y
+CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM=n
 
 #
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 349b09c36..6b9544b5d 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -50,6 +50,9 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
 include $(SRCDIR)/hcapi/Makefile
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), y)
+CFLAGS += -DTF_USE_SYSTEM_MEM
+endif
 endif
 
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 65862abdc..43e5e7162 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -563,6 +563,13 @@ struct bnxt_rep_info {
 				     DEV_RX_OFFLOAD_SCATTER | \
 				     DEV_RX_OFFLOAD_RSS_HASH)
 
+#define  MAX_TABLE_SUPPORT 4
+#define  MAX_DIR_SUPPORT   2
+struct bnxt_dmabuf_info {
+	uint32_t entry_num;
+	int      fd[MAX_DIR_SUPPORT][MAX_TABLE_SUPPORT];
+};
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -780,6 +787,7 @@ struct bnxt {
 	uint16_t		port_svif;
 
 	struct tf		tfp;
+	struct bnxt_dmabuf_info dmabuf;
 	struct bnxt_ulp_context	*ulp_ctx;
 	struct bnxt_flow_stat_info *flow_stat;
 	uint8_t			flow_xstat;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index e6a28d07c..2605ef039 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -5506,3 +5506,30 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 
 	return 0;
 }
+
+#ifdef RTE_LIBRTE_BNXT_PMD_SYSTEM
+int
+bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num)
+{
+	struct hwrm_oem_cmd_input req = {0};
+	struct hwrm_oem_cmd_output *resp = bp->hwrm_cmd_resp_addr;
+	struct bnxt_dmabuf_info oem_data;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_OEM_CMD, BNXT_USE_CHIMP_MB);
+	req.IANA = 0x14e4;
+
+	memset(&oem_data, 0, sizeof(struct bnxt_dmabuf_info));
+	oem_data.entry_num = (entry_num);
+	memcpy(&req.oem_data[0], &oem_data, sizeof(struct bnxt_dmabuf_info));
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+	HWRM_CHECK_RESULT();
+
+	bp->dmabuf.entry_num = entry_num;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+#endif /* RTE_LIBRTE_BNXT_PMD_SYSTEM */
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 87cd40779..9e0b79904 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -276,4 +276,5 @@ int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
+int bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num);
 #endif
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index ace7353be..8f6ed419e 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -31,7 +31,6 @@ sources = files('bnxt_cpr.c',
         'tf_core/tf_em_common.c',
         'tf_core/tf_em_host.c',
         'tf_core/tf_em_internal.c',
-        'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
@@ -46,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
+	'tf_core/tf_em_host.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 202db4150..750c25c5e 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -16,8 +16,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), n)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
+else
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM) += tf_core/tf_em_system.c
+endif
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 0f119b45f..00b2775ed 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -540,10 +540,12 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_alloc_parms aparms = { 0 };
+	struct tf_tcam_alloc_parms aparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&aparms, 0, sizeof(struct tf_tcam_alloc_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -598,10 +600,13 @@ tf_set_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_set_parms sparms = { 0 };
+	struct tf_tcam_set_parms sparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&sparms, 0, sizeof(struct tf_tcam_set_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -667,10 +672,12 @@ tf_free_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_free_parms fparms = { 0 };
+	struct tf_tcam_free_parms fparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&fparms, 0, sizeof(struct tf_tcam_free_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 3f54ab16b..9e8042606 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1731,7 +1731,7 @@ struct tf_set_if_tbl_entry_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -1768,7 +1768,7 @@ struct tf_get_if_tbl_entry_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index ead958418..f08f7eba7 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -92,8 +92,11 @@ tf_dev_bind_p4(struct tf *tfp,
 	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
 	em_cfg.cfg = tf_em_ext_p4;
 	em_cfg.resources = resources;
+#ifdef TF_USE_SYSTEM_MEM
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_SYSTEM;
+#else
 	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
-
+#endif
 	rc = tf_em_ext_common_bind(tfp, &em_cfg);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 652608264..dfe626c8a 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -126,7 +126,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
-	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_common_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 39a216341..089026178 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -16,6 +16,9 @@
 
 #include "hcapi/hcapi_cfa_defs.h"
 
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -69,8 +72,16 @@
 #error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
+/*
+ * System memory always uses 4K pages
+ */
+#ifdef TF_USE_SYSTEM_MEM
+#define TF_EM_PAGE_SIZE (1 << TF_EM_PAGE_SIZE_4K)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SIZE_4K)
+#else
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
 #define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+#endif
 
 /*
  * Used to build GFID:
@@ -168,39 +179,6 @@ struct tf_em_cfg_parms {
  * @ref tf_em_ext_common_alloc
  */
 
-/**
- * Allocates EEM Table scope
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- *   -ENOMEM - Out of memory
- */
-int tf_alloc_eem_tbl_scope(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free's EEM Table scope control block
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			     struct tf_free_tbl_scope_parms *parms);
-
 /**
  * Insert record in to internal EM table
  *
@@ -374,8 +352,8 @@ int tf_em_ext_common_unbind(struct tf *tfp);
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+int tf_em_ext_alloc(struct tf *tfp,
+		    struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using host memory
@@ -390,40 +368,8 @@ int tf_em_ext_host_alloc(struct tf *tfp,
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
-
-/**
- * Alloc for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_alloc(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_free(struct tf *tfp,
-			  struct tf_free_tbl_scope_parms *parms);
+int tf_em_ext_free(struct tf *tfp,
+		   struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
@@ -510,8 +456,8 @@ tf_tbl_ext_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
 
 /**
  * Sets the specified external table type element.
@@ -529,26 +475,11 @@ int tf_tbl_ext_set(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
 
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data by invoking the
- * firmware.
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_system_set(struct tf *tfp,
-			  struct tf_tbl_set_parms *parms);
+int
+tf_em_ext_system_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms);
 
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 23a7fc9c2..8b02b8ba3 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -23,6 +23,8 @@
 
 #include "bnxt.h"
 
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
 /**
  * EM DBs.
@@ -281,19 +283,602 @@ tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
 		       struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
+	key_entry->hdr.pointer = result->pointer;
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
 
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
 
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		 uint32_t page_size)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(page_size,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    page_size);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, page_size,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 \
+		    " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * page_size,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+				    "%u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					(TF_KILOBYTE * TF_KILOBYTE)) /
+			(key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->rx_num_flows_in_k != 0 &&
+	    parms->rx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if (parms->tx_num_flows_in_k != 0 &&
+	    parms->tx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	return 0;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
 
 #ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
 #endif
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion into the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 =
+			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * delete callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables
+			[(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry
+		(tbl_scope_cb,
+		parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
 }
 
+
 int
 tf_em_ext_common_bind(struct tf *tfp,
 		      struct tf_em_cfg_parms *parms)
@@ -341,6 +926,7 @@ tf_em_ext_common_bind(struct tf *tfp,
 		init = 1;
 
 	mem_type = parms->mem_type;
+
 	return 0;
 }
 
@@ -375,31 +961,88 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms)
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_tbl_ext_host_set(tfp, parms);
-	else
-		return tf_tbl_ext_system_set(tfp, parms);
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	return rc;
 }
 
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_alloc(tfp, parms);
-	else
-		return tf_em_ext_system_alloc(tfp, parms);
+	return tf_em_ext_alloc(tfp, parms);
 }
 
 int
 tf_em_ext_common_free(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_free(tfp, parms);
-	else
-		return tf_em_ext_system_free(tfp, parms);
+	return tf_em_ext_free(tfp, parms);
 }
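
To make the sizing arithmetic above concrete, a standalone worked example of the level computation performed by tf_em_size_page_tbl_lvl() and tf_em_size_page_tbls(). The 4K page size (the size forced for system memory in tf_em.h above) and 8-byte page pointers match the values used here; the 1M-entry, 64-byte-record workload is arbitrary.

	#include <stdint.h>
	#include <inttypes.h>
	#include <stdio.h>

	#define PAGE_SZ		4096u
	#define PTRS_PER_PAGE	(PAGE_SZ / sizeof(void *))	/* 512 on 64-bit */

	int main(void)
	{
		/* 1M entries of 64B key records -> 64MB of table data */
		uint64_t data_sz = 1024ull * 1024 * 64;
		uint64_t data_pages = (data_sz + PAGE_SZ - 1) / PAGE_SZ;		/* 16384 */
		uint64_t l1_pages = (data_pages + PTRS_PER_PAGE - 1) / PTRS_PER_PAGE;	/* 32 */
		uint64_t l0_pages = (l1_pages + PTRS_PER_PAGE - 1) / PTRS_PER_PAGE;	/* 1 */

		/* One L1 page indexes 512 * 4K = 2MB, so 64MB of data needs all three levels */
		printf("levels: 3, data pages: %" PRIu64 ", L1 pages: %" PRIu64
		       ", L0 pages: %" PRIu64 "\n", data_pages, l1_pages, l0_pages);
		return 0;
	}
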
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index bf01df9b8..fa313c458 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -101,4 +101,34 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   uint32_t offset,
 			   enum hcapi_cfa_em_table_type table_type);
 
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+int tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		     uint32_t page_size);
+
 #endif /* _TF_EM_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 2626a59fe..8cc92c438 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -22,7 +22,6 @@
 
 #include "bnxt.h"
 
-
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
 #define PTU_PTE_NEXT_TO_LAST   0x4UL
@@ -30,20 +29,6 @@
 /* Number of pointers per page_size */
 #define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
 /**
  * EM DBs.
  */
@@ -294,203 +279,6 @@ tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
 }
 
-/**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
 /**
  * Unregisters EM Ctx in Firmware
  *
@@ -552,7 +340,7 @@ tf_em_ctx_reg(struct tf *tfp,
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
+			rc = tf_em_size_table(tbl, TF_EM_PAGE_SIZE);
 
 			if (rc)
 				goto cleanup;
@@ -578,403 +366,8 @@ tf_em_ctx_reg(struct tf *tfp,
 	return rc;
 }
 
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->rx_num_flows_in_k != 0 &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if (parms->tx_num_flows_in_k != 0 &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
-
-	return 0;
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t mask;
-	uint32_t key0_hash;
-	uint32_t key1_hash;
-	uint32_t key0_index;
-	uint32_t key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
-							  TF_KEY0_TABLE :
-							  TF_KEY1_TABLE)];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_insert_eem_entry(tbl_scope_cb, parms);
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
 int
-tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
-}
-
-int
-tf_em_ext_host_alloc(struct tf *tfp,
-		     struct tf_alloc_tbl_scope_parms *parms)
+tf_em_ext_alloc(struct tf *tfp, struct tf_alloc_tbl_scope_parms *parms)
 {
 	int rc;
 	enum tf_dir dir;
@@ -1081,7 +474,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 cleanup_full:
 	free_parms.tbl_scope_id = parms->tbl_scope_id;
-	tf_em_ext_host_free(tfp, &free_parms);
+	tf_em_ext_free(tfp, &free_parms);
 	return -EINVAL;
 
 cleanup:
@@ -1094,8 +487,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 }
 
 int
-tf_em_ext_host_free(struct tf *tfp,
-		    struct tf_free_tbl_scope_parms *parms)
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
 {
 	int rc = 0;
 	enum tf_dir  dir;
@@ -1136,75 +529,3 @@ tf_em_ext_host_free(struct tf *tfp,
 	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
 	return rc;
 }
-
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	uint32_t tbl_scope_id;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	tbl_scope_id = parms->tbl_scope_id;
-
-	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Get the table scope control block associated with the
-	 * external pool
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	op.opcode = HCAPI_CFA_HWOPS_PUT;
-	key_tbl.base0 =
-		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = parms->idx;
-	key_obj.data = parms->data;
-	key_obj.size = parms->data_sz_in_bytes;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	return rc;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 10768df03..c47c8b93f 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -4,11 +4,24 @@
  */
 
 #include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <stdbool.h>
+#include <math.h>
+#include <sys/param.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+#include <unistd.h>
+#include <string.h>
+
 #include <rte_common.h>
 #include <rte_errno.h>
 #include <rte_log.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
 #include "tf_em.h"
 #include "tf_em_common.h"
 #include "tf_msg.h"
@@ -18,103 +31,503 @@
 
 #include "bnxt.h"
 
+enum tf_em_req_type {
+	TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ = 5,
+};
 
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
+struct tf_em_bnxt_lfc_req_hdr {
+	uint32_t ver;
+	uint32_t bus;
+	uint32_t devfn;
+	enum tf_em_req_type req_type;
+};
+
+struct tf_em_bnxt_lfc_cfa_eem_std_hdr {
+	uint16_t version;
+	uint16_t size;
+	uint32_t flags;
+	#define TF_EM_BNXT_LFC_EEM_CFG_PRIMARY_FUNC     (1 << 0)
+};
+
+struct tf_em_bnxt_lfc_dmabuf_fd {
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+};
+
+#ifndef __user
+#define __user
+#endif
+
+struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req {
+	struct tf_em_bnxt_lfc_cfa_eem_std_hdr std;
+	uint8_t dir;
+	uint32_t flags;
+	void __user *dma_fd;
+};
+
+struct tf_em_bnxt_lfc_req {
+	struct tf_em_bnxt_lfc_req_hdr hdr;
+	union {
+		struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req
+		       eem_dmabuf_export_req;
+		uint64_t hreq;
+	} req;
+};
+
+#define TF_EEM_BNXT_LFC_IOCTL_MAGIC     0x98
+#define BNXT_LFC_REQ    \
+	_IOW(TF_EEM_BNXT_LFC_IOCTL_MAGIC, 1, struct tf_em_bnxt_lfc_req)
+
+/**
+ * EM DBs.
  */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_insert_em_entry_parms *parms __rte_unused)
+extern void *eem_db[TF_DIR_MAX];
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+static void
+tf_em_dmabuf_mem_unmap(struct hcapi_cfa_em_table *tbl)
 {
-	return 0;
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no, pg_count;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			if (tp->pg_va_tbl != NULL &&
+			    tp->pg_va_tbl[page_no] != NULL &&
+			    tp->pg_size != 0) {
+				(void)munmap(tp->pg_va_tbl[page_no],
+					     tp->pg_size);
+			}
+		}
+
+		tfp_free((void *)tp->pg_va_tbl);
+		tfp_free((void *)tp->pg_pa_tbl);
+	}
 }
 
-/** delete EEM hash entry API
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
  *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
  *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
+ * [in] dir
+ *   Receive or transmit direction
  */
+static void
+tf_em_ctx_unreg(struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+		tf_em_dmabuf_mem_unmap(tbl);
+	}
+}
+
+static int tf_export_tbl_scope(int lfc_fd,
+			       int *fd,
+			       int bus,
+			       int devfn)
+{
+	struct tf_em_bnxt_lfc_req tf_lfc_req;
+	struct tf_em_bnxt_lfc_dmabuf_fd *dma_fd;
+	struct tfp_calloc_parms  mparms;
+	int rc;
+
+	memset(&tf_lfc_req, 0, sizeof(struct tf_em_bnxt_lfc_req));
+	tf_lfc_req.hdr.ver = 1;
+	tf_lfc_req.hdr.bus = bus;
+	tf_lfc_req.hdr.devfn = devfn;
+	tf_lfc_req.hdr.req_type = TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ;
+	tf_lfc_req.req.eem_dmabuf_export_req.flags = O_ACCMODE;
+	tf_lfc_req.req.eem_dmabuf_export_req.std.version = 1;
+
+	mparms.nitems = 1;
+	mparms.size = sizeof(struct tf_em_bnxt_lfc_dmabuf_fd);
+	mparms.alignment = 0;
+	tfp_calloc(&mparms);
+	dma_fd = (struct tf_em_bnxt_lfc_dmabuf_fd *)mparms.mem_va;
+	tf_lfc_req.req.eem_dmabuf_export_req.dma_fd = dma_fd;
+
+	rc = ioctl(lfc_fd, BNXT_LFC_REQ, &tf_lfc_req);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EXT EEM export channel_fd %d, rc=%d\n",
+			    lfc_fd,
+			    rc);
+		tfp_free(dma_fd);
+		return rc;
+	}
+
+	memcpy(fd, dma_fd->fd, sizeof(dma_fd->fd));
+	tfp_free(dma_fd);
+
+	return rc;
+}
+
 static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_delete_em_entry_parms *parms __rte_unused)
+tf_em_dmabuf_mem_map(struct hcapi_cfa_em_table *tbl,
+		     int dmabuf_fd)
 {
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no;
+	uint32_t pg_count;
+	uint32_t offset;
+	struct tfp_calloc_parms parms;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		offset = 0;
+
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0)
+			return -ENOMEM;
+
+		tp->pg_va_tbl = parms.mem_va;
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0) {
+			tfp_free((void *)tp->pg_va_tbl);
+			return -ENOMEM;
+		}
+
+		tp->pg_pa_tbl = parms.mem_va;
+		tp->pg_count = 0;
+		tp->pg_size =  TF_EM_PAGE_SIZE;
+
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			tp->pg_va_tbl[page_no] = mmap(NULL,
+						      TF_EM_PAGE_SIZE,
+						      PROT_READ | PROT_WRITE,
+						      MAP_SHARED,
+						      dmabuf_fd,
+						      offset);
+			if (tp->pg_va_tbl[page_no] == (void *)-1) {
+				TFP_DRV_LOG(ERR,
+		"MMap memory error. level:%d page:%d pg_count:%d - %s\n",
+					    level,
+					    page_no,
+					    pg_count,
+					    strerror(errno));
+				return -ENOMEM;
+			}
+			offset += tp->pg_size;
+			tp->pg_count++;
+		}
+	}
+
 	return 0;
 }
 
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_insert_em_entry_parms *parms)
+static int tf_mmap_tbl_scope(struct tf_tbl_scope_cb *tbl_scope_cb,
+			     enum tf_dir dir,
+			     int tbl_type,
+			     int dmabuf_fd)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *tbl;
+
+	if (tbl_type == TF_EFC_TABLE)
+		return 0;
+
+	tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[tbl_type];
+	return tf_em_dmabuf_mem_map(tbl, dmabuf_fd);
+}
+
+#define TF_LFC_DEVICE "/dev/bnxt_lfc"
+
+static int
+tf_prepare_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	int lfc_fd;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	lfc_fd = open(TF_LFC_DEVICE, O_RDWR);
+	if (lfc_fd < 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: open %s device error\n",
+			    TF_LFC_DEVICE);
+		return -ENOENT;
 	}
 
-	return tf_insert_eem_entry
-		(tbl_scope_cb, parms);
+	tbl_scope_cb->lfc_fd = lfc_fd;
+
+	return 0;
 }
 
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_delete_em_entry_parms *parms)
+static int
+offload_system_mmap(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int rc;
+	int dmabuf_fd;
+	enum tf_dir dir;
+	enum hcapi_cfa_em_table_type tbl_type;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	rc = tf_prepare_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EEM: Prepare bnxt_lfc channel failed\n");
+		return rc;
 	}
 
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
+	rc = tf_export_tbl_scope(tbl_scope_cb->lfc_fd,
+				 (int *)tbl_scope_cb->fd,
+				 tbl_scope_cb->bus,
+				 tbl_scope_cb->devfn);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "export dmabuf fd failed\n");
+		return rc;
+	}
+
+	tbl_scope_cb->valid = true;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (tbl_type = TF_KEY0_TABLE; tbl_type <
+			     TF_MAX_TABLE; tbl_type++) {
+			if (tbl_type == TF_EFC_TABLE)
+				continue;
+
+			dmabuf_fd = tbl_scope_cb->fd[(dir ? 0 : 1)][tbl_type];
+			rc = tf_mmap_tbl_scope(tbl_scope_cb,
+					       dir,
+					       tbl_type,
+					       dmabuf_fd);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "dir:%d tbl:%d mmap failed rc %d\n",
+					    dir,
+					    tbl_type,
+					    rc);
+				break;
+			}
+		}
+	}
+	return 0;
 }
 
-int
-tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
-		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+static int
+tf_destroy_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	close(tbl_scope_cb->lfc_fd);
+
 	return 0;
 }
 
-int
-tf_em_ext_system_free(struct tf *tfp __rte_unused,
-		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+static int
+tf_dmabuf_alloc(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp,
+		tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries);
+	if (rc)
+		PMD_DRV_LOG(ERR, "EEM: Failed to prepare system memory rc:%d\n",
+			    rc);
+
 	return 0;
 }
 
-int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
-			  struct tf_tbl_set_parms *parms __rte_unused)
+static int
+tf_dmabuf_free(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp, 0);
+	if (rc)
+		TFP_DRV_LOG(ERR, "EEM: Failed to cleanup system memory\n");
+
+	tf_destroy_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+
 	return 0;
 }
+
+int
+tf_em_ext_alloc(struct tf *tfp,
+		struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_free_parms fparms = { 0 };
+	int dir;
+	int i;
+	struct hcapi_cfa_em_table *em_tables;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+	tbl_scope_cb->bus = tfs->session_id.internal.bus;
+	tbl_scope_cb->devfn = tfs->session_id.internal.device;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	rc = tf_dmabuf_alloc(tfp, tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System DMA buff alloc failed\n");
+		return -EIO;
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+			if (i == TF_EFC_TABLE)
+				continue;
+
+			em_tables =
+				&tbl_scope_cb->em_ctx_info[dir].em_tables[i];
+
+			rc = tf_em_size_table(em_tables, TF_EM_PAGE_SIZE);
+			if (rc) {
+				TFP_DRV_LOG(ERR, "Size table failed\n");
+				goto cleanup;
+			}
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_create_tbl_pool_external(dir,
+					tbl_scope_cb,
+					em_tables[TF_RECORD_TABLE].num_entries,
+					em_tables[TF_RECORD_TABLE].entry_size);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	rc = offload_system_mmap(tbl_scope_cb);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System alloc mmap failed\n");
+		goto cleanup_full;
+	}
+
+	return rc;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int dir;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+
+	/* Free Table control block */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+
+		/* Unmap memory */
+		tf_em_ctx_unreg(tbl_scope_cb, dir);
+
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+	}
+
+	tf_dmabuf_free(tfp, tbl_scope_cb);
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
index 54d4c37f5..7eb72bd42 100644
--- a/drivers/net/bnxt/tf_core/tf_if_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -113,7 +113,7 @@ struct tf_if_tbl_set_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -143,7 +143,7 @@ struct tf_if_tbl_get_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [out] Entry size
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 035c0948d..ed506defa 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -813,7 +813,19 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = parms->hcapi_type;
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
@@ -869,7 +881,19 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct hwrm_tf_tcam_free_input req =  { 0 };
 	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(in_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = in_parms->hcapi_type;
 	req.count = 1;
 	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 2a10b47ce..f20e8d729 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -38,6 +38,13 @@ struct tf_em_caps {
  */
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
+#ifdef TF_USE_SYSTEM_MEM
+	int lfc_fd;
+	uint32_t bus;
+	uint32_t devfn;
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+	bool valid;
+#endif
 	int index;
 	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps em_caps[TF_DIR_MAX];
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 426a182a9..3eade3127 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -87,6 +87,18 @@ tfp_send_msg_tunneled(struct tf *tfp,
 	return rc;
 }
 
+#ifdef TF_USE_SYSTEM_MEM
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows)
+{
+	return bnxt_hwrm_oem_cmd(container_of(tfp,
+					      struct bnxt,
+					      tfp),
+				 max_flows);
+}
+#endif /* TF_USE_SYSTEM_MEM */
+
 /**
  * Allocates zero'ed memory from the heap.
  *
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8789eba1f..421a7d9f7 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -170,6 +170,21 @@ int
 tfp_msg_hwrm_oem_cmd(struct tf *tfp,
 		     uint32_t max_flows);
 
+/**
+ * Sends OEM command message to Chimp
+ *
+ * [in] tfp, pointer to TruFlow handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
 /**
  * Allocates zero'ed memory from the heap.
  *
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 32/51] net/bnxt: integrate with the latest tf core changes
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (30 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
                           ` (18 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Somnath Kotur, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

ULP changes to integrate with the latest session open interface in
tf_core, which now expects the per-direction resource reservation
counts to be filled in before tf_open_session() is called.
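
For reference, a minimal sketch of the new open-session pattern, matching
the parameters used in the diff below. It is illustrative only: the counts
are example values, the usual bnxt/tf_core headers are assumed to be
included, and the control-channel name setup and error handling are
trimmed.

/* Sketch: reserve a few RX resources and open a TruFlow session. */
static int example_open_tf_session(struct bnxt *bp)
{
	struct tf_open_session_parms params;
	struct tf_session_resources *res;

	memset(&params, 0, sizeof(params));
	params.shadow_copy = false;
	params.device_type = TF_DEVICE_TYPE_WH;

	res = &params.resources;
	res->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
	res->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 720;
	res->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 416;
	res->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;

	return tf_open_session(&bp->tfp, &params);
}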

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c | 46 ++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index c7281ab9a..a9ed5d92a 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -68,6 +68,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 	struct rte_eth_dev		*ethdev = bp->eth_dev;
 	int32_t				rc = 0;
 	struct tf_open_session_parms	params;
+	struct tf_session_resources	*resources;
 
 	memset(&params, 0, sizeof(params));
 
@@ -79,6 +80,51 @@ ulp_ctx_session_open(struct bnxt *bp,
 		return rc;
 	}
 
+	params.shadow_copy = false;
+	params.device_type = TF_DEVICE_TYPE_WH;
+	resources = &params.resources;
+	/** RX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 720;
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 720;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 16;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 416;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;
+
+	/** TX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_L2_CTXT] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 8;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 8;
+
+	/* EEM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+
 	rc = tf_open_session(&bp->tfp, &params);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 33/51] net/bnxt: add support for internal encap records
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (31 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 34/51] net/bnxt: add support for if table processing Ajit Khaparde
                           ` (17 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Somnath Kotur, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

Modifications to allow internal encap records to be supported:
- Modified the mapper index table processing to handle encap without an
  action record
- Modified the session open code to reserve some 64 Byte internal encap
  records on tx
- Modified the blob encap swap to support encap without action record

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c   |  3 +++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c | 29 +++++++++++++---------------
 drivers/net/bnxt/tf_ulp/ulp_utils.c  |  2 +-
 3 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index a9ed5d92a..4c1a1c44c 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -113,6 +113,9 @@ ulp_ctx_session_open(struct bnxt *bp,
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
 
+	/* ENCAP */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 734db7c6c..a9a625f9f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1473,7 +1473,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
 
-	if (!flds || !num_flds) {
+	if (!flds || (!num_flds && !encap_flds)) {
 		BNXT_TF_DBG(ERR, "template undefined for the index table\n");
 		return -EINVAL;
 	}
@@ -1482,7 +1482,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	for (i = 0; i < (num_flds + encap_flds); i++) {
 		/* set the swap index if encap swap bit is enabled */
 		if (parms->device_params->encap_byte_swap && encap_flds &&
-		    ((i + 1) == num_flds))
+		    (i == num_flds))
 			ulp_blob_encap_swap_idx_set(&data);
 
 		/* Process the result fields */
@@ -1495,18 +1495,15 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			BNXT_TF_DBG(ERR, "data field failed\n");
 			return rc;
 		}
+	}
 
-		/* if encap bit swap is enabled perform the bit swap */
-		if (parms->device_params->encap_byte_swap && encap_flds) {
-			if ((i + 1) == (num_flds + encap_flds))
-				ulp_blob_perform_encap_swap(&data);
+	/* if encap bit swap is enabled perform the bit swap */
+	if (parms->device_params->encap_byte_swap && encap_flds) {
+		ulp_blob_perform_encap_swap(&data);
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-			if ((i + 1) == (num_flds + encap_flds)) {
-				BNXT_TF_DBG(INFO, "Dump fter encap swap\n");
-				ulp_mapper_blob_dump(&data);
-			}
+		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
+		ulp_mapper_blob_dump(&data);
 #endif
-		}
 	}
 
 	/* Perform the tf table allocation by filling the alloc params */
@@ -1817,6 +1814,11 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
+			if (rc) {
+				BNXT_TF_DBG(ERR, "Resource type %d failed\n",
+					    tbl->resource_func);
+				return rc;
+			}
 			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected action resource %d\n",
@@ -1824,11 +1826,6 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 			return -EINVAL;
 		}
 	}
-	if (rc) {
-		BNXT_TF_DBG(ERR, "Resource type %d failed\n",
-			    tbl->resource_func);
-		return rc;
-	}
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
index 3a4157f22..3afaac647 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_utils.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -478,7 +478,7 @@ ulp_blob_perform_encap_swap(struct ulp_blob *blob)
 		BNXT_TF_DBG(ERR, "invalid argument\n");
 		return; /* failure */
 	}
-	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx + 1);
+	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx);
 	end_idx = ULP_BITS_2_BYTE(blob->write_idx);
 
 	while (idx <= end_idx) {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 34/51] net/bnxt: add support for if table processing
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (32 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
                           ` (16 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for if table processing in the ulp mapper
layer. This enables support for the default partition action
record pointer interface table.
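
As a rough illustration of what the new if-table step boils down to, here
is a hedged sketch of the tf_set_if_tbl_entry() call the mapper builds.
The struct and field names come from this patch, while the 'entry' buffer
and 'parif_idx' parameters are hypothetical placeholders for the result
blob and the computed-field index used by the real code.

/* Sketch only: caller supplies the result blob and the table index. */
static int example_set_if_tbl(struct tf *tfp, enum tf_dir dir, int type,
			      uint32_t parif_idx, uint8_t *entry,
			      uint16_t entry_sz)
{
	struct tf_set_if_tbl_entry_parms iftbl_params = { 0 };

	iftbl_params.dir = dir;
	iftbl_params.type = type;
	iftbl_params.data = entry;
	iftbl_params.data_sz_in_bytes = entry_sz;
	iftbl_params.idx = parif_idx;

	return tf_set_if_tbl_entry(tfp, &iftbl_params);
}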

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |   1 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   2 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 141 +++++++++++++++---
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   1 +
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 117 ++++++++-------
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   8 +-
 6 files changed, 187 insertions(+), 83 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4c1a1c44c..4835b951e 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -115,6 +115,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 
 	/* ENCAP */
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_16B] = 16;
 
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 22996e50e..384dc5b2c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -933,7 +933,7 @@ ulp_default_flow_db_cfa_action_get(struct bnxt_ulp_context *ulp_ctx,
 				   uint32_t flow_id,
 				   uint32_t *cfa_action)
 {
-	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX;
+	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION;
 	uint64_t hndl;
 	int32_t rc;
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index a9a625f9f..42bb98557 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -184,7 +184,8 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
 	return &ulp_act_tbl_list[idx];
 }
 
-/** Get a list of classifier tables that implement the flow
+/*
+ * Get a list of classifier tables that implement the flow
  * Gets a device dependent list of tables that implement the class template id
  *
  * dev_id [in] The device id of the forwarding element
@@ -193,13 +194,16 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
  *
  * num_tbls [out] The number of classifier tables in the returned array
  *
+ * fdb_tbl_idx [out] The flow database index Regular or default
+ *
  * returns An array of classifier tables to implement the flow, or NULL on
  * error
  */
 static struct bnxt_ulp_mapper_tbl_info *
 ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 			      uint32_t tid,
-			      uint32_t *num_tbls)
+			      uint32_t *num_tbls,
+			      uint32_t *fdb_tbl_idx)
 {
 	uint32_t idx;
 	uint32_t tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
@@ -212,7 +216,7 @@ ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 	 */
 	idx		= ulp_class_tmpl_list[tidx].start_tbl_idx;
 	*num_tbls	= ulp_class_tmpl_list[tidx].num_tbls;
-
+	*fdb_tbl_idx = ulp_class_tmpl_list[tidx].flow_db_table_type;
 	return &ulp_class_tbl_list[idx];
 }
 
@@ -256,7 +260,8 @@ ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
  */
 static struct bnxt_ulp_mapper_result_field_info *
 ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
-			     uint32_t *num_flds)
+			     uint32_t *num_flds,
+			     uint32_t *num_encap_flds)
 {
 	uint32_t idx;
 
@@ -265,6 +270,7 @@ ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
 
 	idx		= tbl->result_start_idx;
 	*num_flds	= tbl->result_num_fields;
+	*num_encap_flds = tbl->encap_num_fields;
 
 	/* NOTE: Need template to provide range checking define */
 	return &ulp_class_result_field_list[idx];
@@ -1146,6 +1152,7 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		struct bnxt_ulp_mapper_result_field_info *dflds;
 		struct bnxt_ulp_mapper_ident_info *idents;
 		uint32_t num_dflds, num_idents;
+		uint32_t encap_flds = 0;
 
 		/*
 		 * Since the cache entry is responsible for allocating
@@ -1166,8 +1173,9 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 
 		/* Create the result data blob */
-		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-		if (!dflds || !num_dflds) {
+		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds,
+						     &encap_flds);
+		if (!dflds || !num_dflds || encap_flds) {
 			BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 			rc = -EINVAL;
 			goto error;
@@ -1293,6 +1301,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	int32_t	trc;
 	enum bnxt_ulp_flow_mem_type mtype = parms->device_params->flow_mem_type;
 	int32_t rc = 0;
+	uint32_t encap_flds = 0;
 
 	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
 	if (!kflds || !num_kflds) {
@@ -1327,8 +1336,8 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	 */
 
 	/* Create the result data blob */
-	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-	if (!dflds || !num_dflds) {
+	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds, &encap_flds);
+	if (!dflds || !num_dflds || encap_flds) {
 		BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 		return -EINVAL;
 	}
@@ -1468,7 +1477,8 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Get the result fields list */
 	if (is_class_tbl)
-		flds = ulp_mapper_result_fields_get(tbl, &num_flds);
+		flds = ulp_mapper_result_fields_get(tbl, &num_flds,
+						    &encap_flds);
 	else
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
@@ -1761,6 +1771,76 @@ ulp_mapper_cache_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	return rc;
 }
 
+static int32_t
+ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			  struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_result_field_info *flds;
+	struct ulp_blob	data;
+	uint64_t idx;
+	uint16_t tmplen;
+	uint32_t i, num_flds;
+	int32_t rc = 0;
+	struct tf_set_if_tbl_entry_parms iftbl_params = { 0 };
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	uint32_t encap_flds;
+
+	/* Initialize the blob data */
+	if (!ulp_blob_init(&data, tbl->result_bit_size,
+			   parms->device_params->byte_order)) {
+		BNXT_TF_DBG(ERR, "Failed initial index table blob\n");
+		return -EINVAL;
+	}
+
+	/* Get the result fields list */
+	flds = ulp_mapper_result_fields_get(tbl, &num_flds, &encap_flds);
+
+	if (!flds || !num_flds || encap_flds) {
+		BNXT_TF_DBG(ERR, "template undefined for the IF table\n");
+		return -EINVAL;
+	}
+
+	/* process the result fields, loop through them */
+	for (i = 0; i < num_flds; i++) {
+		/* Process the result fields */
+		rc = ulp_mapper_result_field_process(parms,
+						     tbl->direction,
+						     &flds[i],
+						     &data,
+						     "IFtable Result");
+		if (rc) {
+			BNXT_TF_DBG(ERR, "data field failed\n");
+			return rc;
+		}
+	}
+
+	/* Get the index details from computed field */
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+
+	/* Perform the tf table set by filling the set params */
+	iftbl_params.dir = tbl->direction;
+	iftbl_params.type = tbl->resource_type;
+	iftbl_params.data = ulp_blob_data_get(&data, &tmplen);
+	iftbl_params.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+	iftbl_params.idx = idx;
+
+	rc = tf_set_if_tbl_entry(tfp, &iftbl_params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Set table[%d][%s][%d] failed rc=%d\n",
+			    iftbl_params.type,
+			    (iftbl_params.dir == TF_DIR_RX) ? "RX" : "TX",
+			    iftbl_params.idx,
+			    rc);
+		return rc;
+	}
+
+	/*
+	 * TBD: Need to look at the need to store idx in flow db for restoring
+	 * the table to its original state on deletion of this entry.
+	 */
+	return rc;
+}
+
 static int32_t
 ulp_mapper_glb_resource_info_init(struct tf *tfp,
 				  struct bnxt_ulp_mapper_data *mapper_data)
@@ -1862,6 +1942,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE:
 			rc = ulp_mapper_cache_tbl_process(parms, tbl);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_IF_TABLE:
+			rc = ulp_mapper_if_tbl_process(parms, tbl);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected class resource %d\n",
 				    tbl->resource_func);
@@ -2064,20 +2147,29 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Get the action table entry from device id and act context id */
 	parms.act_tid = cparms->act_tid;
-	parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
-						     parms.act_tid,
-						     &parms.num_atbls);
-	if (!parms.atbls || !parms.num_atbls) {
-		BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		return -EINVAL;
+
+	/*
+	 * Perform the action table get only if act template is not zero
+	 * for act template zero like for default rules ignore the action
+	 * table processing.
+	 */
+	if (parms.act_tid) {
+		parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
+							     parms.act_tid,
+							     &parms.num_atbls);
+		if (!parms.atbls || !parms.num_atbls) {
+			BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			return -EINVAL;
+		}
 	}
 
 	/* Get the class table entry from device id and act context id */
 	parms.class_tid = cparms->class_tid;
 	parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
 						    parms.class_tid,
-						    &parms.num_ctbls);
+						    &parms.num_ctbls,
+						    &parms.tbl_idx);
 	if (!parms.ctbls || !parms.num_ctbls) {
 		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
 			    parms.dev_id, parms.class_tid);
@@ -2111,7 +2203,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	 * free each of them.
 	 */
 	rc = ulp_flow_db_fid_alloc(ulp_ctx,
-				   BNXT_ULP_REGULAR_FLOW_TABLE,
+				   parms.tbl_idx,
 				   cparms->func_id,
 				   &parms.fid);
 	if (rc) {
@@ -2120,11 +2212,14 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	}
 
 	/* Process the action template list from the selected action table*/
-	rc = ulp_mapper_action_tbls_process(&parms);
-	if (rc) {
-		BNXT_TF_DBG(ERR, "action tables failed creation for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		goto flow_error;
+	if (parms.act_tid) {
+		rc = ulp_mapper_action_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "action tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			goto flow_error;
+		}
 	}
 
 	/* All good. Now process the class template */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 89c08ab25..517422338 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -256,6 +256,7 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 			BNXT_TF_DBG(ERR, "Mark index greater than allocated\n");
 			return -EINVAL;
 		}
+		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
 	}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index ac84f88e9..66343b918 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -88,35 +88,36 @@ enum bnxt_ulp_byte_order {
 };
 
 enum bnxt_ulp_cf_idx {
-	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 0,
-	BNXT_ULP_CF_IDX_O_VTAG_NUM = 1,
-	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 2,
-	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 3,
-	BNXT_ULP_CF_IDX_I_VTAG_NUM = 4,
-	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 5,
-	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 6,
-	BNXT_ULP_CF_IDX_INCOMING_IF = 7,
-	BNXT_ULP_CF_IDX_DIRECTION = 8,
-	BNXT_ULP_CF_IDX_SVIF_FLAG = 9,
-	BNXT_ULP_CF_IDX_O_L3 = 10,
-	BNXT_ULP_CF_IDX_I_L3 = 11,
-	BNXT_ULP_CF_IDX_O_L4 = 12,
-	BNXT_ULP_CF_IDX_I_L4 = 13,
-	BNXT_ULP_CF_IDX_DEV_PORT_ID = 14,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 15,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 16,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 17,
-	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 18,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 19,
-	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 20,
-	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 21,
-	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 22,
-	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 23,
-	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 24,
-	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 25,
-	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 26,
-	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 27,
-	BNXT_ULP_CF_IDX_LAST = 28
+	BNXT_ULP_CF_IDX_NOT_USED = 0,
+	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 1,
+	BNXT_ULP_CF_IDX_O_VTAG_NUM = 2,
+	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 3,
+	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 4,
+	BNXT_ULP_CF_IDX_I_VTAG_NUM = 5,
+	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 6,
+	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 7,
+	BNXT_ULP_CF_IDX_INCOMING_IF = 8,
+	BNXT_ULP_CF_IDX_DIRECTION = 9,
+	BNXT_ULP_CF_IDX_SVIF_FLAG = 10,
+	BNXT_ULP_CF_IDX_O_L3 = 11,
+	BNXT_ULP_CF_IDX_I_L3 = 12,
+	BNXT_ULP_CF_IDX_O_L4 = 13,
+	BNXT_ULP_CF_IDX_I_L4 = 14,
+	BNXT_ULP_CF_IDX_DEV_PORT_ID = 15,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 16,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 17,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 18,
+	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 19,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 20,
+	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 21,
+	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 22,
+	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 23,
+	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 24,
+	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 25,
+	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
+	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
+	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
+	BNXT_ULP_CF_IDX_LAST = 29
 };
 
 enum bnxt_ulp_critical_resource {
@@ -133,11 +134,6 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
-enum bnxt_ulp_df_param_type {
-	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
-	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
-};
-
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
@@ -154,7 +150,8 @@ enum bnxt_ulp_flow_mem_type {
 enum bnxt_ulp_glb_regfile_index {
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 2
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
 };
 
 enum bnxt_ulp_hdr_type {
@@ -204,22 +201,22 @@ enum bnxt_ulp_priority {
 };
 
 enum bnxt_ulp_regfile_index {
-	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 0,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 1,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 2,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 3,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 4,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 5,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 6,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 7,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 8,
-	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 9,
-	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 10,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 11,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 12,
-	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 13,
-	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 14,
-	BNXT_ULP_REGFILE_INDEX_NOT_USED = 15,
+	BNXT_ULP_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 1,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 2,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 3,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 4,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 5,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 6,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 7,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 8,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 9,
+	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 10,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 11,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 12,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
+	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
+	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
 	BNXT_ULP_REGFILE_INDEX_LAST = 16
 };
 
@@ -265,10 +262,10 @@ enum bnxt_ulp_resource_func {
 enum bnxt_ulp_resource_sub_type {
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT_INDEX = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT_INDEX = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX = 1,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
 	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
 };
 
@@ -282,7 +279,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
 	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_BIG_ENDIAN = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
@@ -398,7 +394,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
 	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
 	BNXT_ULP_SYM_NO = 0,
@@ -489,6 +484,11 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_YES = 1
 };
 
+enum bnxt_ulp_wh_plus {
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+};
+
 enum bnxt_ulp_act_prop_sz {
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ = 4,
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ = 4,
@@ -588,4 +588,9 @@ enum bnxt_ulp_act_hid {
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
 	BNXT_ULP_ACT_HID_0040 = 0x0040
 };
+
+enum bnxt_ulp_df_tpl {
+	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5c4335847..1188223aa 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -150,9 +150,10 @@ struct bnxt_ulp_device_params {
 
 /* Flow Mapper */
 struct bnxt_ulp_mapper_tbl_list_info {
-	uint32_t	device_name;
-	uint32_t	start_tbl_idx;
-	uint32_t	num_tbls;
+	uint32_t		device_name;
+	uint32_t		start_tbl_idx;
+	uint32_t		num_tbls;
+	enum bnxt_ulp_fdb_type	flow_db_table_type;
 };
 
 struct bnxt_ulp_mapper_tbl_info {
@@ -183,6 +184,7 @@ struct bnxt_ulp_mapper_tbl_info {
 
 	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
+	uint32_t			comp_field_idx;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 35/51] net/bnxt: disable Tx vector mode if truflow is enabled
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (33 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 34/51] net/bnxt: add support for if table processing Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
                           ` (15 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Somnath Kotur, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

The vector mode in the tx handler is disabled when truflow is
enabled since truflow now requires bd action record support.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 697cd6651..355025741 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1116,12 +1116,15 @@ bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev)
 {
 #ifdef RTE_ARCH_X86
 #ifndef RTE_LIBRTE_IEEE1588
+	struct bnxt *bp = eth_dev->data->dev_private;
+
 	/*
 	 * Vector mode transmit can be enabled only if not using scatter rx
 	 * or tx offloads.
 	 */
 	if (!eth_dev->data->scattered_rx &&
-	    !eth_dev->data->dev_conf.txmode.offloads) {
+	    !eth_dev->data->dev_conf.txmode.offloads &&
+	    !BNXT_TRUFLOW_EN(bp)) {
 		PMD_DRV_LOG(INFO, "Using vector mode transmit for port %d\n",
 			    eth_dev->data->port_id);
 		return bnxt_xmit_pkts_vec;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 36/51] net/bnxt: add index opcode and operand to mapper table
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (34 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
                           ` (14 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Extended the regfile and computed field operations into a common
index opcode operation, and added global resource operations as
part of the same index opcode handling.
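
A hedged sketch of how the mapper consumes the new opcode/operand pair,
simplified from the diff below: the helper name is hypothetical and the
ALLOCATE case is reduced to a comment, since the real code allocates the
index from TF and then writes it to the regfile slot.

/* Sketch: resolve a table index from the opcode/operand pair. */
static int example_resolve_index(struct bnxt_ulp_mapper_parms *parms,
				 struct bnxt_ulp_mapper_tbl_info *tbl,
				 uint64_t *idx)
{
	switch (tbl->index_opcode) {
	case BNXT_ULP_INDEX_OPCODE_ALLOCATE:
		/* Index allocated from TF, then stored in the regfile
		 * slot named by index_operand.
		 */
		return 0;
	case BNXT_ULP_INDEX_OPCODE_GLOBAL:
		/* Index comes from the per-direction global regfile. */
		return ulp_mapper_glb_resource_read(parms->mapper_data,
						    tbl->direction,
						    tbl->index_operand, idx);
	case BNXT_ULP_INDEX_OPCODE_COMP_FIELD:
		/* Index comes from a computed field. */
		*idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
		return 0;
	default:
		return -EINVAL;
	}
}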

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 56 ++++++++++++++++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  9 ++-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 45 +++++----------
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  8 +++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  4 +-
 5 files changed, 80 insertions(+), 42 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 42bb98557..7b3b3d698 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1443,7 +1443,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	struct bnxt_ulp_mapper_result_field_info *flds;
 	struct ulp_flow_db_res_params	fid_parms;
 	struct ulp_blob	data;
-	uint64_t idx;
+	uint64_t idx = 0;
 	uint16_t tmplen;
 	uint32_t i, num_flds;
 	int32_t rc = 0, trc = 0;
@@ -1516,6 +1516,42 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 	}
 
+	/*
+	 * Check for index opcode, if it is Global then
+	 * no need to allocate the table, just set the table
+	 * and exit since it is not maintained in the flow db.
+	 */
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_GLOBAL) {
+		/* get the index from index operand */
+		if (tbl->index_operand < BNXT_ULP_GLB_REGFILE_INDEX_LAST &&
+		    ulp_mapper_glb_resource_read(parms->mapper_data,
+						 tbl->direction,
+						 tbl->index_operand,
+						 &idx)) {
+			BNXT_TF_DBG(ERR, "Glbl regfile[%d] read failed.\n",
+				    tbl->index_operand);
+			return -EINVAL;
+		}
+		/* set the Tf index table */
+		sparms.dir		= tbl->direction;
+		sparms.type		= tbl->resource_type;
+		sparms.data		= ulp_blob_data_get(&data, &tmplen);
+		sparms.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+		sparms.idx		= tfp_be_to_cpu_64(idx);
+		sparms.tbl_scope_id	= tbl_scope_id;
+
+		rc = tf_set_tbl_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "Glbl Set table[%d][%s][%d] failed rc=%d\n",
+				    sparms.type,
+				    (sparms.dir == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx,
+				    rc);
+			return rc;
+		}
+		return 0; /* success */
+	}
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1546,11 +1582,13 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Always storing values in Regfile in BE */
 	idx = tfp_cpu_to_be_64(idx);
-	rc = ulp_regfile_write(parms->regfile, tbl->regfile_idx, idx);
-	if (!rc) {
-		BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
-			    tbl->regfile_idx);
-		goto error;
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_ALLOCATE) {
+		rc = ulp_regfile_write(parms->regfile, tbl->index_operand, idx);
+		if (!rc) {
+			BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
+				    tbl->index_operand);
+			goto error;
+		}
 	}
 
 	/* Perform the tf table set by filling the set params */
@@ -1815,7 +1853,11 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	}
 
 	/* Get the index details from computed field */
-	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+	if (tbl->index_opcode != BNXT_ULP_INDEX_OPCODE_COMP_FIELD) {
+		BNXT_TF_DBG(ERR, "Invalid tbl index opcode\n");
+		return -EINVAL;
+	}
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
 
 	/* Perform the tf table set by filling the set params */
 	iftbl_params.dir = tbl->direction;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 8af23eff1..9b14fa0bd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -76,7 +76,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -90,7 +91,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -104,7 +106,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	}
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index e773afd60..d4c7bfa4d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -113,8 +113,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 0,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -135,8 +134,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -157,8 +155,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -179,8 +176,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -201,8 +197,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -223,8 +218,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -245,8 +239,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -267,8 +260,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -289,8 +281,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -311,8 +302,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -333,8 +323,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -355,8 +344,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -377,8 +365,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -399,8 +386,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -421,8 +407,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 66343b918..0215a5dde 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -161,6 +161,14 @@ enum bnxt_ulp_hdr_type {
 	BNXT_ULP_HDR_TYPE_LAST = 3
 };
 
+enum bnxt_ulp_index_opcode {
+	BNXT_ULP_INDEX_OPCODE_NOT_USED = 0,
+	BNXT_ULP_INDEX_OPCODE_ALLOCATE = 1,
+	BNXT_ULP_INDEX_OPCODE_GLOBAL = 2,
+	BNXT_ULP_INDEX_OPCODE_COMP_FIELD = 3,
+	BNXT_ULP_INDEX_OPCODE_LAST = 4
+};
+
 enum bnxt_ulp_mapper_opc {
 	BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 1188223aa..a3ddd33fd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -182,9 +182,9 @@ struct bnxt_ulp_mapper_tbl_info {
 	uint32_t	ident_start_idx;
 	uint16_t	ident_nums;
 
-	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
-	uint32_t			comp_field_idx;
+	enum bnxt_ulp_index_opcode	index_opcode;
+	uint32_t			index_operand;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 37/51] net/bnxt: add support for global resource templates
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (35 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
                           ` (13 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for global resource templates, which are processed once at
startup so that the resources they allocate can be reused by the regular
templates.
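
For illustration, a minimal standalone sketch of the idea follows. The names
(glb_template_list, process_class_template) and the template IDs are
simplified stand-ins, not the driver's actual symbols; the real list is the
ulp_glb_template_tbl[] walked by ulp_mapper_glb_template_table_init() in the
diff below.

#include <stdint.h>
#include <stdio.h>

/* hypothetical template identifiers, for illustration only */
static const uint32_t glb_template_list[] = { 1, 2 };

/* stand-in for processing one class template at startup */
static int process_class_template(uint32_t tid)
{
	printf("pre-allocating resources for class template %u\n",
	       (unsigned int)tid);
	return 0;
}

int main(void)
{
	uint32_t num = sizeof(glb_template_list) / sizeof(glb_template_list[0]);
	uint32_t i;

	/* walk the global template list once at init; the resources it
	 * allocates are then reused by the regular flow templates
	 */
	for (i = 0; i < num; i++)
		if (process_class_template(glb_template_list[i]))
			return 1;
	return 0;
}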

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 178 +++++++++++++++++-
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |   1 +
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   3 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   6 +
 4 files changed, 181 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 7b3b3d698..6fd55b2a2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -80,15 +80,20 @@ ulp_mapper_glb_resource_write(struct bnxt_ulp_mapper_data *data,
  * returns 0 on success
  */
 static int32_t
-ulp_mapper_resource_ident_allocate(struct tf *tfp,
+ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 				   struct bnxt_ulp_mapper_data *mapper_data,
 				   struct bnxt_ulp_glb_resource_info *glb_res)
 {
 	struct tf_alloc_identifier_parms iparms = { 0 };
 	struct tf_free_identifier_parms fparms;
 	uint64_t regval;
+	struct tf *tfp;
 	int32_t rc = 0;
 
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
 	iparms.ident_type = glb_res->resource_type;
 	iparms.dir = glb_res->direction;
 
@@ -115,13 +120,76 @@ ulp_mapper_resource_ident_allocate(struct tf *tfp,
 		return rc;
 	}
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res[%s][%d][%d] = 0x%04x\n",
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
 		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
 		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
 #endif
 	return rc;
 }
 
+/*
+ * Internal function to allocate index tbl resource and store it in mapper data.
+ *
+ * returns 0 on success
+ */
+static int32_t
+ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
+				    struct bnxt_ulp_mapper_data *mapper_data,
+				    struct bnxt_ulp_glb_resource_info *glb_res)
+{
+	struct tf_alloc_tbl_entry_parms	aparms = { 0 };
+	struct tf_free_tbl_entry_parms	free_parms = { 0 };
+	uint64_t regval;
+	struct tf *tfp;
+	uint32_t tbl_scope_id;
+	int32_t rc = 0;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
+	/* Get the scope id */
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope rc=%d\n", rc);
+		return rc;
+	}
+
+	aparms.type = glb_res->resource_type;
+	aparms.dir = glb_res->direction;
+	aparms.search_enable = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO;
+	aparms.tbl_scope_id = tbl_scope_id;
+
+	/* Allocate the index tbl using tf api */
+	rc = tf_alloc_tbl_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to alloc index table [%s][%d]\n",
+			    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+			    aparms.type);
+		return rc;
+	}
+
+	/* entries are stored as big-endian format */
+	regval = tfp_cpu_to_be_64((uint64_t)aparms.idx);
+	/* write to the mapper global resource */
+	rc = ulp_mapper_glb_resource_write(mapper_data, glb_res, regval);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to write to global resource id\n");
+		/* Free the allocated table entry when the update fails */
+		free_parms.dir = aparms.dir;
+		free_parms.type = aparms.type;
+		free_parms.idx = aparms.idx;
+		tf_free_tbl_entry(tfp, &free_parms);
+		return rc;
+	}
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
+		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
+#endif
+	return rc;
+}
+
 /* Retrieve the cache initialization parameters for the tbl_idx */
 static struct bnxt_ulp_cache_tbl_params *
 ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
@@ -132,6 +200,16 @@ ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
 	return &ulp_cache_tbl_params[tbl_idx];
 }
 
+/* Retrieve the global template table */
+static uint32_t *
+ulp_mapper_glb_template_table_get(uint32_t *num_entries)
+{
+	if (!num_entries)
+		return NULL;
+	*num_entries = BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ;
+	return ulp_glb_template_tbl;
+}
+
 /*
  * Get the size of the action property for a given index.
  *
@@ -659,7 +737,10 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 			return -EINVAL;
 		}
 		act_bit = tfp_be_to_cpu_64(act_bit);
-		act_val = ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit);
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit))
+			act_val = 1;
+		else
+			act_val = 0;
 		if (fld->field_bit_size > ULP_BYTE_2_BITS(sizeof(act_val))) {
 			BNXT_TF_DBG(ERR, "%s field size is incorrect\n", name);
 			return -EINVAL;
@@ -1552,6 +1633,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 		return 0; /* success */
 	}
+
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1616,6 +1698,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	fid_parms.direction	= tbl->direction;
 	fid_parms.resource_func	= tbl->resource_func;
 	fid_parms.resource_type	= tbl->resource_type;
+	fid_parms.resource_sub_type = tbl->resource_sub_type;
 	fid_parms.resource_hndl	= aparms.idx;
 	fid_parms.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO;
 
@@ -1884,7 +1967,7 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 }
 
 static int32_t
-ulp_mapper_glb_resource_info_init(struct tf *tfp,
+ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 				  struct bnxt_ulp_mapper_data *mapper_data)
 {
 	struct bnxt_ulp_glb_resource_info *glb_res;
@@ -1901,15 +1984,23 @@ ulp_mapper_glb_resource_info_init(struct tf *tfp,
 	for (idx = 0; idx < num_glb_res_ids; idx++) {
 		switch (glb_res[idx].resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_IDENTIFIER:
-			rc = ulp_mapper_resource_ident_allocate(tfp,
+			rc = ulp_mapper_resource_ident_allocate(ulp_ctx,
 								mapper_data,
 								&glb_res[idx]);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+			rc = ulp_mapper_resource_index_tbl_alloc(ulp_ctx,
+								 mapper_data,
+								 &glb_res[idx]);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Global resource %x not supported\n",
 				    glb_res[idx].resource_func);
+			rc = -EINVAL;
 			break;
 		}
+		if (rc)
+			return rc;
 	}
 	return rc;
 }
@@ -2125,7 +2216,9 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 			res.resource_func = ent->resource_func;
 			res.direction = dir;
 			res.resource_type = ent->resource_type;
-			res.resource_hndl = ent->resource_hndl;
+			/* convert it from BE to CPU */
+			res.resource_hndl =
+				tfp_be_to_cpu_64(ent->resource_hndl);
 			ulp_mapper_resource_free(ulp_ctx, &res);
 		}
 	}
@@ -2144,6 +2237,71 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
 					 BNXT_ULP_REGULAR_FLOW_TABLE);
 }
 
+/* Function to handle the default global templates that are allocated during
+ * the startup and reused later.
+ */
+static int32_t
+ulp_mapper_glb_template_table_init(struct bnxt_ulp_context *ulp_ctx)
+{
+	uint32_t *glbl_tmpl_list;
+	uint32_t num_glb_tmpls, idx, dev_id;
+	struct bnxt_ulp_mapper_parms parms;
+	struct bnxt_ulp_mapper_data *mapper_data;
+	int32_t rc = 0;
+
+	glbl_tmpl_list = ulp_mapper_glb_template_table_get(&num_glb_tmpls);
+	if (!glbl_tmpl_list || !num_glb_tmpls)
+		return rc; /* No global templates to process */
+
+	/* Get the device id from the ulp context */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	mapper_data = bnxt_ulp_cntxt_ptr2_mapper_data_get(ulp_ctx);
+	if (!mapper_data) {
+		BNXT_TF_DBG(ERR, "Failed to get the ulp mapper data\n");
+		return -EINVAL;
+	}
+
+	/* Iterate the global resources and process each one */
+	for (idx = 0; idx < num_glb_tmpls; idx++) {
+		/* Initialize the parms structure */
+		memset(&parms, 0, sizeof(parms));
+		parms.tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+		parms.ulp_ctx = ulp_ctx;
+		parms.dev_id = dev_id;
+		parms.mapper_data = mapper_data;
+
+		/* Get the class table entry from dev id and class id */
+		parms.class_tid = glbl_tmpl_list[idx];
+		parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
+							    parms.class_tid,
+							    &parms.num_ctbls,
+							    &parms.tbl_idx);
+		if (!parms.ctbls || !parms.num_ctbls) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		parms.device_params = bnxt_ulp_device_params_get(parms.dev_id);
+		if (!parms.device_params) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		rc = ulp_mapper_class_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "class tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return rc;
+		}
+	}
+	return rc;
+}
+
 /* Function to handle the mapping of the Flow to be compatible
  * with the underlying hardware.
  */
@@ -2316,7 +2474,7 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 	}
 
 	/* Allocate the global resource ids */
-	rc = ulp_mapper_glb_resource_info_init(tfp, data);
+	rc = ulp_mapper_glb_resource_info_init(ulp_ctx, data);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to initialize global resource ids\n");
 		goto error;
@@ -2344,6 +2502,12 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 		}
 	}
 
+	rc = ulp_mapper_glb_template_table_init(ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize global templates\n");
+		goto error;
+	}
+
 	return 0;
 error:
 	/* Ignore the return code in favor of returning the original error. */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 0215a5dde..7c0dc5ee4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -26,6 +26,7 @@
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
 #define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 2efd11447..beca3baa7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -546,3 +546,6 @@ uint32_t bnxt_ulp_encap_vtag_map[] = {
 	[1] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
 	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
+
+uint32_t ulp_glb_template_tbl[] = {
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index a3ddd33fd..4bcd02ba2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -299,4 +299,10 @@ extern struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[];
  */
 extern struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[];
 
+/*
+ * The ulp_global template table is used to initialize default entries
+ * that could be reused by other templates.
+ */
+extern uint32_t ulp_glb_template_tbl[];
+
 #endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 38/51] net/bnxt: add support for internal exact match entries
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (36 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 39/51] net/bnxt: add conditional execution of mapper tables Ajit Khaparde
                           ` (12 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for the internal exact match entries.
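
For illustration, a standalone sketch of how the split resource functions map
to a memory type; the enums here are simplified stand-ins for the driver's
bnxt_ulp_resource_func and tf_mem values, not the actual definitions.

#include <stdio.h>

enum em_resource_func { EXT_EM_TABLE, INT_EM_TABLE };
enum em_mem_type { MEM_EXTERNAL, MEM_INTERNAL };

/* external EM entries live in host (EEM) memory,
 * internal EM entries live in on-chip memory
 */
static enum em_mem_type em_mem_from_func(enum em_resource_func f)
{
	return (f == EXT_EM_TABLE) ? MEM_EXTERNAL : MEM_INTERNAL;
}

int main(void)
{
	printf("internal EM -> %s\n",
	       em_mem_from_func(INT_EM_TABLE) == MEM_INTERNAL ?
	       "on-chip memory" : "host memory");
	return 0;
}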

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            | 38 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 13 +++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 21 ++++++----
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |  4 ++
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   |  6 +--
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 13 ++++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |  7 +++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  5 +++
 8 files changed, 85 insertions(+), 22 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4835b951e..1b52861d4 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -213,8 +213,27 @@ static int32_t
 ulp_eem_tbl_scope_init(struct bnxt *bp)
 {
 	struct tf_alloc_tbl_scope_parms params = {0};
+	uint32_t dev_id;
+	struct bnxt_ulp_device_params *dparms;
 	int rc;
 
+	/* Get the dev specific number of flows that need to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope alloc is not required\n");
+		return 0;
+	}
+
 	bnxt_init_tbl_scope_parms(bp, &params);
 
 	rc = tf_alloc_tbl_scope(&bp->tfp, &params);
@@ -240,6 +259,8 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 	struct tf_free_tbl_scope_parms	params = {0};
 	struct tf			*tfp;
 	int32_t				rc = 0;
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id;
 
 	if (!ulp_ctx || !ulp_ctx->cfg_data)
 		return -EINVAL;
@@ -254,6 +275,23 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 		return -EINVAL;
 	}
 
+	/* Get the dev specific number of flows that need to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope free is not required\n");
+		return 0;
+	}
+
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 384dc5b2c..7696de2a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -114,7 +114,8 @@ ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info *resource_info,
 	}
 
 	/* Store the handle as 64bit only for EM table entries */
-	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE &&
+	    params->resource_func != BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		resource_info->resource_hndl = (uint32_t)params->resource_hndl;
 		resource_info->resource_type = params->resource_type;
 		resource_info->resource_sub_type = params->resource_sub_type;
@@ -145,7 +146,8 @@ ulp_flow_db_res_info_to_params(struct ulp_fdb_resource_info *resource_info,
 	/* use the helper function to get the resource func */
 	params->resource_func = ulp_flow_db_resource_func_get(resource_info);
 
-	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    params->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		params->resource_hndl = resource_info->resource_em_handle;
 	} else if (params->resource_func & ULP_FLOW_DB_RES_FUNC_NEED_LOWER) {
 		params->resource_hndl = resource_info->resource_hndl;
@@ -908,7 +910,9 @@ ulp_flow_db_resource_hndl_get(struct bnxt_ulp_context *ulp_ctx,
 				}
 
 			} else if (resource_func ==
-				   BNXT_ULP_RESOURCE_FUNC_EM_TABLE){
+				   BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+				   resource_func ==
+				   BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 				*res_hndl = fid_res->resource_em_handle;
 				return 0;
 			}
@@ -966,7 +970,8 @@ static void ulp_flow_db_res_dump(struct ulp_fdb_resource_info	*r,
 
 	BNXT_TF_DBG(DEBUG, "Resource func = %x, nxt_resource_idx = %x\n",
 		    res_func, (ULP_FLOW_DB_RES_NXT_MASK & r->nxt_resource_idx));
-	if (res_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE)
+	if (res_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    res_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		BNXT_TF_DBG(DEBUG, "EM Handle = 0x%016" PRIX64 "\n",
 			    r->resource_em_handle);
 	else
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 6fd55b2a2..e2b771c9f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -556,15 +556,18 @@ ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp,
 }
 
 static inline int32_t
-ulp_mapper_eem_entry_free(struct bnxt_ulp_context *ulp,
-			  struct tf *tfp,
-			  struct ulp_flow_db_res_params *res)
+ulp_mapper_em_entry_free(struct bnxt_ulp_context *ulp,
+			 struct tf *tfp,
+			 struct ulp_flow_db_res_params *res)
 {
 	struct tf_delete_em_entry_parms fparms = { 0 };
 	int32_t rc;
 
 	fparms.dir		= res->direction;
-	fparms.mem		= TF_MEM_EXTERNAL;
+	if (res->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE)
+		fparms.mem = TF_MEM_EXTERNAL;
+	else
+		fparms.mem = TF_MEM_INTERNAL;
 	fparms.flow_handle	= res->resource_hndl;
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
@@ -1443,7 +1446,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 
 	/* do the transpose for the internal EM keys */
-	if (tbl->resource_type == TF_MEM_INTERNAL)
+	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		ulp_blob_perform_byte_reverse(&key);
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
@@ -2066,7 +2069,8 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
 			break;
-		case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
 			rc = ulp_mapper_em_tbl_process(parms, tbl);
 			break;
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
@@ -2119,8 +2123,9 @@ ulp_mapper_resource_free(struct bnxt_ulp_context *ulp,
 	case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 		rc = ulp_mapper_tcam_entry_free(ulp, tfp, res);
 		break;
-	case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
-		rc = ulp_mapper_eem_entry_free(ulp, tfp, res);
+	case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+	case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
+		rc = ulp_mapper_em_entry_free(ulp, tfp, res);
 		break;
 	case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 		rc = ulp_mapper_index_entry_free(ulp, tfp, res);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 517422338..b3527eccb 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -87,6 +87,9 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	/* Need to allocate 2 * Num flows to account for hash type bit */
 	mark_tbl->gfid_num_entries = dparms->mark_db_gfid_entries;
+	if (!mark_tbl->gfid_num_entries)
+		goto gfid_not_required;
+
 	mark_tbl->gfid_tbl = rte_zmalloc("ulp_rx_eem_flow_mark_table",
 					 mark_tbl->gfid_num_entries *
 					 sizeof(struct bnxt_gfid_mark_info),
@@ -109,6 +112,7 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 		    mark_tbl->gfid_num_entries - 1,
 		    mark_tbl->gfid_mask);
 
+gfid_not_required:
 	/* Add the mark tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
 	return 0;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index d4c7bfa4d..8eb559050 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -179,7 +179,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -284,7 +284,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -389,7 +389,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 7c0dc5ee4..3168d29a9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -149,10 +149,11 @@ enum bnxt_ulp_flow_mem_type {
 };
 
 enum bnxt_ulp_glb_regfile_index {
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
+	BNXT_ULP_GLB_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 1,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR = 3,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 4
 };
 
 enum bnxt_ulp_hdr_type {
@@ -257,8 +258,8 @@ enum bnxt_ulp_match_type_bitmask {
 
 enum bnxt_ulp_resource_func {
 	BNXT_ULP_RESOURCE_FUNC_INVALID = 0x00,
-	BNXT_ULP_RESOURCE_FUNC_EM_TABLE = 0x20,
-	BNXT_ULP_RESOURCE_FUNC_RSVD1 = 0x40,
+	BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE = 0x20,
+	BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE = 0x40,
 	BNXT_ULP_RESOURCE_FUNC_RSVD2 = 0x60,
 	BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE = 0x80,
 	BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE = 0x81,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index beca3baa7..7c440e3a4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -321,7 +321,12 @@ struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	.mark_db_gfid_entries   = 65536,
 	.flow_count_db_entries  = 16384,
 	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2
+	.num_phy_ports          = 2,
+	.ext_cntr_table_type    = 0,
+	.byte_count_mask        = 0x00000003ffffffff,
+	.packet_count_mask      = 0xfffffffc00000000,
+	.byte_count_shift       = 0,
+	.packet_count_shift     = 36
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 4bcd02ba2..5a7a7b910 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -146,6 +146,11 @@ struct bnxt_ulp_device_params {
 	uint64_t			flow_db_num_entries;
 	uint32_t			flow_count_db_entries;
 	uint32_t			num_resources_per_flow;
+	uint32_t			ext_cntr_table_type;
+	uint64_t			byte_count_mask;
+	uint64_t			packet_count_mask;
+	uint32_t			byte_count_shift;
+	uint32_t			packet_count_shift;
 };
 
 /* Flow Mapper */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 39/51] net/bnxt: add conditional execution of mapper tables
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (37 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 40/51] net/bnxt: enable port MAC qcfg for trusted VF Ajit Khaparde
                           ` (11 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for conditional execution of the mapper tables, so that a
table is processed only when its condition is satisfied; for example, the
counter table is processed only if the count action is configured.
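
For illustration, a standalone sketch of the skip decision; the opcode and
bitmap types are simplified stand-ins for the driver's structures, and the
action bit value is made up for the example.

#include <stdint.h>
#include <stdio.h>

enum cond_opcode { COND_NOP, COND_ACTION_BIT };

struct tbl_info {
	enum cond_opcode cond_opcode;
	uint64_t cond_operand;
};

/* returns 1 to skip the table, 0 to process it */
static int tbl_should_skip(const struct tbl_info *t, uint64_t act_bits)
{
	switch (t->cond_opcode) {
	case COND_NOP:
		return 0;	/* unconditional table */
	case COND_ACTION_BIT:
		/* process only when the action bit (e.g. count) is set */
		return !(act_bits & t->cond_operand);
	default:
		return 1;
	}
}

int main(void)
{
	const uint64_t act_bit_count = 1ULL << 5;	/* illustrative bit */
	struct tbl_info counter_tbl = { COND_ACTION_BIT, act_bit_count };

	printf("flow without count action: skip=%d\n",
	       tbl_should_skip(&counter_tbl, 0));
	printf("flow with count action:    skip=%d\n",
	       tbl_should_skip(&counter_tbl, act_bit_count));
	return 0;
}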

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 45 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  1 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |  8 ++++
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 12 ++++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  2 +
 5 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index e2b771c9f..d0931d411 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2008,6 +2008,44 @@ ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 	return rc;
 }
 
+/*
+ * Function to process the conditional opcode of the mapper table.
+ * returns 1 to skip the table.
+ * returns 0 to continue processing the table.
+ */
+static int32_t
+ulp_mapper_tbl_cond_opcode_process(struct bnxt_ulp_mapper_parms *parms,
+				   struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	int32_t rc = 1;
+
+	switch (tbl->cond_opcode) {
+	case BNXT_ULP_COND_OPCODE_NOP:
+		rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_COMP_FIELD:
+		if (tbl->cond_operand < BNXT_ULP_CF_IDX_LAST &&
+		    ULP_COMP_FLD_IDX_RD(parms, tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_ACTION_BIT:
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_HDR_BIT:
+		if (ULP_BITMAP_ISSET(parms->hdr_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	default:
+		BNXT_TF_DBG(ERR,
+			    "Invalid arg in mapper tbl for cond opcode\n");
+		break;
+	}
+	return rc;
+}
+
 /*
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
@@ -2027,6 +2065,9 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	for (i = 0; i < parms->num_atbls; i++) {
 		tbl = &parms->atbls[i];
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
@@ -2065,6 +2106,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 	for (i = 0; i < parms->num_ctbls; i++) {
 		struct bnxt_ulp_mapper_tbl_info *tbl = &parms->ctbls[i];
 
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
@@ -2326,6 +2370,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	memset(&parms, 0, sizeof(parms));
 	parms.act_prop = cparms->act_prop;
 	parms.act_bitmap = cparms->act;
+	parms.hdr_bitmap = cparms->hdr_bitmap;
 	parms.regfile = &regfile;
 	parms.hdr_field = cparms->hdr_field;
 	parms.comp_fld = cparms->comp_fld;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index b159081b1..19134830a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -62,6 +62,7 @@ struct bnxt_ulp_mapper_parms {
 	uint32_t				num_ctbls;
 	struct ulp_rte_act_prop			*act_prop;
 	struct ulp_rte_act_bitmap		*act_bitmap;
+	struct ulp_rte_hdr_bitmap		*hdr_bitmap;
 	struct ulp_rte_hdr_field		*hdr_field;
 	uint32_t				*comp_fld;
 	struct ulp_regfile			*regfile;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 41ac77c6f..8fffaecce 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1128,6 +1128,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* update the computed field to indicate an IPv4 header */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
@@ -1148,6 +1152,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* update the computed field to indicate an IPv6 header */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 3168d29a9..27628a510 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -118,7 +118,17 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
 	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
 	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
-	BNXT_ULP_CF_IDX_LAST = 29
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG = 29,
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 30,
+	BNXT_ULP_CF_IDX_LAST = 31
+};
+
+enum bnxt_ulp_cond_opcode {
+	BNXT_ULP_COND_OPCODE_NOP = 0,
+	BNXT_ULP_COND_OPCODE_COMP_FIELD = 1,
+	BNXT_ULP_COND_OPCODE_ACTION_BIT = 2,
+	BNXT_ULP_COND_OPCODE_HDR_BIT = 3,
+	BNXT_ULP_COND_OPCODE_LAST = 4
 };
 
 enum bnxt_ulp_critical_resource {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5a7a7b910..df999b18c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -165,6 +165,8 @@ struct bnxt_ulp_mapper_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	uint32_t			resource_type; /* TF_ enum type */
 	enum bnxt_ulp_resource_sub_type	resource_sub_type;
+	enum bnxt_ulp_cond_opcode	cond_opcode;
+	uint32_t			cond_operand;
 	uint8_t		direction;
 	uint32_t	priority;
 	uint8_t		srch_b4_alloc;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 40/51] net/bnxt: enable port MAC qcfg for trusted VF
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (38 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 39/51] net/bnxt: add conditional execution of mapper tables Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 41/51] net/bnxt: enhancements for port db Ajit Khaparde
                           ` (10 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue the HWRM_PORT_MAC_QCFG command on a trusted VF to fetch the port
SVIF information.
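
For illustration, a standalone sketch of the new gating logic; the flags are
simplified stand-ins for the driver's BNXT_VF()/BNXT_VF_IS_TRUSTED() checks.

#include <stdbool.h>
#include <stdio.h>

struct port_flags {
	bool is_vf;
	bool vf_trusted;
};

/* old check skipped everything that was not a PF;
 * new check skips only untrusted VFs
 */
static bool should_query_port_mac(const struct port_flags *p)
{
	return !(p->is_vf && !p->vf_trusted);
}

int main(void)
{
	struct port_flags pf = { false, false };
	struct port_flags trusted_vf = { true, true };
	struct port_flags vf = { true, false };

	printf("PF: %d, trusted VF: %d, VF: %d\n",
	       should_query_port_mac(&pf),
	       should_query_port_mac(&trusted_vf),
	       should_query_port_mac(&vf));
	return 0;
}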

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 2605ef039..6ade32d1b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3194,14 +3194,14 @@ int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 
 	bp->port_svif = BNXT_SVIF_INVALID;
 
-	if (!BNXT_PF(bp))
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
 		return 0;
 
 	HWRM_PREP(&req, HWRM_PORT_MAC_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
-	HWRM_CHECK_RESULT();
+	HWRM_CHECK_RESULT_SILENT();
 
 	port_svif_info = rte_le_to_cpu_16(resp->port_svif_info);
 	if (port_svif_info &
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 41/51] net/bnxt: enhancements for port db
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (39 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 40/51] net/bnxt: enable port MAC qcfg for trusted VF Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
                           ` (9 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

1. Add "enum bnxt_ulp_intf_type” as the second parameter for the
   port & func helper functions
2. Return vfrep related port & func information in the helper functions
3. Allocate phy_port_list dynamically based on port count
4. Introduce ulp_func_id_tbl array for book keeping func related
   information indexed by func_id
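
For illustration, a standalone sketch of the per-type lookup that the
reworked ulp_port_db_svif_get() performs; the arrays and values here are
simplified stand-ins for the driver's ulp_func_id_tbl and phy_port_list.

#include <stdint.h>
#include <stdio.h>

enum svif_type { DRV_FUNC_SVIF, VF_FUNC_SVIF, PHY_PORT_SVIF };

struct func_info { uint16_t svif; uint16_t phy_port_id; };
struct intf_info { uint16_t drv_func_id; uint16_t vf_func_id; };

/* made-up example entries */
static struct func_info func_tbl[4] = {
	[1] = { .svif = 0x10, .phy_port_id = 0 },	/* driver function */
	[2] = { .svif = 0x20, .phy_port_id = 0 },	/* represented VF */
};
static uint16_t phy_port_svif[1] = { 0x100 };
static struct intf_info intf = { .drv_func_id = 1, .vf_func_id = 2 };

static uint16_t svif_get(enum svif_type type)
{
	uint16_t fid;

	switch (type) {
	case DRV_FUNC_SVIF:	/* the driver (parent) function's svif */
		return func_tbl[intf.drv_func_id].svif;
	case VF_FUNC_SVIF:	/* the represented VF's svif */
		return func_tbl[intf.vf_func_id].svif;
	default:		/* the physical port's svif */
		fid = intf.drv_func_id;
		return phy_port_svif[func_tbl[fid].phy_port_id];
	}
}

int main(void)
{
	printf("drv=0x%x vf=0x%x phy=0x%x\n",
	       (unsigned int)svif_get(DRV_FUNC_SVIF),
	       (unsigned int)svif_get(VF_FUNC_SVIF),
	       (unsigned int)svif_get(PHY_PORT_SVIF));
	return 0;
}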

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c           |  64 ++++++++--
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h |   6 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c       |   2 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c  |   9 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.c    | 143 +++++++++++++++++------
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    |  56 +++++++--
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  22 +++-
 8 files changed, 250 insertions(+), 62 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 43e5e7162..32acced60 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -23,6 +23,7 @@
 
 #include "tf_core.h"
 #include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
 
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
@@ -879,10 +880,11 @@ extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops;
 int32_t bnxt_ulp_init(struct bnxt *bp);
 void bnxt_ulp_deinit(struct bnxt *bp);
 
-uint16_t bnxt_get_vnic_id(uint16_t port);
-uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
-uint16_t bnxt_get_fw_func_id(uint16_t port);
-uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif,
+		       enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_parif(uint16_t port, enum bnxt_ulp_intf_type type);
 uint16_t bnxt_get_phy_port_id(uint16_t port);
 uint16_t bnxt_get_vport(uint16_t port);
 enum bnxt_ulp_intf_type
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 355025741..332644d77 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5067,25 +5067,48 @@ static void bnxt_config_vf_req_fwd(struct bnxt *bp)
 }
 
 uint16_t
-bnxt_get_svif(uint16_t port_id, bool func_svif)
+bnxt_get_svif(uint16_t port_id, bool func_svif,
+	      enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->svif;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return func_svif ? bp->func_svif : bp->port_svif;
 }
 
 uint16_t
-bnxt_get_vnic_id(uint16_t port)
+bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt_vnic_info *vnic;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->dflt_vnic_id;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
@@ -5094,12 +5117,23 @@ bnxt_get_vnic_id(uint16_t port)
 }
 
 uint16_t
-bnxt_get_fw_func_id(uint16_t port)
+bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return bp->fw_fid;
@@ -5116,8 +5150,14 @@ bnxt_get_interface_type(uint16_t port)
 		return BNXT_ULP_INTF_TYPE_VF_REP;
 
 	bp = eth_dev->data->dev_private;
-	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
-			   : BNXT_ULP_INTF_TYPE_VF;
+	if (BNXT_PF(bp))
+		return BNXT_ULP_INTF_TYPE_PF;
+	else if (BNXT_VF_IS_TRUSTED(bp))
+		return BNXT_ULP_INTF_TYPE_TRUSTED_VF;
+	else if (BNXT_VF(bp))
+		return BNXT_ULP_INTF_TYPE_VF;
+
+	return BNXT_ULP_INTF_TYPE_INVALID;
 }
 
 uint16_t
@@ -5130,6 +5170,9 @@ bnxt_get_phy_port_id(uint16_t port_id)
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
 		vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
 		eth_dev = vfr->parent_dev;
 	}
 
@@ -5139,15 +5182,20 @@ bnxt_get_phy_port_id(uint16_t port_id)
 }
 
 uint16_t
-bnxt_get_parif(uint16_t port_id)
+bnxt_get_parif(uint16_t port_id, enum bnxt_ulp_intf_type type)
 {
-	struct bnxt_vf_representor *vfr;
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
-		vfr = eth_dev->data->dev_private;
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid - 1;
+
 		eth_dev = vfr->parent_dev;
 	}
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f772d4919..ebb71405b 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -6,6 +6,11 @@
 #ifndef _BNXT_TF_COMMON_H_
 #define _BNXT_TF_COMMON_H_
 
+#include <inttypes.h>
+
+#include "bnxt_ulp.h"
+#include "ulp_template_db_enum.h"
+
 #define BNXT_TF_DBG(lvl, fmt, args...)	PMD_DRV_LOG(lvl, fmt, ## args)
 
 #define BNXT_ULP_EM_FLOWS			8192
@@ -48,6 +53,7 @@ enum ulp_direction_type {
 enum bnxt_ulp_intf_type {
 	BNXT_ULP_INTF_TYPE_INVALID = 0,
 	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_TRUSTED_VF,
 	BNXT_ULP_INTF_TYPE_VF,
 	BNXT_ULP_INTF_TYPE_PF_REP,
 	BNXT_ULP_INTF_TYPE_VF_REP,
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 1b52861d4..e5e7e5f43 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -658,7 +658,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	rc = ulp_dparms_init(bp, bp->ulp_ctx);
 
 	/* create the port database */
-	rc = ulp_port_db_init(bp->ulp_ctx);
+	rc = ulp_port_db_init(bp->ulp_ctx, bp->port_cnt);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to create the port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 6eb2d6146..138b0b73d 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -128,7 +128,8 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	mapper_cparms.act_prop = &params.act_prop;
 	mapper_cparms.class_tid = class_id;
 	mapper_cparms.act_tid = act_tmpl;
-	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id,
+						    BNXT_ULP_INTF_TYPE_INVALID);
 	mapper_cparms.dir = params.dir;
 
 	/* Call the ulp mapper to create the flow in the hardware. */
@@ -226,7 +227,8 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	}
 
 	flow_id = (uint32_t)(uintptr_t)flow;
-	func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	func_id = bnxt_get_fw_func_id(dev->data->port_id,
+				      BNXT_ULP_INTF_TYPE_INVALID);
 
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
@@ -270,7 +272,8 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	if (ulp_ctx_deinit_allowed(bp)) {
 		ret = ulp_flow_db_session_flow_flush(ulp_ctx);
 	} else if (bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx)) {
-		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id);
+		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id,
+					      BNXT_ULP_INTF_TYPE_INVALID);
 		ret = ulp_flow_db_function_flow_flush(ulp_ctx, func_id);
 	}
 	if (ret)
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index ea27ef41f..659cefa07 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -33,7 +33,7 @@ ulp_port_db_allocate_ifindex(struct bnxt_ulp_port_db *port_db)
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt)
 {
 	struct bnxt_ulp_port_db *port_db;
 
@@ -60,6 +60,18 @@ int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
 			    "Failed to allocate mem for port interface list\n");
 		goto error_free;
 	}
+
+	/* Allocate the phy port list */
+	port_db->phy_port_list = rte_zmalloc("bnxt_ulp_phy_port_list",
+					     port_cnt *
+					     sizeof(struct ulp_phy_port_info),
+					     0);
+	if (!port_db->phy_port_list) {
+		BNXT_TF_DBG(ERR,
+			    "Failed to allocate mem for phy port list\n");
+		goto error_free;
+	}
+
 	return 0;
 
 error_free:
@@ -89,6 +101,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 	bnxt_ulp_cntxt_ptr2_port_db_set(ulp_ctxt, NULL);
 
 	/* Free up all the memory. */
+	rte_free(port_db->phy_port_list);
 	rte_free(port_db->ulp_intf_list);
 	rte_free(port_db);
 	return 0;
@@ -110,6 +123,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	struct ulp_phy_port_info *port_data;
 	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	struct ulp_func_if_info *func;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -134,20 +148,48 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	intf = &port_db->ulp_intf_list[ifindex];
 
 	intf->type = bnxt_get_interface_type(port_id);
+	intf->drv_func_id = bnxt_get_fw_func_id(port_id,
+						BNXT_ULP_INTF_TYPE_INVALID);
+
+	func = &port_db->ulp_func_id_tbl[intf->drv_func_id];
+	if (!func->func_valid) {
+		func->func_svif = bnxt_get_svif(port_id, true,
+						BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_spif = bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+		func->func_valid = true;
+	}
 
-	intf->func_id = bnxt_get_fw_func_id(port_id);
-	intf->func_svif = bnxt_get_svif(port_id, 1);
-	intf->func_spif = bnxt_get_phy_port_id(port_id);
-	intf->func_parif = bnxt_get_parif(port_id);
-	intf->default_vnic = bnxt_get_vnic_id(port_id);
-	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+	if (intf->type == BNXT_ULP_INTF_TYPE_VF_REP) {
+		intf->vf_func_id =
+			bnxt_get_fw_func_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+
+		func = &port_db->ulp_func_id_tbl[intf->vf_func_id];
+		func->func_svif =
+			bnxt_get_svif(port_id, true, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->func_spif =
+			bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+	}
 
-	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
-		port_data = &port_db->phy_port_list[intf->phy_port_id];
-		port_data->port_svif = bnxt_get_svif(port_id, 0);
+	port_data = &port_db->phy_port_list[func->phy_port_id];
+	if (!port_data->port_valid) {
+		port_data->port_svif =
+			bnxt_get_svif(port_id, false,
+				      BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_spif = bnxt_get_phy_port_id(port_id);
-		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_vport = bnxt_get_vport(port_id);
+		port_data->port_valid = true;
 	}
 
 	return 0;
@@ -194,6 +236,7 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 			    uint32_t ifindex,
+			    uint32_t fid_type,
 			    uint16_t *func_id)
 {
 	struct bnxt_ulp_port_db *port_db;
@@ -203,7 +246,12 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*func_id =  port_db->ulp_intf_list[ifindex].func_id;
+
+	if (fid_type == BNXT_ULP_DRV_FUNC_FID)
+		*func_id =  port_db->ulp_intf_list[ifindex].drv_func_id;
+	else
+		*func_id =  port_db->ulp_intf_list[ifindex].vf_func_id;
+
 	return 0;
 }
 
@@ -212,7 +260,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * svif_type [in] the svif type of the given ifindex.
  * svif [out] the svif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -220,21 +268,27 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t svif_type,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*svif = port_db->ulp_intf_list[ifindex].func_svif;
+
+	if (svif_type == BNXT_ULP_DRV_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
+	} else if (svif_type == BNXT_ULP_VF_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*svif = port_db->phy_port_list[phy_port_id].port_svif;
 	}
 
@@ -246,7 +300,7 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * spif_type [in] the spif type of the given ifindex.
  * spif [out] the spif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -254,21 +308,27 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t spif_type,
 		     uint16_t *spif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+
+	if (spif_type == BNXT_ULP_DRV_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
+	} else if (spif_type == BNXT_ULP_VF_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*spif = port_db->phy_port_list[phy_port_id].port_spif;
 	}
 
@@ -280,7 +340,7 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * parif_type [in] the parif type of the given ifindex.
  * parif [out] the parif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -288,21 +348,26 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t parif_type,
 		     uint16_t *parif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	if (parif_type == BNXT_ULP_DRV_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
+	} else if (parif_type == BNXT_ULP_VF_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*parif = port_db->phy_port_list[phy_port_id].port_parif;
 	}
 
@@ -321,16 +386,26 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 			     uint32_t ifindex,
+			     uint32_t vnic_type,
 			     uint16_t *vnic)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	} else {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	}
+
 	return 0;
 }
 
@@ -348,14 +423,16 @@ ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 		      uint32_t ifindex, uint16_t *vport)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+
+	func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+	phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 	*vport = port_db->phy_port_list[phy_port_id].port_vport;
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 87de3bcbc..b1419a34c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -9,19 +9,54 @@
 #include "bnxt_ulp.h"
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
+#define BNXT_PORT_DB_MAX_FUNC			2048
 
-/* Structure for the Port database resource information. */
-struct ulp_interface_info {
-	enum bnxt_ulp_intf_type	type;
-	uint16_t		func_id;
+enum bnxt_ulp_svif_type {
+	BNXT_ULP_DRV_FUNC_SVIF = 0,
+	BNXT_ULP_VF_FUNC_SVIF,
+	BNXT_ULP_PHY_PORT_SVIF
+};
+
+enum bnxt_ulp_spif_type {
+	BNXT_ULP_DRV_FUNC_SPIF = 0,
+	BNXT_ULP_VF_FUNC_SPIF,
+	BNXT_ULP_PHY_PORT_SPIF
+};
+
+enum bnxt_ulp_parif_type {
+	BNXT_ULP_DRV_FUNC_PARIF = 0,
+	BNXT_ULP_VF_FUNC_PARIF,
+	BNXT_ULP_PHY_PORT_PARIF
+};
+
+enum bnxt_ulp_vnic_type {
+	BNXT_ULP_DRV_FUNC_VNIC = 0,
+	BNXT_ULP_VF_FUNC_VNIC
+};
+
+enum bnxt_ulp_fid_type {
+	BNXT_ULP_DRV_FUNC_FID,
+	BNXT_ULP_VF_FUNC_FID
+};
+
+struct ulp_func_if_info {
+	uint16_t		func_valid;
 	uint16_t		func_svif;
 	uint16_t		func_spif;
 	uint16_t		func_parif;
-	uint16_t		default_vnic;
+	uint16_t		func_vnic;
 	uint16_t		phy_port_id;
 };
 
+/* Structure for the Port database resource information. */
+struct ulp_interface_info {
+	enum bnxt_ulp_intf_type	type;
+	uint16_t		drv_func_id;
+	uint16_t		vf_func_id;
+};
+
 struct ulp_phy_port_info {
+	uint16_t	port_valid;
 	uint16_t	port_svif;
 	uint16_t	port_spif;
 	uint16_t	port_parif;
@@ -35,7 +70,8 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
-	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	*phy_port_list;
+	struct ulp_func_if_info		ulp_func_id_tbl[BNXT_PORT_DB_MAX_FUNC];
 };
 
 /*
@@ -46,7 +82,7 @@ struct bnxt_ulp_port_db {
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt);
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt);
 
 /*
  * Deinitialize the port database. Memory is deallocated in
@@ -94,7 +130,8 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex, uint16_t *func_id);
+			    uint32_t ifindex, uint32_t fid_type,
+			    uint16_t *func_id);
 
 /*
  * Api to get the svif for a given ulp ifindex.
@@ -150,7 +187,8 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex, uint16_t *vnic);
+			     uint32_t ifindex, uint32_t vnic_type,
+			     uint16_t *vnic);
 
 /*
  * Api to get the vport id for a given ulp ifindex.
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 8fffaecce..073b3537f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -166,6 +166,8 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 	uint16_t port_id = svif;
 	uint32_t dir = 0;
 	struct ulp_rte_hdr_field *hdr_field;
+	enum bnxt_ulp_svif_type svif_type;
+	enum bnxt_ulp_intf_type if_type;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -187,7 +189,18 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 				    "Invalid port id\n");
 			return BNXT_TF_RC_ERROR;
 		}
-		ulp_port_db_svif_get(params->ulp_ctx, ifindex, dir, &svif);
+
+		if (dir == ULP_DIR_INGRESS) {
+			svif_type = BNXT_ULP_PHY_PORT_SVIF;
+		} else {
+			if_type = bnxt_get_interface_type(port_id);
+			if (if_type == BNXT_ULP_INTF_TYPE_VF_REP)
+				svif_type = BNXT_ULP_VF_FUNC_SVIF;
+			else
+				svif_type = BNXT_ULP_DRV_FUNC_SVIF;
+		}
+		ulp_port_db_svif_get(params->ulp_ctx, ifindex, svif_type,
+				     &svif);
 		svif = rte_cpu_to_be_16(svif);
 	}
 	hdr_field = &params->hdr_field[BNXT_ULP_PROTO_HDR_FIELD_SVIF_IDX];
@@ -1256,7 +1269,7 @@ ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
 
 	/* copy the PF of the current device into VNIC Property */
 	svif = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
-	svif = bnxt_get_vnic_id(svif);
+	svif = bnxt_get_vnic_id(svif, BNXT_ULP_INTF_TYPE_INVALID);
 	svif = rte_cpu_to_be_32(svif);
 	memcpy(&params->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 	       &svif, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1280,7 +1293,8 @@ ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using VF conversion */
-		pid = bnxt_get_vnic_id(vf_action->id);
+		pid = bnxt_get_vnic_id(vf_action->id,
+				       BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1307,7 +1321,7 @@ ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using port conversion */
-		pid = bnxt_get_vnic_id(port_id->id);
+		pid = bnxt_get_vnic_id(port_id->id, BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 42/51] net/bnxt: manage VF to VFR conduit
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (40 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 41/51] net/bnxt: enhancements for port db Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
                           ` (8 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

When VF-VFR conduits are created, a mark is added to the mark database.
mark_flag indicates whether the mark is valid and carries VFR information
(the VFR_ID bit in mark_flag). The Rx path checks for this VFR_ID bit,
but the bit was not being propagated into the mark entry's flags when the
mark was added to the mark database. Set it there as well.
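
A minimal sketch of both sides of the flag, using only the macros visible
in the diff below; the Rx-path fragment is hypothetical and stands in for
the real mark lookup code:

	/* Producer side (this fix): propagate the caller's request into the
	 * mark entry so the Rx path can recognize VF-rep traffic.
	 */
	if (mark_flag & BNXT_ULP_MARK_VFR_ID)
		ULP_MARK_DB_ENTRY_SET_VFR_ID(&mtbl->lfid_tbl[fid]);

	/* Consumer side (conceptual): the mark is only treated as a VF
	 * representor id when the bit is present on the entry.
	 */
	if (ULP_MARK_DB_ENTRY_IS_VFR_ID(&mtbl->lfid_tbl[fid]))
		/* hand the packet to the VF representor port */;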

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index b3527eccb..b2c8c349c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -18,6 +18,8 @@
 						BNXT_ULP_MARK_VALID)
 #define ULP_MARK_DB_ENTRY_IS_INVALID(mark_info) (!((mark_info)->flags &\
 						   BNXT_ULP_MARK_VALID))
+#define ULP_MARK_DB_ENTRY_SET_VFR_ID(mark_info) ((mark_info)->flags |=\
+						 BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_VFR_ID(mark_info) ((mark_info)->flags &\
 						BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_GLOBAL_HW_FID(mark_info) ((mark_info)->flags &\
@@ -263,6 +265,9 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
+
+		if (mark_flag & BNXT_ULP_MARK_VFR_ID)
+			ULP_MARK_DB_ENTRY_SET_VFR_ID(&mtbl->lfid_tbl[fid]);
 	}
 
 	return 0;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 43/51] net/bnxt: parse reps along with other dev-args
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (41 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
                           ` (7 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Representor dev-args need to be parsed during PCI probe because they
determine whether VF representor ports are subsequently probed as well.
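
A minimal sketch of how the key list is used at probe time, assuming the
usual rte_kvargs flow (the helper name below is hypothetical; only
bnxt_dev_args and BNXT_DEVARG_REPRESENTOR come from this patch):

	#include <errno.h>
	#include <rte_devargs.h>
	#include <rte_kvargs.h>

	static int
	bnxt_check_devargs_sketch(struct rte_devargs *devargs)
	{
		struct rte_kvargs *kvlist;
		unsigned int reps;

		if (devargs == NULL)
			return 0;

		/* Reject any key that is not in bnxt_dev_args; with this
		 * patch, "representor=[...]" values are accepted too.
		 */
		kvlist = rte_kvargs_parse(devargs->args, bnxt_dev_args);
		if (kvlist == NULL)
			return -EINVAL;

		reps = rte_kvargs_count(kvlist, BNXT_DEVARG_REPRESENTOR);
		rte_kvargs_free(kvlist);
		return reps ? 1 : 0;
	}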

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 332644d77..0b38c84e3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -98,8 +98,10 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+#define BNXT_DEVARG_REPRESENTOR	"representor"
 
 static const char *const bnxt_dev_args[] = {
+	BNXT_DEVARG_REPRESENTOR,
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
 	BNXT_DEVARG_MAX_NUM_KFLOWS,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 44/51] net/bnxt: fill mapper parameters with default rules
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (42 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
                           ` (6 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Default rules are needed to punt packets between the following entities
in the non-offloaded path:
1. Device PORT to DPDK App
2. DPDK App to Device PORT
3. VF Representor to VF
4. VF to VF Representor

This patch fills in the relevant information in the computed fields
and the act_prop fields so that the flow mapper can create the
necessary hardware tables to enable these default rules.
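
A hypothetical caller sketch of the new API added below (the class
template id is a placeholder; the real callers arrive later in the
series when the VF-rep default flows are wired up):

	/* Install the default rules for DPDK port 0. The parameter list is
	 * terminated by a LAST entry.
	 */
	struct ulp_tlv_param params[] = {
		{
			.type   = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
			.length = 2,
			.value  = { 0, 0 },	/* DPDK port id 0 */
		},
		{
			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
		},
	};
	uint32_t def_flow_id;
	int rc;

	rc = ulp_default_flow_create(eth_dev, params,
				     1 /* placeholder class tid */,
				     &def_flow_id);
	if (rc)
		BNXT_TF_DBG(ERR, "default flow creation failed\n");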

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c                |   6 +-
 drivers/net/bnxt/meson.build                  |   1 +
 drivers/net/bnxt/tf_ulp/Makefile              |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |  24 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |  30 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c       | 385 ++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  10 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |   3 +-
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |   5 +
 9 files changed, 444 insertions(+), 21 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 0b38c84e3..de8e11a6e 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1275,9 +1275,6 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
-	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_deinit(bp);
-
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
@@ -1333,6 +1330,9 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
+	if (BNXT_TRUFLOW_EN(bp))
+		bnxt_ulp_deinit(bp);
+
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 8f6ed419e..2939857ca 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -61,6 +61,7 @@ sources = files('bnxt_cpr.c',
 	'tf_ulp/ulp_rte_parser.c',
 	'tf_ulp/bnxt_ulp_flow.c',
 	'tf_ulp/ulp_port_db.c',
+	'tf_ulp/ulp_def_rules.c',
 
 	'rte_pmd_bnxt.c')
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 57341f876..3f1b43bae 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -16,3 +16,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index eecc09cea..3563f63fa 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -12,6 +12,8 @@
 
 #include "rte_ethdev.h"
 
+#include "ulp_template_db_enum.h"
+
 struct bnxt_ulp_data {
 	uint32_t			tbl_scope_id;
 	struct bnxt_ulp_mark_tbl	*mark_tbl;
@@ -49,6 +51,12 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+struct ulp_tlv_param {
+	enum bnxt_ulp_df_param_type type;
+	uint32_t length;
+	uint8_t value[16];
+};
+
 /*
  * Allow the deletion of context only for the bnxt device that
  * created the session
@@ -127,4 +135,20 @@ bnxt_ulp_cntxt_ptr2_port_db_set(struct bnxt_ulp_context	*ulp_ctx,
 struct bnxt_ulp_port_db *
 bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx);
 
+/* Function to create default flows. */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id);
+
+/* Function to destroy default flows. */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev,
+			 uint32_t flow_id);
+
+int
+bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		      struct rte_flow_error *error);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 138b0b73d..7ef306e58 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -207,7 +207,7 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev,
 }
 
 /* Function to destroy the rte flow. */
-static int
+int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 		      struct rte_flow *flow,
 		      struct rte_flow_error *error)
@@ -220,9 +220,10 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
@@ -233,17 +234,22 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
 		BNXT_TF_DBG(ERR, "Incorrect device params\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
-	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id);
-	if (ret)
-		rte_flow_error_set(error, -ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret) {
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+		if (error)
+			rte_flow_error_set(error, -ret,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
+	}
 
 	return ret;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
new file mode 100644
index 000000000..46b558f31
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt_tf_common.h"
+#include "ulp_template_struct.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_db_field.h"
+#include "ulp_utils.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+
+struct bnxt_ulp_def_param_handler {
+	int32_t (*vfr_func)(struct bnxt_ulp_context *ulp_ctx,
+			    struct ulp_tlv_param *param,
+			    struct bnxt_ulp_mapper_create_parms *mapper_params);
+};
+
+static int32_t
+ulp_set_svif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t svif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t svif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_svif_get(ulp_ctx, ifindex, svif_type, &svif);
+	if (rc)
+		return rc;
+
+	if (svif_type == BNXT_ULP_PHY_PORT_SVIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SVIF;
+	else if (svif_type == BNXT_ULP_DRV_FUNC_SVIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SVIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SVIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, svif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_spif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t spif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t spif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_spif_get(ulp_ctx, ifindex, spif_type, &spif);
+	if (rc)
+		return rc;
+
+	if (spif_type == BNXT_ULP_PHY_PORT_SPIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SPIF;
+	else if (spif_type == BNXT_ULP_DRV_FUNC_SPIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SPIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SPIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, spif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_parif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t  ifindex, uint8_t parif_type,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t parif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_parif_get(ulp_ctx, ifindex, parif_type, &parif);
+	if (rc)
+		return rc;
+
+	if (parif_type == BNXT_ULP_PHY_PORT_PARIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_PARIF;
+	else if (parif_type == BNXT_ULP_DRV_FUNC_PARIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_PARIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_PARIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, parif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vport_in_comp_fld(struct bnxt_ulp_context *ulp_ctx, uint32_t ifindex,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vport;
+	int rc;
+
+	rc = ulp_port_db_vport_get(ulp_ctx, ifindex, &vport);
+	if (rc)
+		return rc;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_PHY_PORT_VPORT,
+			    vport);
+	return 0;
+}
+
+static int32_t
+ulp_set_vnic_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t vnic_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vnic;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_default_vnic_get(ulp_ctx, ifindex, vnic_type, &vnic);
+	if (rc)
+		return rc;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_VNIC;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_VNIC;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, vnic);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vlan_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	struct ulp_rte_act_prop *act_prop = mapper_params->act_prop;
+
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_SET_VLAN_VID)) {
+		BNXT_TF_DBG(ERR,
+			    "VLAN already set, multiple VLANs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	port_id = rte_cpu_to_be_16(port_id);
+
+	ULP_BITMAP_SET(mapper_params->act->bits,
+		       BNXT_ULP_ACTION_BIT_SET_VLAN_VID);
+
+	memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG],
+	       &port_id, sizeof(port_id));
+
+	return 0;
+}
+
+static int32_t
+ulp_set_mark_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_MARK)) {
+		BNXT_TF_DBG(ERR,
+			    "MARK already set, multiple MARKs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_DEV_PORT_ID,
+			    port_id);
+
+	return 0;
+}
+
+static int32_t
+ulp_df_dev_port_handler(struct bnxt_ulp_context *ulp_ctx,
+			struct ulp_tlv_param *param,
+			struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t port_id;
+	uint32_t ifindex;
+	int rc;
+
+	port_id = param->value[0] | param->value[1];
+
+	rc = ulp_port_db_dev_port_to_ulp_index(ulp_ctx, port_id, &ifindex);
+	if (rc) {
+		BNXT_TF_DBG(ERR,
+				"Invalid port id\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* Set port SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_PHY_PORT_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_DRV_FUNC_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_PARIF,
+				       mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set uplink VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, true, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, false, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VPORT */
+	rc = ulp_set_vport_in_comp_fld(ulp_ctx, ifindex, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VLAN */
+	rc = ulp_set_vlan_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set MARK */
+	rc = ulp_set_mark_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+struct bnxt_ulp_def_param_handler ulp_def_handler_tbl[] = {
+	[BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID] = {
+			.vfr_func = ulp_df_dev_port_handler }
+};
+
+/*
+ * Function to create default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * param_list [in] Ptr to a list of parameters (Currently, only DPDK port_id).
+ * ulp_class_tid [in] Class template ID number.
+ * flow_id [out] Ptr to flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id)
+{
+	struct ulp_rte_hdr_field	hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	uint32_t			comp_fld[BNXT_ULP_CF_IDX_LAST];
+	struct bnxt_ulp_mapper_create_parms mapper_params = { 0 };
+	struct ulp_rte_act_prop		act_prop;
+	struct ulp_rte_act_bitmap	act = { 0 };
+	struct bnxt_ulp_context		*ulp_ctx;
+	uint32_t type;
+	int rc;
+
+	memset(&mapper_params, 0, sizeof(mapper_params));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(comp_fld, 0, sizeof(comp_fld));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	mapper_params.hdr_field = hdr_field;
+	mapper_params.act = &act;
+	mapper_params.act_prop = &act_prop;
+	mapper_params.comp_fld = comp_fld;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized. "
+				 "Failed to create default flow.\n");
+		return -EINVAL;
+	}
+
+	type = param_list->type;
+	while (type != BNXT_ULP_DF_PARAM_TYPE_LAST) {
+		if (ulp_def_handler_tbl[type].vfr_func) {
+			rc = ulp_def_handler_tbl[type].vfr_func(ulp_ctx,
+								param_list,
+								&mapper_params);
+			if (rc) {
+				BNXT_TF_DBG(ERR,
+					    "Failed to create default flow.\n");
+				return rc;
+			}
+		}
+
+		param_list++;
+		type = param_list->type;
+	}
+
+	mapper_params.class_tid = ulp_class_tid;
+
+	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params, flow_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create default flow.\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/*
+ * Function to destroy default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * flow_id [in] Flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev, uint32_t flow_id)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int rc;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		return -EINVAL;
+	}
+
+	rc = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				     BNXT_ULP_DEFAULT_FLOW_TABLE);
+	if (rc)
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index d0931d411..e39398a1b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2274,16 +2274,15 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 }
 
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type)
 {
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
 		return -EINVAL;
 	}
 
-	return ulp_mapper_resources_free(ulp_ctx,
-					 fid,
-					 BNXT_ULP_REGULAR_FLOW_TABLE);
+	return ulp_mapper_resources_free(ulp_ctx, fid, flow_tbl_type);
 }
 
 /* Function to handle the default global templates that are allocated during
@@ -2486,7 +2485,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 flow_error:
 	/* Free all resources that were allocated during flow creation */
-	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
+	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
 	if (trc)
 		BNXT_TF_DBG(ERR, "Failed to free all resources rc=%d\n", trc);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 19134830a..b35065449 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -109,7 +109,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context	*ulp_ctx,
 
 /* Function that frees all resources associated with the flow. */
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type);
 
 /*
  * Function that frees all resources and can be called on default or regular
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 27628a510..2346797db 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -145,6 +145,11 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
+enum bnxt_ulp_df_param_type {
+	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
+	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
+};
+
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 45/51] net/bnxt: add VF-rep and stat templates
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (43 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
                           ` (5 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Add support for VF representors and flow counters to the
ULP templates.
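
For orientation, a simplified sketch of how the regenerated action tables
are consumed (an assumed flow, not a copy of the matcher code): the parser
hashes the action bitmap into act_hid, ulp_act_sig_tbl maps that hash to
an index into ulp_act_match_list, and the matching entry supplies the
action template id (act_tid) used by the mapper.

	/* act_hid and act_bits are assumed inputs from the action parser. */
	uint32_t idx = ulp_act_sig_tbl[act_hid];

	if (idx == 0)
		return -ENOENT;	/* no template for this action combination */
	if (ulp_act_match_list[idx].act_sig.bits != act_bits)
		return -ENOENT;	/* signature mismatch */
	*act_tid = ulp_act_match_list[idx].act_tid;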

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |   21 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    2 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  424 +-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5198 +++++++++++++----
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  409 +-
 .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   87 +-
 7 files changed, 4948 insertions(+), 1656 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index e39398a1b..3f175fb51 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -22,7 +22,7 @@ ulp_mapper_glb_resource_info_list_get(uint32_t *num_entries)
 {
 	if (!num_entries)
 		return NULL;
-	*num_entries = BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+	*num_entries = BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 	return ulp_glb_resource_tbl;
 }
 
@@ -119,11 +119,6 @@ ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_identifier(tfp, &fparms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
-		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
-#endif
 	return rc;
 }
 
@@ -182,11 +177,6 @@ ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_tbl_entry(tfp, &free_parms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
-		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
-#endif
 	return rc;
 }
 
@@ -1441,9 +1431,6 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			return rc;
 		}
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	ulp_mapper_result_dump("EEM Result", tbl, &data);
-#endif
 
 	/* do the transpose for the internal EM keys */
 	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
@@ -1594,10 +1581,6 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	/* if encap bit swap is enabled perform the bit swap */
 	if (parms->device_params->encap_byte_swap && encap_flds) {
 		ulp_blob_perform_encap_swap(&data);
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
-		ulp_mapper_blob_dump(&data);
-#endif
 	}
 
 	/*
@@ -2255,7 +2238,7 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Iterate the global resources and process each one */
 	for (dir = TF_DIR_RX; dir < TF_DIR_MAX; dir++) {
-		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 		      idx++) {
 			ent = &mapper_data->glb_res_tbl[dir][idx];
 			if (ent->resource_func ==
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index b35065449..f6d55449b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -46,7 +46,7 @@ struct bnxt_ulp_mapper_glb_resource_entry {
 
 struct bnxt_ulp_mapper_data {
 	struct bnxt_ulp_mapper_glb_resource_entry
-		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ];
+		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ];
 	struct bnxt_ulp_mapper_cache_entry
 		*cache_tbl[BNXT_ULP_CACHE_TBL_MAX_SZ];
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 9b14fa0bd..3d6507399 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -9,62 +9,293 @@
 #include "ulp_rte_parser.h"
 
 uint16_t ulp_act_sig_tbl[BNXT_ULP_ACT_SIG_TBL_MAX_SZ] = {
-	[BNXT_ULP_ACT_HID_00a1] = 1,
-	[BNXT_ULP_ACT_HID_0029] = 2,
-	[BNXT_ULP_ACT_HID_0040] = 3
+	[BNXT_ULP_ACT_HID_0002] = 1,
+	[BNXT_ULP_ACT_HID_0022] = 2,
+	[BNXT_ULP_ACT_HID_0026] = 3,
+	[BNXT_ULP_ACT_HID_0006] = 4,
+	[BNXT_ULP_ACT_HID_0009] = 5,
+	[BNXT_ULP_ACT_HID_0029] = 6,
+	[BNXT_ULP_ACT_HID_002d] = 7,
+	[BNXT_ULP_ACT_HID_004b] = 8,
+	[BNXT_ULP_ACT_HID_004a] = 9,
+	[BNXT_ULP_ACT_HID_004f] = 10,
+	[BNXT_ULP_ACT_HID_004e] = 11,
+	[BNXT_ULP_ACT_HID_006c] = 12,
+	[BNXT_ULP_ACT_HID_0070] = 13,
+	[BNXT_ULP_ACT_HID_0021] = 14,
+	[BNXT_ULP_ACT_HID_0025] = 15,
+	[BNXT_ULP_ACT_HID_0043] = 16,
+	[BNXT_ULP_ACT_HID_0042] = 17,
+	[BNXT_ULP_ACT_HID_0047] = 18,
+	[BNXT_ULP_ACT_HID_0046] = 19,
+	[BNXT_ULP_ACT_HID_0064] = 20,
+	[BNXT_ULP_ACT_HID_0068] = 21,
+	[BNXT_ULP_ACT_HID_00a1] = 22,
+	[BNXT_ULP_ACT_HID_00df] = 23
 };
 
 struct bnxt_ulp_act_match_info ulp_act_match_list[] = {
 	[1] = {
-	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_hid = BNXT_ULP_ACT_HID_0002,
 	.act_sig = { .bits =
-		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
-		BNXT_ULP_ACTION_BIT_MARK |
-		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DROP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
-	.act_tid = 0
+	.act_tid = 1
 	},
 	[2] = {
+	.act_hid = BNXT_ULP_ACT_HID_0022,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[3] = {
+	.act_hid = BNXT_ULP_ACT_HID_0026,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[4] = {
+	.act_hid = BNXT_ULP_ACT_HID_0006,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[5] = {
+	.act_hid = BNXT_ULP_ACT_HID_0009,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[6] = {
 	.act_hid = BNXT_ULP_ACT_HID_0029,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
 		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[7] = {
+	.act_hid = BNXT_ULP_ACT_HID_002d,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
 		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.act_tid = 1
 	},
-	[3] = {
-	.act_hid = BNXT_ULP_ACT_HID_0040,
+	[8] = {
+	.act_hid = BNXT_ULP_ACT_HID_004b,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[9] = {
+	.act_hid = BNXT_ULP_ACT_HID_004a,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[10] = {
+	.act_hid = BNXT_ULP_ACT_HID_004f,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[11] = {
+	.act_hid = BNXT_ULP_ACT_HID_004e,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[12] = {
+	.act_hid = BNXT_ULP_ACT_HID_006c,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[13] = {
+	.act_hid = BNXT_ULP_ACT_HID_0070,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[14] = {
+	.act_hid = BNXT_ULP_ACT_HID_0021,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[15] = {
+	.act_hid = BNXT_ULP_ACT_HID_0025,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[16] = {
+	.act_hid = BNXT_ULP_ACT_HID_0043,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[17] = {
+	.act_hid = BNXT_ULP_ACT_HID_0042,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[18] = {
+	.act_hid = BNXT_ULP_ACT_HID_0047,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[19] = {
+	.act_hid = BNXT_ULP_ACT_HID_0046,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[20] = {
+	.act_hid = BNXT_ULP_ACT_HID_0064,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[21] = {
+	.act_hid = BNXT_ULP_ACT_HID_0068,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[22] = {
+	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 2
+	},
+	[23] = {
+	.act_hid = BNXT_ULP_ACT_HID_00df,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_VXLAN_ENCAP |
 		BNXT_ULP_ACTION_BIT_VPORT |
 		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
-	.act_tid = 2
+	.act_tid = 3
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 0
+	.num_tbls = 2,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 1,
-	.start_tbl_idx = 1
+	.start_tbl_idx = 2,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 2
+	.num_tbls = 3,
+	.start_tbl_idx = 3,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_STATS_64,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_ACTION_BIT,
+	.cond_operand = BNXT_ULP_ACTION_BIT_COUNT,
+	.direction = TF_DIR_RX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 0,
+	.result_bit_size = 64,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
@@ -72,7 +303,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 0,
+	.result_start_idx = 1,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -87,7 +318,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 26,
+	.result_start_idx = 27,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -97,12 +328,46 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 53,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 56,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_TX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 52,
+	.result_start_idx = 59,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
@@ -114,10 +379,19 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
-	.field_bit_size = 14,
+	.field_bit_size = 64,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
 	.field_bit_size = 1,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
@@ -131,7 +405,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_COUNT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -187,7 +471,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -195,11 +489,7 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.result_operand = {
-		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
@@ -212,7 +502,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -224,7 +524,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DROP & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 14,
@@ -308,7 +618,11 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 12,
@@ -336,6 +650,50 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 128,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 14,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index 8eb559050..feac30af2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -10,8 +10,8 @@
 
 uint16_t ulp_class_sig_tbl[BNXT_ULP_CLASS_SIG_TBL_MAX_SZ] = {
 	[BNXT_ULP_CLASS_HID_0080] = 1,
-	[BNXT_ULP_CLASS_HID_0000] = 2,
-	[BNXT_ULP_CLASS_HID_0087] = 3
+	[BNXT_ULP_CLASS_HID_0087] = 2,
+	[BNXT_ULP_CLASS_HID_0000] = 3
 };
 
 struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
@@ -23,1871 +23,4722 @@ struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
 		BNXT_ULP_HDR_BIT_O_UDP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 0,
+	.class_tid = 8,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[2] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0000,
+	.class_hid = BNXT_ULP_CLASS_HID_0087,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
+		BNXT_ULP_HDR_BIT_T_VXLAN |
+		BNXT_ULP_HDR_BIT_I_ETH |
+		BNXT_ULP_HDR_BIT_I_IPV4 |
+		BNXT_ULP_HDR_BIT_I_UDP |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT |
+		BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 1,
+	.class_tid = 9,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[3] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0087,
+	.class_hid = BNXT_ULP_CLASS_HID_0000,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_HDR_BIT_T_VXLAN |
-		BNXT_ULP_HDR_BIT_I_ETH |
-		BNXT_ULP_HDR_BIT_I_IPV4 |
-		BNXT_ULP_HDR_BIT_I_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
 	.field_sig = { .bits =
-		BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT |
-		BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 2,
+	.class_tid = 10,
 	.act_vnic = 0,
 	.wc_pri = 0
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 4,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 2,
+	.start_tbl_idx = 4,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 6,
+	.start_tbl_idx = 6,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((4 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 0
+	.start_tbl_idx = 12,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((5 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 17,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((6 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 20,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((7 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 1,
+	.start_tbl_idx = 23,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((8 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 5
+	.start_tbl_idx = 24,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((9 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 5,
+	.start_tbl_idx = 29,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
+	},
+	[((10 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 10
+	.start_tbl_idx = 34,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 0,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
 	.result_start_idx = 0,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 0,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 2,
+	.key_start_idx = 0,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 1,
+	.result_start_idx = 26,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 15,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 14,
-	.result_bit_size = 10,
+	.result_start_idx = 39,
+	.result_bit_size = 32,
 	.result_num_fields = 1,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
-	.ident_nums = 1,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 40,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 41,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 18,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 15,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 13,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 67,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 60,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 23,
-	.result_bit_size = 64,
-	.result_num_fields = 9,
-	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 80,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 71,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 32,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 92,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 73,
+	.key_start_idx = 26,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 33,
+	.result_start_idx = 118,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 131,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 86,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 46,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.key_start_idx = 39,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 157,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
-	.ident_nums = 1,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_TX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 89,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 47,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 52,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 170,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 131,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 55,
+	.key_start_idx = 65,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 183,
 	.result_bit_size = 64,
-	.result_num_fields = 9,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 196,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 197,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 142,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 64,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 198,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
-	.ident_nums = 1,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_VFR_FLAG,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 144,
+	.key_start_idx = 78,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 65,
+	.result_start_idx = 224,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 157,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 78,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 237,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 249,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 160,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 79,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 91,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 275,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 202,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 87,
-	.result_bit_size = 64,
+	.result_start_idx = 288,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 104,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 314,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 117,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 327,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 340,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_GLOBAL,
+	.index_operand = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 130,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 366,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 132,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 367,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 145,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 380,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 148,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 381,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 190,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 389,
+	.result_bit_size = 64,
 	.result_num_fields = 9,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
-	}
-};
-
-struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 201,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 398,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 203,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 399,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 216,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 412,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 219,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 413,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 261,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 421,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 272,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 430,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 274,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 431,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 287,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 444,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 290,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 445,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 332,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 453,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	}
+};
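
Editor's note on the class table list above: each table entry refers into the key field list that follows through .key_start_idx and .key_num_fields, and every field carries a mask/spec opcode pair that tells the mapper how to fill that slice of the key blob. The fragment below is a minimal sketch of that walk under stated assumptions: push_bits() and ulp_sketch_build_key() are hypothetical stand-ins, <errno.h> and the driver's ulp_template_db.h type/enum definitions are assumed, and only the SET_TO_ZERO/SET_TO_CONSTANT cases are shown. It is an illustration of how the indices line up, not the driver's implementation.

/* Sketch only: build the key blob for one class table entry. */
static int
ulp_sketch_build_key(const struct bnxt_ulp_mapper_tbl_info *tbl,
		     const struct bnxt_ulp_mapper_class_key_field_info *flds,
		     uint8_t *key, uint32_t key_size_bits)
{
	uint32_t i, bit_off = 0;

	for (i = 0; i < tbl->key_num_fields; i++) {
		const struct bnxt_ulp_mapper_class_key_field_info *f =
			&flds[tbl->key_start_idx + i];

		if (bit_off + f->field_bit_size > key_size_bits)
			return -EINVAL;	/* key blob overflow */

		switch (f->spec_opcode) {
		case BNXT_ULP_MAPPER_OPC_SET_TO_ZERO:
			break;		/* slice stays zeroed */
		case BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT:
			/* push_bits() is a hypothetical blob helper */
			push_bits(key, bit_off, f->spec_operand,
				  f->field_bit_size);
			break;
		default:
			/* COMP_FIELD/HDR_FIELD/REGFILE resolved at run time */
			break;
		}
		bit_off += f->field_bit_size;
	}
	return 0;
}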
+
+struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_PARIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_PARIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_DST_PORT & 0xff,
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT & 0xff,
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x02}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_DST_PORT & 0xff,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_DST_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
-	}
-};
-
-struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
 	.field_bit_size = 10,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
@@ -2309,7 +5160,12 @@ struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 16,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 2346797db..695546437 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -6,7 +6,7 @@
 #ifndef ULP_TEMPLATE_DB_H_
 #define ULP_TEMPLATE_DB_H_
 
-#define BNXT_ULP_REGFILE_MAX_SZ 16
+#define BNXT_ULP_REGFILE_MAX_SZ 17
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_CACHE_TBL_MAX_SZ 4
@@ -18,15 +18,15 @@
 #define BNXT_ULP_CLASS_HID_SHFTL 23
 #define BNXT_ULP_CLASS_HID_MASK 255
 #define BNXT_ULP_ACT_SIG_TBL_MAX_SZ 256
-#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 4
+#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 24
 #define BNXT_ULP_ACT_HID_LOW_PRIME 7919
 #define BNXT_ULP_ACT_HID_HIGH_PRIME 7919
-#define BNXT_ULP_ACT_HID_SHFTR 0
+#define BNXT_ULP_ACT_HID_SHFTR 23
 #define BNXT_ULP_ACT_HID_SHFTL 23
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
-#define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
-#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
+#define BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ 5
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -242,7 +242,8 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
 	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
 	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
-	BNXT_ULP_REGFILE_INDEX_LAST = 16
+	BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR = 16,
+	BNXT_ULP_REGFILE_INDEX_LAST = 17
 };
 
 enum bnxt_ulp_search_before_alloc {
@@ -252,18 +253,18 @@ enum bnxt_ulp_search_before_alloc {
 };
 
 enum bnxt_ulp_fdb_resource_flags {
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01,
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00,
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01
 };
 
 enum bnxt_ulp_fdb_type {
-	BNXT_ULP_FDB_TYPE_DEFAULT = 1,
-	BNXT_ULP_FDB_TYPE_REGULAR = 0
+	BNXT_ULP_FDB_TYPE_REGULAR = 0,
+	BNXT_ULP_FDB_TYPE_DEFAULT = 1
 };
 
 enum bnxt_ulp_flow_dir_bitmask {
-	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000,
-	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000
+	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000,
+	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000
 };
 
 enum bnxt_ulp_match_type_bitmask {
@@ -285,190 +286,66 @@ enum bnxt_ulp_resource_func {
 };
 
 enum bnxt_ulp_resource_sub_type {
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1
 };
 
 enum bnxt_ulp_sym {
-	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
-	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
-	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
-	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
-	BNXT_ULP_SYM_ECV_VALID_NO = 0,
-	BNXT_ULP_SYM_ECV_VALID_YES = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
-	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
-	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
-	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
-	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
-	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
-	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
-	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
-	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
-	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
-	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
-	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
-	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
-	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
-	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
-	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
-	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
-	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
-	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
-	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
-	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_PKT_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_PKT_TYPE_L2 = 0,
-	BNXT_ULP_SYM_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_POP_VLAN_YES = 1,
 	BNXT_ULP_SYM_RECYCLE_CNT_IGNORE = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
 	BNXT_ULP_SYM_RECYCLE_CNT_ONE = 1,
-	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
 	BNXT_ULP_SYM_RECYCLE_CNT_TWO = 2,
-	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
+	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
+	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
 	BNXT_ULP_SYM_RESERVED_IGNORE = 0,
-	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
-	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
+	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
 	BNXT_ULP_SYM_TL2_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_NO = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV4 = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_NO = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_YES = 1,
@@ -478,40 +355,164 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_TL4_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_TCP = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GENEVE = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GRE = 3,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV4 = 4,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_PPPOE = 6,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR1 = 8,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR2 = 9,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
-	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
+	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
+	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
+	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
+	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
+	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
+	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
+	BNXT_ULP_SYM_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
+	BNXT_ULP_SYM_ECV_VALID_NO = 0,
+	BNXT_ULP_SYM_ECV_VALID_YES = 1,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
+	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
+	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
 	BNXT_ULP_SYM_WH_PLUS_INT_ACT_REC = 1,
-	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
-	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
 	BNXT_ULP_SYM_WH_PLUS_UC_ACT_REC = 0,
+	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
+	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
+	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
+	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
+	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
+	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
+	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
+	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
+	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_YES = 1
 };
 
 enum bnxt_ulp_wh_plus {
-	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448
 };
 
 enum bnxt_ulp_act_prop_sz {
@@ -604,18 +605,44 @@ enum bnxt_ulp_act_prop_idx {
 
 enum bnxt_ulp_class_hid {
 	BNXT_ULP_CLASS_HID_0080 = 0x0080,
-	BNXT_ULP_CLASS_HID_0000 = 0x0000,
-	BNXT_ULP_CLASS_HID_0087 = 0x0087
+	BNXT_ULP_CLASS_HID_0087 = 0x0087,
+	BNXT_ULP_CLASS_HID_0000 = 0x0000
 };
 
 enum bnxt_ulp_act_hid {
-	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_0002 = 0x0002,
+	BNXT_ULP_ACT_HID_0022 = 0x0022,
+	BNXT_ULP_ACT_HID_0026 = 0x0026,
+	BNXT_ULP_ACT_HID_0006 = 0x0006,
+	BNXT_ULP_ACT_HID_0009 = 0x0009,
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
-	BNXT_ULP_ACT_HID_0040 = 0x0040
+	BNXT_ULP_ACT_HID_002d = 0x002d,
+	BNXT_ULP_ACT_HID_004b = 0x004b,
+	BNXT_ULP_ACT_HID_004a = 0x004a,
+	BNXT_ULP_ACT_HID_004f = 0x004f,
+	BNXT_ULP_ACT_HID_004e = 0x004e,
+	BNXT_ULP_ACT_HID_006c = 0x006c,
+	BNXT_ULP_ACT_HID_0070 = 0x0070,
+	BNXT_ULP_ACT_HID_0021 = 0x0021,
+	BNXT_ULP_ACT_HID_0025 = 0x0025,
+	BNXT_ULP_ACT_HID_0043 = 0x0043,
+	BNXT_ULP_ACT_HID_0042 = 0x0042,
+	BNXT_ULP_ACT_HID_0047 = 0x0047,
+	BNXT_ULP_ACT_HID_0046 = 0x0046,
+	BNXT_ULP_ACT_HID_0064 = 0x0064,
+	BNXT_ULP_ACT_HID_0068 = 0x0068,
+	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_00df = 0x00df
 };
 
 enum bnxt_ulp_df_tpl {
 	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
-	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2,
+	BNXT_ULP_DF_TPL_VFREP_TO_VF = 3,
+	BNXT_ULP_DF_TPL_VF_TO_VFREP = 4,
+	BNXT_ULP_DF_TPL_DRV_FUNC_SVIF_PUSH_VLAN = 5,
+	BNXT_ULP_DF_TPL_PORT_SVIF_VID_VNIC_POP_VLAN = 6,
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC = 7
 };
+
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
index 84b952304..769542042 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
@@ -6,220 +6,275 @@
 #ifndef ULP_HDR_FIELD_ENUMS_H_
 #define ULP_HDR_FIELD_ENUMS_H_
 
-enum bnxt_ulp_hf0 {
-	BNXT_ULP_HF0_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF0_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF0_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF0_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF0_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF0_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF0_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF0_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF0_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF0_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF0_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF0_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF0_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF0_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF0_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF0_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF0_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF0_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF0_IDX_O_UDP_CSUM              = 23
-};
-
 enum bnxt_ulp_hf1 {
-	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF1_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF1_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF1_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF1_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF1_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF1_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF1_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF1_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF1_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF1_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF1_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF1_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF1_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF1_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF1_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF1_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF1_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF1_IDX_O_UDP_CSUM              = 23
+	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0
 };
 
 enum bnxt_ulp_hf2 {
-	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF2_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF2_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF2_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF2_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF2_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF2_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF2_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF2_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF2_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF2_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF2_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF2_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF2_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF2_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF2_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF2_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF2_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF2_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF2_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF2_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF2_IDX_O_UDP_CSUM              = 23,
-	BNXT_ULP_HF2_IDX_T_VXLAN_FLAGS           = 24,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD0           = 25,
-	BNXT_ULP_HF2_IDX_T_VXLAN_VNI             = 26,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD1           = 27,
-	BNXT_ULP_HF2_IDX_I_ETH_DMAC              = 28,
-	BNXT_ULP_HF2_IDX_I_ETH_SMAC              = 29,
-	BNXT_ULP_HF2_IDX_I_ETH_TYPE              = 30,
-	BNXT_ULP_HF2_IDX_IO_VLAN_CFI_PRI         = 31,
-	BNXT_ULP_HF2_IDX_IO_VLAN_VID             = 32,
-	BNXT_ULP_HF2_IDX_IO_VLAN_TYPE            = 33,
-	BNXT_ULP_HF2_IDX_II_VLAN_CFI_PRI         = 34,
-	BNXT_ULP_HF2_IDX_II_VLAN_VID             = 35,
-	BNXT_ULP_HF2_IDX_II_VLAN_TYPE            = 36,
-	BNXT_ULP_HF2_IDX_I_IPV4_VER              = 37,
-	BNXT_ULP_HF2_IDX_I_IPV4_TOS              = 38,
-	BNXT_ULP_HF2_IDX_I_IPV4_LEN              = 39,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_ID          = 40,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_OFF         = 41,
-	BNXT_ULP_HF2_IDX_I_IPV4_TTL              = 42,
-	BNXT_ULP_HF2_IDX_I_IPV4_NEXT_PID         = 43,
-	BNXT_ULP_HF2_IDX_I_IPV4_CSUM             = 44,
-	BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR         = 45,
-	BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR         = 46,
-	BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT          = 47,
-	BNXT_ULP_HF2_IDX_I_UDP_DST_PORT          = 48,
-	BNXT_ULP_HF2_IDX_I_UDP_LENGTH            = 49,
-	BNXT_ULP_HF2_IDX_I_UDP_CSUM              = 50
-};
-
-enum bnxt_ulp_hf_bitmask0 {
-	BNXT_ULP_HF0_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf3 {
+	BNXT_ULP_HF3_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf4 {
+	BNXT_ULP_HF4_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf5 {
+	BNXT_ULP_HF5_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf6 {
+	BNXT_ULP_HF6_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf7 {
+	BNXT_ULP_HF7_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf8 {
+	BNXT_ULP_HF8_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF8_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF8_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF8_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF8_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF8_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF8_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF8_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF8_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF8_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF8_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF8_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF8_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF8_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF8_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF8_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF8_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF8_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF8_IDX_O_UDP_CSUM              = 23
+};
+
+enum bnxt_ulp_hf9 {
+	BNXT_ULP_HF9_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF9_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF9_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF9_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF9_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF9_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF9_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF9_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF9_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF9_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF9_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF9_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF9_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF9_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF9_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF9_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF9_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF9_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF9_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF9_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF9_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF9_IDX_O_UDP_CSUM              = 23,
+	BNXT_ULP_HF9_IDX_T_VXLAN_FLAGS           = 24,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD0           = 25,
+	BNXT_ULP_HF9_IDX_T_VXLAN_VNI             = 26,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD1           = 27,
+	BNXT_ULP_HF9_IDX_I_ETH_DMAC              = 28,
+	BNXT_ULP_HF9_IDX_I_ETH_SMAC              = 29,
+	BNXT_ULP_HF9_IDX_I_ETH_TYPE              = 30,
+	BNXT_ULP_HF9_IDX_IO_VLAN_CFI_PRI         = 31,
+	BNXT_ULP_HF9_IDX_IO_VLAN_VID             = 32,
+	BNXT_ULP_HF9_IDX_IO_VLAN_TYPE            = 33,
+	BNXT_ULP_HF9_IDX_II_VLAN_CFI_PRI         = 34,
+	BNXT_ULP_HF9_IDX_II_VLAN_VID             = 35,
+	BNXT_ULP_HF9_IDX_II_VLAN_TYPE            = 36,
+	BNXT_ULP_HF9_IDX_I_IPV4_VER              = 37,
+	BNXT_ULP_HF9_IDX_I_IPV4_TOS              = 38,
+	BNXT_ULP_HF9_IDX_I_IPV4_LEN              = 39,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_ID          = 40,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_OFF         = 41,
+	BNXT_ULP_HF9_IDX_I_IPV4_TTL              = 42,
+	BNXT_ULP_HF9_IDX_I_IPV4_PROTO_ID         = 43,
+	BNXT_ULP_HF9_IDX_I_IPV4_CSUM             = 44,
+	BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR         = 45,
+	BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR         = 46,
+	BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT          = 47,
+	BNXT_ULP_HF9_IDX_I_UDP_DST_PORT          = 48,
+	BNXT_ULP_HF9_IDX_I_UDP_LENGTH            = 49,
+	BNXT_ULP_HF9_IDX_I_UDP_CSUM              = 50
+};
+
+enum bnxt_ulp_hf10 {
+	BNXT_ULP_HF10_IDX_SVIF_INDEX             = 0,
+	BNXT_ULP_HF10_IDX_O_ETH_DMAC             = 1,
+	BNXT_ULP_HF10_IDX_O_ETH_SMAC             = 2,
+	BNXT_ULP_HF10_IDX_O_ETH_TYPE             = 3,
+	BNXT_ULP_HF10_IDX_OO_VLAN_CFI_PRI        = 4,
+	BNXT_ULP_HF10_IDX_OO_VLAN_VID            = 5,
+	BNXT_ULP_HF10_IDX_OO_VLAN_TYPE           = 6,
+	BNXT_ULP_HF10_IDX_OI_VLAN_CFI_PRI        = 7,
+	BNXT_ULP_HF10_IDX_OI_VLAN_VID            = 8,
+	BNXT_ULP_HF10_IDX_OI_VLAN_TYPE           = 9,
+	BNXT_ULP_HF10_IDX_O_IPV4_VER             = 10,
+	BNXT_ULP_HF10_IDX_O_IPV4_TOS             = 11,
+	BNXT_ULP_HF10_IDX_O_IPV4_LEN             = 12,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_ID         = 13,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_OFF        = 14,
+	BNXT_ULP_HF10_IDX_O_IPV4_TTL             = 15,
+	BNXT_ULP_HF10_IDX_O_IPV4_PROTO_ID        = 16,
+	BNXT_ULP_HF10_IDX_O_IPV4_CSUM            = 17,
+	BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR        = 18,
+	BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR        = 19,
+	BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT         = 20,
+	BNXT_ULP_HF10_IDX_O_UDP_DST_PORT         = 21,
+	BNXT_ULP_HF10_IDX_O_UDP_LENGTH           = 22,
+	BNXT_ULP_HF10_IDX_O_UDP_CSUM             = 23
 };
 
 enum bnxt_ulp_hf_bitmask1 {
-	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000
 };
 
 enum bnxt_ulp_hf_bitmask2 {
-	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_VID         = 0x0000000010000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_VER          = 0x0000000004000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_NEXT_PID     = 0x0000000000100000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_CSUM          = 0x0000000000002000
+	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask3 {
+	BNXT_ULP_HF3_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask4 {
+	BNXT_ULP_HF4_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask5 {
+	BNXT_ULP_HF5_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask6 {
+	BNXT_ULP_HF6_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask7 {
+	BNXT_ULP_HF7_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask8 {
+	BNXT_ULP_HF8_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+};
+
+enum bnxt_ulp_hf_bitmask9 {
+	BNXT_ULP_HF9_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_VID         = 0x0000000010000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_VER          = 0x0000000004000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_PROTO_ID     = 0x0000000000100000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_CSUM          = 0x0000000000002000
 };
 
+enum bnxt_ulp_hf_bitmask10 {
+	BNXT_ULP_HF10_BITMASK_SVIF_INDEX         = 0x8000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_DMAC         = 0x4000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_SMAC         = 0x2000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_TYPE         = 0x1000000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_CFI_PRI    = 0x0800000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_VID        = 0x0400000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_TYPE       = 0x0200000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_CFI_PRI    = 0x0100000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_VID        = 0x0080000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_TYPE       = 0x0040000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_VER         = 0x0020000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TOS         = 0x0010000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_LEN         = 0x0008000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_ID     = 0x0004000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_OFF    = 0x0002000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TTL         = 0x0001000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_PROTO_ID    = 0x0000800000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_CSUM        = 0x0000400000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR    = 0x0000200000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR    = 0x0000100000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT     = 0x0000080000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT     = 0x0000040000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_LENGTH       = 0x0000020000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_CSUM         = 0x0000010000000000
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 7c440e3a4..f0a57cf65 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -294,60 +294,72 @@ struct bnxt_ulp_rte_act_info ulp_act_info[] = {
 
 struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[] = {
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	}
 };
 
 struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
-	.flow_mem_type          = BNXT_ULP_FLOW_MEM_TYPE_EXT,
-	.byte_order             = BNXT_ULP_BYTE_ORDER_LE,
-	.encap_byte_swap        = 1,
-	.flow_db_num_entries    = 32768,
-	.mark_db_lfid_entries   = 65536,
-	.mark_db_gfid_entries   = 65536,
-	.flow_count_db_entries  = 16384,
-	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2,
-	.ext_cntr_table_type    = 0,
-	.byte_count_mask        = 0x00000003ffffffff,
-	.packet_count_mask      = 0xfffffffc00000000,
-	.byte_count_shift       = 0,
-	.packet_count_shift     = 36
+		.flow_mem_type           = BNXT_ULP_FLOW_MEM_TYPE_EXT,
+		.byte_order              = BNXT_ULP_BYTE_ORDER_LE,
+		.encap_byte_swap         = 1,
+		.flow_db_num_entries     = 32768,
+		.mark_db_lfid_entries    = 65536,
+		.mark_db_gfid_entries    = 65536,
+		.flow_count_db_entries   = 16384,
+		.num_resources_per_flow  = 8,
+		.num_phy_ports           = 2,
+		.ext_cntr_table_type     = 0,
+		.byte_count_mask         = 0x0000000fffffffff,
+		.packet_count_mask       = 0xffffffff00000000,
+		.byte_count_shift        = 0,
+		.packet_count_shift      = 36
 	}
 };
 
 struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[] = {
 	[0] = {
-	.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index       = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction               = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_RX
 	},
 	[1] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction          = TF_DIR_TX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_TX
 	},
 	[2] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_L2_CTXT,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
-	.direction          = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_RX
+	},
+	[3] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_TX
+	},
+	[4] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+		.resource_type           = TF_TBL_TYPE_FULL_ACT_RECORD,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR,
+		.direction               = TF_DIR_TX
 	}
 };
 
@@ -547,10 +559,11 @@ struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
 };
 
 uint32_t bnxt_ulp_encap_vtag_map[] = {
-	[0] = BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
-	[1] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
-	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
 
 uint32_t ulp_glb_template_tbl[] = {
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC
 };
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 46/51] net/bnxt: create default flow rules for the VF-rep
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (44 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
                           ` (4 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Invoke three new APIs for default flow create/destroy and to get
the action pointer for a default flow.
Change ulp_intf_update() to accept an rte_eth_dev as input and invoke
it from the VF-rep start function.
The ULP Mark Manager indicates whether the cfa_code returned in the
Rx completion descriptor belongs to one of the default flow rules
created for the VF representor conduit. In that case the mark_id
returned is the VF rep's DPDK port id, which is used to look up the
corresponding rte_eth_dev struct in bnxt_vfr_recv().
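
As an illustration only (a minimal sketch, not the driver's exact
code), the Rx-side dispatch this enables resolves the mark_id of a
default-rule hit, which is the VF-rep's DPDK port id, directly to its
rte_eth_dev:

    #include <rte_ethdev.h>

    /* Hypothetical helper: map a default-rule mark_id (the VF-rep's
     * DPDK port id) to the representor's rte_eth_dev before handing
     * the mbuf to bnxt_vfr_recv().
     */
    static inline struct rte_eth_dev *
    vfr_dev_from_mark(uint32_t mark_id)
    {
            if (mark_id >= RTE_MAX_ETHPORTS)
                    return NULL;
            return &rte_eth_devices[mark_id];
    }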

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |   4 +-
 drivers/net/bnxt/bnxt_reps.c | 134 ++++++++++++++++++++++++-----------
 drivers/net/bnxt/bnxt_reps.h |   3 +-
 drivers/net/bnxt/bnxt_rxr.c  |  25 +++----
 drivers/net/bnxt/bnxt_txq.h  |   1 +
 5 files changed, 111 insertions(+), 56 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 32acced60..f16bf3319 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -806,8 +806,10 @@ struct bnxt_vf_representor {
 	uint16_t		fw_fid;
 	uint16_t		dflt_vnic_id;
 	uint16_t		svif;
-	uint16_t		tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	uint16_t		rx_cfa_code;
+	uint32_t		rep2vf_flow_id;
+	uint32_t		vf2rep_flow_id;
 	/* Private data store of associated PF/Trusted VF */
 	struct rte_eth_dev	*parent_dev;
 	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index ea6f0010f..a37a06184 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -12,6 +12,9 @@
 #include "bnxt_txr.h"
 #include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
+#include "bnxt_tf_common.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
@@ -29,30 +32,20 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 };
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf)
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf)
 {
 	struct bnxt_sw_rx_bd *prod_rx_buf;
 	struct bnxt_rx_ring_info *rep_rxr;
 	struct bnxt_rx_queue *rep_rxq;
 	struct rte_eth_dev *vfr_eth_dev;
 	struct bnxt_vf_representor *vfr_bp;
-	uint16_t vf_id;
 	uint16_t mask;
 	uint8_t que;
 
-	vf_id = bp->cfa_code_map[cfa_code];
-	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
-	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
-		return 1;
-	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	vfr_eth_dev = &rte_eth_devices[port_id];
 	if (!vfr_eth_dev)
 		return 1;
 	vfr_bp = vfr_eth_dev->data->dev_private;
-	if (vfr_bp->rx_cfa_code != cfa_code) {
-		/* cfa_code not meant for this VF rep!!?? */
-		return 1;
-	}
 	/* If rxq_id happens to be > max rep_queue, use rxq0 */
 	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
 	rep_rxq = vfr_bp->rx_queues[que];
@@ -127,7 +120,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	pthread_mutex_lock(&parent->rep_info->vfr_lock);
 	ptxq = parent->tx_queues[qid];
 
-	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+	ptxq->vfr_tx_cfa_action = vf_rep_bp->vfr_tx_cfa_action;
 
 	for (i = 0; i < nb_pkts; i++) {
 		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
@@ -135,7 +128,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	}
 
 	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
-	ptxq->tx_cfa_action = 0;
+	ptxq->vfr_tx_cfa_action = 0;
 	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
 
 	return rc;
@@ -252,10 +245,67 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
-static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+static int bnxt_tf_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
+{
+	int rc;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
+	struct rte_eth_dev *parent_dev = vfr->parent_dev;
+	struct bnxt *parent_bp = parent_dev->data->dev_private;
+	uint16_t vfr_port_id = vfr_ethdev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(vfr_port_id >> 8) & 0xff, vfr_port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	ulp_port_db_dev_port_intf_update(parent_bp->ulp_ctx, vfr_ethdev);
+
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VFREP_TO_VF,
+				     &vfr->rep2vf_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VFR->VF failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VFR->VF! ***\n");
+	BNXT_TF_DBG(DEBUG, "rep2vf_flow_id = %d\n", vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_db_cfa_action_get(parent_bp->ulp_ctx,
+						vfr->rep2vf_flow_id,
+						&vfr->vfr_tx_cfa_action);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Failed to get action_ptr for VFR->VF dflt rule\n");
+		return -EIO;
+	}
+	BNXT_TF_DBG(DEBUG, "tx_cfa_action = %d\n", vfr->vfr_tx_cfa_action);
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VF_TO_VFREP,
+				     &vfr->vf2rep_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VF->VFR failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VF->VFR! ***\n");
+	BNXT_TF_DBG(DEBUG, "vfr2rep_flow_id = %d\n", vfr->vf2rep_flow_id);
+
+	return 0;
+}
+
+static int bnxt_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
 {
 	int rc = 0;
-	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
 
 	if (!vfr || !vfr->parent_dev) {
 		PMD_DRV_LOG(ERR,
@@ -263,10 +313,8 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 		return -ENOMEM;
 	}
 
-	parent_bp = vfr->parent_dev->data->dev_private;
-
 	/* Check if representor has been already allocated in FW */
-	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+	if (vfr->vfr_tx_cfa_action && vfr->rx_cfa_code)
 		return 0;
 
 	/*
@@ -274,24 +322,14 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 	 * Otherwise the FW will create the VF-rep rules with
 	 * default drop action.
 	 */
-
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
-				     vfr->vf_id,
-				     &vfr->tx_cfa_action,
-				     &vfr->rx_cfa_code);
-	 */
-	if (!rc) {
-		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+	rc = bnxt_tf_vfr_alloc(vfr_ethdev);
+	if (!rc)
 		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
 			    vfr->vf_id);
-	} else {
+	else
 		PMD_DRV_LOG(ERR,
 			    "Failed to alloc representor %d in FW\n",
 			    vfr->vf_id);
-	}
 
 	return rc;
 }
@@ -312,7 +350,7 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
 	int rc;
 
-	rc = bnxt_vfr_alloc(rep_bp);
+	rc = bnxt_vfr_alloc(eth_dev);
 
 	if (!rc) {
 		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
@@ -327,6 +365,25 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	return rc;
 }
 
+static int bnxt_tf_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->rep2vf_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed rep2vf flowid: %d\n",
+			    vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->vf2rep_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed vf2rep flowid: %d\n",
+			    vfr->vf2rep_flow_id);
+	return 0;
+}
+
 static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 {
 	int rc = 0;
@@ -341,15 +398,10 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp = vfr->parent_dev->data->dev_private;
 
 	/* Check if representor has been already freed in FW */
-	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+	if (!vfr->vfr_tx_cfa_action && !vfr->rx_cfa_code)
 		return 0;
 
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
-				    vfr->vf_id);
-	 */
+	rc = bnxt_tf_vfr_free(vfr);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
 			    "Failed to free representor %d in FW\n",
@@ -360,7 +412,7 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
 	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
 		    vfr->vf_id);
-	vfr->tx_cfa_action = 0;
+	vfr->vfr_tx_cfa_action = 0;
 	vfr->rx_cfa_code = 0;
 
 	return rc;
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index 5c2e0a0b9..418b95afc 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -13,8 +13,7 @@
 #define BNXT_VF_IDX_INVALID             0xffff
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf);
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 37b534fc2..64058879e 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -403,9 +403,9 @@ bnxt_get_rx_ts_thor(struct bnxt *bp, uint32_t rx_ts_cmpl)
 }
 #endif
 
-static void
+static uint32_t
 bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
-			  struct rte_mbuf *mbuf)
+			  struct rte_mbuf *mbuf, uint32_t *vfr_flag)
 {
 	uint32_t cfa_code;
 	uint32_t meta_fmt;
@@ -415,8 +415,6 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	uint32_t flags2;
 	uint32_t gfid_support = 0;
 	int rc;
-	uint32_t vfr_flag;
-
 
 	if (BNXT_GFID_ENABLED(bp))
 		gfid_support = 1;
@@ -485,19 +483,21 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	}
 
 	rc = ulp_mark_db_mark_get(bp->ulp_ctx, gfid,
-				  cfa_code, &vfr_flag, &mark_id);
+				  cfa_code, vfr_flag, &mark_id);
 	if (!rc) {
 		/* Got the mark, write it to the mbuf and return */
 		mbuf->hash.fdir.hi = mark_id;
 		mbuf->udata64 = (cfa_code & 0xffffffffull) << 32;
 		mbuf->hash.fdir.id = rxcmp1->cfa_code;
 		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-		return;
+		return mark_id;
 	}
 
 skip_mark:
 	mbuf->hash.fdir.hi = 0;
 	mbuf->hash.fdir.id = 0;
+
+	return 0;
 }
 
 void bnxt_set_mark_in_mbuf(struct bnxt *bp,
@@ -553,7 +553,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	int rc = 0;
 	uint8_t agg_buf = 0;
 	uint16_t cmp_type;
-	uint32_t flags2_f = 0;
+	uint32_t flags2_f = 0, vfr_flag = 0, mark_id = 0;
 	uint16_t flags_type;
 	struct bnxt *bp = rxq->bp;
 
@@ -632,7 +632,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	}
 
 	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+		mark_id = bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf,
+						    &vfr_flag);
 	else
 		bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
 
@@ -736,10 +737,10 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
-	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
-	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
-		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
-				   mbuf)) {
+	if (BNXT_TRUFLOW_EN(bp) &&
+	    (BNXT_VF_IS_TRUSTED(bp) || BNXT_PF(bp)) &&
+	    vfr_flag) {
+		if (!bnxt_vfr_recv(mark_id, rxq->queue_id, mbuf)) {
 			/* Now return an error so that nb_rx_pkts is not
 			 * incremented.
 			 * This packet was meant to be given to the representor.
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 69ff89aab..a1ab3f39a 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -30,6 +30,7 @@ struct bnxt_tx_queue {
 	int			index;
 	int			tx_wake_thresh;
 	uint32_t                tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 47/51] net/bnxt: add port default rules for ingress and egress
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (45 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
                           ` (3 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Ingress and egress port default rules are needed to steer packets
from the port to DPDK and from DPDK to the port, respectively.
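
As a condensed sketch of the sequence added below (error handling and
the DEV_PORT_ID TLV parameter list omitted for brevity), the parent
port creates the port-to-app rule, then the app-to-port rule, and
finally caches the Tx cfa_action for later use in the Tx descriptor:

    /* Simplified from bnxt_create_df_rules() in this patch. */
    ulp_default_flow_create(bp->eth_dev, param_list,
                            BNXT_ULP_DF_TPL_PORT_TO_VS,
                            &cfg_data->port_to_app_flow_id);
    ulp_default_flow_create(bp->eth_dev, param_list,
                            BNXT_ULP_DF_TPL_VS_TO_PORT,
                            &cfg_data->app_to_port_flow_id);
    ulp_default_flow_db_cfa_action_get(bp->ulp_ctx,
                                       cfg_data->app_to_port_flow_id,
                                       &cfg_data->tx_cfa_action);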

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c     | 76 +++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h |  3 ++
 2 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de8e11a6e..2a19c5040 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -29,6 +29,7 @@
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
 #include "bnxt_tf_common.h"
+#include "ulp_flow_db.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -1162,6 +1163,73 @@ static int bnxt_handle_if_change_status(struct bnxt *bp)
 	return rc;
 }
 
+static int32_t
+bnxt_create_port_app_df_rule(struct bnxt *bp, uint8_t flow_type,
+			     uint32_t *flow_id)
+{
+	uint16_t port_id = bp->eth_dev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(port_id >> 8) & 0xff, port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	return ulp_default_flow_create(bp->eth_dev, param_list, flow_type,
+				       flow_id);
+}
+
+static int32_t
+bnxt_create_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+	int rc;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_PORT_TO_VS,
+					  &cfg_data->port_to_app_flow_id);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to create port to app default rule\n");
+		return rc;
+	}
+
+	BNXT_TF_DBG(DEBUG, "***** created port to app default rule ******\n");
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_VS_TO_PORT,
+					  &cfg_data->app_to_port_flow_id);
+	if (!rc) {
+		rc = ulp_default_flow_db_cfa_action_get(bp->ulp_ctx,
+							cfg_data->app_to_port_flow_id,
+							&cfg_data->tx_cfa_action);
+		if (rc)
+			goto err;
+
+		BNXT_TF_DBG(DEBUG,
+			    "***** created app to port default rule *****\n");
+		return 0;
+	}
+
+err:
+	BNXT_TF_DBG(DEBUG, "Failed to create app to port default rule\n");
+	return rc;
+}
+
+static void
+bnxt_destroy_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->port_to_app_flow_id);
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->app_to_port_flow_id);
+}
+
 static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = eth_dev->data->dev_private;
@@ -1330,8 +1398,11 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
-	if (BNXT_TRUFLOW_EN(bp))
+	if (BNXT_TRUFLOW_EN(bp)) {
+		if (bp->rep_info != NULL)
+			bnxt_destroy_df_rules(bp);
 		bnxt_ulp_deinit(bp);
+	}
 
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
@@ -1581,6 +1652,9 @@ static int bnxt_promiscuous_disable_op(struct rte_eth_dev *eth_dev)
 	if (rc != 0)
 		vnic->flags = old_flags;
 
+	if (BNXT_TRUFLOW_EN(bp) && bp->rep_info != NULL)
+		bnxt_create_df_rules(bp);
+
 	return rc;
 }
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 3563f63fa..4843da562 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,9 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	uint32_t			port_to_app_flow_id;
+	uint32_t			app_to_port_flow_id;
+	uint32_t			tx_cfa_action;
 };
 
 struct bnxt_ulp_context {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 48/51] net/bnxt: fill cfa action in the Tx descriptor
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (46 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
                           ` (2 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Currently, only VF-rep transmit requires cfa_action to be filled
in the Tx buffer descriptor. However, with truflow, DPDK (non VF-rep)
to port traffic also requires cfa_action to be filled in the Tx
buffer descriptor.

This patch picks the correct cfa_action pointer while transmitting a
packet: depending on whether the packet is sent on a non-VF-rep or a
VF-rep port, tx_cfa_action or vfr_tx_cfa_action from the txq is
filled in the Tx buffer descriptor.
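
In outline (condensed from the change below), the action selection
when building a long Tx buffer descriptor with truflow enabled is:

    /* VF-rep transmits carry their own action; all other traffic
     * uses the port-level default action cached in the ULP context.
     */
    if (BNXT_TRUFLOW_EN(txq->bp)) {
            if (txq->vfr_tx_cfa_action)
                    cfa_action = txq->vfr_tx_cfa_action;
            else
                    cfa_action = txq->bp->ulp_ctx->cfg_data->tx_cfa_action;
    }

Keeping vfr_tx_cfa_action as a separate txq field lets the VF-rep Tx
path set its action on the shared parent Tx queue while the port-level
default is read from the ULP context.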

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index d7e193d38..f5884268e 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
+				PKT_TX_QINQ_PKT) ||
+	     txq->bp->ulp_ctx->cfg_data->tx_cfa_action ||
+	     txq->vfr_tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +186,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = txq->tx_cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp)) {
+			if (txq->vfr_tx_cfa_action)
+				cfa_action = txq->vfr_tx_cfa_action;
+			else
+				cfa_action =
+				      txq->bp->ulp_ctx->cfg_data->tx_cfa_action;
+		}
+
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
@@ -212,7 +222,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 					&txr->tx_desc_ring[txr->tx_prod];
 		txbd1->lflags = 0;
 		txbd1->cfa_meta = vlan_tag_flags;
-		txbd1->cfa_action = cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp))
+			txbd1->cfa_action = cfa_action;
 
 		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
 			uint16_t hdr_size;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 49/51] net/bnxt: add ULP Flow counter Manager
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (47 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 51/51] doc: update release notes Ajit Khaparde
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

The flow counter manager allocates memory to hold the software view
of the counters, where the on-chip counter data is accumulated, along
with another memory block that shadows the on-chip counter data,
i.e. where the raw counter data is DMAed into from the chip.
It also keeps track of the first HW counter ID, as that is needed to
retrieve the counter data in bulk using a TF API. It issues this
command from an rte_alarm callback that runs every second.
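
For reference, a minimal self-contained sketch of the per-entry
accumulation step, mirroring the FLOW_CNTR_PKTS()/FLOW_CNTR_BYTES()
macros added in this patch (36-bit byte count in the low bits of each
64-bit stats word, packet count in the remaining upper bits):

    #include <stdint.h>

    /* Accumulate one raw 64-bit counter word into the SW view. */
    static void acc_one_counter(uint64_t raw, uint64_t *pkt_count,
                                uint64_t *byte_count)
    {
            *byte_count += raw & ((UINT64_C(1) << 36) - 1);
            *pkt_count  += raw >> 36;
    }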

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_ulp/Makefile      |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    |  35 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h    |   8 +
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c  | 465 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h  | 148 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c |  27 ++
 7 files changed, 685 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 2939857ca..5fb0ed380 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -46,6 +46,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
 	'tf_core/tf_em_host.c',
+	'tf_ulp/ulp_fc_mgr.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 3f1b43bae..abb68150d 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -17,3 +17,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_fc_mgr.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index e5e7e5f43..c05861150 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -18,6 +18,7 @@
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "ulp_mark_mgr.h"
+#include "ulp_fc_mgr.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 #include "ulp_port_db.h"
@@ -705,6 +706,12 @@ bnxt_ulp_init(struct bnxt *bp)
 		goto jump_to_error;
 	}
 
+	rc = ulp_fc_mgr_init(bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize ulp flow counter mgr\n");
+		goto jump_to_error;
+	}
+
 	return rc;
 
 jump_to_error:
@@ -752,6 +759,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	/* cleanup the ulp mapper */
 	ulp_mapper_deinit(bp->ulp_ctx);
 
+	/* Delete the Flow Counter Manager */
+	ulp_fc_mgr_deinit(bp->ulp_ctx);
+
 	/* Delete the Port database */
 	ulp_port_db_deinit(bp->ulp_ctx);
 
@@ -963,3 +973,28 @@ bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx)
 
 	return ulp_ctx->cfg_data->port_db;
 }
+
+/* Function to set the flow counter info into the context */
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->fc_info = ulp_fc_info;
+
+	return 0;
+}
+
+/* Function to retrieve the flow counter info from the context. */
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->fc_info;
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 4843da562..a13328426 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,7 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	struct bnxt_ulp_fc_info		*fc_info;
 	uint32_t			port_to_app_flow_id;
 	uint32_t			app_to_port_flow_id;
 	uint32_t			tx_cfa_action;
@@ -154,4 +155,11 @@ int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *error);
 
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info);
+
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
new file mode 100644
index 000000000..f70d4a295
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -0,0 +1,465 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_alarm.h>
+#include "bnxt.h"
+#include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
+#include "ulp_fc_mgr.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_struct.h"
+#include "tf_tbl.h"
+
+static int
+ulp_fc_mgr_shadow_mem_alloc(struct hw_fc_mem_info *parms, int size)
+{
+	/* Allocate memory*/
+	if (parms == NULL)
+		return -EINVAL;
+
+	parms->mem_va = rte_zmalloc("ulp_fc_info",
+				    RTE_CACHE_LINE_ROUNDUP(size),
+				    4096);
+	if (parms->mem_va == NULL) {
+		BNXT_TF_DBG(ERR, "Allocate failed mem_va\n");
+		return -ENOMEM;
+	}
+
+	rte_mem_lock_page(parms->mem_va);
+
+	parms->mem_pa = (void *)(uintptr_t)rte_mem_virt2phy(parms->mem_va);
+	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
+		BNXT_TF_DBG(ERR, "Allocate failed mem_pa\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void
+ulp_fc_mgr_shadow_mem_free(struct hw_fc_mem_info *parms)
+{
+	rte_free(parms->mem_va);
+}
+
+/*
+ * Allocate and Initialize all Flow Counter Manager resources for this ulp
+ * context.
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager.
+ *
+ */
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id, sw_acc_cntr_tbl_sz, hw_fc_mem_info_sz;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i, rc;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(DEBUG, "Invalid ULP CTXT\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return -EINVAL;
+	}
+
+	ulp_fc_info = rte_zmalloc("ulp_fc_info", sizeof(*ulp_fc_info), 0);
+	if (!ulp_fc_info)
+		goto error;
+
+	rc = pthread_mutex_init(&ulp_fc_info->fc_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Failed to initialize fc mutex\n");
+		goto error;
+	}
+
+	/* Add the FC info tbl to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, ulp_fc_info);
+
+	sw_acc_cntr_tbl_sz = sizeof(struct sw_acc_counter) *
+				dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		ulp_fc_info->sw_acc_tbl[i] = rte_zmalloc("ulp_sw_acc_cntr_tbl",
+							 sw_acc_cntr_tbl_sz, 0);
+		if (!ulp_fc_info->sw_acc_tbl[i])
+			goto error;
+	}
+
+	hw_fc_mem_info_sz = sizeof(uint64_t) * dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_fc_mgr_shadow_mem_alloc(&ulp_fc_info->shadow_hw_tbl[i],
+						 hw_fc_mem_info_sz);
+		if (rc)
+			goto error;
+	}
+
+	return 0;
+
+error:
+	ulp_fc_mgr_deinit(ctxt);
+	BNXT_TF_DBG(DEBUG,
+		    "Failed to allocate memory for fc mgr\n");
+
+	return -ENOMEM;
+}
+
+/*
+ * Release all resources in the Flow Counter Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EINVAL;
+
+	ulp_fc_mgr_thread_cancel(ctxt);
+
+	pthread_mutex_destroy(&ulp_fc_info->fc_lock);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		rte_free(ulp_fc_info->sw_acc_tbl[i]);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		ulp_fc_mgr_shadow_mem_free(&ulp_fc_info->shadow_hw_tbl[i]);
+
+
+	rte_free(ulp_fc_info);
+
+	/* Safe to ignore on deinit */
+	(void)bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, NULL);
+
+	return 0;
+}
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	return !!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD);
+}
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD)) {
+		rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+				  ulp_fc_mgr_alarm_cb,
+				  (void *)ctxt);
+		ulp_fc_info->flags |= ULP_FLAG_FC_THREAD;
+	}
+
+	return 0;
+}
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	ulp_fc_info->flags &= ~ULP_FLAG_FC_THREAD;
+	rte_eal_alarm_cancel(ulp_fc_mgr_alarm_cb, (void *)ctxt);
+}
+
+/*
+ * DMA-in the raw counter data from the HW and accumulate in the
+ * local accumulator table using the TF-Core API
+ *
+ * tfp [in] The TF-Core context
+ *
+ * fc_info [in] The ULP Flow counter info ptr
+ *
+ * dir [in] The direction of the flow
+ *
+ * num_counters [in] The number of counters
+ *
+ */
+static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+				       struct bnxt_ulp_fc_info *fc_info,
+				       enum tf_dir dir, uint32_t num_counters)
+{
+	int rc = 0;
+	struct tf_tbl_get_bulk_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD: Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t *stats = NULL;
+	uint16_t i = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.starting_idx = fc_info->shadow_hw_tbl[dir].start_idx;
+	parms.num_entries = num_counters;
+	/*
+	 * TODO:
+	 * Size of an entry needs to obtained from template
+	 */
+	parms.entry_sz_in_bytes = sizeof(uint64_t);
+	stats = (uint64_t *)fc_info->shadow_hw_tbl[dir].mem_va;
+	parms.physical_mem_addr = (uintptr_t)fc_info->shadow_hw_tbl[dir].mem_pa;
+
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Memory not initialized id:0x%x dir:%d\n",
+			    parms.starting_idx, dir);
+		return -EINVAL;
+	}
+
+	rc = tf_tbl_bulk_get(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Get failed for id:0x%x rc:%d\n",
+			    parms.starting_idx, rc);
+		return rc;
+	}
+
+	for (i = 0; i < num_counters; i++) {
+		/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+		sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][i];
+		if (!sw_acc_tbl_entry->valid)
+			continue;
+		sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats[i]);
+		sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats[i]);
+	}
+
+	return rc;
+}
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg)
+{
+	int rc = 0, i;
+	struct bnxt_ulp_context *ctxt = arg;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct bnxt_ulp_device_params *dparms;
+	struct tf *tfp;
+	uint32_t dev_id;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return;
+	}
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ctxt);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return;
+	}
+
+	/*
+	 * Take the fc_lock to ensure no flow is destroyed
+	 * during the bulk get
+	 */
+	if (pthread_mutex_trylock(&ulp_fc_info->fc_lock))
+		goto out;
+
+	if (!ulp_fc_info->num_entries) {
+		pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
+					     dparms->flow_count_db_entries);
+		if (rc)
+			break;
+	}
+
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	/*
+	 * If cmd fails once, no need of
+	 * invoking again every second
+	 */
+
+	if (rc) {
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+out:
+	rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+			  ulp_fc_mgr_alarm_cb,
+			  (void *)ctxt);
+}
+
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * Returns true if the start index has already been set
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	/* Assuming start_idx of 0 is invalid */
+	return (ulp_fc_info->shadow_hw_tbl[dir].start_idx != 0);
+}
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+				 uint32_t start_idx)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EIO;
+
+	/* Assuming that 0 is an invalid counter ID ? */
+	if (ulp_fc_info->shadow_hw_tbl[dir].start_idx == 0)
+		ulp_fc_info->shadow_hw_tbl[dir].start_idx = start_idx;
+
+	return 0;
+}
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			    uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->num_entries++;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
+
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			      uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
+	ulp_fc_info->num_entries--;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
new file mode 100644
index 000000000..faa77dd75
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_FC_MGR_H_
+#define _ULP_FC_MGR_H_
+
+#include "bnxt_ulp.h"
+#include "tf_core.h"
+
+#define ULP_FLAG_FC_THREAD			BIT(0)
+#define ULP_FC_TIMER	1/* Timer freq in Sec Flow Counters */
+
+/* Macros to extract packet/byte counters from a 64-bit flow counter. */
+#define FLOW_CNTR_BYTE_WIDTH 36
+#define FLOW_CNTR_BYTE_MASK  (((uint64_t)1 << FLOW_CNTR_BYTE_WIDTH) - 1)
+
+#define FLOW_CNTR_PKTS(v) ((v) >> FLOW_CNTR_BYTE_WIDTH)
+#define FLOW_CNTR_BYTES(v) ((v) & FLOW_CNTR_BYTE_MASK)
+
+struct sw_acc_counter {
+	uint64_t pkt_count;
+	uint64_t byte_count;
+	bool	valid;
+};
+
+struct hw_fc_mem_info {
+	/*
+	 * [out] mem_va, pointer to the allocated memory.
+	 */
+	void *mem_va;
+	/*
+	 * [out] mem_pa, physical address of the allocated memory.
+	 */
+	void *mem_pa;
+	uint32_t start_idx;
+};
+
+struct bnxt_ulp_fc_info {
+	struct sw_acc_counter	*sw_acc_tbl[TF_DIR_MAX];
+	struct hw_fc_mem_info	shadow_hw_tbl[TF_DIR_MAX];
+	uint32_t		flags;
+	uint32_t		num_entries;
+	pthread_mutex_t		fc_lock;
+};
+
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Release all resources in the flow counter manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg);
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			     uint32_t start_idx);
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			uint32_t hw_cntr_id);
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			  uint32_t hw_cntr_id);
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
+
+#endif /* _ULP_FC_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 7696de2a5..a3cfe54bf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -10,6 +10,7 @@
 #include "ulp_utils.h"
 #include "ulp_template_struct.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 
 #define ULP_FLOW_DB_RES_DIR_BIT		31
 #define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
@@ -484,6 +485,21 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 		ulp_flow_db_res_params_to_info(fid_resource, params);
 	}
 
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		/* Store the first HW counter ID for this table */
+		if (!ulp_fc_mgr_start_idx_isset(ulp_ctxt, params->direction))
+			ulp_fc_mgr_start_idx_set(ulp_ctxt, params->direction,
+						 params->resource_hndl);
+
+		ulp_fc_mgr_cntr_set(ulp_ctxt, params->direction,
+				    params->resource_hndl);
+
+		if (!ulp_fc_mgr_thread_isstarted(ulp_ctxt))
+			ulp_fc_mgr_thread_start(ulp_ctxt);
+	}
+
 	/* all good, return success */
 	return 0;
 }
@@ -574,6 +590,17 @@ int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
 					nxt_idx);
 	}
 
+	/* Now that the HW Flow counter resource is deleted, reset it's
+	 * corresponding slot in the SW accumulation table in the Flow Counter
+	 * manager
+	 */
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		ulp_fc_mgr_cntr_reset(ulp_ctxt, params->direction,
+				      params->resource_hndl);
+	}
+
 	/* all good, return success */
 	return 0;
 }
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 50/51] net/bnxt: add support for count action in flow query
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (48 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 51/51] doc: update release notes Ajit Khaparde
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Use the flow counter manager to fetch the accumulated stats for
a flow.
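
From the application's side this is exercised through the standard
rte_flow query API; a minimal usage sketch, assuming port_id and flow
refer to an already-created flow on a bnxt port:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_flow.h>

    struct rte_flow_query_count count = { .reset = 0 };
    struct rte_flow_action action = { .type = RTE_FLOW_ACTION_TYPE_COUNT };
    struct rte_flow_error error;

    /* Returns the packet/byte counts accumulated by the FC manager. */
    if (rte_flow_query(port_id, flow, &action, &count, &error) == 0 &&
        count.hits_set && count.bytes_set)
            printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
                   count.hits, count.bytes);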

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c |  45 +++++++-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c    | 141 +++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h    |  17 ++-
 3 files changed, 196 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 7ef306e58..36a014184 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -9,6 +9,7 @@
 #include "ulp_matcher.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 #include <rte_malloc.h>
 
 static int32_t
@@ -289,11 +290,53 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	return ret;
 }
 
+/* Function to query the rte flows. */
+static int32_t
+bnxt_ulp_flow_query(struct rte_eth_dev *eth_dev,
+		    struct rte_flow *flow,
+		    const struct rte_flow_action *action,
+		    void *data,
+		    struct rte_flow_error *error)
+{
+	int rc = 0;
+	struct bnxt_ulp_context *ulp_ctx;
+	struct rte_flow_query_count *count;
+	uint32_t flow_id;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to query flow.");
+		return -EINVAL;
+	}
+
+	flow_id = (uint32_t)(uintptr_t)flow;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		count = data;
+		rc = ulp_fc_mgr_query_count_get(ulp_ctx, flow_id, count);
+		if (rc) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to query flow.");
+		}
+		break;
+	default:
+		rte_flow_error_set(error, -rc, RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "Unsupported action item");
+	}
+
+	return rc;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
 	.flush = bnxt_ulp_flow_flush,
-	.query = NULL,
+	.query = bnxt_ulp_flow_query,
 	.isolate = NULL
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
index f70d4a295..9944e9e5c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -11,6 +11,7 @@
 #include "bnxt_ulp.h"
 #include "bnxt_tf_common.h"
 #include "ulp_fc_mgr.h"
+#include "ulp_flow_db.h"
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "tf_tbl.h"
@@ -226,9 +227,10 @@ void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
  * num_counters [in] The number of counters
  *
  */
-static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+__rte_unused static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 				       struct bnxt_ulp_fc_info *fc_info,
 				       enum tf_dir dir, uint32_t num_counters)
+/* MARK AS UNUSED FOR NOW TO AVOID COMPILATION ERRORS TILL API is RESOLVED */
 {
 	int rc = 0;
 	struct tf_tbl_get_bulk_parms parms = { 0 };
@@ -275,6 +277,45 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 
 	return rc;
 }
+
+static int ulp_get_single_flow_stat(struct tf *tfp,
+				    struct bnxt_ulp_fc_info *fc_info,
+				    enum tf_dir dir,
+				    uint32_t hw_cntr_id)
+{
+	int rc = 0;
+	struct tf_get_tbl_entry_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD:Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t stats = 0;
+	uint32_t sw_cntr_indx = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.idx = hw_cntr_id;
+	/*
+	 * TODO:
+	 * Size of an entry needs to obtained from template
+	 */
+	parms.data_sz_in_bytes = sizeof(uint64_t);
+	parms.data = (uint8_t *)&stats;
+	rc = tf_get_tbl_entry(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Get failed for id:0x%x rc:%d\n",
+			    parms.idx, rc);
+		return rc;
+	}
+
+	/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+	sw_cntr_indx = hw_cntr_id - fc_info->shadow_hw_tbl[dir].start_idx;
+	sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][sw_cntr_indx];
+	sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats);
+	sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats);
+
+	return rc;
+}
+
 /*
  * Alarm handler that will issue the TF-Core API to fetch
  * data from the chip's internal flow counters
@@ -282,15 +323,18 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
+
 void
 ulp_fc_mgr_alarm_cb(void *arg)
 {
-	int rc = 0, i;
+	int rc = 0;
+	unsigned int j;
+	enum tf_dir i;
 	struct bnxt_ulp_context *ctxt = arg;
 	struct bnxt_ulp_fc_info *ulp_fc_info;
 	struct bnxt_ulp_device_params *dparms;
 	struct tf *tfp;
-	uint32_t dev_id;
+	uint32_t dev_id, hw_cntr_id = 0;
 
 	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
 	if (!ulp_fc_info)
@@ -325,13 +369,27 @@ ulp_fc_mgr_alarm_cb(void *arg)
 		ulp_fc_mgr_thread_cancel(ctxt);
 		return;
 	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
+	/*
+	 * Commented for now till GET_BULK is resolved, just get the first flow
+	 * stat for now
+	 for (i = 0; i < TF_DIR_MAX; i++) {
 		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
 					     dparms->flow_count_db_entries);
 		if (rc)
 			break;
 	}
+	*/
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		for (j = 0; j < ulp_fc_info->num_entries; j++) {
+			if (!ulp_fc_info->sw_acc_tbl[i][j].valid)
+				continue;
+			hw_cntr_id = ulp_fc_info->sw_acc_tbl[i][j].hw_cntr_id;
+			rc = ulp_get_single_flow_stat(tfp, ulp_fc_info, i,
+						      hw_cntr_id);
+			if (rc)
+				break;
+		}
+	}
 
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -425,6 +483,7 @@ int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = hw_cntr_id;
 	ulp_fc_info->num_entries++;
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -456,6 +515,7 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
 	ulp_fc_info->num_entries--;
@@ -463,3 +523,74 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 
 	return 0;
 }
+
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ctxt,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count)
+{
+	int rc = 0;
+	uint32_t nxt_resource_index = 0;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct ulp_flow_db_res_params params;
+	enum tf_dir dir;
+	uint32_t hw_cntr_id = 0, sw_cntr_idx = 0;
+	struct sw_acc_counter sw_acc_tbl_entry;
+	bool found_cntr_resource = false;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -ENODEV;
+
+	do {
+		rc = ulp_flow_db_resource_get(ctxt,
+					      BNXT_ULP_REGULAR_FLOW_TABLE,
+					      flow_id,
+					      &nxt_resource_index,
+					      &params);
+		if (params.resource_func ==
+		     BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE &&
+		     (params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT ||
+		      params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT)) {
+			found_cntr_resource = true;
+			break;
+		}
+
+	} while (!rc);
+
+	if (rc)
+		return rc;
+
+	if (found_cntr_resource) {
+		dir = params.direction;
+		hw_cntr_id = params.resource_hndl;
+		sw_cntr_idx = hw_cntr_id -
+				ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+		sw_acc_tbl_entry = ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx];
+		if (params.resource_sub_type ==
+			BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+			count->hits_set = 1;
+			count->bytes_set = 1;
+			count->hits = sw_acc_tbl_entry.pkt_count;
+			count->bytes = sw_acc_tbl_entry.byte_count;
+		} else {
+			/* TBD: Handle External counters */
+			rc = -EINVAL;
+		}
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
index faa77dd75..207267049 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -23,6 +23,7 @@ struct sw_acc_counter {
 	uint64_t pkt_count;
 	uint64_t byte_count;
 	bool	valid;
+	uint32_t hw_cntr_id;
 };
 
 struct hw_fc_mem_info {
@@ -142,7 +143,21 @@ bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
-
 bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ulp_ctx,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count);
 #endif /* _ULP_FC_MGR_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v3 51/51] doc: update release notes
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
                           ` (49 preceding siblings ...)
  2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
@ 2020-07-02  4:11         ` Ajit Khaparde
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02  4:11 UTC (permalink / raw)
  To: dev

Update the release notes with enhancements in the Broadcom bnxt PMD.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 doc/guides/rel_notes/release_20_08.rst | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 5cbc4ce14..9bcea29ba 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -91,6 +91,16 @@ New Features
 
   * Added support for DCF datapath configuration.
 
+* **Updated Broadcom bnxt driver.**
+
+  Updated the Broadcom bnxt driver with new features and improvements, including:
+
+  * Added support for VF representors.
+  * Added support for multiple devices.
+  * Added support for new resource manager API.
+  * Added support for VXLAN encap/decap.
+  * Added support for rte_flow_query for COUNT action.
+
 * **Added support for BPF_ABS/BPF_IND load instructions.**
 
   Added support for two BPF non-generic instructions:
@@ -107,7 +117,6 @@ New Features
   * Dump ``rte_flow`` memory consumption.
   * Measure packet per second forwarding.
 
-
 Removed Items
 -------------
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management
  2020-07-01 21:31     ` Ferruh Yigit
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
@ 2020-07-02 23:27       ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
                           ` (50 more replies)
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
  2 siblings, 51 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev

v1->v2:
 - update commit message
 - rebase patches against latest changes in the tree
 - fix signed-off-by tags
 - update release notes

v2->v3:
 - fix compilation issues

v3->v4:
 - rebase against latest dpdk-next-net

Ajit Khaparde (1):
  doc: update release notes

Jay Ding (5):
  net/bnxt: implement support for TCAM access
  net/bnxt: support two level priority for TCAMs
  net/bnxt: add external action alloc and free
  net/bnxt: implement IF tables set and get
  net/bnxt: add global config set and get APIs

Kishore Padmanabha (8):
  net/bnxt: integrate with the latest tf core changes
  net/bnxt: add support for if table processing
  net/bnxt: disable Tx vector mode if truflow is enabled
  net/bnxt: add index opcode and operand to mapper table
  net/bnxt: add support for global resource templates
  net/bnxt: add support for internal exact match entries
  net/bnxt: add support for conditional execution of mapper tables
  net/bnxt: add VF-rep and stat templates

Lance Richardson (1):
  net/bnxt: initialize parent PF information

Michael Wildt (7):
  net/bnxt: add multi device support
  net/bnxt: update multi device design support
  net/bnxt: multiple device implementation
  net/bnxt: update identifier with remap support
  net/bnxt: update RM with residual checker
  net/bnxt: update table get to use new design
  net/bnxt: add TF register and unregister

Mike Baucom (1):
  net/bnxt: add support for internal encap records

Peter Spreadborough (7):
  net/bnxt: add support for exact match
  net/bnxt: modify EM insert and delete to use HWRM direct
  net/bnxt: add HCAPI interface support
  net/bnxt: support EM and TCAM lookup with table scope
  net/bnxt: update RM to support HCAPI only
  net/bnxt: remove table scope from session
  net/bnxt: add support for EEM System memory

Randy Schacher (2):
  net/bnxt: add core changes for EM and EEM lookups
  net/bnxt: align CFA resources with RM

Shahaji Bhosle (2):
  net/bnxt: support bulk table get and mirror
  net/bnxt: support two-level priority for TCAMs

Somnath Kotur (7):
  net/bnxt: add basic infrastructure for VF representors
  net/bnxt: add support for VF-reps data path
  net/bnxt: get IDs for VF-Rep endpoint
  net/bnxt: parse representor along with other dev-args
  net/bnxt: create default flow rules for the VF-rep conduit
  net/bnxt: add ULP Flow counter Manager
  net/bnxt: add support for count action in flow query

Venkat Duvvuru (10):
  net/bnxt: modify port db dev interface
  net/bnxt: get port and function info
  net/bnxt: add support for hwrm port phy qcaps
  net/bnxt: modify port db to handle more info
  net/bnxt: enable port MAC qcfg command for trusted VF
  net/bnxt: enhancements for port db
  net/bnxt: manage VF to VFR conduit
  net/bnxt: fill mapper parameters with default rules info
  net/bnxt: add port default rules for ingress and egress
  net/bnxt: fill cfa action in the Tx descriptor

 config/common_base                            |    1 +
 doc/guides/rel_notes/release_20_08.rst        |   11 +-
 drivers/net/bnxt/Makefile                     |    8 +-
 drivers/net/bnxt/bnxt.h                       |  121 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  519 +-
 drivers/net/bnxt/bnxt_hwrm.c                  |  122 +-
 drivers/net/bnxt/bnxt_hwrm.h                  |    7 +
 drivers/net/bnxt/bnxt_reps.c                  |  773 +++
 drivers/net/bnxt/bnxt_reps.h                  |   45 +
 drivers/net/bnxt/bnxt_rxr.c                   |   39 +-
 drivers/net/bnxt/bnxt_rxr.h                   |    1 +
 drivers/net/bnxt/bnxt_txq.h                   |    2 +
 drivers/net/bnxt/bnxt_txr.c                   |   18 +-
 drivers/net/bnxt/hcapi/Makefile               |   10 +
 drivers/net/bnxt/hcapi/cfa_p40_hw.h           |  781 +++
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h          |  303 +
 drivers/net/bnxt/hcapi/hcapi_cfa.h            |  276 +
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h       |  672 +++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c         |  399 ++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h         |  467 ++
 drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++--
 drivers/net/bnxt/meson.build                  |   21 +-
 drivers/net/bnxt/tf_core/Makefile             |   29 +-
 drivers/net/bnxt/tf_core/bitalloc.c           |  107 +
 drivers/net/bnxt/tf_core/bitalloc.h           |    5 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h |  293 +
 drivers/net/bnxt/tf_core/hwrm_tf.h            |  995 +---
 drivers/net/bnxt/tf_core/ll.c                 |   52 +
 drivers/net/bnxt/tf_core/ll.h                 |   46 +
 drivers/net/bnxt/tf_core/lookup3.h            |    1 -
 drivers/net/bnxt/tf_core/stack.c              |    8 +
 drivers/net/bnxt/tf_core/stack.h              |   10 +
 drivers/net/bnxt/tf_core/tf_common.h          |   43 +
 drivers/net/bnxt/tf_core/tf_core.c            | 1495 +++--
 drivers/net/bnxt/tf_core/tf_core.h            |  874 ++-
 drivers/net/bnxt/tf_core/tf_device.c          |  271 +
 drivers/net/bnxt/tf_core/tf_device.h          |  650 ++
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  147 +
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  104 +
 drivers/net/bnxt/tf_core/tf_em.c              |  515 --
 drivers/net/bnxt/tf_core/tf_em.h              |  492 +-
 drivers/net/bnxt/tf_core/tf_em_common.c       | 1048 ++++
 drivers/net/bnxt/tf_core/tf_em_common.h       |  134 +
 drivers/net/bnxt/tf_core/tf_em_host.c         |  531 ++
 drivers/net/bnxt/tf_core/tf_em_internal.c     |  352 ++
 drivers/net/bnxt/tf_core/tf_em_system.c       |  533 ++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c      |  199 +
 drivers/net/bnxt/tf_core/tf_global_cfg.h      |  170 +
 drivers/net/bnxt/tf_core/tf_identifier.c      |  186 +
 drivers/net/bnxt/tf_core/tf_identifier.h      |  147 +
 drivers/net/bnxt/tf_core/tf_if_tbl.c          |  178 +
 drivers/net/bnxt/tf_core/tf_if_tbl.h          |  236 +
 drivers/net/bnxt/tf_core/tf_msg.c             | 1681 +++---
 drivers/net/bnxt/tf_core/tf_msg.h             |  409 +-
 drivers/net/bnxt/tf_core/tf_resources.h       |  531 --
 drivers/net/bnxt/tf_core/tf_rm.c              | 3840 +++---------
 drivers/net/bnxt/tf_core/tf_rm.h              |  554 +-
 drivers/net/bnxt/tf_core/tf_session.c         |  776 +++
 drivers/net/bnxt/tf_core/tf_session.h         |  565 +-
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      |  240 +
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h     |  239 +
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1930 +-----
 drivers/net/bnxt/tf_core/tf_tbl.h             |  469 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |  430 ++
 drivers/net/bnxt/tf_core/tf_tcam.h            |  360 ++
 drivers/net/bnxt/tf_core/tf_util.c            |  176 +
 drivers/net/bnxt/tf_core/tf_util.h            |   98 +
 drivers/net/bnxt/tf_core/tfp.c                |   33 +-
 drivers/net/bnxt/tf_core/tfp.h                |  153 +-
 drivers/net/bnxt/tf_ulp/Makefile              |    2 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   16 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |  129 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |   35 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |   84 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c       |  385 ++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c          |  596 ++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h          |  163 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   42 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  481 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    6 +-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   10 +
 drivers/net/bnxt/tf_ulp/ulp_port_db.c         |  235 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.h         |  122 +-
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |   30 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  433 +-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5217 +++++++++++++----
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  537 +-
 .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   85 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   23 +-
 drivers/net/bnxt/tf_ulp/ulp_utils.c           |    2 +-
 94 files changed, 28009 insertions(+), 11247 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 01/51] net/bnxt: add basic infrastructure for VF reps
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
                           ` (49 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru, Kalesh AP

From: Somnath Kotur <somnath.kotur@broadcom.com>

Defines data structures and code to init/uninit
VF representors during pci_probe and pci_remove
respectively.
Most of the dev_ops for the VF representor are just
stubs for now and will be filled out in the next patch.

To create a representor using testpmd:
testpmd -c 0xff -wB:D.F,representor=1 -- -i
testpmd -c 0xff -w05:02.0,representor=[1] -- -i

To create a representor using ovs-dpdk:
1. First add the trusted VF port to a bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0
2. Add the representor port to the bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0,representor=1
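
A condensed sketch of the probe flow this patch adds (see bnxt_pci_probe()
in the diff below; error handling is omitted here):

    /* 1. Create or look up the backing PF/trusted-VF ethdev */
    backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
    if (backing_eth_dev == NULL)
        rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
                           sizeof(struct bnxt),
                           eth_dev_pci_specific_init, pci_dev,
                           bnxt_dev_init, NULL);

    /* 2. For each representor=<N> dev-arg, create a VF-rep ethdev named
     *    net_<pci-bdf>_representor_<N> with its own small private struct
     */
    snprintf(name, sizeof(name), "net_%s_representor_%d",
             pci_dev->device.name, eth_da.representor_ports[i]);
    rte_eth_dev_create(&pci_dev->device, name,
                       sizeof(struct bnxt_vf_representor),
                       NULL, NULL, bnxt_vf_representor_init, &representor);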

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile      |   2 +
 drivers/net/bnxt/bnxt.h        |  64 +++++++-
 drivers/net/bnxt/bnxt_ethdev.c | 225 ++++++++++++++++++++------
 drivers/net/bnxt/bnxt_reps.c   | 287 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_reps.h   |  35 ++++
 drivers/net/bnxt/meson.build   |   1 +
 6 files changed, 566 insertions(+), 48 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index a375299c3..365627499 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -14,6 +14,7 @@ LIB = librte_pmd_bnxt.a
 EXPORT_MAP := rte_pmd_bnxt_version.map
 
 CFLAGS += -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += $(WERROR_FLAGS)
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
@@ -38,6 +39,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_reps.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += rte_pmd_bnxt.c
 ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index d455f8d84..9b7b87cee 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -220,6 +220,7 @@ struct bnxt_child_vf_info {
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
+#define BNXT_MAX_VF_REPS	64
 #define BNXT_TOTAL_VFS(bp)	((bp)->pf->total_vfs)
 #define BNXT_FIRST_VF_FID	128
 #define BNXT_PF_RINGS_USED(bp)	bnxt_get_num_queues(bp)
@@ -492,6 +493,10 @@ struct bnxt_mark_info {
 	bool		valid;
 };
 
+struct bnxt_rep_info {
+	struct rte_eth_dev	*vfr_eth_dev;
+};
+
 /* address space location of register */
 #define BNXT_FW_STATUS_REG_TYPE_MASK	3
 /* register is located in PCIe config space */
@@ -515,6 +520,40 @@ struct bnxt_mark_info {
 #define BNXT_FW_STATUS_HEALTHY		0x8000
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
+#define BNXT_ETH_RSS_SUPPORT (	\
+	ETH_RSS_IPV4 |		\
+	ETH_RSS_NONFRAG_IPV4_TCP |	\
+	ETH_RSS_NONFRAG_IPV4_UDP |	\
+	ETH_RSS_IPV6 |		\
+	ETH_RSS_NONFRAG_IPV6_TCP |	\
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
+				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_CKSUM | \
+				     DEV_TX_OFFLOAD_UDP_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_TSO | \
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_QINQ_INSERT | \
+				     DEV_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
+				     DEV_RX_OFFLOAD_VLAN_STRIP | \
+				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_UDP_CKSUM | \
+				     DEV_RX_OFFLOAD_TCP_CKSUM | \
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
+				     DEV_RX_OFFLOAD_KEEP_CRC | \
+				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
+				     DEV_RX_OFFLOAD_TCP_LRO | \
+				     DEV_RX_OFFLOAD_SCATTER | \
+				     DEV_RX_OFFLOAD_RSS_HASH)
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -682,6 +721,9 @@ struct bnxt {
 #define BNXT_MAX_RINGS(bp) \
 	(RTE_MIN((((bp)->max_cp_rings - BNXT_NUM_ASYNC_CPR(bp)) / 2U), \
 		 BNXT_MAX_TX_RINGS(bp)))
+
+#define BNXT_MAX_VF_REP_RINGS	8
+
 	uint16_t		max_nq_rings;
 	uint16_t		max_l2_ctx;
 	uint16_t		max_rx_em_flows;
@@ -711,7 +753,9 @@ struct bnxt {
 
 	uint16_t		fw_reset_min_msecs;
 	uint16_t		fw_reset_max_msecs;
-
+	uint16_t		switch_domain_id;
+	uint16_t		num_reps;
+	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -732,6 +776,18 @@ struct bnxt {
 
 #define BNXT_FC_TIMER	1 /* Timer freq in Sec Flow Counters */
 
+/**
+ * Structure to store private data for each VF representor instance
+ */
+struct bnxt_vf_representor {
+	uint16_t switch_domain_id;
+	uint16_t vf_id;
+	/* Private data store of associated PF/Trusted VF */
+	struct bnxt	*parent_priv;
+	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
 int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 		     bool exp_link_status);
@@ -744,7 +800,13 @@ void bnxt_schedule_fw_health_check(struct bnxt *bp);
 
 bool is_bnxt_supported(struct rte_eth_dev *dev);
 bool bnxt_stratus_device(struct bnxt *bp);
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete);
+
 extern const struct rte_flow_ops bnxt_flow_ops;
+
 #define bnxt_acquire_flow_lock(bp) \
 	pthread_mutex_lock(&(bp)->flow_lock)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7022f6d52..4911745af 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -18,6 +18,7 @@
 #include "bnxt_filter.h"
 #include "bnxt_hwrm.h"
 #include "bnxt_irq.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxq.h"
 #include "bnxt_rxr.h"
@@ -93,40 +94,6 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-#define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
-				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_VLAN_STRIP | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
-
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
@@ -163,7 +130,6 @@ static int bnxt_devarg_max_num_kflow_invalid(uint16_t max_num_kflows)
 }
 
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev);
 static int bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev);
@@ -198,7 +164,7 @@ static uint16_t bnxt_rss_ctxts(const struct bnxt *bp)
 				    BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
-static uint16_t  bnxt_rss_hash_tbl_size(const struct bnxt *bp)
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 {
 	if (!BNXT_CHIP_THOR(bp))
 		return HW_HASH_INDEX_SIZE;
@@ -1047,7 +1013,7 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	return -ENOSPC;
 }
 
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 {
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
@@ -1273,6 +1239,12 @@ static int bnxt_dev_set_link_down_op(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static void bnxt_free_switch_domain(struct bnxt *bp)
+{
+	if (bp->switch_domain_id)
+		rte_eth_switch_domain_free(bp->switch_domain_id);
+}
+
 /* Unload the driver, release resources */
 static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
@@ -1341,6 +1313,8 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
+	bnxt_free_switch_domain(bp);
+
 	bnxt_uninit_resources(bp, false);
 
 	bnxt_free_leds_info(bp);
@@ -1522,8 +1496,8 @@ int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 	return rc;
 }
 
-static int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
-			       int wait_to_complete)
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete)
 {
 	return bnxt_link_update(eth_dev, wait_to_complete, ETH_LINK_UP);
 }
@@ -5477,8 +5451,26 @@ bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)
 	rte_kvargs_free(kvlist);
 }
 
+static int bnxt_alloc_switch_domain(struct bnxt *bp)
+{
+	int rc = 0;
+
+	if (BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) {
+		rc = rte_eth_switch_domain_alloc(&bp->switch_domain_id);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "Failed to alloc switch domain: %d\n", rc);
+		else
+			PMD_DRV_LOG(INFO,
+				    "Switch domain allocated %d\n",
+				    bp->switch_domain_id);
+	}
+
+	return rc;
+}
+
 static int
-bnxt_dev_init(struct rte_eth_dev *eth_dev)
+bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	static int version_printed;
@@ -5557,6 +5549,8 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 	if (rc)
 		goto error_free;
 
+	bnxt_alloc_switch_domain(bp);
+
 	/* Pass the information to the rte_eth_dev_close() that it should also
 	 * release the private port resources.
 	 */
@@ -5689,25 +5683,162 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *bp = eth_dev->data->dev_private;
+	struct rte_eth_dev *vf_rep_eth_dev;
+	int ret = 0, i;
+
+	if (!bp)
+		return -EINVAL;
+
+	for (i = 0; i < bp->num_reps; i++) {
+		vf_rep_eth_dev = bp->rep_info[i].vfr_eth_dev;
+		if (!vf_rep_eth_dev)
+			continue;
+		rte_eth_dev_destroy(vf_rep_eth_dev, bnxt_vf_representor_uninit);
+	}
+	ret = rte_eth_dev_destroy(eth_dev, bnxt_dev_uninit);
+
+	return ret;
+}
+
 static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	struct rte_pci_device *pci_dev)
 {
-	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct bnxt),
-		bnxt_dev_init);
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
+	uint16_t num_rep;
+	int i, ret = 0;
+	struct bnxt *backing_bp;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after first level of probe is already invoked
+	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
+	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+	}
+
+	if (num_rep > BNXT_MAX_VF_REPS) {
+		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
+			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
+		ret = -EINVAL;
+		return ret;
+	}
+
+	/* probe representor ports now */
+	if (!backing_eth_dev)
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = -ENODEV;
+		return ret;
+	}
+	backing_bp = backing_eth_dev->data->dev_private;
+
+	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
+		PMD_DRV_LOG(ERR,
+			    "Not a PF or trusted VF. No Representor support\n");
+		/* Returning an error is not an option.
+		 * Applications are not handling this correctly
+		 */
+		return ret;
+	}
+
+	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+		struct bnxt_vf_representor representor = {
+			.vf_id = eth_da.representor_ports[i],
+			.switch_domain_id = backing_bp->switch_domain_id,
+			.parent_priv = backing_bp
+		};
+
+		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
+			PMD_DRV_LOG(ERR, "VF-Rep id %d >= %d MAX VF ID\n",
+				    representor.vf_id, BNXT_MAX_VF_REPS);
+			continue;
+		}
+
+		/* representor port net_bdf_port */
+		snprintf(name, sizeof(name), "net_%s_representor_%d",
+			 pci_dev->device.name, eth_da.representor_ports[i]);
+
+		ret = rte_eth_dev_create(&pci_dev->device, name,
+					 sizeof(struct bnxt_vf_representor),
+					 NULL, NULL,
+					 bnxt_vf_representor_init,
+					 &representor);
+
+		if (!ret) {
+			vf_rep_eth_dev = rte_eth_dev_allocated(name);
+			if (!vf_rep_eth_dev) {
+				PMD_DRV_LOG(ERR, "Failed to find the eth_dev"
+					    " for VF-Rep: %s.", name);
+				bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+				ret = -ENODEV;
+				return ret;
+			}
+			backing_bp->rep_info[representor.vf_id].vfr_eth_dev =
+				vf_rep_eth_dev;
+			backing_bp->num_reps++;
+		} else {
+			PMD_DRV_LOG(ERR, "failed to create bnxt vf "
+				    "representor %s.", name);
+			bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+		}
+	}
+
+	return ret;
 }
 
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
-		return rte_eth_dev_pci_generic_remove(pci_dev,
-				bnxt_dev_uninit);
-	else
+	struct rte_eth_dev *eth_dev;
+
+	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!eth_dev)
+		return ENODEV; /* Invoked typically only by OVS-DPDK, by the
+				* time it comes here the eth_dev is already
+				* deleted by rte_eth_dev_close(), so returning
+				* +ve value will at least help in proper cleanup
+				*/
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		if (eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_vf_representor_uninit);
+		else
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_dev_uninit);
+	} else {
 		return rte_eth_dev_pci_generic_remove(pci_dev, NULL);
+	}
 }
 
 static struct rte_pci_driver bnxt_rte_pmd = {
 	.id_table = bnxt_pci_id_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+			RTE_PCI_DRV_PROBE_AGAIN, /* Needed in case of VF-REPs
+						  * and OVS-DPDK
+						  */
 	.probe = bnxt_pci_probe,
 	.remove = bnxt_pci_remove,
 };
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
new file mode 100644
index 000000000..21f1b0765
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -0,0 +1,287 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "bnxt_ring.h"
+#include "bnxt_reps.h"
+#include "hsi_struct_def_dpdk.h"
+
+static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
+	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
+	.dev_configure = bnxt_vf_rep_dev_configure_op,
+	.dev_start = bnxt_vf_rep_dev_start_op,
+	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.link_update = bnxt_vf_rep_link_update_op,
+	.dev_close = bnxt_vf_rep_dev_close_op,
+	.dev_stop = bnxt_vf_rep_dev_stop_op
+};
+
+static uint16_t
+bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
+		     __rte_unused struct rte_mbuf **rx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
+		     __rte_unused struct rte_mbuf **tx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
+{
+	struct bnxt_vf_representor *vf_rep_bp = eth_dev->data->dev_private;
+	struct bnxt_vf_representor *rep_params =
+				 (struct bnxt_vf_representor *)params;
+	struct rte_eth_link *link;
+	struct bnxt *parent_bp;
+
+	vf_rep_bp->vf_id = rep_params->vf_id;
+	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
+	vf_rep_bp->parent_priv = rep_params->parent_priv;
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+	eth_dev->data->representor_id = rep_params->vf_id;
+
+	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
+	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
+	       sizeof(vf_rep_bp->mac_addr));
+	eth_dev->data->mac_addrs =
+		(struct rte_ether_addr *)&vf_rep_bp->mac_addr;
+	eth_dev->dev_ops = &bnxt_vf_rep_dev_ops;
+
+	/* No data-path, but need stub Rx/Tx functions to avoid crash
+	 * when testing with ovs-dpdk
+	 */
+	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
+	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
+	/* Link state. Inherited from PF or trusted VF */
+	parent_bp = vf_rep_bp->parent_priv;
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+
+	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
+	bnxt_print_link_info(eth_dev);
+
+	/* Pass the information to the rte_eth_dev_close() that it should also
+	 * release the private port resources.
+	 */
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+	PMD_DRV_LOG(INFO,
+		    "Switch domain id %d: Representor Device %d init done\n",
+		    vf_rep_bp->switch_domain_id, vf_rep_bp->vf_id);
+
+	return 0;
+}
+
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+
+	uint16_t vf_id;
+
+	eth_dev->data->mac_addrs = NULL;
+
+	parent_bp = rep->parent_priv;
+	if (parent_bp) {
+		parent_bp->num_reps--;
+		vf_id = rep->vf_id;
+		if (parent_bp->rep_info) {
+			memset(&parent_bp->rep_info[vf_id], 0,
+			       sizeof(parent_bp->rep_info[vf_id]));
+			/* mark that this representor has been freed */
+		}
+	}
+	eth_dev->dev_ops = NULL;
+	return 0;
+}
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+	struct rte_eth_link *link;
+	int rc;
+
+	parent_bp = rep->parent_priv;
+	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
+
+	/* Link state. Inherited from PF or trusted VF */
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+	bnxt_print_link_info(eth_dev);
+
+	return rc;
+}
+
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_rep_link_update_op(eth_dev, 1);
+
+	return 0;
+}
+
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
+{
+	eth_dev = eth_dev;
+}
+
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_representor_uninit(eth_dev);
+}
+
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp;
+	uint16_t max_vnics, i, j, vpool, vrxq;
+	unsigned int max_rx_rings;
+	int rc = 0;
+
+	/* MAC Specifics */
+	parent_bp = rep_bp->parent_priv;
+	if (!parent_bp) {
+		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
+		return rc;
+	}
+	PMD_DRV_LOG(DEBUG, "Representor dev_info_get_op\n");
+	dev_info->max_mac_addrs = parent_bp->max_l2_ctx;
+	dev_info->max_hash_mac_addrs = 0;
+
+	max_rx_rings = BNXT_MAX_VF_REP_RINGS;
+	/* For the sake of symmetry, max_rx_queues = max_tx_queues */
+	dev_info->max_rx_queues = max_rx_rings;
+	dev_info->max_tx_queues = max_rx_rings;
+	dev_info->reta_size = bnxt_rss_hash_tbl_size(parent_bp);
+	dev_info->hash_key_size = 40;
+	max_vnics = parent_bp->max_vnics;
+
+	/* MTU specifics */
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+	dev_info->max_mtu = BNXT_MAX_MTU;
+
+	/* Fast path specifics */
+	dev_info->min_rx_bufsize = 1;
+	dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+
+	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
+	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
+		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
+	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
+
+	/* *INDENT-OFF* */
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 0,
+		},
+		.rx_free_thresh = 32,
+		/* If no descriptors available, pkts are dropped by default */
+		.rx_drop_en = 1,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = 32,
+			.hthresh = 0,
+			.wthresh = 0,
+		},
+		.tx_free_thresh = 32,
+		.tx_rs_thresh = 32,
+	};
+	eth_dev->data->dev_conf.intr_conf.lsc = 1;
+
+	eth_dev->data->dev_conf.intr_conf.rxq = 1;
+	dev_info->rx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->rx_desc_lim.nb_max = BNXT_MAX_RX_RING_DESC;
+	dev_info->tx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->tx_desc_lim.nb_max = BNXT_MAX_TX_RING_DESC;
+
+	/* *INDENT-ON* */
+
+	/*
+	 * TODO: default_rxconf, default_txconf, rx_desc_lim, and tx_desc_lim
+	 *       need further investigation.
+	 */
+
+	/* VMDq resources */
+	vpool = 64; /* ETH_64_POOLS */
+	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	for (i = 0; i < 4; vpool >>= 1, i++) {
+		if (max_vnics > vpool) {
+			for (j = 0; j < 5; vrxq >>= 1, j++) {
+				if (dev_info->max_rx_queues > vrxq) {
+					if (vpool > vrxq)
+						vpool = vrxq;
+					goto found;
+				}
+			}
+			/* Not enough resources to support VMDq */
+			break;
+		}
+	}
+	/* Not enough resources to support VMDq */
+	vpool = 0;
+	vrxq = 0;
+found:
+	dev_info->max_vmdq_pools = vpool;
+	dev_info->vmdq_queue_num = vrxq;
+
+	dev_info->vmdq_pool_base = 0;
+	dev_info->vmdq_queue_base = 0;
+
+	return 0;
+}
+
+int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
+{
+	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	return 0;
+}
+
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
+
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
new file mode 100644
index 000000000..6048faf08
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_REPS_H_
+#define _BNXT_REPS_H_
+
+#include <rte_malloc.h>
+#include <rte_ethdev.h>
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info);
+int bnxt_vf_rep_dev_configure_op(struct rte_eth_dev *eth_dev);
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl);
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp);
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf);
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+#endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 4306c6039..5c7859cb5 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -21,6 +21,7 @@ sources = files('bnxt_cpr.c',
 	'bnxt_txr.c',
 	'bnxt_util.c',
 	'bnxt_vnic.c',
+	'bnxt_reps.c',
 
 	'tf_core/tf_core.c',
 	'tf_core/bitalloc.c',
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 02/51] net/bnxt: add support for VF-reps data path
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
                           ` (48 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Added code to support Tx/Rx from a VF representor port.
The VF-reps use the RX/TX rings of the Trusted VF/PF.
For each VF-rep, the Trusted VF/PF driver issues a VFR_ALLOC FW cmd that
returns "cfa_code" and "cfa_action" values.
The FW sets up the filter tables in such a way that VF traffic by
default (in the absence of other rules) gets punted to the parent function
i.e. either the Trusted VF or the PF.
The cfa_code value in the RX-compl informs the driver of the source VF.
For traffic being transmitted from the VF-rep, the TX BD is tagged with
a cfa_action value that informs the HW to punt it to the corresponding
VF.
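
In short, the data path added below works as follows (condensed from
bnxt_vfr_recv() and bnxt_vf_rep_tx_burst() in the diff; ring-mask and
statistics handling omitted):

    /* Rx on the parent port: demux by the cfa_code in the completion */
    vf_id = bp->cfa_code_map[cfa_code];
    if (vf_id == BNXT_VF_IDX_INVALID)
        ;   /* not VF-rep traffic, process on the parent port as usual */
    else
        ;   /* queue the mbuf on the matching VF-rep's Rx ring */

    /* Tx from a VF-rep: borrow the parent's Tx queue and tag the burst
     * with this representor's cfa_action so the HW punts it to the VF
     */
    pthread_mutex_lock(&parent->rep_info->vfr_lock);
    ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
    bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
    ptxq->tx_cfa_action = 0;
    pthread_mutex_unlock(&parent->rep_info->vfr_lock);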

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  30 ++-
 drivers/net/bnxt/bnxt_ethdev.c | 150 ++++++++---
 drivers/net/bnxt/bnxt_reps.c   | 476 +++++++++++++++++++++++++++++++--
 drivers/net/bnxt/bnxt_reps.h   |  11 +
 drivers/net/bnxt/bnxt_rxr.c    |  22 +-
 drivers/net/bnxt/bnxt_rxr.h    |   1 +
 drivers/net/bnxt/bnxt_txq.h    |   1 +
 drivers/net/bnxt/bnxt_txr.c    |   4 +-
 8 files changed, 616 insertions(+), 79 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 9b7b87cee..443d9fee4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -495,6 +495,7 @@ struct bnxt_mark_info {
 
 struct bnxt_rep_info {
 	struct rte_eth_dev	*vfr_eth_dev;
+	pthread_mutex_t		vfr_lock;
 };
 
 /* address space location of register */
@@ -755,7 +756,8 @@ struct bnxt {
 	uint16_t		fw_reset_max_msecs;
 	uint16_t		switch_domain_id;
 	uint16_t		num_reps;
-	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
+	struct bnxt_rep_info	*rep_info;
+	uint16_t                *cfa_code_map;
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -780,12 +782,28 @@ struct bnxt {
  * Structure to store private data for each VF representor instance
  */
 struct bnxt_vf_representor {
-	uint16_t switch_domain_id;
-	uint16_t vf_id;
+	uint16_t		switch_domain_id;
+	uint16_t		vf_id;
+	uint16_t		tx_cfa_action;
+	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
-	struct bnxt	*parent_priv;
-	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
-	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct rte_eth_dev	*parent_dev;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t			dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct bnxt_rx_queue	**rx_queues;
+	unsigned int		rx_nr_rings;
+	unsigned int		tx_nr_rings;
+	uint64_t                tx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                tx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_bytes[BNXT_MAX_VF_REP_RINGS];
+};
+
+struct bnxt_vf_rep_tx_queue {
+	struct bnxt_tx_queue *txq;
+	struct bnxt_vf_representor *bp;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4911745af..4202904c9 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -137,6 +137,7 @@ static void bnxt_cancel_fw_health_check(struct bnxt *bp);
 static int bnxt_restore_vlan_filters(struct bnxt *bp);
 static void bnxt_dev_recover(void *arg);
 static void bnxt_free_error_recovery_info(struct bnxt *bp);
+static void bnxt_free_rep_info(struct bnxt *bp);
 
 int is_bnxt_in_error(struct bnxt *bp)
 {
@@ -5243,7 +5244,7 @@ bnxt_init_locks(struct bnxt *bp)
 
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev)
 {
-	int rc;
+	int rc = 0;
 
 	rc = bnxt_init_fw(bp);
 	if (rc)
@@ -5642,6 +5643,8 @@ bnxt_uninit_locks(struct bnxt *bp)
 {
 	pthread_mutex_destroy(&bp->flow_lock);
 	pthread_mutex_destroy(&bp->def_cp_lock);
+	if (bp->rep_info)
+		pthread_mutex_destroy(&bp->rep_info->vfr_lock);
 }
 
 static int
@@ -5664,6 +5667,7 @@ bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev)
 
 	bnxt_uninit_locks(bp);
 	bnxt_free_flow_stats_info(bp);
+	bnxt_free_rep_info(bp);
 	rte_free(bp->ptp_cfg);
 	bp->ptp_cfg = NULL;
 	return rc;
@@ -5703,56 +5707,73 @@ static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-	struct rte_pci_device *pci_dev)
+static void bnxt_free_rep_info(struct bnxt *bp)
 {
-	char name[RTE_ETH_NAME_MAX_LEN];
-	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
-	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
-	uint16_t num_rep;
-	int i, ret = 0;
-	struct bnxt *backing_bp;
+	rte_free(bp->rep_info);
+	bp->rep_info = NULL;
+	rte_free(bp->cfa_code_map);
+	bp->cfa_code_map = NULL;
+}
 
-	if (pci_dev->device.devargs) {
-		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
-					    &eth_da);
-		if (ret)
-			return ret;
-	}
+static int bnxt_init_rep_info(struct bnxt *bp)
+{
+	int i = 0, rc;
 
-	num_rep = eth_da.nb_representor_ports;
-	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
-		    num_rep);
+	if (bp->rep_info)
+		return 0;
 
-	/* We could come here after first level of probe is already invoked
-	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
-	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
-	 */
-	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
-					 sizeof(struct bnxt),
-					 eth_dev_pci_specific_init, pci_dev,
-					 bnxt_dev_init, NULL);
+	bp->rep_info = rte_zmalloc("bnxt_rep_info",
+				   sizeof(bp->rep_info[0]) * BNXT_MAX_VF_REPS,
+				   0);
+	if (!bp->rep_info) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for rep info\n");
+		return -ENOMEM;
+	}
+	bp->cfa_code_map = rte_zmalloc("bnxt_cfa_code_map",
+				       sizeof(*bp->cfa_code_map) *
+				       BNXT_MAX_CFA_CODE, 0);
+	if (!bp->cfa_code_map) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for cfa_code_map\n");
+		bnxt_free_rep_info(bp);
+		return -ENOMEM;
+	}
 
-		if (ret || !num_rep)
-			return ret;
+	for (i = 0; i < BNXT_MAX_CFA_CODE; i++)
+		bp->cfa_code_map[i] = BNXT_VF_IDX_INVALID;
+
+	rc = pthread_mutex_init(&bp->rep_info->vfr_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Unable to initialize vfr_lock\n");
+		bnxt_free_rep_info(bp);
+		return rc;
 	}
+	return rc;
+}
+
+static int bnxt_rep_port_probe(struct rte_pci_device *pci_dev,
+			       struct rte_eth_devargs eth_da,
+			       struct rte_eth_dev *backing_eth_dev)
+{
+	struct rte_eth_dev *vf_rep_eth_dev;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct bnxt *backing_bp;
+	uint16_t num_rep;
+	int i, ret = 0;
 
+	num_rep = eth_da.nb_representor_ports;
 	if (num_rep > BNXT_MAX_VF_REPS) {
 		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
-			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
-		ret = -EINVAL;
-		return ret;
+			    num_rep, BNXT_MAX_VF_REPS);
+		return -EINVAL;
 	}
 
-	/* probe representor ports now */
-	if (!backing_eth_dev)
-		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = -ENODEV;
-		return ret;
+	if (num_rep > RTE_MAX_ETHPORTS) {
+		PMD_DRV_LOG(ERR,
+			    "nb_representor_ports = %d > %d MAX ETHPORTS\n",
+			    num_rep, RTE_MAX_ETHPORTS);
+		return -EINVAL;
 	}
+
 	backing_bp = backing_eth_dev->data->dev_private;
 
 	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
@@ -5761,14 +5782,17 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		/* Returning an error is not an option.
 		 * Applications are not handling this correctly
 		 */
-		return ret;
+		return 0;
 	}
 
-	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+	if (bnxt_init_rep_info(backing_bp))
+		return 0;
+
+	for (i = 0; i < num_rep; i++) {
 		struct bnxt_vf_representor representor = {
 			.vf_id = eth_da.representor_ports[i],
 			.switch_domain_id = backing_bp->switch_domain_id,
-			.parent_priv = backing_bp
+			.parent_dev = backing_eth_dev
 		};
 
 		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
@@ -5809,6 +5833,48 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return ret;
 }
 
+static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			  struct rte_pci_device *pci_dev)
+{
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev;
+	uint16_t num_rep;
+	int ret = 0;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after first level of probe is already invoked
+	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
+	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	}
+
+	/* probe representor ports now */
+	ret = bnxt_rep_port_probe(pci_dev, eth_da, backing_eth_dev);
+
+	return ret;
+}
+
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct rte_eth_dev *eth_dev;
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 21f1b0765..777179558 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -6,6 +6,11 @@
 #include "bnxt.h"
 #include "bnxt_ring.h"
 #include "bnxt_reps.h"
+#include "bnxt_rxq.h"
+#include "bnxt_rxr.h"
+#include "bnxt_txq.h"
+#include "bnxt_txr.h"
+#include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
@@ -13,25 +18,128 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_configure = bnxt_vf_rep_dev_configure_op,
 	.dev_start = bnxt_vf_rep_dev_start_op,
 	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.rx_queue_release = bnxt_vf_rep_rx_queue_release_op,
 	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.tx_queue_release = bnxt_vf_rep_tx_queue_release_op,
 	.link_update = bnxt_vf_rep_link_update_op,
 	.dev_close = bnxt_vf_rep_dev_close_op,
-	.dev_stop = bnxt_vf_rep_dev_stop_op
+	.dev_stop = bnxt_vf_rep_dev_stop_op,
+	.stats_get = bnxt_vf_rep_stats_get_op,
+	.stats_reset = bnxt_vf_rep_stats_reset_op,
 };
 
-static uint16_t
-bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
-		     __rte_unused struct rte_mbuf **rx_pkts,
-		     __rte_unused uint16_t nb_pkts)
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf)
 {
+	struct bnxt_sw_rx_bd *prod_rx_buf;
+	struct bnxt_rx_ring_info *rep_rxr;
+	struct bnxt_rx_queue *rep_rxq;
+	struct rte_eth_dev *vfr_eth_dev;
+	struct bnxt_vf_representor *vfr_bp;
+	uint16_t vf_id;
+	uint16_t mask;
+	uint8_t que;
+
+	vf_id = bp->cfa_code_map[cfa_code];
+	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
+	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
+		return 1;
+	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	if (!vfr_eth_dev)
+		return 1;
+	vfr_bp = vfr_eth_dev->data->dev_private;
+	if (vfr_bp->rx_cfa_code != cfa_code) {
+		/* cfa_code not meant for this VF rep!!?? */
+		return 1;
+	}
+	/* If rxq_id happens to be > max rep_queue, use rxq0 */
+	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
+	rep_rxq = vfr_bp->rx_queues[que];
+	rep_rxr = rep_rxq->rx_ring;
+	mask = rep_rxr->rx_ring_struct->ring_mask;
+
+	/* Put this mbuf on the RxQ of the Representor */
+	prod_rx_buf =
+		&rep_rxr->rx_buf_ring[rep_rxr->rx_prod++ & mask];
+	if (!prod_rx_buf->mbuf) {
+		prod_rx_buf->mbuf = mbuf;
+		vfr_bp->rx_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_pkts[que]++;
+	} else {
+		vfr_bp->rx_drop_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_drop_pkts[que]++;
+		rte_free(mbuf); /* Representor Rx ring full, drop pkt */
+	}
+
 	return 0;
 }
 
 static uint16_t
-bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
-		     __rte_unused struct rte_mbuf **tx_pkts,
+bnxt_vf_rep_rx_burst(void *rx_queue,
+		     struct rte_mbuf **rx_pkts,
+		     uint16_t nb_pkts)
+{
+	struct bnxt_rx_queue *rxq = rx_queue;
+	struct bnxt_sw_rx_bd *cons_rx_buf;
+	struct bnxt_rx_ring_info *rxr;
+	uint16_t nb_rx_pkts = 0;
+	uint16_t mask, i;
+
+	if (!rxq)
+		return 0;
+
+	rxr = rxq->rx_ring;
+	mask = rxr->rx_ring_struct->ring_mask;
+	for (i = 0; i < nb_pkts; i++) {
+		cons_rx_buf = &rxr->rx_buf_ring[rxr->rx_cons & mask];
+		if (!cons_rx_buf->mbuf)
+			return nb_rx_pkts;
+		rx_pkts[nb_rx_pkts] = cons_rx_buf->mbuf;
+		rx_pkts[nb_rx_pkts]->port = rxq->port_id;
+		cons_rx_buf->mbuf = NULL;
+		nb_rx_pkts++;
+		rxr->rx_cons++;
+	}
+
+	return nb_rx_pkts;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
 		     __rte_unused uint16_t nb_pkts)
 {
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+	struct bnxt_tx_queue *ptxq;
+	struct bnxt *parent;
+	struct  bnxt_vf_representor *vf_rep_bp;
+	int qid;
+	int rc;
+	int i;
+
+	if (!vfr_txq)
+		return 0;
+
+	qid = vfr_txq->txq->queue_id;
+	vf_rep_bp = vfr_txq->bp;
+	parent = vf_rep_bp->parent_dev->data->dev_private;
+	pthread_mutex_lock(&parent->rep_info->vfr_lock);
+	ptxq = parent->tx_queues[qid];
+
+	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+
+	for (i = 0; i < nb_pkts; i++) {
+		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
+		vf_rep_bp->tx_pkts[qid]++;
+	}
+
+	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
+	ptxq->tx_cfa_action = 0;
+	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
+
+	return rc;
+
 	return 0;
 }
 
@@ -45,7 +153,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
-	vf_rep_bp->parent_priv = rep_params->parent_priv;
+	vf_rep_bp->parent_dev = rep_params->parent_dev;
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	eth_dev->data->representor_id = rep_params->vf_id;
@@ -63,7 +171,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
 	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
 	/* Link state. Inherited from PF or trusted VF */
-	parent_bp = vf_rep_bp->parent_priv;
+	parent_bp = vf_rep_bp->parent_dev->data->dev_private;
 	link = &parent_bp->eth_dev->data->dev_link;
 
 	eth_dev->data->dev_link.link_speed = link->link_speed;
@@ -94,18 +202,18 @@ int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
 	uint16_t vf_id;
 
 	eth_dev->data->mac_addrs = NULL;
-
-	parent_bp = rep->parent_priv;
-	if (parent_bp) {
-		parent_bp->num_reps--;
-		vf_id = rep->vf_id;
-		if (parent_bp->rep_info) {
-			memset(&parent_bp->rep_info[vf_id], 0,
-			       sizeof(parent_bp->rep_info[vf_id]));
-			/* mark that this representor has been freed */
-		}
-	}
 	eth_dev->dev_ops = NULL;
+
+	parent_bp = rep->parent_dev->data->dev_private;
+	if (!parent_bp)
+		return 0;
+
+	parent_bp->num_reps--;
+	vf_id = rep->vf_id;
+	if (parent_bp->rep_info)
+		memset(&parent_bp->rep_info[vf_id], 0,
+		       sizeof(parent_bp->rep_info[vf_id]));
+		/* mark that this representor has been freed */
 	return 0;
 }
 
@@ -117,7 +225,7 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	struct rte_eth_link *link;
 	int rc;
 
-	parent_bp = rep->parent_priv;
+	parent_bp = rep->parent_dev->data->dev_private;
 	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
 
 	/* Link state. Inherited from PF or trusted VF */
@@ -132,16 +240,134 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
+static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already allocated in FW */
+	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * Alloc VF rep rules in CFA after default VNIC is created.
+	 * Otherwise the FW will create the VF-rep rules with
+	 * default drop action.
+	 */
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more/less the same job
+	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
+				     vfr->vf_id,
+				     &vfr->tx_cfa_action,
+				     &vfr->rx_cfa_code);
+	 */
+	if (!rc) {
+		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
+			    vfr->vf_id);
+	} else {
+		PMD_DRV_LOG(ERR,
+			    "Failed to alloc representor %d in FW\n",
+			    vfr->vf_id);
+	}
+
+	return rc;
+}
+
+static void bnxt_vf_rep_free_rx_mbufs(struct bnxt_vf_representor *rep_bp)
+{
+	struct bnxt_rx_queue *rxq;
+	unsigned int i;
+
+	for (i = 0; i < rep_bp->rx_nr_rings; i++) {
+		rxq = rep_bp->rx_queues[i];
+		bnxt_rx_queue_release_mbufs(rxq);
+	}
+}
+
 int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 {
-	bnxt_vf_rep_link_update_op(eth_dev, 1);
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int rc;
 
-	return 0;
+	rc = bnxt_vfr_alloc(rep_bp);
+
+	if (!rc) {
+		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
+		eth_dev->tx_pkt_burst = &bnxt_vf_rep_tx_burst;
+
+		bnxt_vf_rep_link_update_op(eth_dev, 1);
+	} else {
+		eth_dev->data->dev_link.link_status = 0;
+		bnxt_vf_rep_free_rx_mbufs(rep_bp);
+	}
+
+	return rc;
+}
+
+static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already freed in FW */
+	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more/less the same job
+	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
+				    vfr->vf_id);
+	 */
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to free representor %d in FW\n",
+			    vfr->vf_id);
+		return rc;
+	}
+
+	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
+	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
+		    vfr->vf_id);
+	vfr->tx_cfa_action = 0;
+	vfr->rx_cfa_code = 0;
+
+	return rc;
 }
 
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *vfr_bp = eth_dev->data->dev_private;
+
+	/* Avoid crashes as we are about to free queues */
+	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
+	eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+
+	bnxt_vfr_free(vfr_bp);
+
+	if (eth_dev->data->dev_started)
+		eth_dev->data->dev_link.link_status = 0;
+
+	bnxt_vf_rep_free_rx_mbufs(vfr_bp);
 }
 
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
@@ -159,7 +385,7 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	int rc = 0;
 
 	/* MAC Specifics */
-	parent_bp = rep_bp->parent_priv;
+	parent_bp = rep_bp->parent_dev->data->dev_private;
 	if (!parent_bp) {
 		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
 		return rc;
@@ -257,7 +483,13 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
 {
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+
 	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	rep_bp->rx_queues = (void *)eth_dev->data->rx_queues;
+	rep_bp->tx_nr_rings = eth_dev->data->nb_tx_queues;
+	rep_bp->rx_nr_rings = eth_dev->data->nb_rx_queues;
+
 	return 0;
 }
 
@@ -269,9 +501,94 @@ int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  rx_conf,
 				  __rte_unused struct rte_mempool *mp)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_rx_queue *parent_rxq;
+	struct bnxt_rx_queue *rxq;
+	struct bnxt_sw_rx_bd *buf_ring;
+	int rc = 0;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Rx ring %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_RX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid\n", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_rxq = parent_bp->rx_queues[queue_idx];
+	if (!parent_rxq) {
+		PMD_DRV_LOG(ERR, "Parent RxQ has not been configured yet\n");
+		return -EINVAL;
+	}
+
+	if (nb_desc != parent_rxq->nb_rx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d does not match parent rxq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->rx_queues) {
+		rxq = eth_dev->data->rx_queues[queue_idx];
+		if (rxq)
+			bnxt_rx_queue_release_op(rxq);
+	}
+
+	rxq = rte_zmalloc_socket("bnxt_vfr_rx_queue",
+				 sizeof(struct bnxt_rx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_rx_queue allocation failed!\n");
+		return -ENOMEM;
+	}
+
+	rxq->nb_rx_desc = nb_desc;
+
+	rc = bnxt_init_rx_ring_struct(rxq, socket_id);
+	if (rc)
+		goto out;
+
+	buf_ring = rte_zmalloc_socket("bnxt_rx_vfr_buf_ring",
+				      sizeof(struct bnxt_sw_rx_bd) *
+				      rxq->rx_ring->rx_ring_struct->ring_size,
+				      RTE_CACHE_LINE_SIZE, socket_id);
+	if (!buf_ring) {
+		PMD_DRV_LOG(ERR, "bnxt_rx_vfr_buf_ring allocation failed!\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	rxq->rx_ring->rx_buf_ring = buf_ring;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = eth_dev->data->port_id;
+	eth_dev->data->rx_queues[queue_idx] = rxq;
 
 	return 0;
+
+out:
+	if (rxq)
+		bnxt_rx_queue_release_op(rxq);
+
+	return rc;
+}
+
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue)
+{
+	struct bnxt_rx_queue *rxq = (struct bnxt_rx_queue *)rx_queue;
+
+	if (!rxq)
+		return;
+
+	bnxt_rx_queue_release_mbufs(rxq);
+
+	bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
+	bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
+	bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
+
+	rte_free(rxq);
 }
 
 int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
@@ -281,7 +598,112 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_tx_queue *parent_txq, *txq;
+	struct bnxt_vf_rep_tx_queue *vfr_txq;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Tx ring %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_TX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_txq = parent_bp->tx_queues[queue_idx];
+	if (!parent_txq) {
+		PMD_DRV_LOG(ERR, "Parent TxQ has not been configured yet\n");
+		return -EINVAL;
+	}
 
+	if (nb_desc != parent_txq->nb_tx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d does not match parent txq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->tx_queues) {
+		vfr_txq = eth_dev->data->tx_queues[queue_idx];
+		bnxt_vf_rep_tx_queue_release_op(vfr_txq);
+		vfr_txq = NULL;
+	}
+
+	vfr_txq = rte_zmalloc_socket("bnxt_vfr_tx_queue",
+				     sizeof(struct bnxt_vf_rep_tx_queue),
+				     RTE_CACHE_LINE_SIZE, socket_id);
+	if (!vfr_txq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_tx_queue allocation failed!");
+		return -ENOMEM;
+	}
+	txq = rte_zmalloc_socket("bnxt_tx_queue",
+				 sizeof(struct bnxt_tx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "bnxt_tx_queue allocation failed!");
+		rte_free(vfr_txq);
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->queue_id = queue_idx;
+	txq->port_id = eth_dev->data->port_id;
+	vfr_txq->txq = txq;
+	vfr_txq->bp = rep_bp;
+	eth_dev->data->tx_queues[queue_idx] = vfr_txq;
+
+	return 0;
+}
+
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue)
+{
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+
+	if (!vfr_txq)
+		return;
+
+	rte_free(vfr_txq->txq);
+	rte_free(vfr_txq);
+}
+
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		stats->obytes += rep_bp->tx_bytes[i];
+		stats->opackets += rep_bp->tx_pkts[i];
+		stats->ibytes += rep_bp->rx_bytes[i];
+		stats->ipackets += rep_bp->rx_pkts[i];
+		stats->imissed += rep_bp->rx_drop_pkts[i];
+
+		stats->q_ipackets[i] = rep_bp->rx_pkts[i];
+		stats->q_ibytes[i] = rep_bp->rx_bytes[i];
+		stats->q_opackets[i] = rep_bp->tx_pkts[i];
+		stats->q_obytes[i] = rep_bp->tx_bytes[i];
+		stats->q_errors[i] = rep_bp->rx_drop_pkts[i];
+	}
+
+	return 0;
+}
+
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		rep_bp->tx_pkts[i] = 0;
+		rep_bp->tx_bytes[i] = 0;
+		rep_bp->rx_pkts[i] = 0;
+		rep_bp->rx_bytes[i] = 0;
+		rep_bp->rx_drop_pkts[i] = 0;
+	}
 	return 0;
 }
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index 6048faf08..5c2e0a0b9 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -9,6 +9,12 @@
 #include <rte_malloc.h>
 #include <rte_ethdev.h>
 
+#define BNXT_MAX_CFA_CODE               65536
+#define BNXT_VF_IDX_INVALID             0xffff
+
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
@@ -30,6 +36,11 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused unsigned int socket_id,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf);
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue);
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue);
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats);
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev);
 #endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 11807f409..37b534fc2 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -12,6 +12,7 @@
 #include <rte_memory.h>
 
 #include "bnxt.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxr.h"
 #include "bnxt_rxq.h"
@@ -539,7 +540,7 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp,
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
-			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
+		       struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
@@ -735,6 +736,20 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
+	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
+	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
+		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
+				   mbuf)) {
+			/* Return an error so that nb_rx_pkts is not
+			 * incremented.
+			 * This packet was meant for the representor, so
+			 * there is no need to account for it here or hand
+			 * it to the parent Rx burst function.
+			 */
+			rc = -ENODEV;
+		}
+	}
+
 next_rx:
 
 	*raw_cons = tmp_raw_cons;
@@ -751,6 +766,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint32_t raw_cons = cpr->cp_raw_cons;
 	uint32_t cons;
 	int nb_rx_pkts = 0;
+	int nb_rep_rx_pkts = 0;
 	struct rx_pkt_cmpl *rxcmp;
 	uint16_t prod = rxr->rx_prod;
 	uint16_t ag_prod = rxr->ag_prod;
@@ -784,6 +800,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				nb_rx_pkts++;
 			if (rc == -EBUSY)	/* partial completion */
 				break;
+			if (rc == -ENODEV)	/* completion for representor */
+				nb_rep_rx_pkts++;
 		} else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
 			evt =
 			bnxt_event_hwrm_resp_handler(rxq->bp,
@@ -802,7 +820,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 
 	cpr->cp_raw_cons = raw_cons;
-	if (!nb_rx_pkts && !evt) {
+	if (!nb_rx_pkts && !nb_rep_rx_pkts && !evt) {
 		/*
 		 * For PMD, there is no need to keep on pushing to REARM
 		 * the doorbell if there are no new completions
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 811dcd86b..e60c97fa1 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -188,6 +188,7 @@ struct bnxt_sw_rx_bd {
 struct bnxt_rx_ring_info {
 	uint16_t		rx_prod;
 	uint16_t		ag_prod;
+	uint16_t                rx_cons; /* Needed for representor */
 	struct bnxt_db_info     rx_db;
 	struct bnxt_db_info     ag_db;
 
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 37a3f9539..69ff89aab 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -29,6 +29,7 @@ struct bnxt_tx_queue {
 	struct bnxt		*bp;
 	int			index;
 	int			tx_wake_thresh;
+	uint32_t                tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 16021407e..d7e193d38 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT))
+				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +184,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = 0;
+		cfa_action = txq->tx_cfa_action;
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 03/51] net/bnxt: get IDs for VF-Rep endpoint
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
                           ` (47 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Use 'first_vf_id' and the 'vf_id' that is supplied when adding
a representor to obtain the PCI function ID (FID) of the VF (the
VF-rep endpoint).
Pass that FID as input to the HWRM_FUNC_QCFG command to obtain the
default vNIC ID of the VF and, from the same response, also obtain
and store its function svif.
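
A rough sketch of the flow this adds, matching the calls made from
bnxt_vf_representor_init() in the diff below (error handling trimmed,
sketch only):

	/* FID of the VF endpoint = parent's first VF FID + representor vf_id */
	vf_rep_bp->fw_fid = rep_params->vf_id + parent_bp->first_vf_id;

	/* One HWRM_FUNC_QCFG round trip returns both the default vNIC id
	 * and the function svif of that FID.
	 */
	rc = bnxt_hwrm_get_dflt_vnic_svif(parent_bp, vf_rep_bp->fw_fid,
					  &vf_rep_bp->dflt_vnic_id,
					  &vf_rep_bp->svif);
	if (rc)
		PMD_DRV_LOG(ERR, "Failed to get default vnic id of VF\n");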

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |  3 +++
 drivers/net/bnxt/bnxt_hwrm.c | 27 +++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h |  4 ++++
 drivers/net/bnxt/bnxt_reps.c | 12 ++++++++++++
 4 files changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 443d9fee4..7afbd5cab 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -784,6 +784,9 @@ struct bnxt {
 struct bnxt_vf_representor {
 	uint16_t		switch_domain_id;
 	uint16_t		vf_id;
+	uint16_t		fw_fid;
+	uint16_t		dflt_vnic_id;
+	uint16_t		svif;
 	uint16_t		tx_cfa_action;
 	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 945bc9018..ed42e58d4 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,33 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t svif_info;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	req.fid = rte_cpu_to_le_16(fid);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	if (vnic_id)
+		*vnic_id = rte_le_to_cpu_16(resp->dflt_vnic_id);
+
+	svif_info = rte_le_to_cpu_16(resp->svif_info);
+	if (svif && (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID))
+		*svif = svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 {
 	struct hwrm_port_mac_qcfg_input req = {0};
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 58b414d4f..8d19998df 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -270,4 +270,8 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 				 enum bnxt_flow_dir dir,
 				 uint16_t cntr,
 				 uint16_t num_entries);
+int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
+			       uint16_t *vnic_id);
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif);
 #endif
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 777179558..ea6f0010f 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -150,6 +150,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 				 (struct bnxt_vf_representor *)params;
 	struct rte_eth_link *link;
 	struct bnxt *parent_bp;
+	int rc = 0;
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
@@ -179,6 +180,17 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_link.link_status = link->link_status;
 	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
 
+	vf_rep_bp->fw_fid = rep_params->vf_id + parent_bp->first_vf_id;
+	PMD_DRV_LOG(INFO, "vf_rep->fw_fid = %d\n", vf_rep_bp->fw_fid);
+	rc = bnxt_hwrm_get_dflt_vnic_svif(parent_bp, vf_rep_bp->fw_fid,
+					  &vf_rep_bp->dflt_vnic_id,
+					  &vf_rep_bp->svif);
+	if (rc)
+		PMD_DRV_LOG(ERR, "Failed to get default vnic id of VF\n");
+	else
+		PMD_DRV_LOG(INFO, "vf_rep->dflt_vnic_id = %d\n",
+			    vf_rep_bp->dflt_vnic_id);
+
 	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
 	bnxt_print_link_info(eth_dev);
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 04/51] net/bnxt: initialize parent PF information
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (2 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
                           ` (46 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev
  Cc: Lance Richardson, Venkat Duvvuru, Somnath Kotur, Kalesh AP,
	Kishore Padmanabha

From: Lance Richardson <lance.richardson@broadcom.com>

Add support to query parent PF information (MAC address,
function ID, port ID and default VNIC) from firmware.

Current firmware returns zero for the parent default VNIC, so a
temporary Wh+-specific workaround is included until that can be
fixed.
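
Once bnxt_hwrm_parent_pf_qcfg() has run from bnxt_init_fw(), a trusted
VF can read the cached parent attributes directly. A minimal sketch of
a consumer (field names as in the diff below; illustrative only):

	/* Valid only on a trusted VF after a successful parent PF query */
	if (BNXT_VF_IS_TRUSTED(bp) && bp->parent->fid != BNXT_PF_FID_INVALID) {
		uint16_t parent_port = bp->parent->port_id; /* physical port */
		uint16_t parent_vnic = bp->parent->vnic;    /* default VNIC  */
		/* e.g. feed these into the port/parif helpers added later
		 * in this series.
		 */
	}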

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  9 ++++++++
 drivers/net/bnxt/bnxt_ethdev.c | 23 +++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 42 ++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 75 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 7afbd5cab..2b87899a4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -217,6 +217,14 @@ struct bnxt_child_vf_info {
 	bool			persist_stats;
 };
 
+struct bnxt_parent_info {
+#define	BNXT_PF_FID_INVALID	0xFFFF
+	uint16_t		fid;
+	uint16_t		vnic;
+	uint16_t		port_id;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
@@ -738,6 +746,7 @@ struct bnxt {
 #define BNXT_OUTER_TPID_BD_SHFT	16
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
+	struct bnxt_parent_info	*parent;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4202904c9..bf018be16 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -97,6 +97,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+
 static const char *const bnxt_dev_args[] = {
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
@@ -173,6 +174,11 @@ uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 	return bnxt_rss_ctxts(bp) * BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
+static void bnxt_free_parent_info(struct bnxt *bp)
+{
+	rte_free(bp->parent);
+}
+
 static void bnxt_free_pf_info(struct bnxt *bp)
 {
 	rte_free(bp->pf);
@@ -223,6 +229,16 @@ static void bnxt_free_mem(struct bnxt *bp, bool reconfig)
 	bp->grp_info = NULL;
 }
 
+static int bnxt_alloc_parent_info(struct bnxt *bp)
+{
+	bp->parent = rte_zmalloc("bnxt_parent_info",
+				 sizeof(struct bnxt_parent_info), 0);
+	if (bp->parent == NULL)
+		return -ENOMEM;
+
+	return 0;
+}
+
 static int bnxt_alloc_pf_info(struct bnxt *bp)
 {
 	bp->pf = rte_zmalloc("bnxt_pf_info", sizeof(struct bnxt_pf_info), 0);
@@ -1322,6 +1338,7 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	bnxt_free_cos_queues(bp);
 	bnxt_free_link_info(bp);
 	bnxt_free_pf_info(bp);
+	bnxt_free_parent_info(bp);
 
 	eth_dev->dev_ops = NULL;
 	eth_dev->rx_pkt_burst = NULL;
@@ -5210,6 +5227,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_port_mac_qcfg(bp);
 
+	bnxt_hwrm_parent_pf_qcfg(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
@@ -5528,6 +5547,10 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 	if (rc)
 		goto error_free;
 
+	rc = bnxt_alloc_parent_info(bp);
+	if (rc)
+		goto error_free;
+
 	rc = bnxt_alloc_hwrm_resources(bp);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index ed42e58d4..347e1c71e 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,48 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	int rc;
+
+	if (!BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	if (!bp->parent)
+		return -EINVAL;
+
+	bp->parent->fid = BNXT_PF_FID_INVALID;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+
+	req.fid = rte_cpu_to_le_16(0xfffe); /* Request parent PF information. */
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	memcpy(bp->parent->mac_addr, resp->mac_address, RTE_ETHER_ADDR_LEN);
+	bp->parent->vnic = rte_le_to_cpu_16(resp->dflt_vnic_id);
+	bp->parent->fid = rte_le_to_cpu_16(resp->fid);
+	bp->parent->port_id = rte_le_to_cpu_16(resp->port_id);
+
+	/* FIXME: Temporary workaround - remove when firmware issue is fixed. */
+	if (bp->parent->vnic == 0) {
+		PMD_DRV_LOG(ERR, "Error: parent VNIC unavailable.\n");
+		/* Use hard-coded values appropriate for current Wh+ fw. */
+		if (bp->parent->fid == 2)
+			bp->parent->vnic = 0x100;
+		else
+			bp->parent->vnic = 1;
+	}
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 8d19998df..ef8997500 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -274,4 +274,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 			       uint16_t *vnic_id);
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 #endif
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 05/51] net/bnxt: modify port db dev interface
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (3 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 06/51] net/bnxt: get port and function info Ajit Khaparde
                           ` (45 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Modify ulp_port_db_dev_port_intf_update prototype to take
"struct rte_eth_dev *" as the second parameter.

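For reference, the call sites change roughly as follows (both forms
appear in the diff below):

	/* before: the port database update took the driver private struct */
	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);

	/* after: it takes the rte_eth_dev and derives bp internally from
	 * eth_dev->data->dev_private
	 */
	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
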
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    | 4 ++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 5 +++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.h | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 0c3c638ce..c7281ab9a 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -548,7 +548,7 @@ bnxt_ulp_init(struct bnxt *bp)
 		}
 
 		/* update the port database */
-		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 		if (rc) {
 			BNXT_TF_DBG(ERR,
 				    "Failed to update port database\n");
@@ -584,7 +584,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	}
 
 	/* update the port database */
-	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to update port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index e3b924289..66b584026 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -104,10 +104,11 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp)
+					 struct rte_eth_dev *eth_dev)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint32_t port_id = bp->eth_dev->data->port_id;
+	struct bnxt *bp = eth_dev->data->dev_private;
+	uint32_t port_id = eth_dev->data->port_id;
 	uint32_t ifindex;
 	struct ulp_interface_info *intf;
 	int32_t rc;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 271c29a47..929a5a510 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -71,7 +71,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp);
+					 struct rte_eth_dev *eth_dev);
 
 /*
  * Api to get the ulp ifindex for a given device port.
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 06/51] net/bnxt: get port and function info
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (4 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
                           ` (44 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kalesh AP, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Add helper functions to get port- and function-related information
such as the parif, physical port id and vport id.
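
A minimal usage sketch of the new helpers (prototypes as declared in
bnxt.h in the diff below; the port argument is a DPDK ethdev port id,
which may belong to a PF, a VF or a VF representor):

	uint16_t parif = bnxt_get_parif(port_id);
	uint16_t phy_port = bnxt_get_phy_port_id(port_id);
	uint16_t vport = bnxt_get_vport(port_id);  /* one-hot: 1 << phy_port */

	if (bnxt_get_interface_type(port_id) == BNXT_ULP_INTF_TYPE_VF_REP) {
		/* representors resolve the above through their parent
		 * PF/trusted VF device
		 */
	}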

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  8 ++++
 drivers/net/bnxt/bnxt_ethdev.c           | 58 ++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h | 10 ++++
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    | 10 ----
 4 files changed, 76 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 2b87899a4..0bdf8f5ba 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -855,6 +855,9 @@ extern const struct rte_flow_ops bnxt_flow_ops;
 	} \
 } while (0)
 
+#define	BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)	\
+		((eth_dev)->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+
 extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG_RAW(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
@@ -870,6 +873,11 @@ void bnxt_ulp_deinit(struct bnxt *bp);
 uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 uint16_t bnxt_get_fw_func_id(uint16_t port);
+uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_phy_port_id(uint16_t port);
+uint16_t bnxt_get_vport(uint16_t port);
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port);
 
 void bnxt_cancel_fc_thread(struct bnxt *bp);
 void bnxt_flow_cnt_alarm_cb(void *arg);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index bf018be16..af88b360f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -28,6 +28,7 @@
 #include "bnxt_vnic.h"
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
+#include "bnxt_tf_common.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -5101,6 +5102,63 @@ bnxt_get_fw_func_id(uint16_t port)
 	return bp->fw_fid;
 }
 
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev))
+		return BNXT_ULP_INTF_TYPE_VF_REP;
+
+	bp = eth_dev->data->dev_private;
+	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
+			   : BNXT_ULP_INTF_TYPE_VF;
+}
+
+uint16_t
+bnxt_get_phy_port_id(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->pf->port_id : bp->parent->port_id;
+}
+
+uint16_t
+bnxt_get_parif(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->fw_fid - 1 : bp->parent->fid - 1;
+}
+
+uint16_t
+bnxt_get_vport(uint16_t port_id)
+{
+	return (1 << bnxt_get_phy_port_id(port_id));
+}
+
 static void bnxt_alloc_error_recovery_info(struct bnxt *bp)
 {
 	struct bnxt_error_recovery_info *info = bp->recovery_info;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f41757908..f772d4919 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -44,6 +44,16 @@ enum ulp_direction_type {
 	ULP_DIR_EGRESS,
 };
 
+/* enumeration of the interface types */
+enum bnxt_ulp_intf_type {
+	BNXT_ULP_INTF_TYPE_INVALID = 0,
+	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_VF,
+	BNXT_ULP_INTF_TYPE_PF_REP,
+	BNXT_ULP_INTF_TYPE_VF_REP,
+	BNXT_ULP_INTF_TYPE_LAST
+};
+
 struct bnxt_ulp_mark_tbl *
 bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 929a5a510..604c4385a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -10,16 +10,6 @@
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
 
-/* enumeration of the interface types */
-enum bnxt_ulp_intf_type {
-	BNXT_ULP_INTF_TYPE_INVALID = 0,
-	BNXT_ULP_INTF_TYPE_PF = 1,
-	BNXT_ULP_INTF_TYPE_VF,
-	BNXT_ULP_INTF_TYPE_PF_REP,
-	BNXT_ULP_INTF_TYPE_VF_REP,
-	BNXT_ULP_INTF_TYPE_LAST
-};
-
 /* Structure for the Port database resource information. */
 struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 07/51] net/bnxt: add support for hwrm port phy qcaps
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (5 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 06/51] net/bnxt: get port and function info Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
                           ` (43 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kalesh AP, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue HWRM_PORT_PHY_QCAPS to the firmware to get the physical
port count of the device.
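
The result is cached in the new bp->port_cnt field. A minimal sketch of
the init-time usage (as wired into bnxt_init_fw() in the diff below):

	/* Query once during init; fills bp->port_cnt on success and is a
	 * no-op for an untrusted VF.
	 */
	bnxt_hwrm_port_phy_qcaps(bp);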

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c |  2 ++
 drivers/net/bnxt/bnxt_hwrm.c   | 22 ++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 26 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0bdf8f5ba..65862abdc 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -747,6 +747,7 @@ struct bnxt {
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
 	struct bnxt_parent_info	*parent;
+	uint8_t			port_cnt;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index af88b360f..697cd6651 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5287,6 +5287,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_parent_pf_qcfg(bp);
 
+	bnxt_hwrm_port_phy_qcaps(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 347e1c71e..e6a28d07c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1330,6 +1330,28 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	return rc;
 }
 
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp)
+{
+	int rc = 0;
+	struct hwrm_port_phy_qcaps_input req = {0};
+	struct hwrm_port_phy_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	HWRM_PREP(&req, HWRM_PORT_PHY_QCAPS, BNXT_USE_CHIMP_MB);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	bp->port_cnt = resp->port_cnt;
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 static bool bnxt_find_lossy_profile(struct bnxt *bp)
 {
 	int i = 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index ef8997500..87cd40779 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -275,4 +275,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
 #endif
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 08/51] net/bnxt: modify port db to handle more info
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (6 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 09/51] net/bnxt: add support for exact match Ajit Khaparde
                           ` (42 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

In addition to func_svif, func_id and vnic, port_db now stores and
retrieves func_spif, func_parif, phy_port_id, port_svif, port_spif,
port_parif and port_vport. New helper functions have been added to
retrieve these fields.
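
A short usage sketch of the new accessors (prototypes as in
ulp_port_db.h below; the egress direction resolves against the
function, ingress against the physical port):

	uint16_t parif, vport;

	if (!ulp_port_db_parif_get(ulp_ctx, ifindex, ULP_DIR_EGRESS, &parif) &&
	    !ulp_port_db_vport_get(ulp_ctx, ifindex, &vport)) {
		/* parif and vport are now available for building the flow */
	}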

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 145 +++++++++++++++++++++-----
 drivers/net/bnxt/tf_ulp/ulp_port_db.h |  72 ++++++++++---
 2 files changed, 179 insertions(+), 38 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 66b584026..ea27ef41f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -106,13 +106,12 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 					 struct rte_eth_dev *eth_dev)
 {
-	struct bnxt_ulp_port_db *port_db;
-	struct bnxt *bp = eth_dev->data->dev_private;
 	uint32_t port_id = eth_dev->data->port_id;
-	uint32_t ifindex;
+	struct ulp_phy_port_info *port_data;
+	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	uint32_t ifindex;
 	int32_t rc;
-	struct bnxt_vnic_info *vnic;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db) {
@@ -133,22 +132,22 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 
 	/* update the interface details */
 	intf = &port_db->ulp_intf_list[ifindex];
-	if (BNXT_PF(bp) || BNXT_VF(bp)) {
-		if (BNXT_PF(bp)) {
-			intf->type = BNXT_ULP_INTF_TYPE_PF;
-			intf->port_svif = bp->port_svif;
-		} else {
-			intf->type = BNXT_ULP_INTF_TYPE_VF;
-		}
-		intf->func_id = bp->fw_fid;
-		intf->func_svif = bp->func_svif;
-		vnic = BNXT_GET_DEFAULT_VNIC(bp);
-		if (vnic)
-			intf->default_vnic = vnic->fw_vnic_id;
-		intf->bp = bp;
-		memcpy(intf->mac_addr, bp->mac_addr, sizeof(intf->mac_addr));
-	} else {
-		BNXT_TF_DBG(ERR, "Invalid interface type\n");
+
+	intf->type = bnxt_get_interface_type(port_id);
+
+	intf->func_id = bnxt_get_fw_func_id(port_id);
+	intf->func_svif = bnxt_get_svif(port_id, 1);
+	intf->func_spif = bnxt_get_phy_port_id(port_id);
+	intf->func_parif = bnxt_get_parif(port_id);
+	intf->default_vnic = bnxt_get_vnic_id(port_id);
+	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+
+	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
+		port_data = &port_db->phy_port_list[intf->phy_port_id];
+		port_data->port_svif = bnxt_get_svif(port_id, 0);
+		port_data->port_spif = bnxt_get_phy_port_id(port_id);
+		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_vport = bnxt_get_vport(port_id);
 	}
 
 	return 0;
@@ -209,7 +208,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 }
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -225,16 +224,88 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS)
+	if (dir == ULP_DIR_EGRESS) {
 		*svif = port_db->ulp_intf_list[ifindex].func_svif;
-	else
-		*svif = port_db->ulp_intf_list[ifindex].port_svif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*svif = port_db->phy_port_list[phy_port_id].port_svif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *spif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*spif = port_db->phy_port_list[phy_port_id].port_spif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *parif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*parif = port_db->phy_port_list[phy_port_id].port_parif;
+	}
+
 	return 0;
 }
 
@@ -262,3 +333,29 @@ ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
 	return 0;
 }
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint16_t *vport)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+	*vport = port_db->phy_port_list[phy_port_id].port_vport;
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 604c4385a..87de3bcbc 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -15,11 +15,17 @@ struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
 	uint16_t		func_id;
 	uint16_t		func_svif;
-	uint16_t		port_svif;
+	uint16_t		func_spif;
+	uint16_t		func_parif;
 	uint16_t		default_vnic;
-	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
-	/* back pointer to the bnxt driver, it is null for rep ports */
-	struct bnxt		*bp;
+	uint16_t		phy_port_id;
+};
+
+struct ulp_phy_port_info {
+	uint16_t	port_svif;
+	uint16_t	port_spif;
+	uint16_t	port_parif;
+	uint16_t	port_vport;
 };
 
 /* Structure for the Port database */
@@ -29,6 +35,7 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
 };
 
 /*
@@ -74,8 +81,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
-				  uint32_t port_id,
-				  uint32_t *ifindex);
+				  uint32_t port_id, uint32_t *ifindex);
 
 /*
  * Api to get the function id for a given ulp ifindex.
@@ -88,11 +94,10 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex,
-			    uint16_t *func_id);
+			    uint32_t ifindex, uint16_t *func_id);
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -103,9 +108,36 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
-		     uint32_t ifindex,
-		     uint32_t dir,
-		     uint16_t *svif);
+		     uint32_t ifindex, uint32_t dir, uint16_t *svif);
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex, uint32_t dir, uint16_t *spif);
+
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint32_t dir, uint16_t *parif);
 
 /*
  * Api to get the vnic id for a given ulp ifindex.
@@ -118,7 +150,19 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex,
-			     uint16_t *vnic);
+			     uint32_t ifindex, uint16_t *vnic);
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex,	uint16_t *vport);
 
 #endif /* _ULP_PORT_DB_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 09/51] net/bnxt: add support for exact match
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (7 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
                           ` (41 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Add Exact Match (EM) support
- Create an EM table pool of memory indices (see the sketch after
  this list)
- Add an API to insert internal exact match entries
- Send EM internal insert and delete requests to firmware
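
Conceptually the EM table pool is a stack of free record indices:
insert pops an index, delete pushes it back. A simplified,
self-contained sketch in the spirit of tf_core/stack.c (the names
below are illustrative only, not the tf_core API):

	#include <stdint.h>
	#include <errno.h>

	struct em_idx_pool {
		uint32_t *items; /* storage for the free indices  */
		int32_t top;     /* -1 when the pool is exhausted */
	};

	static int em_idx_alloc(struct em_idx_pool *p, uint32_t *idx)
	{
		if (p->top < 0)
			return -ENOMEM;  /* no free EM record */
		*idx = p->items[p->top--];
		return 0;
	}

	static void em_idx_free(struct em_idx_pool *p, uint32_t idx)
	{
		p->items[++p->top] = idx; /* return the record to the pool */
	}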

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++++++++---
 drivers/net/bnxt/tf_core/hwrm_tf.h            |    9 +
 drivers/net/bnxt/tf_core/lookup3.h            |    1 -
 drivers/net/bnxt/tf_core/stack.c              |    8 +
 drivers/net/bnxt/tf_core/stack.h              |   10 +
 drivers/net/bnxt/tf_core/tf_core.c            |  144 +-
 drivers/net/bnxt/tf_core/tf_core.h            |  383 +-
 drivers/net/bnxt/tf_core/tf_em.c              |   98 +-
 drivers/net/bnxt/tf_core/tf_em.h              |   31 +
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
 drivers/net/bnxt/tf_core/tf_msg.c             |   86 +-
 drivers/net/bnxt/tf_core/tf_msg.h             |   13 +
 drivers/net/bnxt/tf_core/tf_session.h         |   18 +
 drivers/net/bnxt/tf_core/tf_tbl.c             |   99 +-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   57 +-
 drivers/net/bnxt/tf_core/tfp.h                |  123 +-
 16 files changed, 3493 insertions(+), 690 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 7e30c9ffc..30516eb75 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -611,6 +611,10 @@ struct cmd_nums {
 	#define HWRM_FUNC_VF_BW_QCFG                      UINT32_C(0x196)
 	/* Queries pf ids belong to specified host(s) */
 	#define HWRM_FUNC_HOST_PF_IDS_QUERY               UINT32_C(0x197)
+	/* Queries extended stats per function */
+	#define HWRM_FUNC_QSTATS_EXT                      UINT32_C(0x198)
+	/* Queries extended statistics context */
+	#define HWRM_STAT_EXT_CTX_QUERY                   UINT32_C(0x199)
 	/* Experimental */
 	#define HWRM_SELFTEST_QLIST                       UINT32_C(0x200)
 	/* Experimental */
@@ -647,41 +651,49 @@ struct cmd_nums {
 	/* Experimental */
 	#define HWRM_TF_SESSION_ATTACH                    UINT32_C(0x2c7)
 	/* Experimental */
-	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2c8)
+	#define HWRM_TF_SESSION_REGISTER                  UINT32_C(0x2c8)
 	/* Experimental */
-	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2c9)
+	#define HWRM_TF_SESSION_UNREGISTER                UINT32_C(0x2c9)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2ca)
+	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2ca)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cb)
+	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2cb)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2cc)
+	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2cc)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cd)
+	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cd)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2d0)
+	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2ce)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2d1)
+	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cf)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2da)
+	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2da)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2db)
+	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2db)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2dc)
+	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2e4)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2dd)
+	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2e5)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2de)
+	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2e6)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2df)
+	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2e7)
 	/* Experimental */
-	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2ee)
+	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2e8)
 	/* Experimental */
-	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2ef)
+	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2e9)
 	/* Experimental */
-	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2f0)
+	#define HWRM_TF_EM_INSERT                         UINT32_C(0x2ea)
 	/* Experimental */
-	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2f1)
+	#define HWRM_TF_EM_DELETE                         UINT32_C(0x2eb)
+	/* Experimental */
+	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2f8)
+	/* Experimental */
+	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2f9)
+	/* Experimental */
+	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2fa)
+	/* Experimental */
+	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2fb)
 	/* Experimental */
 	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
@@ -715,6 +727,13 @@ struct cmd_nums {
 	#define HWRM_DBG_CRASHDUMP_ERASE                  UINT32_C(0xff1e)
 	/* Send driver debug information to firmware */
 	#define HWRM_DBG_DRV_TRACE                        UINT32_C(0xff1f)
+	/* Query debug capabilities of firmware */
+	#define HWRM_DBG_QCAPS                            UINT32_C(0xff20)
+	/* Retrieve debug settings of firmware */
+	#define HWRM_DBG_QCFG                             UINT32_C(0xff21)
+	/* Set destination parameters for crashdump medium */
+	#define HWRM_DBG_CRASHDUMP_MEDIUM_CFG             UINT32_C(0xff22)
+	#define HWRM_NVM_REQ_ARBITRATION                  UINT32_C(0xffed)
 	/* Experimental */
 	#define HWRM_NVM_FACTORY_DEFAULTS                 UINT32_C(0xffee)
 	#define HWRM_NVM_VALIDATE_OPTION                  UINT32_C(0xffef)
@@ -914,8 +933,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 30
-#define HWRM_VERSION_STR "1.10.1.30"
+#define HWRM_VERSION_RSVD 45
+#define HWRM_VERSION_STR "1.10.1.45"
 
 /****************
  * hwrm_ver_get *
@@ -2292,6 +2311,35 @@ struct cmpl_base {
 	 * Completion of TX packet. Length = 16B
 	 */
 	#define CMPL_BASE_TYPE_TX_L2             UINT32_C(0x0)
+	/*
+	 * NO-OP completion:
+	 * Completion of NO-OP. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_NO_OP             UINT32_C(0x1)
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of coalesced TX packet. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_COAL        UINT32_C(0x2)
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of PTP TX packet. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_PTP         UINT32_C(0x3)
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion of and L2 RX packet. Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_TPA_START_V2   UINT32_C(0xd)
+	/*
+	 * RX L2 V2 completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_L2_V2          UINT32_C(0xf)
 	/*
 	 * RX L2 completion:
 	 * Completion of and L2 RX packet. Length = 32B
@@ -2321,6 +2369,24 @@ struct cmpl_base {
 	 * Length = 16B
 	 */
 	#define CMPL_BASE_TYPE_STAT_EJECT        UINT32_C(0x1a)
+	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by
+	 * the Primate and processed by the VEE hardware to ensure that
+	 * all completions on a VEE function have been processed by the
+	 * VEE hardware before the FLR process is completed.
+	 */
+	#define CMPL_BASE_TYPE_VEE_FLUSH         UINT32_C(0x1c)
+	/*
+	 * Mid Path Short Completion:
+	 * Completion of a Mid Path Command. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_SHORT    UINT32_C(0x1e)
+	/*
+	 * Mid Path Long Completion:
+	 * Completion of a Mid Path Command. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_LONG     UINT32_C(0x1f)
 	/*
 	 * HWRM Command Completion:
 	 * Completion of an HWRM command.
@@ -2398,7 +2464,9 @@ struct tx_cmpl {
 	uint16_t	unused_0;
 	/*
 	 * This is a copy of the opaque field from the first TX BD of this
-	 * transmitted packet.
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
 	 */
 	uint32_t	opaque;
 	uint16_t	errors_v;
@@ -2407,58 +2475,352 @@ struct tx_cmpl {
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	#define TX_CMPL_V                              UINT32_C(0x1)
-	#define TX_CMPL_ERRORS_MASK                    UINT32_C(0xfffe)
-	#define TX_CMPL_ERRORS_SFT                     1
+	#define TX_CMPL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_ERRORS_SFT                         1
 	/*
 	 * This error indicates that there was some sort of problem
 	 * with the BDs for the packet.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK        UINT32_C(0xe)
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT             1
 	/* No error */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR      (UINT32_C(0x0) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
 	/*
 	 * Bad Format:
 	 * BDs were not formatted correctly.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT       (UINT32_C(0x2) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
 	#define TX_CMPL_ERRORS_BUFFER_ERROR_LAST \
 		TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT
 	/*
 	 * When this bit is '1', it indicates that the length of
 	 * the packet was zero. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT          UINT32_C(0x10)
+	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
 	/*
 	 * When this bit is '1', it indicates that the packet
 	 * was longer than the programmed limit in TDI. No
 	 * packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH      UINT32_C(0x20)
+	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
 	/*
 	 * When this bit is '1', it indicates that one or more of the
 	 * BDs associated with this packet generated a PCI error.
 	 * This probably means the address was not valid.
 	 */
-	#define TX_CMPL_ERRORS_DMA_ERROR                UINT32_C(0x40)
+	#define TX_CMPL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
 	/*
 	 * When this bit is '1', it indicates that the packet was longer
 	 * than indicated by the hint. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_HINT_TOO_SHORT           UINT32_C(0x80)
+	#define TX_CMPL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
 	/*
 	 * When this bit is '1', it indicates that the packet was
 	 * dropped due to Poison TLP error on one or more of the
 	 * TLPs in the PXP completion.
 	 */
-	#define TX_CMPL_ERRORS_POISON_TLP_ERROR         UINT32_C(0x100)
+	#define TX_CMPL_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such completions
+	 * are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
 	/* unused2 is 16 b */
 	uint16_t	unused_1;
 	/* unused3 is 32 b */
 	uint32_t	unused_2;
 } __rte_packed;
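
A minimal driver-side sketch (illustrative only, not part of this patch; the function name is a placeholder) of how the TX_CMPL_* error bits above might be consumed. It assumes completion records are little-endian and that valid-bit/phase tracking is handled by the caller.

#include <stdint.h>
#include <stdbool.h>
#include <rte_byteorder.h>

static bool
example_tx_cmpl_ok(const struct tx_cmpl *txcmp)
{
	uint16_t errors = rte_le_to_cpu_16(txcmp->errors_v) &
			  TX_CMPL_ERRORS_MASK;

	/* The BAD_FMT code is pre-shifted, so it is compared directly
	 * against the masked buffer-error field.
	 */
	if ((errors & TX_CMPL_ERRORS_BUFFER_ERROR_MASK) ==
	    TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT)
		return false;

	/* Per-packet error bits; INTERNAL_ERROR is retryable per the
	 * description above, but is still reported here.
	 */
	return !(errors & (TX_CMPL_ERRORS_ZERO_LENGTH_PKT |
			   TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH |
			   TX_CMPL_ERRORS_DMA_ERROR |
			   TX_CMPL_ERRORS_HINT_TOO_SHORT |
			   TX_CMPL_ERRORS_POISON_TLP_ERROR |
			   TX_CMPL_ERRORS_INTERNAL_ERROR));
}
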
 
+/* tx_cmpl_coal (size:128b/16B) */
+struct tx_cmpl_coal {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_COAL_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_COAL_TYPE_SFT        0
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of TX packet. Length = 16B
+	 */
+	#define TX_CMPL_COAL_TYPE_TX_L2_COAL   UINT32_C(0x2)
+	#define TX_CMPL_COAL_TYPE_LAST        TX_CMPL_COAL_TYPE_TX_L2_COAL
+	#define TX_CMPL_COAL_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_COAL_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_COAL_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had no push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_COAL_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of the packet
+	 * which corresponds with the reported sq_cons_idx. Note that, with
+	 * coalesced completions, completions are generated for only some of the
+	 * packets. The driver will see the opaque field for only those packets.
+	 * Note that, if the packet was described by a short CSO or short CSO
+	 * inline BD, then the 16-bit opaque field from the short CSO BD will
+	 * appear in the bottom 16 bits of this field. For TX rings with
+	 * completion coalescing enabled (which would use the coalesced
+	 * completion record), it is suggested that the driver populate the
+	 * opaque field to indicate the specific TX ring with which the
+	 * completion is associated, then utilize the opaque and sq_cons_idx
+	 * fields in the coalesced completion record to determine the specific
+	 * packets that are to be completed on that ring.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_COAL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_COAL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define TX_CMPL_COAL_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_COAL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_COAL_ERRORS_POISON_TLP_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_COAL_ERRORS_INTERNAL_ERROR \
+		UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_COAL_ERRORS_TIMESTAMP_INVALID_ERROR \
+		UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	uint32_t	sq_cons_idx;
+	/*
+	 * This value is the SQ index for the start of the packet following
+	 * the last completed packet.
+	 */
+	#define TX_CMPL_COAL_SQ_CONS_IDX_MASK UINT32_C(0xffffff)
+	#define TX_CMPL_COAL_SQ_CONS_IDX_SFT 0
+} __rte_packed;
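
A hedged sketch (illustrative only, not part of this patch; names are placeholders) of how the coalesced-completion fields above might be used: the opaque value identifies the TX ring, and sq_cons_idx tells the driver how far buffers may be reclaimed.

#include <stdint.h>
#include <rte_byteorder.h>

static uint32_t
example_tx_coal_cons_idx(const struct tx_cmpl_coal *txcmp)
{
	/* sq_cons_idx is the SQ index of the first packet *after* the last
	 * completed one, i.e. everything strictly before this index can be
	 * reclaimed on the ring identified by the opaque field.
	 */
	return (rte_le_to_cpu_32(txcmp->sq_cons_idx) &
		TX_CMPL_COAL_SQ_CONS_IDX_MASK) >>
	       TX_CMPL_COAL_SQ_CONS_IDX_SFT;
}
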
+
+/* tx_cmpl_ptp (size:128b/16B) */
+struct tx_cmpl_ptp {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_PTP_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_PTP_TYPE_SFT        0
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of TX packet. Length = 32B
+	 */
+	#define TX_CMPL_PTP_TYPE_TX_L2_PTP    UINT32_C(0x2)
+	#define TX_CMPL_PTP_TYPE_LAST        TX_CMPL_PTP_TYPE_TX_L2_PTP
+	#define TX_CMPL_PTP_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_PTP_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_PTP_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had no push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_PTP_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of this
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_PTP_V                                  UINT32_C(0x1)
+	#define TX_CMPL_PTP_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_PTP_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_PTP_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_PTP_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped due
+	 * to a transient internal error in TDC. The packet or LSO can be
+	 * retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_PTP_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_PTP_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	/*
+	 * This is the timestamp value (lower 32 bits) read from PM for the
+	 * PTP timestamp enabled packet.
+	 */
+	uint32_t	timestamp_lo;
+} __rte_packed;
+
+/* tx_cmpl_ptp_hi (size:128b/16B) */
+struct tx_cmpl_ptp_hi {
+	/*
+	 * This is the timestamp value (upper 48 bits) read from PM for the
+	 * PTP timestamp enabled packet.
+	 */
+	uint16_t	timestamp_hi[3];
+	uint16_t	reserved16;
+	uint64_t	v2;
+	/*
+	 * This value is written by the NIC such that it will be different for
+	 * each pass through the completion queue. The even passes will
+	 * write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_PTP_HI_V2     UINT32_C(0x1)
+} __rte_packed;
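
The 16B tx_cmpl_ptp record carries the low 32 bits of the PTP timestamp and the companion tx_cmpl_ptp_hi record carries a further 48 bits. A hedged sketch of reassembling them follows (illustrative only, not part of this patch; it assumes timestamp_hi[0] is the least-significant 16-bit word and truncates the concatenation to 64 bits).

#include <stdint.h>
#include <rte_byteorder.h>

static int
example_ptp_timestamp(const struct tx_cmpl_ptp *lo,
		      const struct tx_cmpl_ptp_hi *hi,
		      uint64_t *ts)
{
	uint64_t upper = 0;
	int i;

	/* Per the TIMESTAMP_INVALID_ERROR description, the timestamp fields
	 * are only meaningful when that error bit is clear.
	 */
	if (rte_le_to_cpu_16(lo->errors_v) &
	    TX_CMPL_PTP_ERRORS_TIMESTAMP_INVALID_ERROR)
		return -1;

	for (i = 0; i < 3; i++)
		upper |= (uint64_t)rte_le_to_cpu_16(hi->timestamp_hi[i]) <<
			 (16 * i);

	/* 32 + 48 bits exceed a uint64_t; this sketch simply keeps the low
	 * 64 bits of the concatenation.
	 */
	*ts = rte_le_to_cpu_32(lo->timestamp_lo) | (upper << 32);
	return 0;
}
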
+
 /* rx_pkt_cmpl (size:128b/16B) */
 struct rx_pkt_cmpl {
 	uint16_t	flags_type;
@@ -3003,12 +3365,8 @@ struct rx_pkt_cmpl_hi {
 	#define RX_PKT_CMPL_REORDER_SFT 0
 } __rte_packed;
 
-/*
- * This TPA completion structure is used on devices where the
- * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
- */
-/* rx_tpa_start_cmpl (size:128b/16B) */
-struct rx_tpa_start_cmpl {
+/* rx_pkt_v2_cmpl (size:128b/16B) */
+struct rx_pkt_v2_cmpl {
 	uint16_t	flags_type;
 	/*
 	 * This field indicates the exact type of the completion.
@@ -3017,84 +3375,143 @@ struct rx_tpa_start_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	#define RX_PKT_V2_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_PKT_V2_CMPL_TYPE_SFT                       0
 	/*
-	 * RX L2 TPA Start Completion:
-	 * Completion at the beginning of a TPA operation.
-	 * Length = 32B
+	 * RX L2 V2 completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
-	#define RX_TPA_START_CMPL_TYPE_LAST \
-		RX_TPA_START_CMPL_TYPE_RX_TPA_START
-	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_START_CMPL_FLAGS_SFT                6
-	/* This bit will always be '0' for TPA start completions. */
-	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_PKT_V2_CMPL_TYPE_RX_L2_V2                    UINT32_C(0xf)
+	#define RX_PKT_V2_CMPL_TYPE_LAST \
+		RX_PKT_V2_CMPL_TYPE_RX_L2_V2
+	#define RX_PKT_V2_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_PKT_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Normal:
+	 * Packet was placed using normal algorithm.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_NORMAL \
+		(UINT32_C(0x0) << 7)
 	/*
 	 * Jumbo:
-	 * TPA Packet was placed using jumbo algorithm. This means
-	 * that the first buffer will be filled with data before
-	 * moving to aggregation buffers. Each aggregation buffer
-	 * will be filled before moving to the next aggregation
-	 * buffer.
+	 * Packet was placed using jumbo algorithm.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
 		(UINT32_C(0x1) << 7)
 	/*
 	 * Header/Data Separation:
 	 * Packet was placed using Header/Data separation algorithm.
 	 * The separation location is indicated by the itype field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
 	/*
-	 * GRO/Jumbo:
-	 * Packet will be placed using GRO/Jumbo where the first
-	 * packet is filled with data. Subsequent packets will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * Truncation:
+	 * Packet was placed using truncation algorithm. The
+	 * placed (truncated) length is indicated in the payload_offset
+	 * field. The original length is indicated in the len field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
-		(UINT32_C(0x5) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION \
+		(UINT32_C(0x3) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_PKT_V2_CMPL_FLAGS_RSS_VALID                 UINT32_C(0x400)
 	/*
-	 * GRO/Header-Data Separation:
-	 * Packet will be placed using GRO/HDS where the header
-	 * is in the first packet.
-	 * Payload of each packet will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
-		(UINT32_C(0x6) << 7)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* This bit is '1' if the RSS field in this completion is valid. */
-	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
-	/* unused is 1 b */
-	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	#define RX_PKT_V2_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_MASK                UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * Not Known:
+	 * Indicates that the packet type was not known.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_NOT_KNOWN \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * IP Packet:
+	 * Indicates that the packet was an IP packet, but further
+	 * classification was not possible.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_IP \
+		(UINT32_C(0x1) << 12)
 	/*
 	 * TCP Packet:
 	 * Indicates that the packet was IP and TCP.
+	 * This indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_TCP \
 		(UINT32_C(0x2) << 12)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
-		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
 	/*
-	 * This value indicates the amount of packet data written to the
-	 * buffer the opaque field in this completion corresponds to.
+	 * UDP Packet:
+	 * Indicates that the packet was IP and UDP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_UDP \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * FCoE Packet:
+	 * Indicates that the packet was recognized as an FCoE packet.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_FCOE \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * RoCE Packet:
+	 * Indicates that the packet was recognized as a RoCE packet.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ROCE \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * ICMP Packet:
+	 * Indicates that the packet was recognized as ICMP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ICMP \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * PtP packet wo/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_WO_TIMESTAMP \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * PtP packet w/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet and that a timestamp was taken for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP \
+		(UINT32_C(0x9) << 12)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP
+	/*
+	 * This is the length of the data for the packet stored in the
+	 * buffer(s) identified by the opaque value. This includes
+	 * the packet BD and any associated buffer BDs. This does not include
+	 * the length of any data placed in aggregation BDs.
 	 */
 	uint16_t	len;
 	/*
@@ -3102,19 +3519,597 @@ struct rx_tpa_start_cmpl {
 	 * corresponds to.
 	 */
 	uint32_t	opaque;
+	uint8_t	agg_bufs_v1;
 	/*
 	 * This value is written by the NIC such that it will be different
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	uint8_t	v1;
+	#define RX_PKT_V2_CMPL_V1           UINT32_C(0x1)
 	/*
-	 * This value is written by the NIC such that it will be different
-	 * for each pass through the completion queue. The even passes
-	 * will write 1. The odd passes will write 0.
+	 * This value is the number of aggregation buffers that follow this
+	 * entry in the completion ring that are a part of this packet.
+	 * If the value is zero, then the packet is completely contained
+	 * in the buffer space provided for the packet in the RX ring.
 	 */
-	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
-	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
+	#define RX_PKT_V2_CMPL_AGG_BUFS_MASK UINT32_C(0x3e)
+	#define RX_PKT_V2_CMPL_AGG_BUFS_SFT 1
+	/* unused1 is 2 b */
+	#define RX_PKT_V2_CMPL_UNUSED1_MASK UINT32_C(0xc0)
+	#define RX_PKT_V2_CMPL_UNUSED1_SFT  6
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extract_op[1:0],rss_profile_id[4:0],tuple_extract_op[2]}.
+	 *
+	 * The value of tuple_extract_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that 4-tuples values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	uint16_t	metadata1_payload_offset;
+	/*
+	 * This is data from the CFA as indicated by the meta_format field.
+	 * If truncation placement is not used, this value indicates the offset
+	 * in bytes from the beginning of the packet where the inner payload
+	 * starts. This value is valid for TCP, UDP, FCoE, and RoCE packets. If
+	 * truncation placement is used, this value represents the placed
+	 * (truncated) length of the packet.
+	 */
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_MASK    UINT32_C(0x1ff)
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_SFT     0
+	/* This is data from the CFA as indicated by the meta_format field. */
+	#define RX_PKT_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN valid. */
+	#define RX_PKT_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC. When vee_cmpl_mode
+	 * is set in VNIC context, this is the lower 32b of the host address
+	 * from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
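
A hedged sketch of unpacking rss_hash_type and the payload offset described above (illustrative only, not part of this patch; reading the packed rss_hash_type byte MSB-first is an assumption of this sketch rather than something stated in the header).

#include <stdint.h>
#include <rte_byteorder.h>

static void
example_rx_v2_decode(const struct rx_pkt_v2_cmpl *rxcmp,
		     uint8_t *tuple_extract_op, uint8_t *rss_profile_id,
		     uint16_t *payload_offset)
{
	uint8_t ht = rxcmp->rss_hash_type;
	uint16_t md1 = rte_le_to_cpu_16(rxcmp->metadata1_payload_offset);

	/* {tuple_extract_op[1:0], rss_profile_id[4:0], tuple_extract_op[2]}
	 * packed MSB-first into one byte.
	 */
	*tuple_extract_op = (uint8_t)(((ht & 0x1) << 2) | ((ht >> 6) & 0x3));
	*rss_profile_id = (ht >> 1) & 0x1f;

	/* Valid for TCP/UDP/FCoE/RoCE itypes when truncation placement is
	 * not in use; otherwise this is the placed (truncated) length.
	 */
	*payload_offset = (md1 & RX_PKT_V2_CMPL_PAYLOAD_OFFSET_MASK) >>
			  RX_PKT_V2_CMPL_PAYLOAD_OFFSET_SFT;
}
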
+
+/* Last 16 bytes of RX Packet V2 Completion Record */
+/* rx_pkt_v2_cmpl_hi (size:128b/16B) */
+struct rx_pkt_v2_cmpl_hi {
+	uint32_t	flags2;
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:
+	 * - ip_cs_ok[2:0] = The number of header groups with a valid IP
+	 *   checksum in the delivered packet, counted from the outer-most
+	 *   header group to the inner-most header group, stopping at the
+	 *   first error.
+	 * - l4_cs_ok[5:3] = The number of header groups with a valid L4
+	 *   checksum in the delivered packet, counted from the outer-most
+	 *   header group to the inner-most header group, stopping at the
+	 *   first error.
+	 * When this bit is '1', the cs_ok field has the following definition:
+	 * - hdr_cnt[2:0] = The number of header groups that were parsed by
+	 *   the chip and passed in the delivered packet.
+	 * - ip_cs_all_ok[3] = This bit will be '1' if all the parsed header
+	 *   groups with an IP checksum are valid.
+	 * - l4_cs_all_ok[4] = This bit will be '1' if all the parsed header
+	 *   groups with an L4 checksum are valid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information: - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],
+	 * de, vid[11:0]} The metadata2 field contains the table scope
+	 * and action record pointer. - metadata2[25:0] contains the
+	 * action record pointer. - metadata2[31:26] contains the table
+	 * scope.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_LAST \
+		RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_PKT_V2_CMPL_HI_V2 \
+		UINT32_C(0x1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_SFT                               1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet that was found after part of the
+	 * packet was already placed. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_SFT                   1
+	/* No buffer error */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit: Packet did not fit into packet buffer provided.
+	 * For regular placement, this means the packet did not fit in
+	 * the buffer provided. For HDS and jumbo placement, this means
+	 * that the packet could not be placed into 8 physical buffers
+	 * (if fixed-size buffers are used), or that the packet could
+	 * not be placed in the number of physical buffers configured
+	 * for the VNIC (if variable-size buffers are used)
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Not On Chip: All BDs needed for the packet were not on-chip
+	 * when the packet arrived. For regular placement, this error is
+	 * not valid. For HDS and jumbo placement, this means that not
+	 * enough agg BDs were posted to place the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NOT_ON_CHIP \
+		(UINT32_C(0x2) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This indicates that there was an error in the outer tunnel
+	 * portion of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_MASK \
+		UINT32_C(0x70)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_SFT                   4
+	/*
+	 * No additional error occurred on the outer tunnel portion
+	 * of the packet or the packet does not have an outer tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the outer tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * Indicates that header length is out of range in the outer
+	 * tunnel header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the outer tunnel l3 header length. Valid for IPv4
+	 * or IPv6 outer tunnel packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the outer tunnel UDP header length for an outer
+	 * tunnel UDP packet that is not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 4)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has
+	 * failed (e.g. TTL = 0) in the outer tunnel header. Valid for
+	 * IPv4 and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_TTL \
+		(UINT32_C(0x5) << 4)
+	/*
+	 * Indicates that the IP checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_CS_ERROR \
+		(UINT32_C(0x6) << 4)
+	/*
+	 * Indicates that the L4 checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR \
+		(UINT32_C(0x7) << 4)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR
+	/*
+	 * This indicates that there was a CRC error on either an FCoE
+	 * or RoCE packet. The itype indicates the packet type.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_CRC_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that there was an error in the tunnel portion
+	 * of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_MASK \
+		UINT32_C(0xe00)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_SFT                    9
+	/*
+	 * No additional error occurred on the tunnel portion
+	 * of the packet or the packet does not have a tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 9)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 9)
+	/*
+	 * Indicates that header length is out of range in the tunnel
+	 * header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 9)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel l3 header length. Valid for IPv4 or IPv6 tunnel
+	 * packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 9)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel UDP header length for a tunnel UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 9)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has failed
+	 * (e.g. TTL = 0) in the tunnel header. Valid for IPv4 and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL \
+		(UINT32_C(0x5) << 9)
+	/*
+	 * Indicates that the IP checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_CS_ERROR \
+		(UINT32_C(0x6) << 9)
+	/*
+	 * Indicates that the L4 checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR \
+		(UINT32_C(0x7) << 9)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR
+	/*
+	 * This indicates that there was an error in the inner
+	 * portion of the packet when this
+	 * field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_MASK \
+		UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_SFT                      12
+	/*
+	 * No additional error occurred on the inner portion of the
+	 * packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * Indicates that the IP header version does not match the
+	 * expectation from the L2 Ethertype for IPv4 and IPv6, or that
+	 * an option other than VFT was parsed on an FCoE packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 12)
+	/*
+	 * Indicates that the header length is out of range. Valid for
+	 * IPv4 and RoCE.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 12)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check
+	 * has failed (e.g. TTL = 0). Valid for IPv4 and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_TTL \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the l3 header length. Valid for IPv4,
+	 * IPv6, or RoCE packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the UDP header length for a UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_UDP_TOTAL_ERROR \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * Indicates that TCP header length > IP payload. Valid for
+	 * TCP packets only.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN \
+		(UINT32_C(0x6) << 12)
+	/* Indicates that TCP header length < 5. Valid for TCP. */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN_TOO_SMALL \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * Indicates that TCP option headers result in a TCP header
+	 * size that does not match data offset in TCP header. Valid
+	 * for TCP.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * Indicates that the IP checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_CS_ERROR \
+		(UINT32_C(0x9) << 12)
+	/*
+	 * Indicates that the L4 checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR \
+		(UINT32_C(0xa) << 12)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format=1, this value is the VLAN VID. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_SFT 0
+	/* When meta_format=1, this value is the VLAN DE. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format=1, this value is the VLAN PRI. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_SFT 13
+	/*
+	 * The timestamp field contains the 32b timestamp for the packet from
+	 * the MAC.
+	 */
+	uint32_t	timestamp;
+} __rte_packed;
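
A hedged sketch of pulling the stripped VLAN tag out of a v2 RX completion pair using the metadata1/metadata0 fields above (illustrative only, not part of this patch; function and parameter names are placeholders and mapping the fields onto an 802.1Q TCI is an assumption of the sketch).

#include <stdint.h>
#include <rte_byteorder.h>

static int
example_rx_v2_vlan_tci(const struct rx_pkt_v2_cmpl *lo,
		       const struct rx_pkt_v2_cmpl_hi *hi,
		       uint16_t *tci)
{
	uint32_t flags2 = rte_le_to_cpu_32(hi->flags2);
	uint16_t md1 = rte_le_to_cpu_16(lo->metadata1_payload_offset);

	/* No metadata at all, or the VLAN-valid bit is not set. */
	if ((flags2 & RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_MASK) ==
	    RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_NONE ||
	    !(md1 & RX_PKT_V2_CMPL_METADATA1_VALID))
		return -1;

	/* metadata0 carries {pri[2:0], de, vid[11:0]}, which lines up with
	 * an 802.1Q TCI in this sketch.
	 */
	*tci = rte_le_to_cpu_16(hi->metadata0) &
	       (RX_PKT_V2_CMPL_HI_METADATA0_PRI_MASK |
		RX_PKT_V2_CMPL_HI_METADATA0_DE |
		RX_PKT_V2_CMPL_HI_METADATA0_VID_MASK);
	return 0;
}
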
+
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_cmpl (size:128b/16B) */
+struct rx_tpa_start_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
+	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	/*
+	 * RX L2 TPA Start Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 */
+	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
+	#define RX_TPA_START_CMPL_TYPE_LAST \
+		RX_TPA_START_CMPL_TYPE_RX_TPA_START
+	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
+	#define RX_TPA_START_CMPL_FLAGS_SFT                6
+	/* This bit will always be '0' for TPA start completions. */
+	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
+	/* unused is 1 b */
+	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
 	/*
 	 * This is the RSS hash type for the packet. The value is packed
 	 * {tuple_extrac_op[1:0],rss_profile_id[4:0],tuple_extrac_op[2]}.
@@ -3285,6 +4280,430 @@ struct rx_tpa_start_cmpl_hi {
 	#define RX_TPA_START_CMPL_INNER_L4_SIZE_SFT   27
 } __rte_packed;
 
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ * RX L2 TPA Start V2 Completion Record (32 bytes split to 2 16-byte
+ * struct)
+ */
+/* rx_tpa_start_v2_cmpl (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define RX_TPA_START_V2_CMPL_TYPE_SFT                       0
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2 \
+		UINT32_C(0xd)
+	#define RX_TPA_START_V2_CMPL_TYPE_LAST \
+		RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2
+	#define RX_TPA_START_V2_CMPL_FLAGS_MASK \
+		UINT32_C(0xffc0)
+	#define RX_TPA_START_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an error
+	 * of some type. Type of error is indicated in error_flags.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ERROR \
+		UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_MASK \
+		UINT32_C(0x380)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_RSS_VALID \
+		UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement. It
+	 * starts at the first 32B boundary after the end of the header for
+	 * HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PKT_METADATA_PRESENT \
+		UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to. If the VNIC is configured to not use an Rx BD for
+	 * the TPA Start completion, then this is a copy of the opaque field
+	 * from the first BD used to place the TPA Start packet.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_LAST RX_TPA_START_V2_CMPL_V1
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extract_op[1:0],rss_profile_id[4:0],tuple_extract_op[2]}.
+	 *
+	 * The value of tuple_extract_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that 4-tuples values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	uint16_t	agg_id;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	#define RX_TPA_START_V2_CMPL_AGG_ID_MASK            UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_AGG_ID_SFT             0
+	#define RX_TPA_START_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN valid. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC.
+	 * When vee_cmpl_mode is set in VNIC context, this is the lower
+	 * 32b of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
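
A short hedged sketch (illustrative only, not part of this patch; the function name is a placeholder) of extracting the aggregation ID from the TPA Start V2 record above so it can later be matched against the corresponding TPA End completion.

#include <stdint.h>
#include <rte_byteorder.h>

static uint16_t
example_tpa_start_v2_agg_id(const struct rx_tpa_start_v2_cmpl *tpa_start)
{
	/* agg_id correlates this TPA Start with its TPA End completion. */
	return (rte_le_to_cpu_16(tpa_start->agg_id) &
		RX_TPA_START_V2_CMPL_AGG_ID_MASK) >>
	       RX_TPA_START_V2_CMPL_AGG_ID_SFT;
}
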
+
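For reference, a minimal, illustrative sketch (not part of this patch) of how a
poll loop could validate this completion and extract the aggregation ID using
only the fields and masks defined above. The struct name rx_tpa_start_v2_cmpl
for the low 16 bytes and the caller-tracked valid phase are assumptions.

#include <stdbool.h>
#include <rte_byteorder.h>

/* Illustrative only; not part of the generated HWRM header. */
static inline bool
bnxt_tpa_start_v2_valid(const struct rx_tpa_start_v2_cmpl *cmpl, bool exp_phase)
{
	/* v1 toggles each pass through the completion ring (even passes write
	 * 1, odd passes write 0), so compare it with the expected phase.
	 */
	return !!(cmpl->v1 & RX_TPA_START_V2_CMPL_V1) == exp_phase;
}

static inline uint16_t
bnxt_tpa_start_v2_agg_id(const struct rx_tpa_start_v2_cmpl *cmpl)
{
	/* agg_id shares its 16 bits with metadata1; keep only the low 12 bits. */
	return rte_le_to_cpu_16(cmpl->agg_id) &
	       RX_TPA_START_V2_CMPL_AGG_ID_MASK;
}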
+/*
+ * Last 16 bytes of RX L2 TPA Start V2 Completion Record
+ *
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_v2_cmpl_hi (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl_hi {
+	uint32_t	flags2;
+	/* This indicates that the aggregation was done using GRO rules. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_AGG_GRO \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:-
+	 * ip_cs_ok[2:0] = The number of header groups with a valid IP checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. -
+	 * l4_cs_ok[5:3] = The number of header groups with a valid L4 checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. When this
+	 * bit is '1', the cs_ok field has the following definition: -
+	 * hdr_cnt[2:0] = The number of header groups that were parsed by the
+	 * chip and passed in the delivered packet. - ip_cs_all_ok[3] = This bit
+	 * will be '1' if all the parsed header groups with an IP checksum are
+	 * valid. - l4_cs_all_ok[4] = This bit will be '1' if all the parsed
+	 * header groups with an L4 checksum are valid.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information: - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],
+	 * de, vid[11:0]} The metadata2 field contains the table scope
+	 * and action record pointer. - metadata2[25:0] contains the
+	 * action record pointer. - metadata2[31:26] contains the table
+	 * scope.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. Zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet in the aggregation.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 * CS status for TPA packets is always valid. This means that "all_ok"
+	 * status will always be set. The ok count status will be set
+	 * appropriately for the packet header, such that all existing CS
+	 * values are ok.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set. For TPA Start completions,
+	 * the complete checksum is calculated for the first packet in the
+	 * aggregation only.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V2 \
+		UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_SFT                     1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	/* No buffer error */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit:
+	 * Packet did not fit into packet buffer provided. This means
+	 * that the TPA Start packet was too big to be placed into the
+	 * per-packet maximum number of physical buffers configured for
+	 * the VNIC, or that it was too big to be placed into the
+	 * per-aggregation maximum number of physical buffers configured
+	 * for the VNIC. This error only occurs when the VNIC is
+	 * configured for variable size receive buffers.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_LAST \
+		RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format != 0, this value is the VLAN VID. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_SFT 0
+	/* When meta_format != 0, this value is the VLAN DE. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format != 0, this value is the VLAN PRI. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_SFT 13
+	/*
+	 * This field contains the outer_l3_offset, inner_l2_offset,
+	 * inner_l3_offset, and inner_l4_size.
+	 *
+	 * hdr_offsets[8:0] contains the outer_l3_offset.
+	 * hdr_offsets[17:9] contains the inner_l2_offset.
+	 * hdr_offsets[26:18] contains the inner_l3_offset.
+	 * hdr_offsets[31:27] contains the inner_l4_size.
+	 */
+	uint32_t	hdr_offsets;
+} __rte_packed;
+
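As an illustration (not part of this patch), the packed hdr_offsets word
described above could be decoded as follows; the helper and struct names are
hypothetical.

/* Illustrative decode of hdr_offsets; pass the value through
 * rte_le_to_cpu_32() first, since completion records are little-endian.
 */
struct bnxt_tpa_hdr_offsets {
	uint16_t outer_l3_offset;	/* hdr_offsets[8:0]   */
	uint16_t inner_l2_offset;	/* hdr_offsets[17:9]  */
	uint16_t inner_l3_offset;	/* hdr_offsets[26:18] */
	uint8_t  inner_l4_size;		/* hdr_offsets[31:27] */
};

static inline void
bnxt_tpa_parse_hdr_offsets(uint32_t hdr_offsets,
			   struct bnxt_tpa_hdr_offsets *out)
{
	out->outer_l3_offset = hdr_offsets & 0x1ff;
	out->inner_l2_offset = (hdr_offsets >> 9) & 0x1ff;
	out->inner_l3_offset = (hdr_offsets >> 18) & 0x1ff;
	out->inner_l4_size = (hdr_offsets >> 27) & 0x1f;
}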
 /*
  * This TPA completion structure is used on devices where the
  * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
@@ -3299,27 +4718,27 @@ struct rx_tpa_end_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_END_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_END_CMPL_TYPE_SFT                 0
+	#define RX_TPA_END_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_TPA_END_CMPL_TYPE_SFT                       0
 	/*
 	 * RX L2 TPA End Completion:
 	 * Completion at the end of a TPA operation.
 	 * Length = 32B
 	 */
-	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END            UINT32_C(0x15)
+	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END                  UINT32_C(0x15)
 	#define RX_TPA_END_CMPL_TYPE_LAST \
 		RX_TPA_END_CMPL_TYPE_RX_TPA_END
-	#define RX_TPA_END_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_END_CMPL_FLAGS_SFT                6
+	#define RX_TPA_END_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_TPA_END_CMPL_FLAGS_SFT                      6
 	/*
 	 * When this bit is '1', it indicates a packet that has an
 	 * error of some type. Type of error is indicated in
 	 * error_flags.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_TPA_END_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT             7
 	/*
 	 * Jumbo:
 	 * TPA Packet was placed using jumbo algorithm. This means
@@ -3337,6 +4756,15 @@ struct rx_tpa_end_cmpl {
 	 */
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
+	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
 	/*
 	 * GRO/Jumbo:
 	 * Packet will be placed using GRO/Jumbo where the first
@@ -3358,11 +4786,28 @@ struct rx_tpa_end_cmpl {
 	 */
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS \
 		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* unused is 2 b */
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_MASK         UINT32_C(0xc00)
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_SFT          10
+		RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* unused is 1 b */
+	#define RX_TPA_END_CMPL_FLAGS_UNUSED                    UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
@@ -3372,8 +4817,9 @@ struct rx_tpa_end_cmpl {
 	 *     field is valid and contains the TCP checksum.
 	 *     This also indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT                 12
 	/*
 	 * This value is zero for TPA End completions.
 	 * There is no data in the buffer that corresponds to the opaque
@@ -4243,6 +5689,52 @@ struct rx_abuf_cmpl {
 	uint32_t	unused_2;
 } __rte_packed;
 
+/* VEE FLUSH Completion Record (16 bytes) */
+/* vee_flush (size:128b/16B) */
+struct vee_flush {
+	uint32_t	downstream_path_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define VEE_FLUSH_TYPE_MASK           UINT32_C(0x3f)
+	#define VEE_FLUSH_TYPE_SFT            0
+	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by the Primate and processed
+	 * by the VEE hardware to ensure that all completions on a VEE
+	 * function have been processed by the VEE hardware before FLR
+	 * process is completed.
+	 */
+	#define VEE_FLUSH_TYPE_VEE_FLUSH        UINT32_C(0x1c)
+	#define VEE_FLUSH_TYPE_LAST            VEE_FLUSH_TYPE_VEE_FLUSH
+	/* downstream_path is 1 b */
+	#define VEE_FLUSH_DOWNSTREAM_PATH     UINT32_C(0x40)
+	/* This completion is associated with VEE Transmit */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_TX    (UINT32_C(0x0) << 6)
+	/* This completion is associated with VEE Receive */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_RX    (UINT32_C(0x1) << 6)
+	#define VEE_FLUSH_DOWNSTREAM_PATH_LAST VEE_FLUSH_DOWNSTREAM_PATH_RX
+	/*
+	 * This is an opaque value that is passed through the completion
+	 * to the VEE handler SW and is used to indicate what VEE VQ or
+	 * function has completed FLR processing.
+	 */
+	uint32_t	opaque;
+	uint32_t	v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes will
+	 * write 1. The odd passes will write 0.
+	 */
+	#define VEE_FLUSH_V     UINT32_C(0x1)
+	/* unused3 is 32 b */
+	uint32_t	unused_3;
+} __rte_packed;
+
 /* eject_cmpl (size:128b/16B) */
 struct eject_cmpl {
 	uint16_t	type;
@@ -6562,7 +8054,7 @@ struct hwrm_async_event_cmpl_deferred_response {
 	/*
 	 * The PF's mailbox is clear to issue another command.
 	 * A command with this seq_id is still in progress
 	 * and will return a regular HWRM completion when done.
 	 * 'event_data1' field, if non-zero, contains the estimated
 	 * execution time for the command.
 	 */
@@ -7476,6 +8968,8 @@ struct hwrm_func_qcaps_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID) This is a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -7729,6 +9223,12 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PFC_WD_STATS_SUPPORTED \
 		UINT32_C(0x40000000)
+	/*
+	 * When this bit is '1', it indicates that the core firmware supports
+	 * the DBG_QCAPS command.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_DBG_QCAPS_CMD_SUPPORTED \
+		UINT32_C(0x80000000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -7854,6 +9354,19 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_STATS_SUPPORTED \
 		UINT32_C(0x2)
+	/*
+	 * If 1, the device can report extended hw statistics (including
+	 * additional tpa statistics).
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_EXT_HW_STATS_SUPPORTED \
+		UINT32_C(0x4)
+	/*
+	 * If set to 1, then the core firmware supports dynamically
+	 * enabling/disabling hot reset for the interface through
+	 * HWRM_FUNC_CFG.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x8)
 	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -7904,6 +9417,8 @@ struct hwrm_func_qcfg_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID) This is a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -8013,6 +9528,15 @@ struct hwrm_func_qcfg_output {
 	 */
 	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x100)
+	/*
+	 * If set to 1, then the firmware and all currently registered driver
+	 * instances support hot reset. The hot reset support will be updated
+	 * dynamically based on the driver interface advertisement.
+	 * If set to 0, then the adapter is not currently able to initiate
+	 * hot reset.
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_HOT_RESET_ALLOWED \
+		UINT32_C(0x200)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -8565,6 +10089,17 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x2000000)
+	/*
+	 * If this bit is set to 0, then the interface does not support the hot
+	 * reset capability that it advertised with the hot_reset_support
+	 * flag in HWRM_FUNC_DRV_RGTR. If any function has set this
+	 * flag to 0, the adapter cannot do the hot reset. In this state, if the
+	 * firmware receives a hot reset request, the firmware must fail the
+	 * request. If this bit is set to 1, then the interface is re-enabling
+	 * the hot reset capability.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_FLAGS_HOT_RESET_IF_EN_DIS \
+		UINT32_C(0x4000000)
 	uint32_t	enables;
 	/*
 	 * This bit must be '1' for the mtu field to be
@@ -8704,6 +10239,12 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_ENABLES_ADMIN_LINK_STATE \
 		UINT32_C(0x400000)
+	/*
+	 * This bit must be '1' for the hot_reset_if_en_dis field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x800000)
 	/*
 	 * The maximum transmission unit of the function.
 	 * The HWRM should make sure that the mtu of
@@ -9036,15 +10577,21 @@ struct hwrm_func_qstats_input {
 	/* This flags indicates the type of statistics request. */
 	uint8_t	flags;
 	/* This value is not used to avoid backward compatibility issues. */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED    UINT32_C(0x0)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
 	/*
 	 * flags should be set to 1 when request is for only RoCE statistics.
 	 * This will be honored only if the caller_fid is a privileged PF.
 	 * In all other cases FID and caller_fid should be the same.
 	 */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY UINT32_C(0x1)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
 	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_LAST \
-		HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY
+		HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK
 	uint8_t	unused_0[5];
 } __rte_packed;
 
@@ -9130,6 +10677,132 @@ struct hwrm_func_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/************************
+ * hwrm_func_qstats_ext *
+ ************************/
+
+
+/* hwrm_func_qstats_ext_input (size:192b/24B) */
+struct hwrm_func_qstats_ext_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Function ID of the function that is being queried.
+	 * 0xFF... (All Fs) if the query is for the requesting
+	 * function.
+	 * A privileged PF can query other functions' statistics.
+	 */
+	uint16_t	fid;
+	/* This flags indicates the type of statistics request. */
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * flags should be set to 1 when request is for only RoCE statistics.
+	 * This will be honored only if the caller_fid is a privileged PF.
+	 * In all other cases FID and caller_fid should be the same.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
+} __rte_packed;
+
+/* hwrm_func_qstats_ext_output (size:1472b/184B) */
+struct hwrm_func_qstats_ext_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on received path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
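To illustrate how the new counter-mask query mode is used (a sketch, not part of
this patch; the helper name is hypothetical and the HWRM prep/send path is
omitted):

#include <stdbool.h>
#include <rte_byteorder.h>

/* Illustrative only: prepare an extended per-function stats query, either for
 * the counters themselves or for the counter-mask widths.
 */
static void
bnxt_fill_func_qstats_ext_req(struct hwrm_func_qstats_ext_input *req,
			      uint16_t fid, bool want_counter_mask)
{
	req->fid = rte_cpu_to_le_16(fid);	/* 0xffff = requesting function */
	req->flags = want_counter_mask ?
		HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK :
		HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_UNUSED;
	/* req_type, seq_id, cmpl_ring and resp_addr are filled by the driver's
	 * normal HWRM request machinery before the message is sent.
	 */
}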
 /***********************
  * hwrm_func_clr_stats *
  ***********************/
@@ -10116,7 +11789,7 @@ struct hwrm_func_backing_store_qcaps_output {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
@@ -11039,7 +12712,7 @@ struct hwrm_func_backing_store_cfg_input {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
@@ -16149,7 +17822,18 @@ struct hwrm_port_qstats_input {
 	uint64_t	resp_addr;
 	/* Port ID of port that is being queried. */
 	uint16_t	port_id;
-	uint8_t	unused_0[6];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -16382,7 +18066,7 @@ struct rx_port_stats_ext {
  * Port Rx Statistics extended PFC WatchDog Format.
  * StormDetect and StormRevert event determination is based
  * on an integration period and a percentage threshold.
  * StormDetect event - when percentage of XOFF frames received
  * within an integration period exceeds the configured threshold.
  * StormRevert event - when percentage of XON frames received
  * within an integration period exceeds the configured threshold.
@@ -16843,7 +18527,18 @@ struct hwrm_port_qstats_ext_input {
 	 * statistics block in bytes
 	 */
 	uint16_t	rx_stat_size;
-	uint8_t	unused_0[2];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for the counter mask,
+	 * representing width of each of the stats counters, rather than
+	 * counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0;
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -25312,95 +27007,104 @@ struct hwrm_ring_free_input {
 	/* Ring Type. */
 	uint8_t	ring_type;
 	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	/* TX Ring (TR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	/* RX Ring (RR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	/* RoCE Notification Completion Ring (ROCE_CR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	/* RX Aggregation Ring */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
+	/* Notification Queue */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
+		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
+	uint8_t	unused_0;
+	/* Physical number of ring allocated. */
+	uint16_t	ring_id;
+	uint8_t	unused_1[4];
+} __rte_packed;
+
+/* hwrm_ring_free_output (size:128b/16B) */
+struct hwrm_ring_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/*******************
+ * hwrm_ring_reset *
+ *******************/
+
+
+/* hwrm_ring_reset_input (size:192b/24B) */
+struct hwrm_ring_reset_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Ring Type. */
+	uint8_t	ring_type;
+	/* L2 Completion Ring (CR) */
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL     UINT32_C(0x0)
 	/* TX Ring (TR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX          UINT32_C(0x1)
 	/* RX Ring (RR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX          UINT32_C(0x2)
 	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
-	/* RX Aggregation Ring */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
-	/* Notification Queue */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
-		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
-	uint8_t	unused_0;
-	/* Physical number of ring allocated. */
-	uint16_t	ring_id;
-	uint8_t	unused_1[4];
-} __rte_packed;
-
-/* hwrm_ring_free_output (size:128b/16B) */
-struct hwrm_ring_free_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	uint8_t	unused_0[7];
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL   UINT32_C(0x3)
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM.  This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * Rx Ring Group. This is used to reset the Rx and aggregation rings
+	 * in an atomic operation. The completion ring associated with this
+	 * ring group is not reset.
 	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/*******************
- * hwrm_ring_reset *
- *******************/
-
-
-/* hwrm_ring_reset_input (size:192b/24B) */
-struct hwrm_ring_reset_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
-	 */
-	uint64_t	resp_addr;
-	/* Ring Type. */
-	uint8_t	ring_type;
-	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
-	/* TX Ring (TR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX        UINT32_C(0x1)
-	/* RX Ring (RR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX        UINT32_C(0x2)
-	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP UINT32_C(0x6)
 	#define HWRM_RING_RESET_INPUT_RING_TYPE_LAST \
-		HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL
+		HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP
 	uint8_t	unused_0;
-	/* Physical number of the ring. */
+	/*
+	 * Physical number of the ring. When ring type is rx_ring_grp, ring_id
+	 * actually refers to the ring group id.
+	 */
 	uint16_t	ring_id;
 	uint8_t	unused_1[4];
 } __rte_packed;
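A minimal sketch (illustrative, not part of this patch) of requesting the new
Rx ring group reset; per the comment above, ring_id carries the ring group id
for this ring type. The helper name is hypothetical.

#include <rte_byteorder.h>

/* Illustrative only: reset an Rx ring and its aggregation ring atomically by
 * ring group. The group's completion ring is not reset.
 */
static void
bnxt_fill_ring_reset_rx_grp(struct hwrm_ring_reset_input *req,
			    uint16_t ring_grp_id)
{
	req->ring_type = HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP;
	/* For RX_RING_GRP the ring_id field holds the ring group id. */
	req->ring_id = rte_cpu_to_le_16(ring_grp_id);
}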
@@ -25615,7 +27319,18 @@ struct hwrm_ring_cmpl_ring_qaggint_params_input {
 	uint64_t	resp_addr;
 	/* Physical number of completion ring. */
 	uint16_t	ring_id;
-	uint8_t	unused_0[6];
+	uint16_t	flags;
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_MASK \
+		UINT32_C(0x3)
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_SFT 0
+	/*
+	 * Set this flag to 1 when querying parameters on a notification
+	 * queue. Set this flag to 0 when querying parameters on a
+	 * completion queue or completion ring.
+	 */
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
+		UINT32_C(0x4)
+	uint8_t	unused_0[4];
 } __rte_packed;
 
 /* hwrm_ring_cmpl_ring_qaggint_params_output (size:256b/32B) */
@@ -25652,19 +27367,19 @@ struct hwrm_ring_cmpl_ring_qaggint_params_output {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA when in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
+	 * Maximum wait time spent aggregating
 	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
@@ -25738,7 +27453,7 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	/*
 	 * Set this flag to 1 when configuring parameters on a
 	 * notification queue. Set this flag to 0 when configuring
-	 * parameters on a completion queue.
+	 * parameters on a completion queue or completion ring.
 	 */
 	#define HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
 		UINT32_C(0x4)
@@ -25753,20 +27468,20 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA while in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
-	 * cmpls before signaling the interrupt after the
+	 * Maximum wait time spent aggregating
+	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
 	uint16_t	int_lat_tmr_max;
@@ -33339,78 +35054,246 @@ struct hwrm_tf_version_get_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-} __rte_packed;
-
-/* hwrm_tf_version_get_output (size:128b/16B) */
-struct hwrm_tf_version_get_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	/* Version Major number. */
-	uint8_t	major;
-	/* Version Minor number. */
-	uint8_t	minor;
-	/* Version Update number. */
-	uint8_t	update;
-	/* unused. */
-	uint8_t	unused0[4];
+} __rte_packed;
+
+/* hwrm_tf_version_get_output (size:128b/16B) */
+struct hwrm_tf_version_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Version Major number. */
+	uint8_t	major;
+	/* Version Minor number. */
+	uint8_t	minor;
+	/* Version Update number. */
+	uint8_t	update;
+	/* unused. */
+	uint8_t	unused0[4];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/************************
+ * hwrm_tf_session_open *
+ ************************/
+
+
+/* hwrm_tf_session_open_input (size:640b/80B) */
+struct hwrm_tf_session_open_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Name of the session. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_open_output (size:192b/24B) */
+struct hwrm_tf_session_open_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware.
+	 */
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the first client on
+	 * the newly created session.
+	 */
+	uint32_t	fw_session_client_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* unused. */
+	uint8_t	unused1[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
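For illustration (not part of this patch; helper names are hypothetical),
opening a TruFlow session amounts to passing a session name and recording the
two ids the firmware returns:

#include <string.h>
#include <rte_byteorder.h>

/* Illustrative only: fill the open request and pick up the session and
 * session-client ids from the response.
 */
static void
tf_fill_session_open_req(struct hwrm_tf_session_open_input *req,
			 const char *name)
{
	memset(req->session_name, 0, sizeof(req->session_name));
	strncpy((char *)req->session_name, name, sizeof(req->session_name) - 1);
}

static void
tf_read_session_open_resp(const struct hwrm_tf_session_open_output *resp,
			  uint32_t *fw_session_id,
			  uint32_t *fw_session_client_id)
{
	*fw_session_id = rte_le_to_cpu_32(resp->fw_session_id);
	*fw_session_client_id = rte_le_to_cpu_32(resp->fw_session_client_id);
}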
+/**************************
+ * hwrm_tf_session_attach *
+ **************************/
+
+
+/* hwrm_tf_session_attach_input (size:704b/88B) */
+struct hwrm_tf_session_attach_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Unique session identifier for the session that the attach
+	 * request wants to attach to. This value originates from the
+	 * shared session memory that the attach request opened by
+	 * way of the 'attach name' that was passed in to the core
+	 * attach API.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
+	 */
+	uint32_t	attach_fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session itself. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_attach_output (size:128b/16B) */
+struct hwrm_tf_session_attach_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session. This fw_session_id is unique to the attach
+	 * request.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/****************************
+ * hwrm_tf_session_register *
+ ****************************/
+
+
+/* hwrm_tf_session_register_input (size:704b/88B) */
+struct hwrm_tf_session_register_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal
-	 * processor, the order of writes has to be such that this field is
-	 * written last.
-	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/************************
- * hwrm_tf_session_open *
- ************************/
-
-
-/* hwrm_tf_session_open_input (size:640b/80B) */
-struct hwrm_tf_session_open_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
+	 * Unique session identifier for the session that the
+	 * register request wants to create a new client on. This
+	 * value originates from the first open request.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
 	 */
-	uint64_t	resp_addr;
-	/* Name of the session. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session client. */
+	uint8_t	session_client_name[64];
 } __rte_packed;
 
-/* hwrm_tf_session_open_output (size:128b/16B) */
-struct hwrm_tf_session_open_output {
+/* hwrm_tf_session_register_output (size:128b/16B) */
+struct hwrm_tf_session_register_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33420,12 +35303,11 @@ struct hwrm_tf_session_open_output {
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
 	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session.
+	 * Unique session client identifier for the session created
+	 * by the firmware. It includes the session the client it
+	 * by the firmware. It includes the session the client is
+	 * attached to and the session client info.
-	uint32_t	fw_session_id;
+	uint32_t	fw_session_client_id;
 	/* unused. */
 	uint8_t	unused0[3];
 	/*
@@ -33439,13 +35321,13 @@ struct hwrm_tf_session_open_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/**************************
- * hwrm_tf_session_attach *
- **************************/
+/******************************
+ * hwrm_tf_session_unregister *
+ ******************************/
 
 
-/* hwrm_tf_session_attach_input (size:704b/88B) */
-struct hwrm_tf_session_attach_input {
+/* hwrm_tf_session_unregister_input (size:192b/24B) */
+struct hwrm_tf_session_unregister_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -33475,24 +35357,19 @@ struct hwrm_tf_session_attach_input {
 	 */
 	uint64_t	resp_addr;
 	/*
-	 * Unique session identifier for the session that the attach
-	 * request want to attach to. This value originates from the
-	 * shared session memory that the attach request opened by
-	 * way of the 'attach name' that was passed in to the core
-	 * attach API.
-	 * The fw_session_id of the attach session includes PCIe bus
-	 * info to distinguish the PF and session info to identify
-	 * the associated TruFlow session.
+	 * Unique session identifier for the session that the
+	 * unregister request wants to close a session client on.
 	 */
-	uint32_t	attach_fw_session_id;
-	/* unused. */
-	uint32_t	unused0;
-	/* Name of the session it self. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the session that the
+	 * unregister request wants to close.
+	 */
+	uint32_t	fw_session_client_id;
 } __rte_packed;
 
-/* hwrm_tf_session_attach_output (size:128b/16B) */
-struct hwrm_tf_session_attach_output {
+/* hwrm_tf_session_unregister_output (size:128b/16B) */
+struct hwrm_tf_session_unregister_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33501,16 +35378,8 @@ struct hwrm_tf_session_attach_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session. This fw_session_id is unique to the attach
-	 * request.
-	 */
-	uint32_t	fw_session_id;
 	/* unused. */
-	uint8_t	unused0[3];
+	uint8_t	unused0[7];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33746,15 +35615,17 @@ struct hwrm_tf_session_resc_qcaps_input {
 	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided qcaps_addr
+	 * Defines the size of the provided qcaps_addr array
 	 * buffer. The size should be set to the Resource Manager
-	 * provided max qcaps value that is device specific. This is
-	 * the max size possible.
+	 * provided max number of qcaps entries which is device
+	 * specific. Resource Manager gets the max size from HCAPI
+	 * RM.
 	 */
-	uint16_t	size;
+	uint16_t	qcaps_size;
 	/*
-	 * This is the DMA address for the qcaps output data
-	 * array. Array is of tf_rm_cap type and is device specific.
+	 * This is the DMA address for the qcaps output data array
+	 * buffer. Array is of tf_rm_resc_req_entry type and is
+	 * device specific.
 	 */
 	uint64_t	qcaps_addr;
 } __rte_packed;
@@ -33772,29 +35643,28 @@ struct hwrm_tf_session_resc_qcaps_output {
 	/* Control flags. */
 	uint32_t	flags;
 	/* Session reservation strategy. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_SFT \
 		0
 	/* Static partitioning. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_STATIC \
 		UINT32_C(0x0)
 	/* Strategy 1. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_1 \
 		UINT32_C(0x1)
 	/* Strategy 2. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_2 \
 		UINT32_C(0x2)
 	/* Strategy 3. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3 \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_LAST \
-		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3
 	/*
-	 * Size of the returned tf_rm_cap data array. The value
-	 * cannot exceed the size defined by the input msg. The data
-	 * array is returned using the qcaps_addr specified DMA
-	 * address also provided by the input msg.
+	 * Size of the returned qcaps_addr data array buffer. The
+	 * value cannot exceed the size defined by the input msg,
+	 * qcaps_size.
 	 */
 	uint16_t	size;
 	/* unused. */
@@ -33817,7 +35687,7 @@ struct hwrm_tf_session_resc_qcaps_output {
  ******************************/
 
 
-/* hwrm_tf_session_resc_alloc_input (size:256b/32B) */
+/* hwrm_tf_session_resc_alloc_input (size:320b/40B) */
 struct hwrm_tf_session_resc_alloc_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -33860,16 +35730,25 @@ struct hwrm_tf_session_resc_alloc_input {
 	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided num_addr
-	 * buffer.
+	 * Defines the array size of the provided req_addr and
+	 * resc_addr array buffers. Should be set to the number of
+	 * request entries.
 	 */
-	uint16_t	size;
+	uint16_t	req_size;
+	/*
+	 * This is the DMA address for the request input data array
+	 * buffer. Array is of tf_rm_resc_req_entry type. Size of the
+	 * array buffer is provided by the 'req_size' field in this
+	 * message.
+	 */
+	uint64_t	req_addr;
 	/*
-	 * This is the DMA address for the num input data array
-	 * buffer. Array is of tf_rm_num type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * This is the DMA address for the resc output data array
+	 * buffer. Array is of tf_rm_resc_entry type. Size of the array
+	 * buffer is provided by the 'req_size' field in this
+	 * message.
 	 */
-	uint64_t	num_addr;
+	uint64_t	resc_addr;
 } __rte_packed;
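A hedged sketch (not part of this patch) of how a driver might fill the
reworked resc_alloc request; the helper name is hypothetical, and the IOVA
values are assumed to come from the driver's existing DMA buffer management.
The tf_rm_resc_req_entry and tf_rm_resc_entry layouts are defined later in
this file.

#include <rte_byteorder.h>

/* Illustrative only: req_iova points at a DMA buffer holding 'num_entries'
 * tf_rm_resc_req_entry requests; resc_iova points at a buffer with the same
 * number of tf_rm_resc_entry slots that the firmware fills with the reserved
 * ranges.
 */
static void
tf_fill_resc_alloc_req(struct hwrm_tf_session_resc_alloc_input *req,
		       uint16_t num_entries, uint64_t req_iova,
		       uint64_t resc_iova)
{
	req->req_size = rte_cpu_to_le_16(num_entries);
	req->req_addr = rte_cpu_to_le_64(req_iova);
	req->resc_addr = rte_cpu_to_le_64(resc_iova);
}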
 
 /* hwrm_tf_session_resc_alloc_output (size:128b/16B) */
@@ -33882,8 +35761,15 @@ struct hwrm_tf_session_resc_alloc_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
+	/*
+	 * Size of the returned tf_rm_resc_entry data array. The value
+	 * cannot exceed the req_size defined by the input msg. The data
+	 * array is returned using the resc_addr specified DMA
+	 * address also provided by the input msg.
+	 */
+	uint16_t	size;
 	/* unused. */
-	uint8_t	unused0[7];
+	uint8_t	unused0[5];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33946,11 +35832,12 @@ struct hwrm_tf_session_resc_free_input {
 	 * Defines the size, in bytes, of the provided free_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	free_size;
 	/*
 	 * This is the DMA address for the free input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size field of this message.
+	 * buffer.  Array is of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'free_size' field of this
+	 * message.
 	 */
 	uint64_t	free_addr;
 } __rte_packed;
@@ -34029,11 +35916,12 @@ struct hwrm_tf_session_resc_flush_input {
 	 * Defines the size, in bytes, of the provided flush_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	flush_size;
 	/*
 	 * This is the DMA address for the flush input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * buffer.  Array of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'flush_size' field in this
+	 * message.
 	 */
 	uint64_t	flush_addr;
 } __rte_packed;
@@ -34062,12 +35950,9 @@ struct hwrm_tf_session_resc_flush_output {
 } __rte_packed;
 
 /* TruFlow RM capability of a resource. */
-/* tf_rm_cap (size:64b/8B) */
-struct tf_rm_cap {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_req_entry (size:64b/8B) */
+struct tf_rm_resc_req_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Minimum value. */
 	uint16_t	min;
@@ -34075,25 +35960,10 @@ struct tf_rm_cap {
 	uint16_t	max;
 } __rte_packed;
 
-/* TruFlow RM number of a resource. */
-/* tf_rm_num (size:64b/8B) */
-struct tf_rm_num {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
-	uint32_t	type;
-	/* Number of resources. */
-	uint32_t	num;
-} __rte_packed;
-
 /* TruFlow RM reservation information. */
-/* tf_rm_res (size:64b/8B) */
-struct tf_rm_res {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_entry (size:64b/8B) */
+struct tf_rm_resc_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Start offset. */
 	uint16_t	start;
@@ -34925,6 +36795,162 @@ struct hwrm_tf_ext_em_qcfg_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/*********************
+ * hwrm_tf_em_insert *
+ *********************/
+
+
+/* hwrm_tf_em_insert_input (size:832b/104B) */
+struct hwrm_tf_em_insert_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware Session Id. */
+	uint32_t	fw_session_id;
+	/* Control Flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX
+	/* Reported match strength. */
+	uint16_t	strength;
+	/* Index to action. */
+	uint32_t	action_ptr;
+	/* Index of EM record. */
+	uint32_t	em_record_idx;
+	/* EM Key value. */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
+/* hwrm_tf_em_insert_output (size:128b/16B) */
+struct hwrm_tf_em_insert_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* EM record pointer index. */
+	uint16_t	rptr_index;
+	/* EM record offset 0~3. */
+	uint8_t	rptr_entry;
+	/* Number of word entries consumed by the key. */
+	uint8_t	num_of_entries;
+	/* unused. */
+	uint32_t	unused0;
+} __rte_packed;
+
+/*********************
+ * hwrm_tf_em_delete *
+ *********************/
+
+
+/* hwrm_tf_em_delete_input (size:832b/104B) */
+struct hwrm_tf_em_delete_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Session Id. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX
+	/* Unused0 */
+	uint16_t	unused0;
+	/* EM internal flow handle. */
+	uint64_t	flow_handle;
+	/* EM Key value */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused1[3];
+} __rte_packed;
+
+/* hwrm_tf_em_delete_output (size:128b/16B) */
+struct hwrm_tf_em_delete_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Original stack allocation index. */
+	uint16_t	em_index;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
 /********************
  * hwrm_tf_tcam_set *
  ********************/
@@ -35582,10 +37608,10 @@ struct ctx_hw_stats {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35598,10 +37624,10 @@ struct ctx_hw_stats {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35618,7 +37644,11 @@ struct ctx_hw_stats {
 	uint64_t	tpa_aborts;
 } __rte_packed;
 
-/* Periodic statistics context DMA to host. */
+/*
+ * Extended periodic statistics context DMA to host. On cards that
+ * support TPA v2, additional TPA related stats exist and can be retrieved
+ * by DMA of ctx_hw_stats_ext, rather than legacy ctx_hw_stats structure.
+ */
 /* ctx_hw_stats_ext (size:1344b/168B) */
 struct ctx_hw_stats_ext {
 	/* Number of received unicast packets */
@@ -35627,10 +37657,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35643,10 +37673,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35912,7 +37942,14 @@ struct hwrm_stat_ctx_query_input {
 	uint64_t	resp_addr;
 	/* ID of the statistics context that is being queried. */
 	uint32_t	stat_ctx_id;
-	uint8_t	unused_0[4];
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_STAT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK     UINT32_C(0x1)
+	uint8_t	unused_0[3];
 } __rte_packed;
 
 /* hwrm_stat_ctx_query_output (size:1408b/176B) */
@@ -35949,7 +37986,7 @@ struct hwrm_stat_ctx_query_output {
 	uint64_t	rx_bcast_pkts;
 	/* Number of received packets with error */
 	uint64_t	rx_err_pkts;
-	/* Number of dropped packets on received path */
+	/* Number of dropped packets on receive path */
 	uint64_t	rx_drop_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
@@ -35976,6 +38013,117 @@ struct hwrm_stat_ctx_query_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/***************************
+ * hwrm_stat_ext_ctx_query *
+ ***************************/
+
+
+/* hwrm_stat_ext_ctx_query_input (size:192b/24B) */
+struct hwrm_stat_ext_ctx_query_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* ID of the extended statistics context that is being queried. */
+	uint32_t	stat_ctx_id;
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_STAT_EXT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK \
+		UINT32_C(0x1)
+	uint8_t	unused_0[3];
+} __rte_packed;
+
+/* hwrm_stat_ext_ctx_query_output (size:1472b/184B) */
+struct hwrm_stat_ext_ctx_query_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on receive path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /***************************
  * hwrm_stat_ctx_eng_query *
  ***************************/
@@ -37565,6 +39713,13 @@ struct hwrm_nvm_install_update_input {
 	 */
 	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_ALLOWED_TO_DEFRAG \
 		UINT32_C(0x4)
+	/*
+	 * If set to 1, FW will verify the package in the "UPDATE" NVM item
+	 * without installing it. This flag is for FW internal use only.
+	 * Users should not set this flag. The request will otherwise fail.
+	 */
+	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_VERIFY_ONLY \
+		UINT32_C(0x8)
 	uint8_t	unused_0[2];
 } __rte_packed;
 
@@ -38115,6 +40270,72 @@ struct hwrm_nvm_validate_option_cmd_err {
 	uint8_t	unused_0[7];
 } __rte_packed;
 
+/****************
+ * hwrm_oem_cmd *
+ ****************/
+
+
+/* hwrm_oem_cmd_input (size:1024b/128B) */
+struct hwrm_oem_cmd_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific command data. */
+	uint32_t	oem_data[26];
+} __rte_packed;
+
+/* hwrm_oem_cmd_output (size:768b/96B) */
+struct hwrm_oem_cmd_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific response data. */
+	uint32_t	oem_data[18];
+	uint8_t	unused_1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /*****************
  * hwrm_fw_reset *
  ******************/
@@ -38338,6 +40559,55 @@ struct hwrm_port_ts_query_output {
 	uint8_t		valid;
 } __rte_packed;
 
+/*
+ * This structure is fixed at the beginning of the ChiMP SRAM (GRC
+ * offset: 0x31001F0). Host software is expected to read from this
+ * location for a defined signature. If it exists, the software can
+ * assume the presence of this structure and the validity of the
+ * FW_STATUS location in the next field.
+ */
+/* hcomm_status (size:64b/8B) */
+struct hcomm_status {
+	uint32_t	sig_ver;
+	/*
+	 * This field defines the version of the structure. The latest
+	 * version value is 1.
+	 */
+	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
+	#define HCOMM_STATUS_VER_SFT		0
+	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
+	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
+	/*
+	 * This field is to store the signature value to indicate the
+	 * presence of the structure.
+	 */
+	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
+	#define HCOMM_STATUS_SIGNATURE_SFT	8
+	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
+	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
+	uint32_t	fw_status_loc;
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
+	/* PCIE configuration space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
+	/* GRC space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
+	/* BAR0 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
+	/* BAR1 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
+		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
+	/*
+	 * This offset where the fw_status register is located. The value
+	 * is generally 4-byte aligned.
+	 */
+	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
+	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
+} __rte_packed;
+/* This is the GRC offset where the hcomm_status struct resides. */
+#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
+
 /**************************
  * hwrm_cfa_counter_qcaps *
  **************************/
@@ -38622,53 +40892,4 @@ struct hwrm_cfa_counter_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/*
- * This structure is fixed at the beginning of the ChiMP SRAM (GRC
- * offset: 0x31001F0). Host software is expected to read from this
- * location for a defined signature. If it exists, the software can
- * assume the presence of this structure and the validity of the
- * FW_STATUS location in the next field.
- */
-/* hcomm_status (size:64b/8B) */
-struct hcomm_status {
-	uint32_t	sig_ver;
-	/*
-	 * This field defines the version of the structure. The latest
-	 * version value is 1.
-	 */
-	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
-	#define HCOMM_STATUS_VER_SFT		0
-	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
-	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
-	/*
-	 * This field is to store the signature value to indicate the
-	 * presence of the structure.
-	 */
-	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
-	#define HCOMM_STATUS_SIGNATURE_SFT	8
-	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
-	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
-	uint32_t	fw_status_loc;
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
-	/* PCIE configuration space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
-	/* GRC space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
-	/* BAR0 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
-	/* BAR1 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
-		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
-	/*
-	 * This offset where the fw_status register is located. The value
-	 * is generally 4-byte aligned.
-	 */
-	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
-	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
-} __rte_packed;
-/* This is the GRC offset where the hcomm_status struct resides. */
-#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
-
 #endif /* _HSI_STRUCT_DEF_DPDK_H_ */
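The reworked hwrm_tf_session_resc_alloc_input above replaces the single
size/num_addr pair with paired request and reservation DMA arrays. As a
minimal sketch of how a caller is expected to wire these up (dma_alloc()
is a hypothetical stand-in for the driver's real DMA-coherent allocator,
and the entry count is arbitrary):

    struct tf_rm_resc_req_entry *req;   /* caller-filled requests        */
    struct tf_rm_resc_entry *resv;      /* firmware-filled reservations  */
    struct hwrm_tf_session_resc_alloc_input in = { 0 };
    uint64_t req_pa, resv_pa;
    uint16_t n = 4;                     /* number of resource types      */

    req  = dma_alloc(n * sizeof(*req),  &req_pa);   /* hypothetical helper */
    resv = dma_alloc(n * sizeof(*resv), &resv_pa);

    in.req_size  = tfp_cpu_to_le_16(n);        /* entries, not bytes           */
    in.req_addr  = tfp_cpu_to_le_64(req_pa);   /* firmware reads requests here */
    in.resc_addr = tfp_cpu_to_le_64(resv_pa);  /* firmware writes reservations */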
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 341909573..439950e02 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -86,6 +86,7 @@ struct tf_tbl_type_get_output;
 struct tf_em_internal_insert_input;
 struct tf_em_internal_insert_output;
 struct tf_em_internal_delete_input;
+struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -949,6 +950,8 @@ typedef struct tf_em_internal_insert_output {
 	uint16_t			 rptr_index;
 	/* EM record offset 0~3 */
 	uint8_t			  rptr_entry;
+	/* Number of word entries consumed by the key */
+	uint8_t			  num_of_entries;
 } tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
 
 /* Input params for EM INTERNAL rule delete */
@@ -969,4 +972,10 @@ typedef struct tf_em_internal_delete_input {
 	uint16_t			 em_key_bitlen;
 } tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
 
+/* Output params for EM INTERNAL rule delete */
+typedef struct tf_em_internal_delete_output {
+	/* Original stack allocation index */
+	uint16_t			 em_index;
+} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
index e5abcc2f2..b1fd2cd43 100644
--- a/drivers/net/bnxt/tf_core/lookup3.h
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -152,7 +152,6 @@ static inline uint32_t hashword(const uint32_t *k,
 		final(a, b, c);
 		/* Falls through. */
 	case 0:	    /* case 0: nothing left to add */
-		/* FALLTHROUGH */
 		break;
 	}
 	/*------------------------------------------------- report the result */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
index 9cfbd244f..954806377 100644
--- a/drivers/net/bnxt/tf_core/stack.c
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -27,6 +27,14 @@ stack_init(int num_entries, uint32_t *items, struct stack *st)
 	return 0;
 }
 
+/*
+ * Return the address of the items
+ */
+uint32_t *stack_items(struct stack *st)
+{
+	return st->items;
+}
+
 /* Return the size of the stack
  */
 int32_t
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
index ebd055592..6732e0313 100644
--- a/drivers/net/bnxt/tf_core/stack.h
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -36,6 +36,16 @@ int stack_init(int num_entries,
 	       uint32_t *items,
 	       struct stack *st);
 
+/** Return the address of the stack contents
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    pointer to the stack contents
+ */
+uint32_t *stack_items(struct stack *st);
+
 /** Return the size of the stack
  *
  *  [in] st
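stack_items() simply hands back the caller-supplied backing array, which is
what allows the code that allocated the pool memory to free it later. A
minimal sketch, assuming the tfp allocator used elsewhere in this series:

    struct stack st;
    struct tfp_calloc_parms cparms = { 0 };

    cparms.nitems = 32;
    cparms.size = sizeof(uint32_t);
    cparms.alignment = 0;
    if (tfp_calloc(&cparms) == 0) {
            stack_init(32, cparms.mem_va, &st);
            /* ... push/pop indexes ... */
            tfp_free(stack_items(&st));  /* same pointer as cparms.mem_va */
    }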
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index cf9f36adb..1f6c33ab5 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -45,6 +45,100 @@ static void tf_seeds_init(struct tf_session *session)
 	}
 }
 
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   number of entries to write
+ *
+ * Return:
+ *  0       - Success, pool created
+ *  -ENOMEM - Failure, pool memory could not be allocated
+ *  -EINVAL - Failure, stack could not be initialized or filled
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(-ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ *
+ * Return:
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	tfp_free(ptr);
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -54,6 +148,7 @@ tf_open_session(struct tf                    *tfp,
 	struct tfp_calloc_parms alloc_parms;
 	unsigned int domain, bus, slot, device;
 	uint8_t fw_session_id;
+	int dir;
 
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
@@ -110,7 +205,7 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup;
 	}
 
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
+	tfp->session = alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -175,6 +270,16 @@ tf_open_session(struct tf                    *tfp,
 	/* Setup hash seeds */
 	tf_seeds_init(session);
 
+	/* Initialize EM pool */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EM Pool initialization failed\n");
+			goto cleanup_close;
+		}
+	}
+
 	session->ref_count++;
 
 	/* Return session ID */
@@ -239,6 +344,7 @@ tf_close_session(struct tf *tfp)
 	int rc_close = 0;
 	struct tf_session *tfs;
 	union tf_session_id session_id;
+	int dir;
 
 	if (tfp == NULL || tfp->session == NULL)
 		return -EINVAL;
@@ -268,6 +374,10 @@ tf_close_session(struct tf *tfp)
 
 	/* Final cleanup as we're last user of the session */
 	if (tfs->ref_count == 0) {
+		/* Free EM pool */
+		for (dir = 0; dir < TF_DIR_MAX; dir++)
+			tf_free_em_pool(tfs, dir);
+
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
 		tfp->session = NULL;
@@ -301,16 +411,25 @@ int tf_insert_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
+					 (tfp->session->core_data),
+					 parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
 	/* Process the EM entry per Table Scope type */
-	return tf_insert_eem_entry((struct tf_session *)tfp->session->core_data,
-				   tbl_scope_cb,
-				   parms);
+	if (parms->mem == TF_MEM_EXTERNAL) {
+		/* External EEM */
+		return tf_insert_eem_entry((struct tf_session *)
+					   (tfp->session->core_data),
+					   tbl_scope_cb,
+					   parms);
+	} else if (parms->mem == TF_MEM_INTERNAL) {
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp,	parms);
+	}
+
+	return -EINVAL;
 }
 
 /** Delete EM hash entry API
@@ -327,13 +446,16 @@ int tf_delete_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
+					 (tfp->session->core_data),
+					 parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
-	return tf_delete_eem_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tfp, parms);
+	else
+		return tf_delete_em_internal_entry(tfp, parms);
 }
 
 /** allocate identifier resource
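With the dispatch above, the same tf_insert_em_entry()/tf_delete_em_entry()
entry points now cover both external EEM and internal EM records, selected
through parms->mem. An abbreviated, illustrative call (key/record setup
omitted, values are placeholders):

    struct tf_insert_em_entry_parms iparms = { 0 };

    iparms.dir            = TF_DIR_RX;
    iparms.mem            = TF_MEM_INTERNAL;  /* routes to tf_insert_em_internal_entry() */
    iparms.tbl_scope_id   = tbl_scope_id;     /* still needed to locate the scope cb */
    iparms.key            = key;
    iparms.key_sz_in_bits = key_bits;
    iparms.em_record      = record;

    rc = tf_insert_em_entry(tfp, &iparms);
    /* on success, iparms.flow_id and iparms.flow_handle identify the entry */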
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 1eedd80e7..81ff7602f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -44,44 +44,7 @@ enum tf_mem {
 };
 
 /**
- * The size of the external action record (Wh+/Brd2)
- *
- * Currently set to 512.
- *
- * AR (16B) + encap (256B) + stats_ptrs (8) + resvd (8)
- * + stats (16) = 304 aligned on a 16B boundary
- *
- * Theoretically, the size should be smaller. ~304B
- */
-#define TF_ACTION_RECORD_SZ 512
-
-/**
- * External pool size
- *
- * Defines a single pool of external action records of
- * fixed size.  Currently, this is an index.
- */
-#define TF_EXT_POOL_ENTRY_SZ_BYTES 1
-
-/**
- *  External pool entry count
- *
- *  Defines the number of entries in the external action pool
- */
-#define TF_EXT_POOL_ENTRY_CNT (1 * 1024)
-
-/**
- * Number of external pools
- */
-#define TF_EXT_POOL_CNT_MAX 1
-
-/**
- * External pool Id
- */
-#define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
-#define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
-
-/** EEM record AR helper
+ * EEM record AR helper
  *
  * Helper to handle the Action Record Pointer in the EEM Record Entry.
  *
@@ -109,7 +72,8 @@ enum tf_mem {
  */
 
 
-/** Session Version defines
+/**
+ * Session Version defines
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -119,7 +83,8 @@ enum tf_mem {
 #define TF_SESSION_VER_MINOR  0   /**< Minor Version */
 #define TF_SESSION_VER_UPDATE 0   /**< Update Version */
 
-/** Session Name
+/**
+ * Session Name
  *
  * Name of the TruFlow control channel interface.  Expects
  * format to be RTE Name specific, i.e. rte_eth_dev_get_name_by_port()
@@ -128,7 +93,8 @@ enum tf_mem {
 
 #define TF_FW_SESSION_ID_INVALID  0xFF  /**< Invalid FW Session ID define */
 
-/** Session Identifier
+/**
+ * Session Identifier
  *
  * Unique session identifier which includes PCIe bus info to
  * distinguish the PF and session info to identify the associated
@@ -146,7 +112,8 @@ union tf_session_id {
 	} internal;
 };
 
-/** Session Version
+/**
+ * Session Version
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -160,8 +127,8 @@ struct tf_session_version {
 	uint8_t update;
 };
 
-/** Session supported device types
- *
+/**
+ * Session supported device types
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
@@ -171,6 +138,147 @@ enum tf_device_type {
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
+/** Identifier resource types
+ */
+enum tf_identifier_type {
+	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	 *  and can be used in WC TCAM or EM keys to virtualize further
+	 *  lookups.
+	 */
+	TF_IDENT_TYPE_L2_CTXT,
+	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	 *  to enable virtualization of the profile TCAM.
+	 */
+	TF_IDENT_TYPE_PROF_FUNC,
+	/** The WC profile ID is included in the WC lookup key
+	 *  to enable virtualization of the WC TCAM hardware.
+	 */
+	TF_IDENT_TYPE_WC_PROF,
+	/** The EM profile ID is included in the EM lookup key
+	 *  to enable virtualization of the EM hardware. (not required for SR2
+	 *  as it has table scope)
+	 */
+	TF_IDENT_TYPE_EM_PROF,
+	/** The L2 func is included in the ILT result and from recycling to
+	 *  enable virtualization of further lookups.
+	 */
+	TF_IDENT_TYPE_L2_FUNC,
+	TF_IDENT_TYPE_MAX
+};
+
+/**
+ * Enumeration of TruFlow table types. A table type is used to identify a
+ * resource object.
+ *
+ * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
+ * the only table type that is connected with a table scope.
+ */
+enum tf_tbl_type {
+	/* Internal */
+
+	/** Wh+/SR Action Record */
+	TF_TBL_TYPE_FULL_ACT_RECORD,
+	/** Wh+/SR/Th Multicast Groups */
+	TF_TBL_TYPE_MCAST_GROUPS,
+	/** Wh+/SR Action Encap 8 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_8B,
+	/** Wh+/SR Action Encap 16 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_16B,
+	/** Action Encap 32 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_32B,
+	/** Wh+/SR Action Encap 64 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_64B,
+	/** Action Source Properties SMAC */
+	TF_TBL_TYPE_ACT_SP_SMAC,
+	/** Wh+/SR Action Source Properties SMAC IPv4 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	/** Action Source Properties SMAC IPv6 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
+	/** Wh+/SR Action Statistics 64 Bits */
+	TF_TBL_TYPE_ACT_STATS_64,
+	/** Wh+/SR Action Modify L4 Src Port */
+	TF_TBL_TYPE_ACT_MODIFY_SPORT,
+	/** Wh+/SR Action Modify L4 Dest Port */
+	TF_TBL_TYPE_ACT_MODIFY_DPORT,
+	/** Wh+/SR Action Modify IPv4 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
+	/** Wh+/SR Action Modify IPv4 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
+	/** Action Modify IPv6 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
+	/** Action Modify IPv6 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
+	/** Meter Profiles */
+	TF_TBL_TYPE_METER_PROF,
+	/** Meter Instance */
+	TF_TBL_TYPE_METER_INST,
+	/** Mirror Config */
+	TF_TBL_TYPE_MIRROR_CONFIG,
+	/** UPAR */
+	TF_TBL_TYPE_UPAR,
+	/** SR2 Epoch 0 table */
+	TF_TBL_TYPE_EPOCH0,
+	/** SR2 Epoch 1 table  */
+	TF_TBL_TYPE_EPOCH1,
+	/** SR2 Metadata  */
+	TF_TBL_TYPE_METADATA,
+	/** SR2 CT State  */
+	TF_TBL_TYPE_CT_STATE,
+	/** SR2 Range Profile  */
+	TF_TBL_TYPE_RANGE_PROF,
+	/** SR2 Range Entry  */
+	TF_TBL_TYPE_RANGE_ENTRY,
+	/** SR2 LAG Entry  */
+	TF_TBL_TYPE_LAG,
+	/** SR2 VNIC/SVIF Table */
+	TF_TBL_TYPE_VNIC_SVIF,
+	/** Th/SR2 EM Flexible Key builder */
+	TF_TBL_TYPE_EM_FKB,
+	/** Th/SR2 WC Flexible Key builder */
+	TF_TBL_TYPE_WC_FKB,
+
+	/* External */
+
+	/** External table type - initially 1 poolsize entries.
+	 * All External table types are associated with a table
+	 * scope. Internal types are not.
+	 */
+	TF_TBL_TYPE_EXT,
+	TF_TBL_TYPE_MAX
+};
+
+/**
+ * TCAM table type
+ */
+enum tf_tcam_tbl_type {
+	/** L2 Context TCAM */
+	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	/** Profile TCAM */
+	TF_TCAM_TBL_TYPE_PROF_TCAM,
+	/** Wildcard TCAM */
+	TF_TCAM_TBL_TYPE_WC_TCAM,
+	/** Source Properties TCAM */
+	TF_TCAM_TBL_TYPE_SP_TCAM,
+	/** Connection Tracking Rule TCAM */
+	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
+	/** Virtual Edge Bridge TCAM */
+	TF_TCAM_TBL_TYPE_VEB_TCAM,
+	TF_TCAM_TBL_TYPE_MAX
+};
+
+/**
+ * EM Resources
+ * These defines are provisioned during
+ * tf_open_session()
+ */
+enum tf_em_tbl_type {
+	/** The number of internal EM records for the session */
+	TF_EM_TBL_TYPE_EM_RECORD,
+	/** The number of table scopes requested */
+	TF_EM_TBL_TYPE_TBL_SCOPE,
+	TF_EM_TBL_TYPE_MAX
+};
+
 /** TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
@@ -309,6 +417,30 @@ struct tf_open_session_parms {
 	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
 	 */
 	enum tf_device_type device_type;
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
 };
 
 /**
@@ -417,31 +549,6 @@ int tf_close_session(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
-	 *  and can be used in WC TCAM or EM keys to virtualize further
-	 *  lookups.
-	 */
-	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
-	 *  to enable virtualization of the profile TCAM.
-	 */
-	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
-	 *  to enable virtualization of the WC TCAM hardware.
-	 */
-	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
-	 *  to enable virtualization of the EM hardware. (not required for Brd4
-	 *  as it has table scope)
-	 */
-	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
-	 *  enable virtualization of further lookups.
-	 */
-	TF_IDENT_TYPE_L2_FUNC
-};
-
 /** tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
@@ -631,19 +738,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
 
-/**
- * TCAM table type
- */
-enum tf_tcam_tbl_type {
-	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	TF_TCAM_TBL_TYPE_PROF_TCAM,
-	TF_TCAM_TBL_TYPE_WC_TCAM,
-	TF_TCAM_TBL_TYPE_SP_TCAM,
-	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
-	TF_TCAM_TBL_TYPE_VEB_TCAM,
-	TF_TCAM_TBL_TYPE_MAX
-
-};
 
 /**
  * @page tcam TCAM Access
@@ -813,7 +907,8 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** get TCAM entry
+/*
+ * get TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -824,7 +919,8 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/** tf_free_tcam_entry parameter definition
+/*
+ * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
 	/**
@@ -845,8 +941,7 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free TCAM entry
- *
+/*
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -873,84 +968,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  */
 
 /**
- * Enumeration of TruFlow table types. A table type is used to identify a
- * resource object.
- *
- * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
- * the only table type that is connected with a table scope.
- */
-enum tf_tbl_type {
-	/** Wh+/Brd2 Action Record */
-	TF_TBL_TYPE_FULL_ACT_RECORD,
-	/** Multicast Groups */
-	TF_TBL_TYPE_MCAST_GROUPS,
-	/** Action Encap 8 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_8B,
-	/** Action Encap 16 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_16B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_32B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_64B,
-	/** Action Source Properties SMAC */
-	TF_TBL_TYPE_ACT_SP_SMAC,
-	/** Action Source Properties SMAC IPv4 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
-	/** Action Source Properties SMAC IPv6 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
-	/** Action Statistics 64 Bits */
-	TF_TBL_TYPE_ACT_STATS_64,
-	/** Action Modify L4 Src Port */
-	TF_TBL_TYPE_ACT_MODIFY_SPORT,
-	/** Action Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_DPORT,
-	/** Action Modify IPv4 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
-	/** Action _Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
-
-	/* HW */
-
-	/** Meter Profiles */
-	TF_TBL_TYPE_METER_PROF,
-	/** Meter Instance */
-	TF_TBL_TYPE_METER_INST,
-	/** Mirror Config */
-	TF_TBL_TYPE_MIRROR_CONFIG,
-	/** UPAR */
-	TF_TBL_TYPE_UPAR,
-	/** Brd4 Epoch 0 table */
-	TF_TBL_TYPE_EPOCH0,
-	/** Brd4 Epoch 1 table  */
-	TF_TBL_TYPE_EPOCH1,
-	/** Brd4 Metadata  */
-	TF_TBL_TYPE_METADATA,
-	/** Brd4 CT State  */
-	TF_TBL_TYPE_CT_STATE,
-	/** Brd4 Range Profile  */
-	TF_TBL_TYPE_RANGE_PROF,
-	/** Brd4 Range Entry  */
-	TF_TBL_TYPE_RANGE_ENTRY,
-	/** Brd4 LAG Entry  */
-	TF_TBL_TYPE_LAG,
-	/** Brd4 only VNIC/SVIF Table */
-	TF_TBL_TYPE_VNIC_SVIF,
-
-	/* External */
-
-	/** External table type - initially 1 poolsize entries.
-	 * All External table types are associated with a table
-	 * scope. Internal types are not.
-	 */
-	TF_TBL_TYPE_EXT,
-	TF_TBL_TYPE_MAX
-};
-
-/** tf_alloc_tbl_entry parameter definition
+ * tf_alloc_tbl_entry parameter definition
  */
 struct tf_alloc_tbl_entry_parms {
 	/**
@@ -993,7 +1011,8 @@ struct tf_alloc_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** allocate index table entries
+/**
+ * allocate index table entries
  *
  * Internal types:
  *
@@ -1023,7 +1042,8 @@ struct tf_alloc_tbl_entry_parms {
 int tf_alloc_tbl_entry(struct tf *tfp,
 		       struct tf_alloc_tbl_entry_parms *parms);
 
-/** tf_free_tbl_entry parameter definition
+/**
+ * tf_free_tbl_entry parameter definition
  */
 struct tf_free_tbl_entry_parms {
 	/**
@@ -1049,7 +1069,8 @@ struct tf_free_tbl_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free index table entry
+/**
+ * free index table entry
  *
  * Used to free a previously allocated table entry.
  *
@@ -1075,7 +1096,8 @@ struct tf_free_tbl_entry_parms {
 int tf_free_tbl_entry(struct tf *tfp,
 		      struct tf_free_tbl_entry_parms *parms);
 
-/** tf_set_tbl_entry parameter definition
+/**
+ * tf_set_tbl_entry parameter definition
  */
 struct tf_set_tbl_entry_parms {
 	/**
@@ -1104,7 +1126,8 @@ struct tf_set_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** set index table entry
+/**
+ * set index table entry
  *
  * Used to insert an application programmed index table entry into a
  * previous allocated table location.  A shadow copy of the table
@@ -1115,7 +1138,8 @@ struct tf_set_tbl_entry_parms {
 int tf_set_tbl_entry(struct tf *tfp,
 		     struct tf_set_tbl_entry_parms *parms);
 
-/** tf_get_tbl_entry parameter definition
+/**
+ * tf_get_tbl_entry parameter definition
  */
 struct tf_get_tbl_entry_parms {
 	/**
@@ -1140,7 +1164,8 @@ struct tf_get_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** get index table entry
+/**
+ * get index table entry
  *
  * Used to retrieve a previous set index table entry.
  *
@@ -1163,7 +1188,8 @@ int tf_get_tbl_entry(struct tf *tfp,
  * @ref tf_search_em_entry
  *
  */
-/** tf_insert_em_entry parameter definition
+/**
+ * tf_insert_em_entry parameter definition
  */
 struct tf_insert_em_entry_parms {
 	/**
@@ -1239,6 +1265,10 @@ struct tf_delete_em_entry_parms {
 	 * 2 element array with 2 ids. (Brd4 only)
 	 */
 	uint16_t *epochs;
+	/**
+	 * [out] The index of the entry
+	 */
+	uint16_t index;
 	/**
 	 * [in] structure containing flow delete handle information
 	 */
@@ -1291,7 +1321,8 @@ struct tf_search_em_entry_parms {
 	uint64_t flow_handle;
 };
 
-/** insert em hash entry in internal table memory
+/**
+ * insert em hash entry in internal table memory
  *
  * Internal:
  *
@@ -1328,7 +1359,8 @@ struct tf_search_em_entry_parms {
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms);
 
-/** delete em hash entry table memory
+/**
+ * delete em hash entry table memory
  *
  * Internal:
  *
@@ -1353,7 +1385,8 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms);
 
-/** search em hash entry table memory
+/**
+ * search em hash entry table memory
  *
  * Internal:
 
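The per-type count arrays added to tf_open_session_parms are intended to let
a caller request its session resources up front. A minimal sketch with
arbitrary example counts (other fields omitted; note that the identifer_cnt
field name is used exactly as declared above):

    struct tf_open_session_parms oparms = { 0 };

    oparms.device_type = TF_DEVICE_TYPE_WH;
    oparms.identifer_cnt[TF_IDENT_TYPE_L2_CTXT]   = 16;
    oparms.tbl_cnt[TF_TBL_TYPE_FULL_ACT_RECORD]   = 64;
    oparms.tcam_tbl_cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 32;
    oparms.em_tbl_cnt[TF_EM_TBL_TYPE_EM_RECORD]   = 1024;

    rc = tf_open_session(tfp, &oparms);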
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index bd8e2ba8a..fd1797e39 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -287,7 +287,7 @@ static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
 }
 
 static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t	       *in_key,
+				    uint8_t *in_key,
 				    struct tf_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
@@ -308,7 +308,7 @@ static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
  * EEXIST  - Key does exist in table at "index" in table "table".
  * TF_ERR     - Something went horribly wrong.
  */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 					  enum tf_dir dir,
 					  struct tf_eem_64b_entry *entry,
 					  uint32_t key0_hash,
@@ -368,8 +368,8 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session	   *session,
-			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
@@ -457,6 +457,96 @@ int tf_insert_eem_entry(struct tf_session	   *session,
 	return -EINVAL;
 }
 
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success
+ */
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms)
+{
+	int       rc;
+	uint32_t  gfid;
+	uint16_t  rptr_index = 0;
+	uint8_t   rptr_entry = 0;
+	uint8_t   num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG
+		   (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc != 0)
+		return -1;
+
+	PMD_DRV_LOG
+		   (ERR,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0
+ * -EINVAL
+ */
+int tf_delete_em_internal_entry(struct tf *tfp,
+				struct tf_delete_em_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
+
+
 /** delete EEM hash entry API
  *
  * returns:
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 8a3584fbd..c1805df73 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -12,6 +12,20 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+/*
+ * Used to build GFID:
+ *
+ *   15           2  0
+ *  +--------------+--+
+ *  |   Index      |E |
+ *  +--------------+--+
+ *
+ * E = Entry (bucket index)
+ */
+#define TF_EM_INTERNAL_INDEX_SHIFT 2
+#define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
+#define TF_EM_INTERNAL_ENTRY_MASK  0x3
+
 /** EEM Entry header
  *
  */
@@ -53,6 +67,17 @@ struct tf_eem_64b_entry {
 	struct tf_eem_entry_hdr hdr;
 };
 
+/** EM Entry
+ *  Each EM entry is 512-bit (64-bytes) but ordered differently to
+ *  EEM.
+ */
+struct tf_em_64b_entry {
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+};
+
 /**
  * Allocates EEM Table scope
  *
@@ -106,9 +131,15 @@ int tf_insert_eem_entry(struct tf_session *session,
 			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms);
 
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms);
+
 int tf_delete_eem_entry(struct tf *tfp,
 			struct tf_delete_em_entry_parms *parms);
 
+int tf_delete_em_internal_entry(struct tf                       *tfp,
+				struct tf_delete_em_entry_parms *parms);
+
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
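The GFID layout documented in tf_em.h above places the record index in bits
15:2 and the bucket entry in bits 1:0, matching the
(rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) | rptr_entry composition in
tf_em.c. A quick worked example:

    /* rptr_index = 5, rptr_entry = 2                                        */
    /* packed : (5 << TF_EM_INTERNAL_INDEX_SHIFT) | 2 = (5 << 2) | 2 = 0x16  */
    /* index  : (0x16 & TF_EM_INTERNAL_INDEX_MASK) >> 2 = 5                  */
    /* entry  :  0x16 & TF_EM_INTERNAL_ENTRY_MASK       = 2                  */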
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
index 417a99cda..1491539ca 100644
--- a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -90,6 +90,18 @@ do {									\
 		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
 } while (0)
 
+#define TF_GET_NUM_KEY_ENTRIES_FROM_FLOW_HANDLE(flow_handle,		\
+					  num_key_entries)		\
+	(num_key_entries =						\
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		     TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT))		\
+
+#define TF_GET_ENTRY_NUM_FROM_FLOW_HANDLE(flow_handle,		\
+					  entry_num)		\
+	(entry_num =						\
+		(((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT))		\
+
 /*
  * 32 bit Flow ID handlers
  */
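The two new getters follow the pattern of the existing flow-handle accessors;
the *_MASK/*_SFT values they rely on are assumed to be defined alongside the
other flow-handle fields in this header. An illustrative use when decoding an
internal EM flow handle:

    uint8_t num_key_entries;
    uint8_t entry_num;

    TF_GET_NUM_KEY_ENTRIES_FROM_FLOW_HANDLE(parms->flow_handle,
                                            num_key_entries);
    TF_GET_ENTRY_NUM_FROM_FLOW_HANDLE(parms->flow_handle, entry_num);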
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index beecafdeb..554a8491d 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -16,6 +16,7 @@
 #include "tf_msg.h"
 #include "hsi_struct_def_dpdk.h"
 #include "hwrm_tf.h"
+#include "tf_em.h"
 
 /**
  * Endian converts min and max values from the HW response to the query
@@ -1013,15 +1014,94 @@ int tf_msg_em_cfg(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_insert_input req = { 0 };
+	struct tf_em_internal_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
+		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_INSERT,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends EM internal delete request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_delete_input req = { 0 };
+	struct tf_em_internal_delete_output resp = { 0 };
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.tf_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_DELETE,
+		 req,
+		resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 /**
  * Sends EM operation request to Firmware
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op)
+		 int dir,
+		 uint16_t op)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_input req = {0};
 	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
 	struct tfp_send_msg_parms parms = { 0 };
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 030d1881e..89f7370cc 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -121,6 +121,19 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				    struct tf_insert_em_entry_parms *params,
+				    uint16_t *rptr_index,
+				    uint8_t *rptr_entry,
+				    uint8_t *num_of_entries);
+/**
+ * Sends EM internal delete request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms);
 /**
  * Sends EM mem register request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 50ef2d530..c9f4f8f04 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -13,12 +13,25 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "stack.h"
 
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
+/**
+ * Number of EM entries. Static for now; will be removed
+ * when a configuration parameter is added at a later date.
+ * At this stage we are using fixed-size entries so that each
+ * stack entry represents 4 RT (f/n) blocks. So we take the total block
+ * allocation for truflow and divide that by 4.
+ */
+#define TF_SESSION_TOTAL_FN_BLOCKS (1024 * 8) /* 8K blocks */
+#define TF_SESSION_EM_ENTRY_SIZE 4 /* 4 blocks per entry */
+#define TF_SESSION_EM_POOL_SIZE \
+	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
+
 /** Session
  *
  * Shared memory containing private TruFlow session information.
@@ -289,6 +302,11 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+	/**
+	 * EM Pools
+	 */
+	struct stack em_pool[TF_DIR_MAX];
 };
 
 #endif /* _TF_SESSION_H_ */
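With the defines above, each direction's pool holds
TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE = 8192 / 4 = 2048 stack
entries, each standing for one 4-block EM record slot. The index-to-record
mapping used by tf_em.c earlier in this patch is then simply:

    uint32_t index;

    stack_pop(pool, &index);                        /* pool index 0..2047      */
    rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;  /* block offset 0, 4, 8... */
    /* ... and on delete the slot is returned ... */
    stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);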
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d900c9c09..dda72c3d5 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -156,7 +156,7 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
 		if (tfp_calloc(&parms) != 0)
 			goto cleanup;
 
-		tp->pg_pa_tbl[i] = (uint64_t)(uintptr_t)parms.mem_pa;
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
 		tp->pg_va_tbl[i] = parms.mem_va;
 
 		memset(tp->pg_va_tbl[i], 0, pg_size);
@@ -792,7 +792,8 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	index = parms->idx;
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1179,7 +1180,8 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1330,7 +1332,8 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1801,3 +1804,91 @@ tf_free_tbl_entry(struct tf *tfp,
 			    rc);
 	return rc;
 }
+
+
+static void
+tf_dump_link_page_table(struct tf_em_page_tbl *tp,
+			struct tf_em_page_tbl *tp_next)
+{
+	uint64_t *pg_va;
+	uint32_t i;
+	uint32_t j;
+	uint32_t k = 0;
+
+	printf("pg_count:%d pg_size:0x%x\n",
+	       tp->pg_count,
+	       tp->pg_size);
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+		printf("\t%p\n", (void *)pg_va);
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
+			if (((pg_va[j] & 0x7) ==
+			     tfp_cpu_to_le_64(PTU_PTE_LAST |
+					      PTU_PTE_VALID)))
+				return;
+
+			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
+				printf("** Invalid entry **\n");
+				return;
+			}
+
+			if (++k >= tp_next->pg_count) {
+				printf("** Shouldn't get here **\n");
+				return;
+			}
+		}
+	}
+}
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
+{
+	struct tf_session      *session;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_page_tbl *tp;
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_table *tbl;
+	int i;
+	int j;
+	int dir;
+
+	printf("called %s\n", __func__);
+
+	/* find session struct */
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* find control block for table scope */
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR, "No table scope\n");
+		return;
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
+
+		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
+			printf
+	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
+			       j,
+			       tbl->type,
+			       tbl->num_entries,
+			       tbl->entry_size,
+			       tbl->num_lvl);
+			if (tbl->pg_tbl[0].pg_va_tbl &&
+			    tbl->pg_tbl[0].pg_pa_tbl)
+				printf("%p %p\n",
+			       tbl->pg_tbl[0].pg_va_tbl[0],
+			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
+			for (i = 0; i < tbl->num_lvl - 1; i++) {
+				printf("Level:%d\n", i);
+				tp = &tbl->pg_tbl[i];
+				tp_next = &tbl->pg_tbl[i + 1];
+				tf_dump_link_page_table(tp, tp_next);
+			}
+			printf("\n");
+		}
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index bdc6288ee..7a5443678 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -76,38 +76,51 @@ struct tf_tbl_scope_cb {
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Round-down other page sizes to the lower hardware page
+ * size supported.
  */
-#define BNXT_PAGE_SHIFT 22 /** 2M */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
 
-#if (BNXT_PAGE_SHIFT < 12)				/** < 4K >> 4K */
-#define TF_EM_PAGE_SHIFT 12
+#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (BNXT_PAGE_SHIFT <= 13)			/** 4K, 8K */
-#define TF_EM_PAGE_SHIFT 13
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (BNXT_PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
-#define TF_EM_PAGE_SHIFT 15
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
-#elif (BNXT_PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
-#define TF_EM_PAGE_SHIFT 16
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (BNXT_PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
-#define TF_EM_PAGE_SHIFT 18
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (BNXT_PAGE_SHIFT <= 21)			/** 1M */
-#define TF_EM_PAGE_SHIFT 20
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (BNXT_PAGE_SHIFT <= 22)			/** 2M, 4M */
-#define TF_EM_PAGE_SHIFT 21
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (BNXT_PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
-#define TF_EM_PAGE_SHIFT 22
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#else						/** >= 1G >> 1G */
-#define TF_EM_PAGE_SHIFT	30
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8d5e94e1a..fe49b6304 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -3,14 +3,23 @@
  * All rights reserved.
  */
 
-/* This header file defines the Portability structures and APIs for
+/*
+ * This header file defines the Portability structures and APIs for
  * TruFlow.
  */
 
 #ifndef _TFP_H_
 #define _TFP_H_
 
+#include <rte_config.h>
 #include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_byteorder.h>
+
+/**
+ * DPDK/Driver specific log level for the BNXT Eth driver.
+ */
+extern int bnxt_logtype_driver;
 
 /** Spinlock
  */
@@ -18,13 +27,21 @@ struct tfp_spinlock_parms {
 	rte_spinlock_t slock;
 };
 
+#define TFP_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define TFP_DRV_LOG(level, fmt, args...) \
+	TFP_DRV_LOG_RAW(level, fmt, ## args)
+
 /**
  * @file
  *
  * TrueFlow Portability API Header File
  */
 
-/** send message parameter definition
+/**
+ * send message parameter definition
  */
 struct tfp_send_msg_parms {
 	/**
@@ -62,7 +79,8 @@ struct tfp_send_msg_parms {
 	uint32_t *resp_data;
 };
 
-/** calloc parameter definition
+/**
+ * calloc parameter definition
  */
 struct tfp_calloc_parms {
 	/**
@@ -96,43 +114,15 @@ struct tfp_calloc_parms {
  * @ref tfp_send_msg_tunneled
  *
  * @ref tfp_calloc
- * @ref tfp_free
  * @ref tfp_memcpy
+ * @ref tfp_free
  *
  * @ref tfp_spinlock_init
  * @ref tfp_spinlock_lock
  * @ref tfp_spinlock_unlock
  *
- * @ref tfp_cpu_to_le_16
- * @ref tfp_le_to_cpu_16
- * @ref tfp_cpu_to_le_32
- * @ref tfp_le_to_cpu_32
- * @ref tfp_cpu_to_le_64
- * @ref tfp_le_to_cpu_64
- * @ref tfp_cpu_to_be_16
- * @ref tfp_be_to_cpu_16
- * @ref tfp_cpu_to_be_32
- * @ref tfp_be_to_cpu_32
- * @ref tfp_cpu_to_be_64
- * @ref tfp_be_to_cpu_64
  */
 
-#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
-#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
-#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
-#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
-#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
-#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
-#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
-#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
-#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
-#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
-#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
-#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
-#define tfp_bswap_16(val) rte_bswap16(val)
-#define tfp_bswap_32(val) rte_bswap32(val)
-#define tfp_bswap_64(val) rte_bswap64(val)
-
 /**
  * Provides communication capability from the TrueFlow API layer to
  * the TrueFlow firmware. The portability layer internally provides
@@ -162,9 +152,24 @@ int tfp_send_msg_direct(struct tf *tfp,
  *   -1             - Global error like not supported
  *   -EINVAL        - Parameter Error
  */
-int tfp_send_msg_tunneled(struct tf                 *tfp,
+int tfp_send_msg_tunneled(struct tf *tfp,
 			  struct tfp_send_msg_parms *parms);
 
+/**
+ * Sends OEM command message to Chimp
+ *
+ * [in] tfp, pointer to the TF handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
 /**
  * Allocates zero'ed memory from the heap.
  *
@@ -179,10 +184,58 @@ int tfp_send_msg_tunneled(struct tf                 *tfp,
  *   -EINVAL        - Parameter error
  */
 int tfp_calloc(struct tfp_calloc_parms *parms);
-
-void tfp_free(void *addr);
 void tfp_memcpy(void *dest, void *src, size_t n);
+void tfp_free(void *addr);
+
 void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
+
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] tfp
+ *   Pointer to the TF handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
+
+/*
+ * @ref tfp_cpu_to_le_16
+ * @ref tfp_le_to_cpu_16
+ * @ref tfp_cpu_to_le_32
+ * @ref tfp_le_to_cpu_32
+ * @ref tfp_cpu_to_le_64
+ * @ref tfp_le_to_cpu_64
+ * @ref tfp_cpu_to_be_16
+ * @ref tfp_be_to_cpu_16
+ * @ref tfp_cpu_to_be_32
+ * @ref tfp_be_to_cpu_32
+ * @ref tfp_cpu_to_be_64
+ * @ref tfp_be_to_cpu_64
+ */
+
+#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
+#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
+#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
+#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
+#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
+#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
+#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
+#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
+#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
+#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
+#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
+#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
+#define tfp_bswap_16(val) rte_bswap16(val)
+#define tfp_bswap_32(val) rte_bswap32(val)
+#define tfp_bswap_64(val) rte_bswap64(val)
+
 #endif /* _TFP_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 10/51] net/bnxt: modify EM insert and delete to use HWRM direct
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (8 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 09/51] net/bnxt: add support for exact match Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 11/51] net/bnxt: add multi device support Ajit Khaparde
                           ` (40 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

Modify Exact Match insert and delete to use the HWRM messages directly.
Remove tunneled EM insert and delete message types.
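For reference, a minimal sketch of the direct-message pattern adopted
here (illustrative only, not part of the patch; the helper name
em_delete_direct_sketch and its argument list are made up, while the
request/response types, flag defines and tfp_* helpers are the ones
used in the diff below):

	static int em_delete_direct_sketch(struct tf *tfp,
					   uint32_t fw_session_id,
					   enum tf_dir dir,
					   uint64_t flow_handle,
					   uint16_t *em_index)
	{
		struct tfp_send_msg_parms parms = { 0 };
		struct hwrm_tf_em_delete_input req = { 0 };
		struct hwrm_tf_em_delete_output resp = { 0 };
		uint32_t flags;
		int rc;

		/* Build the HWRM request directly, no tunneled wrapper */
		req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
		flags = (dir == TF_DIR_TX ?
			 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
			 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
		req.flags = tfp_cpu_to_le_16(flags);
		req.flow_handle = tfp_cpu_to_le_64(flow_handle);

		/* Describe the message and send it on the direct path */
		parms.tf_type = HWRM_TF_EM_DELETE;
		parms.req_data = (uint32_t *)&req;
		parms.req_size = sizeof(req);
		parms.resp_data = (uint32_t *)&resp;
		parms.resp_size = sizeof(resp);
		parms.mailbox = TF_KONG_MB;

		rc = tfp_send_msg_direct(tfp, &parms);
		if (rc)
			return rc;

		*em_index = tfp_le_to_cpu_16(resp.em_index);
		return 0;
	}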

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/hwrm_tf.h | 70 ++----------------------------
 drivers/net/bnxt/tf_core/tf_msg.c  | 66 ++++++++++++++++------------
 2 files changed, 43 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 439950e02..d342c695c 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -23,8 +23,6 @@ typedef enum tf_subtype {
 	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
 	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
 	HWRM_TFT_TBL_SCOPE_CFG = 731,
-	HWRM_TFT_EM_RULE_INSERT = 739,
-	HWRM_TFT_EM_RULE_DELETE = 740,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
@@ -83,10 +81,6 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_em_internal_insert_input;
-struct tf_em_internal_insert_output;
-struct tf_em_internal_delete_input;
-struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -351,7 +345,7 @@ typedef struct tf_session_hw_resc_alloc_output {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range entries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -453,7 +447,7 @@ typedef struct tf_session_hw_resc_free_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range entries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -555,7 +549,7 @@ typedef struct tf_session_hw_resc_flush_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range entries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -922,60 +916,4 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
-/* Input params for EM internal rule insert */
-typedef struct tf_em_internal_insert_input {
-	/* Firmware Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* strength */
-	uint16_t			 strength;
-	/* index to action */
-	uint32_t			 action_ptr;
-	/* index of em record */
-	uint32_t			 em_record_idx;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_insert_input_t, *ptf_em_internal_insert_input_t;
-
-/* Output params for EM internal rule insert */
-typedef struct tf_em_internal_insert_output {
-	/* EM record pointer index */
-	uint16_t			 rptr_index;
-	/* EM record offset 0~3 */
-	uint8_t			  rptr_entry;
-	/* Number of word entries consumed by the key */
-	uint8_t			  num_of_entries;
-} tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_input {
-	/* Session Id */
-	uint32_t			 tf_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* EM internal flow hanndle */
-	uint64_t			 flow_handle;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_output {
-	/* Original stack allocation index */
-	uint16_t			 em_index;
-} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
-
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 554a8491d..c8f6b88d3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1023,32 +1023,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				uint8_t *rptr_entry,
 				uint8_t *num_of_entries)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_insert_input req = { 0 };
-	struct tf_em_internal_insert_output resp = { 0 };
+	int                         rc;
+	struct tfp_send_msg_parms        parms = { 0 };
+	struct hwrm_tf_em_insert_input   req = { 0 };
+	struct hwrm_tf_em_insert_output  resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
 		TF_LKUP_RECORD_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_INSERT,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1056,7 +1062,7 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 	*rptr_index = resp.rptr_index;
 	*num_of_entries = resp.num_of_entries;
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
@@ -1065,32 +1071,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_delete_input req = { 0 };
-	struct tf_em_internal_delete_output resp = { 0 };
+	int                             rc;
+	struct tfp_send_msg_parms       parms = { 0 };
+	struct hwrm_tf_em_delete_input  req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
-	req.tf_session_id =
+	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_DELETE,
-		 req,
-		resp);
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
 	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 11/51] net/bnxt: add multi device support
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (9 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
                           ` (39 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Randy Schacher, Venkat Duvvuru

From: Michael Wildt <michael.wildt@broadcom.com>

Introduce new modules for Device, Resource Manager, Identifier,
Table Types, and TCAM for multi device support.
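As context, a minimal sketch of how the new device template is meant
to be used (illustrative only, not part of the patch; the wrapper name
example_alloc_ident and its NULL-check policy are assumptions, while
struct tf_dev_info, struct tf_dev_ops and the hook names come from the
diff below): dev_bind() selects an ops table for the device type
(tf_dev_ops_p4 for Wh+), and callers then reach device-specific
behaviour through the function pointers rather than calling an
implementation directly.

	static int example_alloc_ident(struct tf *tfp,
				       struct tf_dev_info *dev,
				       struct tf_ident_alloc_parms *parms)
	{
		/* A NULL hook means the device does not support this capability */
		if (dev->ops == NULL || dev->ops->tf_dev_alloc_ident == NULL)
			return -ENOTSUP;

		/* Dispatch through the device-specific ops table */
		return dev->ops->tf_dev_alloc_ident(tfp, parms);
	}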

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   8 +
 drivers/net/bnxt/tf_core/Makefile             |   9 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 266 +++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c            |   2 +
 drivers/net/bnxt/tf_core/tf_core.h            |  56 +--
 drivers/net/bnxt/tf_core/tf_device.c          |  50 +++
 drivers/net/bnxt/tf_core/tf_device.h          | 331 ++++++++++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  24 ++
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  64 +++
 drivers/net/bnxt/tf_core/tf_identifier.c      |  47 +++
 drivers/net/bnxt/tf_core/tf_identifier.h      | 140 +++++++
 drivers/net/bnxt/tf_core/tf_rm.c              |  54 +--
 drivers/net/bnxt/tf_core/tf_rm.h              |  18 -
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 102 +++++
 drivers/net/bnxt/tf_core/tf_rm_new.h          | 368 ++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_session.c         |  31 ++
 drivers/net/bnxt/tf_core/tf_session.h         |  54 +++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |  63 +++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      | 240 ++++++++++++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |  63 +++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h     | 239 ++++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.c             |   1 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  78 ++++
 drivers/net/bnxt/tf_core/tf_tbl_type.h        | 309 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_tcam.c            |  78 ++++
 drivers/net/bnxt/tf_core/tf_tcam.h            | 314 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_util.c            | 145 +++++++
 drivers/net/bnxt/tf_core/tf_util.h            |  41 ++
 28 files changed, 3101 insertions(+), 94 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 5c7859cb5..a50cb261d 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,14 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_device_p4.c',
+	'tf_core/tf_identifier.c',
+	'tf_core/tf_shadow_tbl.c',
+	'tf_core/tf_shadow_tcam.c',
+	'tf_core/tf_tbl_type.c',
+	'tf_core/tf_tcam.c',
+	'tf_core/tf_util.c',
+	'tf_core/tf_rm_new.c',
 
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index aa2d964e9..7a3c325a6 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,3 +14,12 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
new file mode 100644
index 000000000..c0c1e754e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -0,0 +1,266 @@
+/*
+ * Copyright(c) 2001-2020, Broadcom. All rights reserved. The
+ * term Broadcom refers to Broadcom Inc. and/or its subsidiaries.
+ * Proprietary and Confidential Information.
+ *
+ * This source file is the property of Broadcom Corporation, and
+ * may not be copied or distributed in any isomorphic form without
+ * the prior written consent of Broadcom Corporation.
+ *
+ * DO NOT MODIFY!!! This file is automatically generated.
+ */
+
+#ifndef _CFA_RESOURCE_TYPES_H_
+#define _CFA_RESOURCE_TYPES_H_
+
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+/* Wildcard TCAM Profile Id */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+/* Meter Profile */
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+/* Source Properties TCAM */
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+/* L2 Func */
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+/* EPOCH */
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+/* Metadata */
+#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+/* Connection Tracking Rule TCAM */
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+/* Range Profile */
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+/* Range */
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+/* Link Aggregation */
+#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+
+
+#endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 1f6c33ab5..6e15a4c5c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -6,6 +6,7 @@
 #include <stdio.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
 #include "tf_em.h"
@@ -229,6 +230,7 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize Session */
 	session->device_type = parms->device_type;
+	session->dev = NULL;
 	tf_rm_init(tfp);
 
 	/* Construct the Session ID */
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 81ff7602f..becc50c7f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -371,6 +371,35 @@ struct tf {
 	struct tf_session_info *session;
 };
 
+/**
+ * tf_session_resources parameter definition.
+ */
+struct tf_session_resources {
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+};
 
 /**
  * tf_open_session parameters definition.
@@ -414,33 +443,14 @@ struct tf_open_session_parms {
 	union tf_session_id session_id;
 	/** [in] device type
 	 *
-	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
+	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
 	enum tf_device_type device_type;
-	/** [in] Requested Identifier Resources
-	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
-	 */
-	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
-	/** [in] Requested Index Table resource counts
-	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
-	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
-	/** [in] Requested TCAM Table resource counts
-	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
-	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
-	/** [in] Requested EM resource counts
+	/** [in] resources
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * Resource allocation
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
+	struct tf_session_resources resources;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
new file mode 100644
index 000000000..3b368313e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_device_p4.h"
+#include "tfp.h"
+#include "bnxt.h"
+
+struct tf;
+
+/**
+ * Device specific bind function
+ */
+static int
+dev_bind_p4(struct tf *tfp __rte_unused,
+	    struct tf_session_resources *resources __rte_unused,
+	    struct tf_dev_info *dev_info)
+{
+	/* Initialize the modules */
+
+	dev_info->ops = &tf_dev_ops_p4;
+	return 0;
+}
+
+int
+dev_bind(struct tf *tfp __rte_unused,
+	 enum tf_device_type type,
+	 struct tf_session_resources *resources,
+	 struct tf_dev_info *dev_info)
+{
+	switch (type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_bind_p4(tfp,
+				   resources,
+				   dev_info);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "Device type not supported\n");
+		return -ENOTSUP;
+	}
+}
+
+int
+dev_unbind(struct tf *tfp __rte_unused,
+	   struct tf_dev_info *dev_handle __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
new file mode 100644
index 000000000..8b63ff178
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -0,0 +1,331 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_H_
+#define _TF_DEVICE_H_
+
+#include "tf_core.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+struct tf;
+struct tf_session;
+
+/**
+ * The Device module provides a general device template. A supported
+ * device type should implement one or more of the listed function
+ * pointers according to its capabilities.
+ *
+ * If a device function pointer is NULL the device capability is not
+ * supported.
+ */
+
+/**
+ * TF device information
+ */
+struct tf_dev_info {
+	const struct tf_dev_ops *ops;
+};
+
+/**
+ * @page device Device
+ *
+ * @ref tf_dev_bind
+ *
+ * @ref tf_dev_unbind
+ */
+
+/**
+ * Device bind handles the initialization of the specified device
+ * type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] type
+ *   Device type
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int dev_bind(struct tf *tfp,
+	     enum tf_device_type type,
+	     struct tf_session_resources *resources,
+	     struct tf_dev_info *dev_handle);
+
+/**
+ * Device release handles cleanup of the device specific information.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dev_handle
+ *   Device handle
+ */
+int dev_unbind(struct tf *tfp,
+	       struct tf_dev_info *dev_handle);
+
+/**
+ * Truflow device specific function hooks structure
+ *
+ * The following device hooks can be defined; unless noted otherwise,
+ * they are optional and can be filled with a null pointer. The
+ * purpose of these hooks is to support Truflow device operations for
+ * different device variants.
+ */
+struct tf_dev_ops {
+	/**
+	 * Allocation of an identifier element.
+	 *
+	 * This API allocates the specified identifier element from a
+	 * device specific identifier DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ident)(struct tf *tfp,
+				  struct tf_ident_alloc_parms *parms);
+
+	/**
+	 * Free of an identifier element.
+	 *
+	 * This API frees a previously allocated identifier element from a
+	 * device specific identifier DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ident)(struct tf *tfp,
+				 struct tf_ident_free_parms *parms);
+
+	/**
+	 * Allocation of a table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
+				     struct tf_tbl_type_alloc_parms *parms);
+
+	/**
+	 * Free of a table type element.
+	 *
+	 * This API frees a previously allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tbl_type)(struct tf *tfp,
+				    struct tf_tbl_type_free_parms *parms);
+
+	/**
+	 * Searches for the specified table type element in a shadow DB.
+	 *
+	 * This API searches for the specified table type element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the table type
+	 * DB and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tbl_type)
+			(struct tf *tfp,
+			struct tf_tbl_type_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_set_parms *parms);
+
+	/**
+	 * Retrieves the specified table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_get_parms *parms);
+
+	/**
+	 * Allocation of a tcam element.
+	 *
+	 * This API allocates the specified tcam element from a device
+	 * specific tcam DB. The allocated element is returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tcam)(struct tf *tfp,
+				 struct tf_tcam_alloc_parms *parms);
+
+	/**
+	 * Free of a tcam element.
+	 *
+	 * This API frees a previously allocated tcam element from a
+	 * device specific tcam DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tcam)(struct tf *tfp,
+				struct tf_tcam_free_parms *parms);
+
+	/**
+	 * Searches for the specified tcam element in a shadow DB.
+	 *
+	 * This API searches for the specified tcam element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the tcam DB
+	 * and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tcam)
+			(struct tf *tfp,
+			struct tf_tcam_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified tcam element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tcam)(struct tf *tfp,
+			       struct tf_tcam_set_parms *parms);
+
+	/**
+	 * Retrieves the specified tcam element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tcam)(struct tf *tfp,
+			       struct tf_tcam_get_parms *parms);
+};
+
+/**
+ * Supported device operation structures
+ */
+extern const struct tf_dev_ops tf_dev_ops_p4;
+
+#endif /* _TF_DEVICE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
new file mode 100644
index 000000000..c3c4d1e05
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_alloc_ident = tf_ident_alloc,
+	.tf_dev_free_ident = tf_ident_free,
+	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
+	.tf_dev_free_tbl_type = tf_tbl_type_free,
+	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
+	.tf_dev_set_tbl_type = tf_tbl_type_set,
+	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tcam = tf_tcam_alloc,
+	.tf_dev_free_tcam = tf_tcam_free,
+	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_set_tcam = tf_tcam_set,
+	.tf_dev_get_tcam = tf_tcam_get,
+};
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
new file mode 100644
index 000000000..84d90e3a7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_P4_H_
+#define _TF_DEVICE_P4_H_
+
+#include <cfa_resource_types.h>
+
+#include "tf_core.h"
+#include "tf_rm_new.h"
+
+struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+};
+
+struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
+	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+};
+
+struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+};
+
+#endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
new file mode 100644
index 000000000..726d0b406
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_identifier.h"
+
+struct tf;
+
+/**
+ * Identifier DBs.
+ */
+/* static void *ident_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+int
+tf_ident_bind(struct tf *tfp __rte_unused,
+	      struct tf_ident_cfg *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_alloc(struct tf *tfp __rte_unused,
+	       struct tf_ident_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_free(struct tf *tfp __rte_unused,
+	      struct tf_ident_free_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
new file mode 100644
index 000000000..b77c91b9d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_IDENTIFIER_H_
+#define _TF_IDENTIFIER_H_
+
+#include "tf_core.h"
+
+/**
+ * The Identifier module provides processing of Identifiers.
+ */
+
+struct tf_ident_cfg {
+	/**
+	 * Number of identifier types in each of the configuration
+	 * arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Identifier allocation parameter definition
+ */
+struct tf_ident_alloc_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [out] Identifier allocated
+	 */
+	uint16_t id;
+};
+
+/**
+ * Identifier free parameter definition
+ */
+struct tf_ident_free_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [in] ID to free
+	 */
+	uint16_t id;
+};
+
+/**
+ * @page ident Identity Management
+ *
+ * @ref tf_ident_bind
+ *
+ * @ref tf_ident_unbind
+ *
+ * @ref tf_ident_alloc
+ *
+ * @ref tf_ident_free
+ */
+
+/**
+ * Initializes the Identifier module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_bind(struct tf *tfp,
+		  struct tf_ident_cfg *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_unbind(struct tf *tfp);
+
+/**
+ * Allocates a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_alloc(struct tf *tfp,
+		   struct tf_ident_alloc_parms *parms);
+
+/**
+ * Frees a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_free(struct tf *tfp,
+		  struct tf_ident_free_parms *parms);
+
+#endif /* _TF_IDENTIFIER_H_ */
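
For illustration, a minimal usage sketch of the new identifier API; the example_* wrapper plus the direction and identifier type chosen here are assumptions, the structures and calls come from tf_identifier.h above:

#include "tf_identifier.h"

static int
example_prof_func_cycle(struct tf *tfp, uint16_t *id)
{
	struct tf_ident_alloc_parms aparms = { 0 };
	struct tf_ident_free_parms fparms = { 0 };
	int rc;

	aparms.dir = TF_DIR_RX;
	aparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	rc = tf_ident_alloc(tfp, &aparms);
	if (rc)
		return rc;
	*id = aparms.id;

	/* ... use the identifier, then return it to the pool ... */
	fparms.dir = TF_DIR_RX;
	fparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	fparms.id = *id;
	return tf_ident_free(tfp, &fparms);
}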
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 38b1e71cd..2264704d2 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -9,6 +9,7 @@
 
 #include "tf_rm.h"
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_resources.h"
 #include "tf_msg.h"
@@ -76,59 +77,6 @@
 			(dtype) = type ## _TX;	\
 	} while (0)
 
-const char
-*tf_dir_2_str(enum tf_dir dir)
-{
-	switch (dir) {
-	case TF_DIR_RX:
-		return "RX";
-	case TF_DIR_TX:
-		return "TX";
-	default:
-		return "Invalid direction";
-	}
-}
-
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
-{
-	switch (id_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		return "l2_ctxt_remap";
-	case TF_IDENT_TYPE_PROF_FUNC:
-		return "prof_func";
-	case TF_IDENT_TYPE_WC_PROF:
-		return "wc_prof";
-	case TF_IDENT_TYPE_EM_PROF:
-		return "em_prof";
-	case TF_IDENT_TYPE_L2_FUNC:
-		return "l2_func";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
-{
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		return "l2_ctxt_tcam";
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		return "prof_tcam";
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		return "wc_tcam";
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		return "veb_tcam";
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		return "sp_tcam";
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		return "ct_rule_tcam";
-	default:
-		return "Invalid tcam table type";
-	}
-}
-
 const char
 *tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index e69d443a8..1a09f13a7 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -124,24 +124,6 @@ struct tf_rm_db {
 	struct tf_rm_resc tx;
 };
 
-/**
- * Helper function converting direction to text string
- */
-const char
-*tf_dir_2_str(enum tf_dir dir);
-
-/**
- * Helper function converting identifier to text string
- */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
-
-/**
- * Helper function converting tcam type to text string
- */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
-
 /**
  * Helper function used to convert HW HCAPI resource type to a string.
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
new file mode 100644
index 000000000..51bb9ba3a
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_rm_new.h"
+
+/**
+ * Resource query single entry. Used when accessing HCAPI RM on the
+ * firmware.
+ */
+struct tf_rm_query_entry {
+	/** Minimum guaranteed number of elements */
+	uint16_t min;
+	/** Maximum non-guaranteed number of elements */
+	uint16_t max;
+};
+
+/**
+ * Generic RM Element data type that an RM DB is built upon.
+ */
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type type;
+
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
+
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
+
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
+
+/**
+ * TF RM DB definition
+ */
+struct tf_rm_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
+
+int
+tf_rm_create_db(struct tf *tfp __rte_unused,
+		struct tf_rm_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free_db(struct tf *tfp __rte_unused,
+	      struct tf_rm_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
new file mode 100644
index 000000000..72dba0984
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -0,0 +1,368 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_RM_H_
+#define TF_RM_H_
+
+#include "tf_core.h"
+#include "bitalloc.h"
+
+struct tf;
+
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exist within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
+ *
+ * The RM DB will work on its initially allocated sizes, so the
+ * capability of dynamically growing a particular resource is not
+ * possible. If this capability later becomes a requirement then the
+ * MAX pool size of the chip needs to be added to the tf_rm_elem_info
+ * structure and several new APIs would need to be added to allow for
+ * growth of a single TF resource type.
+ */
+
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
+
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements that the DB consists of are to be
+ * configured at the time of DB creation. The TF may present types to
+ * the ULP layer that are not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
+	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	TF_RM_TYPE_MAX
+};
+
+/**
+ * RM Element configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI RM
+ * of the same type.
+ */
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Allocation information for a single element.
+ */
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_entry entry;
+};
+
+/**
+ * Create RM DB parameters
+ */
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements in the parameter structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure
+	 */
+	struct tf_rm_element_cfg *parms;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Free RM DB parameters
+ */
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Allocate RM parameters for a single element
+ */
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the allocated index in normalized
+	 * form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+};
+
+/**
+ * Free RM parameters for a single element
+ */
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t index;
+};
+
+/**
+ * Is Allocated parameters for a single element
+ */
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [out] Pointer to flag that indicates the state of the query
+	 */
+	uint8_t *allocated;
+};
+
+/**
+ * Get Allocation information for a single element
+ */
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * @page rm Resource Manager
+ *
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ */
+
+/**
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if DB already exist
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
+
+/**
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
+
+/**
+ * Allocates a single element for the type specified, within the DB.
+ *
+ * [in] parms
+ *   Pointer to allocate parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
+
+/**
+ * Frees a single element for the type specified, within the DB.
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free(struct tf_rm_free_parms *parms);
+
+/**
+ * Performs an allocation verification check on a specified element.
+ *
+ * [in] parms
+ *   Pointer to is allocated parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
+
+/**
+ * Retrieves an elements allocation information from the Resource
+ * Manager (RM) DB.
+ *
+ * [in] parms
+ *   Pointer to get info parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
+
+/**
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
+
+#endif /* TF_RM_H_ */
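
For illustration, a minimal sketch of the intended RM call flow; the example_* wrapper and the cfg array handed in are assumptions, the structures and APIs come from tf_rm_new.h above:

#include "tf_rm_new.h"

static int
example_rm_flow(struct tf *tfp, struct tf_rm_element_cfg *cfg, uint16_t num)
{
	struct tf_rm_create_db_parms db_parms = { 0 };
	struct tf_rm_allocate_parms alloc_parms = { 0 };
	struct tf_rm_free_db_parms free_parms = { 0 };
	uint32_t index;
	int rc;

	/* Build the DB: qcaps/reservation with firmware, pools populated */
	db_parms.dir = TF_DIR_TX;
	db_parms.num_elements = num;
	db_parms.parms = cfg;
	rc = tf_rm_create_db(tfp, &db_parms);
	if (rc)
		return rc;

	/* Allocate one element from DB entry 0 */
	alloc_parms.tf_rm_db = db_parms.tf_rm_db;
	alloc_parms.db_index = 0;
	alloc_parms.index = &index;
	rc = tf_rm_allocate(&alloc_parms);
	if (rc)
		return rc;

	/* ... use index, free elements, then tear down the DB ... */
	free_parms.dir = TF_DIR_TX;
	free_parms.tf_rm_db = db_parms.tf_rm_db;
	return tf_rm_free_db(tfp, &free_parms);
}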
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
new file mode 100644
index 000000000..c74994546
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_session.h"
+#include "tfp.h"
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session **tfs)
+{
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		TFP_DRV_LOG(ERR, "Session not created\n");
+		return -EINVAL;
+	}
+
+	*tfs = (struct tf_session *)(tfp->session->core_data);
+
+	return 0;
+}
+
+int
+tf_session_get_device(struct tf_session *tfs,
+		      struct tf_dev_info **tfd)
+{
+	if (tfs->dev == NULL) {
+		TFP_DRV_LOG(ERR, "Device not created\n");
+		return -EINVAL;
+	}
+	*tfd = tfs->dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index c9f4f8f04..b1cc7a4a7 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -11,10 +11,21 @@
 
 #include "bitalloc.h"
 #include "tf_core.h"
+#include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
 #include "stack.h"
 
+/**
+ * The Session module provides session control support. A session is
+ * to the ULP layer known as a session_info instance. The session
+ * private data is the actual session.
+ *
+ * Session manages:
+ *   - The device and all the resources related to the device.
+ *   - Any session sharing between ULP applications
+ */
+
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
@@ -90,6 +101,9 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
+	/** Device */
+	struct tf_dev_info *dev;
+
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
 
@@ -309,4 +323,44 @@ struct tf_session {
 	struct stack em_pool[TF_DIR_MAX];
 };
 
+/**
+ * @page session Session Management
+ *
+ * @ref tf_session_get_session
+ *
+ * @ref tf_session_get_device
+ */
+
+/**
+ * Looks up the private session information from the TF session info.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer to the session
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session(struct tf *tfp,
+			   struct tf_session **tfs);
+
+/**
+ * Looks up the device information from the TF Session.
+ *
+ * [in] tfs
+ *   Pointer to TF session
+ *
+ * [out] tfd
+ *   Pointer to the device
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_device(struct tf_session *tfs,
+			  struct tf_dev_info **tfd);
+
 #endif /* _TF_SESSION_H_ */
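
For illustration, a short sketch of how the new session accessors are meant to be used by other TF modules; the example_* wrapper is an assumption, the accessors are the ones declared above:

static int
example_lookup_device(struct tf *tfp)
{
	struct tf_session *tfs;
	struct tf_dev_info *tfd;
	int rc;

	rc = tf_session_get_session(tfp, &tfs);
	if (rc)
		return rc;

	rc = tf_session_get_device(tfs, &tfd);
	if (rc)
		return rc;

	/* tfd now points at the device bound to this session */
	return 0;
}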
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
new file mode 100644
index 000000000..8f2b6de70
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tbl.h"
+
+/**
+ * Shadow table DB element
+ */
+struct tf_shadow_tbl_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count, array of number of table type entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow table DB definition
+ */
+struct tf_shadow_tbl_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tbl_element *db;
+};
+
+int
+tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
new file mode 100644
index 000000000..dfd336e53
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TBL_H_
+#define _TF_SHADOW_TBL_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow Table module provides shadow DB handling for table based
+ * TF types. A shadow DB provides the capability that allows for reuse
+ * of TF resources.
+ *
+ * A Shadow table DB is intended to be used by the Table Type module
+ * only.
+ */
+
+/**
+ * Shadow DB configuration information for a single table type.
+ *
+ * During Device initialization the HCAPI device specifics are
+ * learned and the RM DB is created. From those initial steps this
+ * structure can be populated.
+ *
+ * NOTE:
+ * If used in an array of table types then such array must be ordered
+ * by the TF type it represents.
+ */
+struct tf_shadow_tbl_cfg_parms {
+	/**
+	 * TF Table type
+	 */
+	enum tf_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow table DB creation parameters
+ */
+struct tf_shadow_tbl_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tbl_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table DB free parameters
+ */
+struct tf_shadow_tbl_free_db_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table search parameters
+ */
+struct tf_shadow_tbl_search_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Table type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table insert parameters
+ */
+struct tf_shadow_tbl_insert_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table remove parameters
+ */
+struct tf_shadow_tbl_remove_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tbl Shadow table DB
+ *
+ * @ref tf_shadow_tbl_create_db
+ *
+ * @ref tf_shadow_tbl_free_db
+ *
+ * @ref tf_shadow_tbl_search
+ *
+ * @ref tf_shadow_tbl_insert
+ *
+ * @ref tf_shadow_tbl_remove
+ */
+
+/**
+ * Creates and fills a Shadow table DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms);
+
+/**
+ * Closes the Shadow table DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms);
+
+/**
+ * Search Shadow table db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow table DB. Will fail if the
+ * element's ref_count is different from 0. The ref_count after insert
+ * will be incremented.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow table DB. Will fail if the
+ * element's ref_count is 0. The ref_count after removal will be
+ * decremented.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TBL_H_ */
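
For illustration, a minimal sketch of the reuse flow the shadow table DB is designed for; the example_* wrapper, the encap type chosen and the "caller allocates a new index on miss" step are assumptions, the shadow APIs come from tf_shadow_tbl.h above:

static int
example_shadow_reuse(void *shadow_db, uint8_t *blob, uint16_t blob_sz,
		     uint16_t new_idx, uint16_t *idx)
{
	struct tf_shadow_tbl_search_parms sparms = { 0 };
	struct tf_shadow_tbl_insert_parms iparms = { 0 };
	uint16_t ref_cnt;
	int rc;

	sparms.tf_shadow_tbl_db = shadow_db;
	sparms.type = TF_TBL_TYPE_ACT_ENCAP_16B;
	sparms.entry = blob;
	sparms.entry_sz = blob_sz;
	sparms.index = idx;
	sparms.ref_cnt = &ref_cnt;
	rc = tf_shadow_tbl_search(&sparms);
	if (rc == 0)
		return 0; /* hit: existing index reused, ref count bumped */

	/* Miss: new_idx was allocated by the caller (e.g. from the RM DB);
	 * record it in the shadow DB so later identical blobs are reused.
	 */
	*idx = new_idx;
	iparms.tf_shadow_tbl_db = shadow_db;
	iparms.type = TF_TBL_TYPE_ACT_ENCAP_16B;
	iparms.entry = blob;
	iparms.entry_sz = blob_sz;
	iparms.index = new_idx;
	iparms.ref_cnt = &ref_cnt;
	return tf_shadow_tbl_insert(&iparms);
}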
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.c b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
new file mode 100644
index 000000000..c61b833d7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tcam.h"
+
+/**
+ * Shadow tcam DB element
+ */
+struct tf_shadow_tcam_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count, array of number of tcam entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow tcam DB definition
+ */
+struct tf_shadow_tcam_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tcam_element *db;
+};
+
+int
+tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.h b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
new file mode 100644
index 000000000..e2c4e06c0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TCAM_H_
+#define _TF_SHADOW_TCAM_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow tcam module provides shadow DB handling for tcam based
+ * TF types. A shadow DB provides the capability that allows for reuse
+ * of TF resources.
+ *
+ * A Shadow tcam DB is intended to be used by the Tcam module only.
+ */
+
+/**
+ * Shadow DB configuration information for a single tcam type.
+ *
+ * During Device initialization the HCAPI device specifics are
+ * learned and the RM DB is created. From those initial steps this
+ * structure can be populated.
+ *
+ * NOTE:
+ * If used in an array of tcam types then such array must be ordered
+ * by the TF type it represents.
+ */
+struct tf_shadow_tcam_cfg_parms {
+	/**
+	 * TF tcam type
+	 */
+	enum tf_tcam_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow tcam DB creation parameters
+ */
+struct tf_shadow_tcam_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tcam_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam DB free parameters
+ */
+struct tf_shadow_tcam_free_db_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam search parameters
+ */
+struct tf_shadow_tcam_search_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam insert parameters
+ */
+struct tf_shadow_tcam_insert_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam remove parameters
+ */
+struct tf_shadow_tcam_remove_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tcam Shadow tcam DB
+ *
+ * @ref tf_shadow_tcam_create_db
+ *
+ * @ref tf_shadow_tcam_free_db
+ *
+ * @ref tf_shadow_tcam_search
+ *
+ * @ref tf_shadow_tcam_insert
+ *
+ * @ref tf_shadow_tcam_remove
+ */
+
+/**
+ * Creates and fills a Shadow tcam DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms);
+
+/**
+ * Closes the Shadow tcam DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms);
+
+/**
+ * Search Shadow tcam db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow tcam DB. Will fail if the
+ * element's ref_count is different from 0. The ref_count after insert
+ * will be incremented.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow tcam DB. Will fail if the
+ * element's ref_count is 0. The ref_count after removal will be
+ * decremented.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TCAM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index dda72c3d5..17399a5b2 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,6 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
new file mode 100644
index 000000000..a57a5ddf2
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tbl_type.h"
+
+struct tf;
+
+/**
+ * Table Type DBs.
+ */
+/* static void *tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Table Type Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_type_bind(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc(struct tf *tfp __rte_unused,
+		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_free(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
+			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_set(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_get(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
new file mode 100644
index 000000000..c880b368b
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -0,0 +1,309 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Table Type module provides processing of Internal TF table types.
+ */
+
+/**
+ * Table Type configuration parameters
+ */
+struct tf_tbl_type_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Table Type allocation parameters
+ */
+struct tf_tbl_type_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type free parameters
+ */
+struct tf_tbl_type_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+struct tf_tbl_type_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type set parameters
+ */
+struct tf_tbl_type_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type get parameters
+ */
+struct tf_tbl_type_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tbl_type Table Type
+ *
+ * @ref tf_tbl_type_bind
+ *
+ * @ref tf_tbl_type_unbind
+ *
+ * @ref tf_tbl_type_alloc
+ *
+ * @ref tf_tbl_type_free
+ *
+ * @ref tf_tbl_type_alloc_search
+ *
+ * @ref tf_tbl_type_set
+ *
+ * @ref tf_tbl_type_get
+ */
+
+/**
+ * Initializes the Table Type module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_bind(struct tf *tfp,
+		     struct tf_tbl_type_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc(struct tf *tfp,
+		      struct tf_tbl_type_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the
+ * shadow DB is enabled it is searched first and, if found, the element
+ * refcount is decremented. If the refcount goes to 0 the entry is
+ * returned to the table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_free(struct tf *tfp,
+		     struct tf_tbl_type_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc_search(struct tf *tfp,
+			     struct tf_tbl_type_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_set(struct tf *tfp,
+		    struct tf_tbl_type_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_get(struct tf *tfp,
+		    struct tf_tbl_type_get_parms *parms);
+
+#endif /* TF_TBL_TYPE_H_ */
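
For illustration, a minimal sketch of the intended allocate-then-set flow for an internal table type; the example_* wrapper and the stats-type choice are assumptions, the structures and APIs come from tf_tbl_type.h above:

static int
example_tbl_type_write(struct tf *tfp, uint8_t *rec, uint16_t rec_sz)
{
	struct tf_tbl_type_alloc_parms aparms = { 0 };
	struct tf_tbl_type_set_parms sparms = { 0 };
	int rc;

	aparms.dir = TF_DIR_RX;
	aparms.type = TF_TBL_TYPE_ACT_STATS_64;
	rc = tf_tbl_type_alloc(tfp, &aparms);
	if (rc)
		return rc;

	sparms.dir = TF_DIR_RX;
	sparms.type = TF_TBL_TYPE_ACT_STATS_64;
	sparms.data = rec;
	sparms.data_sz_in_bytes = rec_sz;
	sparms.idx = aparms.idx;
	return tf_tbl_type_set(tfp, &sparms);
}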
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
new file mode 100644
index 000000000..3ad99dd0d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tcam.h"
+
+struct tf;
+
+/**
+ * TCAM DBs.
+ */
+/* static void *tcam_db[TF_DIR_MAX]; */
+
+/**
+ * TCAM Shadow DBs
+ */
+/* static void *shadow_tcam_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tcam_bind(struct tf *tfp __rte_unused,
+	     struct tf_tcam_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc(struct tf *tfp __rte_unused,
+	      struct tf_tcam_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_free(struct tf *tfp __rte_unused,
+	     struct tf_tcam_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc_search(struct tf *tfp __rte_unused,
+		     struct tf_tcam_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_set(struct tf *tfp __rte_unused,
+	    struct tf_tcam_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_get(struct tf *tfp __rte_unused,
+	    struct tf_tcam_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
new file mode 100644
index 000000000..1420c9ed5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -0,0 +1,314 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_TCAM_H_
+#define _TF_TCAM_H_
+
+#include "tf_core.h"
+
+/**
+ * The TCAM module provides processing of Internal TCAM types.
+ */
+
+/**
+ * TCAM configuration parameters
+ */
+struct tf_tcam_cfg_parms {
+	/**
+	 * Number of tcam types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * TCAM configuration array
+	 */
+	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow TCAM configuration array
+	 */
+	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * TCAM allocation parameters
+ */
+struct tf_tcam_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM free parameters
+ */
+struct tf_tcam_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/**
+ * TCAM allocate search parameters
+ */
+struct tf_tcam_alloc_search_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Enable search for matching entry
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Key data to match on (if search)
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] Mask data to match on (if search)
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
+	 * [out] If search, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current refcnt after allocation
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx allocated
+	 *
+	 */
+	uint16_t idx;
+};
+
+/**
+ * TCAM set parameters
+ */
+struct tf_tcam_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM get parameters
+ */
+struct tf_tcam_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tcam TCAM
+ *
+ * @ref tf_tcam_bind
+ *
+ * @ref tf_tcam_unbind
+ *
+ * @ref tf_tcam_alloc
+ *
+ * @ref tf_tcam_free
+ *
+ * @ref tf_tcam_alloc_search
+ *
+ * @ref tf_tcam_set
+ *
+ * @ref tf_tcam_get
+ *
+ */
+
+/**
+ * Initializes the TCAM module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_bind(struct tf *tfp,
+		 struct tf_tcam_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested tcam type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc(struct tf *tfp,
+		  struct tf_tcam_alloc_parms *parms);
+
+/**
+ * Frees the requested TCAM entry and returns it to the DB. If the
+ * shadow DB is enabled it is searched first and, if found, the element
+ * refcount is decremented. If the refcount goes to 0 the entry is
+ * returned to the TCAM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_free(struct tf *tfp,
+		 struct tf_tcam_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc_search(struct tf *tfp,
+			 struct tf_tcam_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_set(struct tf *tfp,
+		struct tf_tcam_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_get(struct tf *tfp,
+		struct tf_tcam_get_parms *parms);
+
+#endif /* _TF_TCAM_H_ */
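
For illustration, a minimal sketch of the intended TCAM flow using the new module; the example_* wrapper and the WC TCAM choice are assumptions, the structures and APIs come from tf_tcam.h above:

static int
example_tcam_program(struct tf *tfp, uint8_t *key, uint8_t *mask,
		     uint16_t key_sz_bits, uint8_t *result,
		     uint16_t result_sz)
{
	struct tf_tcam_alloc_search_parms asparms = { 0 };
	struct tf_tcam_set_parms sparms = { 0 };
	int rc;

	asparms.dir = TF_DIR_RX;
	asparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	asparms.search_enable = 1;
	asparms.key = key;
	asparms.mask = mask;
	asparms.key_sz_in_bits = key_sz_bits;
	rc = tf_tcam_alloc_search(tfp, &asparms);
	if (rc)
		return rc;
	if (asparms.hit)
		return 0; /* existing entry reused, refcnt bumped */

	sparms.dir = TF_DIR_RX;
	sparms.type = TF_TCAM_TBL_TYPE_WC_TCAM;
	sparms.data = result;
	sparms.data_sz_in_bytes = result_sz;
	sparms.idx = asparms.idx;
	return tf_tcam_set(tfp, &sparms);
}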
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
new file mode 100644
index 000000000..a9010543d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+
+#include "tf_util.h"
+
+const char
+*tf_dir_2_str(enum tf_dir dir)
+{
+	switch (dir) {
+	case TF_DIR_RX:
+		return "RX";
+	case TF_DIR_TX:
+		return "TX";
+	default:
+		return "Invalid direction";
+	}
+}
+
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type)
+{
+	switch (id_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		return "l2_ctxt_remap";
+	case TF_IDENT_TYPE_PROF_FUNC:
+		return "prof_func";
+	case TF_IDENT_TYPE_WC_PROF:
+		return "wc_prof";
+	case TF_IDENT_TYPE_EM_PROF:
+		return "em_prof";
+	case TF_IDENT_TYPE_L2_FUNC:
+		return "l2_func";
+	default:
+		return "Invalid identifier";
+	}
+}
+
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+{
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		return "l2_ctxt_tcam";
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		return "prof_tcam";
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		return "wc_tcam";
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		return "veb_tcam";
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		return "sp_tcam";
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		return "ct_rule_tcam";
+	default:
+		return "Invalid tcam table type";
+	}
+}
+
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+{
+	switch (tbl_type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		return "Full Action record";
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		return "Multicast Groups";
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		return "Encap 8B";
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		return "Encap 16B";
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+		return "Encap 32B";
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		return "Encap 64B";
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		return "Source Properties SMAC";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		return "Source Properties SMAC IPv4";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		return "Source Properties SMAC IPv6";
+	case TF_TBL_TYPE_ACT_STATS_64:
+		return "Stats 64B";
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		return "NAT Source Port";
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+		return "NAT Destination Port";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		return "NAT IPv4 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		return "NAT IPv4 Destination";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+		return "NAT IPv6 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+		return "NAT IPv6 Destination";
+	case TF_TBL_TYPE_METER_PROF:
+		return "Meter Profile";
+	case TF_TBL_TYPE_METER_INST:
+		return "Meter";
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		return "Mirror";
+	case TF_TBL_TYPE_UPAR:
+		return "UPAR";
+	case TF_TBL_TYPE_EPOCH0:
+		return "EPOCH0";
+	case TF_TBL_TYPE_EPOCH1:
+		return "EPOCH1";
+	case TF_TBL_TYPE_METADATA:
+		return "Metadata";
+	case TF_TBL_TYPE_CT_STATE:
+		return "Connection State";
+	case TF_TBL_TYPE_RANGE_PROF:
+		return "Range Profile";
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		return "Range";
+	case TF_TBL_TYPE_LAG:
+		return "Link Aggregation";
+	case TF_TBL_TYPE_VNIC_SVIF:
+		return "VNIC SVIF";
+	case TF_TBL_TYPE_EM_FKB:
+		return "EM Flexible Key Builder";
+	case TF_TBL_TYPE_WC_FKB:
+		return "WC Flexible Key Builder";
+	case TF_TBL_TYPE_EXT:
+		return "External";
+	default:
+		return "Invalid tbl type";
+	}
+}
+
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+{
+	switch (em_type) {
+	case TF_EM_TBL_TYPE_EM_RECORD:
+		return "EM Record";
+	case TF_EM_TBL_TYPE_TBL_SCOPE:
+		return "Table Scope";
+	default:
+		return "Invalid EM type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
new file mode 100644
index 000000000..4099629ea
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_UTIL_H_
+#define _TF_UTIL_H_
+
+#include "tf_core.h"
+
+/**
+ * Helper function converting direction to text string
+ */
+const char
+*tf_dir_2_str(enum tf_dir dir);
+
+/**
+ * Helper function converting identifier to text string
+ */
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type);
+
+/**
+ * Helper function converting tcam type to text string
+ */
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+
+/**
+ * Helper function converting tbl type to text string
+ */
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+
+/**
+ * Helper function converting em tbl type to text string
+ */
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+
+#endif /* _TF_UTIL_H_ */
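
For illustration, a short sketch of how the relocated string helpers are typically used in log messages; the example_* function is an assumption and TFP_DRV_LOG is assumed to be provided by tfp.h:

#include "tf_util.h"
#include "tfp.h"

static void
example_log_alloc_failure(enum tf_dir dir, enum tf_tbl_type type, int rc)
{
	TFP_DRV_LOG(ERR, "%s: %s allocation failed, rc:%d\n",
		    tf_dir_2_str(dir), tf_tbl_type_2_str(type), rc);
}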
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 12/51] net/bnxt: support bulk table get and mirror
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (10 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 11/51] net/bnxt: add multi device support Ajit Khaparde
@ 2020-07-02 23:27         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 13/51] net/bnxt: update multi device design support Ajit Khaparde
                           ` (38 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:27 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Venkat Duvvuru

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add a new bulk table type get that uses the firmware
  to DMA the data back to the host.
- Add a flag to allow records to be cleared on read where supported
- Set mirror configuration using tf_alloc_tbl_entry

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/hwrm_tf.h      |  37 ++++++++-
 drivers/net/bnxt/tf_core/tf_common.h    |  54 +++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c      |   2 +
 drivers/net/bnxt/tf_core/tf_core.h      |  55 ++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.c       |  70 ++++++++++++----
 drivers/net/bnxt/tf_core/tf_msg.h       |  15 ++++
 drivers/net/bnxt/tf_core/tf_resources.h |   5 +-
 drivers/net/bnxt/tf_core/tf_tbl.c       | 103 ++++++++++++++++++++++++
 8 files changed, 319 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index d342c695c..c04d1034a 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Broadcom
+ * Copyright(c) 2019-2020 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -27,7 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET,
+	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -81,6 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
+struct tf_tbl_type_get_bulk_input;
+struct tf_tbl_type_get_bulk_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -902,6 +905,8 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* When set to 1, indicates the entry is to be cleared on read */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -916,4 +921,32 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
+/* Input params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get apply to RX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+	/* When set to 1, indicates the get apply to TX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+	/* When set to 1, indicates the entry is to be cleared on read */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+	/* Type of the object to set */
+	uint32_t			 type;
+	/* Starting index to get from */
+	uint32_t			 start_index;
+	/* Number of entries to get */
+	uint32_t			 num_entries;
+	/* Host memory where data will be stored */
+	uint64_t			 host_addr;
+} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+
+/* Output params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
new file mode 100644
index 000000000..2aa4b8640
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_COMMON_H_
+#define _TF_COMMON_H_
+
+/* Helper to check the parms */
+#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "%s: session error\n", \
+				    tf_dir_2_str((parms)->dir)); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_TFP_SESSION(tfp) do { \
+		if ((tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#endif /* _TF_COMMON_H_ */
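For reference, a minimal usage sketch of the TF_CHECK_PARMS_SESSION helper
(illustrative only, not taken from the patch); the function name is
hypothetical and it assumes TFP_DRV_LOG, tf_dir_2_str() and a parms
structure carrying a dir field, as the macro expansion requires:

/* Hypothetical session-scoped API entry point showing where the
 * parameter-check helper sits.  The macro returns -EINVAL on a NULL
 * tfp/parms or a missing session/core_data, so the remainder of the
 * function only runs with a valid session.
 */
static int
tf_example_session_op(struct tf *tfp,
		      struct tf_get_tbl_entry_parms *parms)
{
	struct tf_session *tfs;

	/* Validates tfp, parms, tfp->session and session->core_data */
	TF_CHECK_PARMS_SESSION(tfp, parms);

	tfs = (struct tf_session *)(tfp->session->core_data);

	/* ... direction-specific processing using tfs and parms->dir ... */
	(void)tfs;

	return 0;
}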
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6e15a4c5c..a8236aec9 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -16,6 +16,8 @@
 #include "bitalloc.h"
 #include "bnxt.h"
 #include "rand.h"
+#include "tf_common.h"
+#include "hwrm_tf.h"
 
 static inline uint32_t SWAP_WORDS32(uint32_t val32)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index becc50c7f..96a1a794f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1165,7 +1165,7 @@ struct tf_get_tbl_entry_parms {
 	 */
 	uint8_t *data;
 	/**
-	 * [out] Entry size
+	 * [in] Entry size
 	 */
 	uint16_t data_sz_in_bytes;
 	/**
@@ -1188,6 +1188,59 @@ struct tf_get_tbl_entry_parms {
 int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
+/**
+ * tf_get_bulk_tbl_entry parameter definition
+ */
+struct tf_get_bulk_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Clear hardware entries on reads; only
+	 * supported for TF_TBL_TYPE_ACT_STATS_64
+	 */
+	bool clear_on_read;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [in] Host physical address where the data
+	 * will be copied to by the firmware.
+	 * Use the tfp_calloc() API and the mem_pa
+	 * field of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * Bulk get index table entry
+ *
+ * Used to retrieve a range of previously set index table entries.
+ *
+ * Reads and compares with the shadow table copy (if enabled) (only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_bulk_tbl_entry(struct tf *tfp,
+		     struct tf_get_bulk_tbl_entry_parms *parms);
+
 /**
  * @page exact_match Exact Match Table
  *
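For reference, a minimal caller sketch for the bulk get API above
(illustrative only, not taken from the patch).  It assumes an entry size
of 8 bytes purely for the example, and obtains a DMA-able buffer through
tfp_calloc() so that mem_pa can be passed as physical_mem_addr, as the
parameter description suggests:

/* Sketch: read 'num' consecutive TF_TBL_TYPE_ACT_STATS_64 entries,
 * clearing them on read.  The entry size used here is illustrative.
 */
static int
example_bulk_read_stats(struct tf *tfp, uint32_t start, uint16_t num)
{
	int rc;
	const uint16_t entry_sz = 8; /* illustrative size only */
	struct tfp_calloc_parms mem_parms;
	struct tf_get_bulk_tbl_entry_parms bparms = { 0 };

	mem_parms.nitems = num;
	mem_parms.size = entry_sz;
	mem_parms.alignment = 0;
	rc = tfp_calloc(&mem_parms);
	if (rc)
		return -ENOMEM;

	bparms.dir = TF_DIR_RX;
	bparms.type = TF_TBL_TYPE_ACT_STATS_64;
	bparms.clear_on_read = true;
	bparms.starting_idx = start;
	bparms.num_entries = num;
	bparms.entry_sz_in_bytes = entry_sz;
	bparms.physical_mem_addr = (uint64_t)(uintptr_t)mem_parms.mem_pa;

	rc = tf_get_bulk_tbl_entry(tfp, &bparms);

	/* On success the entries are available through mem_parms.mem_va */
	tfp_free(mem_parms.mem_va);
	return rc;
}

In the resulting HWRM request the direction occupies flags bit 0 and the
clear-on-read indication bit 1, so a TX read with clear_on_read set is
encoded as DIR_TX | CLEAR_ON_READ = 0x1 | 0x2 = 0x3.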
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c8f6b88d3..c755c8555 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1216,12 +1216,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 static int
-tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
 {
 	struct tfp_calloc_parms alloc_parms;
 	int rc;
@@ -1229,15 +1225,10 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 	/* Allocate session */
 	alloc_parms.nitems = 1;
 	alloc_parms.size = size;
-	alloc_parms.alignment = 0;
+	alloc_parms.alignment = 4096;
 	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate tcam dma entry, rc:%d\n",
-			    rc);
+	if (rc)
 		return -ENOMEM;
-	}
 
 	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
 	buf->va_addr = alloc_parms.mem_va;
@@ -1245,6 +1236,52 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 	return 0;
 }
 
+int
+tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *params)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_get_bulk_input req = { 0 };
+	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	int data_size = 0;
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16((params->dir) |
+		((params->clear_on_read) ?
+		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+
+	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	if (resp.size < data_size)
+		return -EINVAL;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+#define TF_BYTES_PER_SLICE(tfp) 12
+#define NUM_SLICES(tfp, bytes) \
+	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
+
 int
 tf_msg_tcam_entry_set(struct tf *tfp,
 		      struct tf_set_tcam_entry_parms *parms)
@@ -1282,9 +1319,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	} else {
 		/* use dma buffer */
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_get_dma_buf(&buf, data_size);
-		if (rc != 0)
-			return rc;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
 		data = buf.va_addr;
 		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
 	}
@@ -1303,8 +1340,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
+cleanup:
 	if (buf.va_addr != NULL)
 		tfp_free(buf.va_addr);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 89f7370cc..8d050c402 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -267,4 +267,19 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 			 uint8_t *data,
 			 uint32_t index);
 
+/**
+ * Sends a bulk get message for Table Type elements to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to table get bulk parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 05e131f8b..9b7f5a069 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -149,11 +149,10 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
+#define TF_RSVD_MIRROR_RX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
+#define TF_RSVD_MIRROR_TX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 17399a5b2..26313ed3c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
@@ -794,6 +795,7 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -915,6 +917,76 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Internal function to get a bulk of Table Entries. Supports all
+ * Table Types except TF_TBL_TYPE_EXT, which is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_bulk_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->starting_idx;
+
+	/*
+	 * Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->starting_idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
 #if (TF_SHADOW == 1)
 /**
  * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
@@ -1182,6 +1254,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -1663,6 +1736,36 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
+/* API defined in tf_core.h */
+int
+tf_get_bulk_tbl_entry(struct tf *tfp,
+		 struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type,
+				    strerror(-rc));
+	}
+
+	return rc;
+}
+
 /* API defined in tf_core.h */
 int
 tf_alloc_tbl_scope(struct tf *tfp,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 13/51] net/bnxt: update multi device design support
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (11 preceding siblings ...)
  2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
                           ` (37 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Implement the RM, Device (WH+) and Identifier modules.
- Update the Session module.
- Implement new HWRMs for RM direct messaging.
- Add new parameter check macros and clean up the header includes
  (e.g. tfp) so that bnxt.h is not directly included in the new modules.
- Add cfa_resource_types, required for RM design.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   2 +
 drivers/net/bnxt/tf_core/Makefile             |   1 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 291 ++++++------
 drivers/net/bnxt/tf_core/tf_common.h          |  24 +
 drivers/net/bnxt/tf_core/tf_core.c            | 286 +++++++++++-
 drivers/net/bnxt/tf_core/tf_core.h            |  12 +-
 drivers/net/bnxt/tf_core/tf_device.c          | 150 +++++-
 drivers/net/bnxt/tf_core/tf_device.h          |  79 +++-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  78 +++-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  79 ++--
 drivers/net/bnxt/tf_core/tf_identifier.c      | 142 +++++-
 drivers/net/bnxt/tf_core/tf_identifier.h      |  25 +-
 drivers/net/bnxt/tf_core/tf_msg.c             | 268 +++++++++--
 drivers/net/bnxt/tf_core/tf_msg.h             |  59 +++
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 434 ++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h          |  72 ++-
 drivers/net/bnxt/tf_core/tf_session.c         | 256 ++++++++++-
 drivers/net/bnxt/tf_core/tf_session.h         | 118 ++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   4 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  30 +-
 drivers/net/bnxt/tf_core/tf_tbl_type.h        |  95 ++--
 drivers/net/bnxt/tf_core/tf_tcam.h            |  14 +-
 22 files changed, 2120 insertions(+), 399 deletions(-)

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index a50cb261d..1f7df9d06 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,8 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_session.c',
+	'tf_core/tf_device.c',
 	'tf_core/tf_device_p4.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 7a3c325a6..2c02e29e7 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,6 +14,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index c0c1e754e..11e8892f4 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -12,6 +12,11 @@
 
 #ifndef _CFA_RESOURCE_TYPES_H_
 #define _CFA_RESOURCE_TYPES_H_
+/*
+ * This is the constant used to define invalid CFA
+ * resource types across all devices.
+ */
+#define CFA_RESOURCE_TYPE_INVALID 65535
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
@@ -58,209 +63,205 @@
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index 2aa4b8640..ec3bca835 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -51,4 +51,28 @@
 		} \
 	} while (0)
 
+
+#define TF_CHECK_PARMS1(parms) do {					\
+		if ((parms) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS2(parms1, parms2) do {				\
+		if ((parms1) == NULL || (parms2) == NULL) {		\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
+		if ((parms1) == NULL ||					\
+		    (parms2) == NULL ||					\
+		    (parms3) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
 #endif /* _TF_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index a8236aec9..81a88e211 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -85,7 +85,7 @@ tf_create_em_pool(struct tf_session *session,
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
 
 	if (rc != 0) {
 		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
@@ -231,7 +231,6 @@ tf_open_session(struct tf                    *tfp,
 		   TF_SESSION_NAME_MAX);
 
 	/* Initialize Session */
-	session->device_type = parms->device_type;
 	session->dev = NULL;
 	tf_rm_init(tfp);
 
@@ -276,7 +275,9 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		rc = tf_create_em_pool(session,
+				       (enum tf_dir)dir,
+				       TF_SESSION_EM_POOL_SIZE);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "EM Pool initialization failed\n");
@@ -313,6 +314,64 @@ tf_open_session(struct tf                    *tfp,
 	return -EINVAL;
 }
 
+int
+tf_open_session_new(struct tf *tfp,
+		    struct tf_open_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_open_session_parms oparms;
+
+	TF_CHECK_PARMS(tfp, parms);
+
+	/* Filter out any non-supported device types on the Core
+	 * side. It is assumed that the device is supported by the
+	 * firmware if the firmware session open succeeds.
+	 */
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
+		return -ENOTSUP;
+	}
+
+	/* Verify control channel and build the beginning of session_id */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	oparms.open_cfg = parms;
+
+	rc = tf_session_open_session(tfp, &oparms);
+	/* Logging handled by tf_session_open_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Session created, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return 0;
+}
+
 int
 tf_attach_session(struct tf *tfp __rte_unused,
 		  struct tf_attach_session_parms *parms __rte_unused)
@@ -341,6 +400,69 @@ tf_attach_session(struct tf *tfp __rte_unused,
 	return -1;
 }
 
+int
+tf_attach_session_new(struct tf *tfp,
+		      struct tf_attach_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_attach_session_parms aparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Verify control channel */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Verify 'attach' channel */
+	rc = sscanf(parms->attach_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device attach_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Prepare return value of session_id, using ctrl_chan_name
+	 * device values as it becomes the session id.
+	 */
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	aparms.attach_cfg = parms;
+	rc = tf_session_attach_session(tfp,
+				       &aparms);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Attached to session, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return rc;
+}
+
 int
 tf_close_session(struct tf *tfp)
 {
@@ -380,7 +502,7 @@ tf_close_session(struct tf *tfp)
 	if (tfs->ref_count == 0) {
 		/* Free EM pool */
 		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, dir);
+			tf_free_em_pool(tfs, (enum tf_dir)dir);
 
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
@@ -401,6 +523,39 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+int
+tf_close_session_new(struct tf *tfp)
+{
+	int rc;
+	struct tf_session_close_session_parms cparms = { 0 };
+	union tf_session_id session_id = { 0 };
+	uint8_t ref_count;
+
+	TF_CHECK_PARMS1(tfp);
+
+	cparms.ref_count = &ref_count;
+	cparms.session_id = &session_id;
+	rc = tf_session_close_session(tfp,
+				      &cparms);
+	/* Logging handled by tf_session_close_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    cparms.session_id->id,
+		    *cparms.ref_count);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    cparms.session_id->internal.domain,
+		    cparms.session_id->internal.bus,
+		    cparms.session_id->internal.device,
+		    cparms.session_id->internal.fw_session_id);
+
+	return rc;
+}
+
 /** insert EM hash entry API
  *
  *    returns:
@@ -539,10 +694,67 @@ int tf_alloc_identifier(struct tf *tfp,
 	return 0;
 }
 
-/** free identifier resource
- *
- * Returns success or failure code.
- */
+int
+tf_alloc_identifier_new(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_alloc_parms aparms;
+	uint16_t id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_ident_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.ident_type = parms->ident_type;
+	aparms.id = &id;
+	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->id = id;
+
+	return 0;
+}
+
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms)
 {
@@ -618,6 +830,64 @@ int tf_free_identifier(struct tf *tfp,
 	return 0;
 }
 
+int
+tf_free_identifier_new(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_ident_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.ident_type = parms->ident_type;
+	fparms.id = parms->id;
+	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
 int
 tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
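As a small worked example of the control-channel parsing used by the new
session open/attach paths above (illustrative only): the "%x:%x:%x.%d"
pattern matches a PCI-style domain:bus:slot.function string, so a
ctrl_chan_name such as "0000:03:02.0" would yield domain=0x0, bus=0x3,
slot=0x2 and device=0, with domain, bus and device then folded into
session_id.internal.

/* Standalone illustration of the same sscanf pattern (assumed input). */
#include <stdio.h>

int main(void)
{
	unsigned int domain, bus, slot;
	int device;
	int n = sscanf("0000:03:02.0", "%x:%x:%x.%d",
		       &domain, &bus, &slot, &device);

	printf("n=%d domain=%x bus=%x slot=%x device=%d\n",
	       n, domain, bus, slot, device);
	return 0;
}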
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 96a1a794f..74ed24e5a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -380,7 +380,7 @@ struct tf_session_resources {
 	 * The number of identifier resources requested for the session.
 	 * The index used is tf_identifier_type.
 	 */
-	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
 	/** [in] Requested Index Table resource counts
 	 *
 	 * The number of index table resources requested for the session.
@@ -480,6 +480,9 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+int tf_open_session_new(struct tf *tfp,
+			struct tf_open_session_parms *parms);
+
 struct tf_attach_session_parms {
 	/** [in] ctrl_chan_name
 	 *
@@ -542,6 +545,8 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
+int tf_attach_session_new(struct tf *tfp,
+			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -551,6 +556,7 @@ int tf_attach_session(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
+int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -602,6 +608,8 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
+int tf_alloc_identifier_new(struct tf *tfp,
+			    struct tf_alloc_identifier_parms *parms);
 
 /** free identifier resource
  *
@@ -613,6 +621,8 @@ int tf_alloc_identifier(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
+int tf_free_identifier_new(struct tf *tfp,
+			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 3b368313e..4c46cadc6 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,45 +6,169 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
-#include "bnxt.h"
 
 struct tf;
 
+/* Forward declarations */
+static int dev_unbind_p4(struct tf *tfp);
+
 /**
- * Device specific bind function
+ * Device specific bind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] shadow_copy
+ *   Flag controlling shadow copy DB creation
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp __rte_unused,
-	    struct tf_session_resources *resources __rte_unused,
-	    struct tf_dev_info *dev_info)
+dev_bind_p4(struct tf *tfp,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
+	int rc;
+	int frc;
+	struct tf_ident_cfg_parms ident_cfg;
+	struct tf_tbl_cfg_parms tbl_cfg;
+	struct tf_tcam_cfg_parms tcam_cfg;
+
 	/* Initialize the modules */
 
-	dev_info->ops = &tf_dev_ops_p4;
+	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
+	ident_cfg.cfg = tf_ident_p4;
+	ident_cfg.shadow_copy = shadow_copy;
+	ident_cfg.resources = resources;
+	rc = tf_ident_bind(tfp, &ident_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier initialization failure\n");
+		goto fail;
+	}
+
+	tbl_cfg.num_elements = TF_TBL_TYPE_MAX;
+	tbl_cfg.cfg = tf_tbl_p4;
+	tbl_cfg.shadow_copy = shadow_copy;
+	tbl_cfg.resources = resources;
+	rc = tf_tbl_bind(tfp, &tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Table initialization failure\n");
+		goto fail;
+	}
+
+	tcam_cfg.num_elements = TF_TCAM_TBL_TYPE_MAX;
+	tcam_cfg.cfg = tf_tcam_p4;
+	tcam_cfg.shadow_copy = shadow_copy;
+	tcam_cfg.resources = resources;
+	rc = tf_tcam_bind(tfp, &tcam_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM initialization failure\n");
+		goto fail;
+	}
+
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	dev_handle->ops = &tf_dev_ops_p4;
+
 	return 0;
+
+ fail:
+	/* Cleanup of already created modules */
+	frc = dev_unbind_p4(tfp);
+	if (frc)
+		return frc;
+
+	return rc;
+}
+
+/**
+ * Device specific unbind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+dev_unbind_p4(struct tf *tfp)
+{
+	int rc = 0;
+	bool fail = false;
+
+	/* Unbind all the supported modules. As this is only done on
+	 * close, we only report errors since everything has to be
+	 * cleaned up regardless.
+	 */
+	rc = tf_ident_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Identifier\n");
+		fail = true;
+	}
+
+	rc = tf_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Table Type\n");
+		fail = true;
+	}
+
+	rc = tf_tcam_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, TCAM\n");
+		fail = true;
+	}
+
+	if (fail)
+		return -1;
+
+	return rc;
 }
 
 int
 dev_bind(struct tf *tfp __rte_unused,
 	 enum tf_device_type type,
+	 bool shadow_copy,
 	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_info)
+	 struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
 		return dev_bind_p4(tfp,
+				   shadow_copy,
 				   resources,
-				   dev_info);
+				   dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
-			    "Device type not supported\n");
-		return -ENOTSUP;
+			    "No such device\n");
+		return -ENODEV;
 	}
 }
 
 int
-dev_unbind(struct tf *tfp __rte_unused,
-	   struct tf_dev_info *dev_handle __rte_unused)
+dev_unbind(struct tf *tfp,
+	   struct tf_dev_info *dev_handle)
 {
-	return 0;
+	switch (dev_handle->type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_unbind_p4(tfp);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "No such device\n");
+		return -ENODEV;
+	}
 }
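For reference, a minimal sketch of driving the bind/unbind pair above
(illustrative only; the real call sites live in the session module and
may differ).  'resources' would normally come from the open-session
parameters:

/* Sketch of the device bind/unbind lifecycle for a WH+ device. */
static int
example_bind_device(struct tf *tfp,
		    struct tf_session_resources *resources,
		    struct tf_dev_info *dev_handle)
{
	int rc;

	/* Bind the P4 (WH+) modules; no shadow copy in this sketch */
	rc = dev_bind(tfp, TF_DEVICE_TYPE_WH, false, resources, dev_handle);
	if (rc)
		return rc; /* -ENODEV for unknown device types */

	/* ... session runs, ops dispatched via dev_handle->ops ... */

	/* Tear down the module bindings on close */
	return dev_unbind(tfp, dev_handle);
}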
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 8b63ff178..6aeb6fedb 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -27,6 +27,7 @@ struct tf_session;
  * TF device information
  */
 struct tf_dev_info {
+	enum tf_device_type type;
 	const struct tf_dev_ops *ops;
 };
 
@@ -56,10 +57,12 @@ struct tf_dev_info {
  *
  * Returns
  *   - (0) if successful.
- *   - (-EINVAL) on failure.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_bind(struct tf *tfp,
 	     enum tf_device_type type,
+	     bool shadow_copy,
 	     struct tf_session_resources *resources,
 	     struct tf_dev_info *dev_handle);
 
@@ -71,6 +74,11 @@ int dev_bind(struct tf *tfp,
  *
  * [in] dev_handle
  *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_unbind(struct tf *tfp,
 	       struct tf_dev_info *dev_handle);
@@ -84,6 +92,44 @@ int dev_unbind(struct tf *tfp,
  * different device variants.
  */
 struct tf_dev_ops {
+	/**
+	 * Retrieves the MAX number of resource types that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] max_types
+	 *   Pointer to MAX number of types the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_max_types)(struct tf *tfp,
+				    uint16_t *max_types);
+
+	/**
+	 * Retrieves the WC TCAM slice information that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] slice_size
+	 *   Pointer to slice size the device supports
+	 *
+	 * [out] num_slices_per_row
+	 *   Pointer to number of slices per row the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
+					 uint16_t *slice_size,
+					 uint16_t *num_slices_per_row);
+
 	/**
 	 * Allocation of an identifier element.
 	 *
@@ -134,14 +180,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation parameters
+	 *   Pointer to table allocation parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
-				     struct tf_tbl_type_alloc_parms *parms);
+	int (*tf_dev_alloc_tbl)(struct tf *tfp,
+				struct tf_tbl_alloc_parms *parms);
 
 	/**
 	 * Free of a table type element.
@@ -153,14 +199,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type free parameters
+	 *   Pointer to table free parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_free_tbl_type)(struct tf *tfp,
-				    struct tf_tbl_type_free_parms *parms);
+	int (*tf_dev_free_tbl)(struct tf *tfp,
+			       struct tf_tbl_free_parms *parms);
 
 	/**
 	 * Searches for the specified table type element in a shadow DB.
@@ -175,15 +221,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation and search parameters
+	 *   Pointer to table allocation and search parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_search_tbl_type)
-			(struct tf *tfp,
-			struct tf_tbl_type_alloc_search_parms *parms);
+	int (*tf_dev_alloc_search_tbl)(struct tf *tfp,
+				       struct tf_tbl_alloc_search_parms *parms);
 
 	/**
 	 * Sets the specified table type element.
@@ -195,14 +240,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type set parameters
+	 *   Pointer to table set parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_set_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_set_parms *parms);
+	int (*tf_dev_set_tbl)(struct tf *tfp,
+			      struct tf_tbl_set_parms *parms);
 
 	/**
 	 * Retrieves the specified table type element.
@@ -214,14 +259,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type get parameters
+	 *   Pointer to table get parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_get_parms *parms);
+	int (*tf_dev_get_tbl)(struct tf *tfp,
+			       struct tf_tbl_get_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c3c4d1e05..c235976fe 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -3,19 +3,87 @@
  * All rights reserved.
  */
 
+#include <rte_common.h>
+#include <cfa_resource_types.h>
+
 #include "tf_device.h"
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
 
+/**
+ * Device specific function that retrieves the MAX number of HCAPI
+ * types the device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] max_types
+ *   Pointer to the MAX number of HCAPI types supported
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
+			uint16_t *max_types)
+{
+	if (max_types == NULL)
+		return -EINVAL;
+
+	*max_types = CFA_RESOURCE_TYPE_P4_LAST + 1;
+
+	return 0;
+}
+
+/**
+ * Device specific function that retrieves the WC TCAM slices the
+ * device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] slice_size
+ *   Pointer to the WC TCAM slice size
+ *
+ * [out] num_slices_per_row
+ *   Pointer to the WC TCAM row slice configuration
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
+			     uint16_t *slice_size,
+			     uint16_t *num_slices_per_row)
+{
+#define CFA_P4_WC_TCAM_SLICE_SIZE       12
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+
+	if (slice_size == NULL || num_slices_per_row == NULL)
+		return -EINVAL;
+
+	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
+	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+
+	return 0;
+}
+
+/**
+ * Truflow P4 device specific functions
+ */
 const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
-	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
-	.tf_dev_free_tbl_type = tf_tbl_type_free,
-	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
-	.tf_dev_set_tbl_type = tf_tbl_type_set,
-	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
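For reference, a minimal sketch of dispatching through the bound ops
table above (illustrative only), following the session/device lookup
pattern used by tf_alloc_identifier_new() earlier in this patch:

/* Sketch: query WC TCAM slice geometry via the per-device ops table. */
static int
example_query_wc_tcam(struct tf *tfp,
		      uint16_t *slice_size,
		      uint16_t *num_slices_per_row)
{
	int rc;
	struct tf_session *tfs;
	struct tf_dev_info *dev;

	/* Retrieve the session, then the bound device */
	rc = tf_session_get_session(tfp, &tfs);
	if (rc)
		return rc;

	rc = tf_session_get_device(tfs, &dev);
	if (rc)
		return rc;

	if (dev->ops->tf_dev_get_wc_tcam_slices == NULL)
		return -EOPNOTSUPP;

	/* For the P4 binding this resolves to tf_dev_p4_get_wc_tcam_slices */
	return dev->ops->tf_dev_get_wc_tcam_slices(tfp,
						   slice_size,
						   num_slices_per_row);
}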
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 84d90e3a7..5cd02b298 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,11 +12,12 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
@@ -24,41 +25,57 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
-	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
-	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+	/* CFA_RESOURCE_TYPE_P4_UPAR */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EPOC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_METADATA */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_CT_STATE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_PROF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_LAG */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EM_FBK */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_WC_FKB */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EXT */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 726d0b406..e89f9768b 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -6,42 +6,172 @@
 #include <rte_common.h>
 
 #include "tf_identifier.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Identifier DBs.
  */
-/* static void *ident_db[TF_DIR_MAX]; */
+static void *ident_db[TF_DIR_MAX];
 
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 int
-tf_ident_bind(struct tf *tfp __rte_unused,
-	      struct tf_ident_cfg *parms __rte_unused)
+tf_ident_bind(struct tf *tfp,
+	      struct tf_ident_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
+		db_cfg.rm_db = ident_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Identifier DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_ident_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized. Done silently to
+	 * allow for creation cleanup.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = ident_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		ident_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_ident_alloc(struct tf *tfp __rte_unused,
-	       struct tf_ident_alloc_parms *parms __rte_unused)
+	       struct tf_ident_alloc_parms *parms)
 {
+	int rc;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = (uint32_t *)&parms->id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type);
+		return rc;
+	}
+
 	return 0;
 }
 
 int
 tf_ident_free(struct tf *tfp __rte_unused,
-	      struct tf_ident_free_parms *parms __rte_unused)
+	      struct tf_ident_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = parms->id;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = ident_db[parms->dir];
+	fparms.db_index = parms->ident_type;
+	fparms.index = parms->id;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index b77c91b9d..1c5319b5e 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -12,21 +12,28 @@
  * The Identifier module provides processing of Identifiers.
  */
 
-struct tf_ident_cfg {
+struct tf_ident_cfg_parms {
 	/**
-	 * Number of identifier types in each of the configuration
-	 * arrays
+	 * [in] Number of identifier types in each of the
+	 * configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
-	 * TCAM configuration array
+	 * [in] Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Boolean controlling whether a shadow copy is requested.
 	 */
-	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+	bool shadow_copy;
+	/**
+	 * [in] Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Identifier allcoation parameter definition
+ * Identifier allocation parameter definition
  */
 struct tf_ident_alloc_parms {
 	/**
@@ -40,7 +47,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [out] Identifier allocated
 	 */
-	uint16_t id;
+	uint16_t *id;
 };
 
 /**
@@ -88,7 +95,7 @@ struct tf_ident_free_parms {
  *   - (-EINVAL) on failure.
  */
 int tf_ident_bind(struct tf *tfp,
-		  struct tf_ident_cfg *parms);
+		  struct tf_ident_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c755c8555..e08a96f23 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -6,15 +6,13 @@
 #include <inttypes.h>
 #include <stdbool.h>
 #include <stdlib.h>
-
-#include "bnxt.h"
-#include "tf_core.h"
-#include "tf_session.h"
-#include "tfp.h"
+#include <string.h>
 
 #include "tf_msg_common.h"
 #include "tf_msg.h"
-#include "hsi_struct_def_dpdk.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tfp.h"
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
@@ -140,6 +138,51 @@ tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
 	return rc;
 }
 
+/**
+ * Allocates a DMA buffer that can be used for message transfer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ *
+ * [in] size
+ *   Requested size of the buffer in bytes
+ *
+ * Returns:
+ *    0      - Success
+ *   -ENOMEM - Unable to allocate buffer, no memory
+ */
+static int
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
+{
+	struct tfp_calloc_parms alloc_parms;
+	int rc;
+
+	/* Allocate the DMA buffer */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = size;
+	alloc_parms.alignment = 4096;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc)
+		return -ENOMEM;
+
+	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
+	buf->va_addr = alloc_parms.mem_va;
+
+	return 0;
+}
+
+/**
+ * Frees a previously allocated DMA buffer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ */
+static void
+tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
+{
+	tfp_free(buf->va_addr);
+}
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -154,7 +197,7 @@ tf_msg_session_open(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
+	tfp_memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
 
 	parms.tf_type = HWRM_TF_SESSION_OPEN;
 	parms.req_data = (uint32_t *)&req;
@@ -870,6 +913,180 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+int
+tf_msg_session_resc_qcaps(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *query,
+			  enum tf_rm_resc_resv_strategy *resv_strategy)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_qcaps_input req = { 0 };
+	struct hwrm_tf_session_resc_qcaps_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf qcaps_buf = { 0 };
+	struct tf_rm_resc_req_entry *data;
+	int dma_size;
+
+	if (size == 0 || query == NULL || resv_strategy == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Resource QCAPS parameter error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffer */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&qcaps_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.qcaps_size = size;
+	req.qcaps_addr = qcaps_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: QCAPS message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].min = tfp_le_to_cpu_16(data[i].min);
+		query[i].max = tfp_le_to_cpu_16(data[i].max);
+	}
+
+	*resv_strategy = resp.flags &
+	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
+
+	tf_msg_free_dma_buf(&qcaps_buf);
+
+	return rc;
+}
+
+int
+tf_msg_session_resc_alloc(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *request,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_alloc_input req = { 0 };
+	struct hwrm_tf_session_resc_alloc_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf req_buf = { 0 };
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_req_entry *req_data;
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&req_buf, dma_size);
+	if (rc)
+		return rc;
+
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.req_size = size;
+
+	req_data = (struct tf_rm_resc_req_entry *)req_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		req_data[i].type = tfp_cpu_to_le_32(request[i].type);
+		req_data[i].min = tfp_cpu_to_le_16(request[i].min);
+		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
+	}
+
+	req.req_addr = req_buf.pa_addr;
+	req.resp_addr = resv_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
+		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
+		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+	}
+
+	tf_msg_free_dma_buf(&req_buf);
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1034,7 +1251,9 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
@@ -1216,26 +1435,6 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-static int
-tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
-{
-	struct tfp_calloc_parms alloc_parms;
-	int rc;
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = size;
-	alloc_parms.alignment = 4096;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc)
-		return -ENOMEM;
-
-	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
-	buf->va_addr = alloc_parms.mem_va;
-
-	return 0;
-}
-
 int
 tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 			  struct tf_get_bulk_tbl_entry_parms *params)
@@ -1323,12 +1522,14 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		if (rc)
 			goto cleanup;
 		data = buf.va_addr;
-		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
 	}
 
-	memcpy(&data[0], parms->key, key_bytes);
-	memcpy(&data[key_bytes], parms->mask, key_bytes);
-	memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, key_bytes);
+	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
+	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1343,8 +1544,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		goto cleanup;
 
 cleanup:
-	if (buf.va_addr != NULL)
-		tfp_free(buf.va_addr);
+	tf_msg_free_dma_buf(&buf);
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8d050c402..06f52ef00 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -6,8 +6,12 @@
 #ifndef _TF_MSG_H_
 #define _TF_MSG_H_
 
+#include <rte_common.h>
+#include <hsi_struct_def_dpdk.h>
+
 #include "tf_tbl.h"
 #include "tf_rm.h"
+#include "tf_rm_new.h"
 
 struct tf;
 
@@ -121,6 +125,61 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the query. Should be set to the max
+ *   elements for the device type
+ *
+ * [out] query
+ *   Pointer to an array of query elements
+ *
+ * [out] resv_strategy
+ *   Pointer to the reservation strategy
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_qcaps(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *query,
+			      enum tf_rm_resc_resv_strategy *resv_strategy);
+
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the req and resv arrays
+ *
+ * [in] req
+ *   Pointer to an array of request elements
+ *
+ * [out] resv
+ *   Pointer to an array of reservation elements filled in by firmware
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_alloc(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *request,
+			      struct tf_rm_resc_entry *resv);
+
 /**
  * Sends EM internal insert request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 51bb9ba3a..7cadb231f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -3,20 +3,18 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
 #include <rte_common.h>
 
-#include "tf_rm_new.h"
+#include <cfa_resource_types.h>
 
-/**
- * Resource query single entry. Used when accessing HCAPI RM on the
- * firmware.
- */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
-};
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_msg.h"
 
 /**
  * Generic RM Element data type that an RM DB is built upon.
@@ -27,7 +25,7 @@ struct tf_rm_element {
 	 * hcapi_type can be ignored. If Null then the element is not
 	 * valid for the device.
 	 */
-	enum tf_rm_elem_cfg_type type;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/**
 	 * HCAPI RM Type for the element.
@@ -50,53 +48,435 @@ struct tf_rm_element {
 /**
  * TF RM DB definition
  */
-struct tf_rm_db {
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
+
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
+
 	/**
 	 * The DB consists of an array of elements
 	 */
 	struct tf_rm_element *db;
 };
 
+
+/**
+ * Resource Manager Adjust of base index definitions.
+ */
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
+
+/**
+ * Adjust an index according to the allocation information.
+ *
+ * All resources are controlled in a 0 based pool. Some resources, by
+ * design, are not 0 based, e.g. Full Action Records (SRAM), thus they
+ * need to be adjusted before they are handed out.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
+{
+	int rc = 0;
+	uint32_t base_index;
+
+	base_index = db[db_index].alloc.entry.start;
+
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return rc;
+}
+
 int
-tf_rm_create_db(struct tf *tfp __rte_unused,
-		struct tf_rm_create_db_parms *parms __rte_unused)
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
+
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against db requirements */
+
+	/* Alloc request, alignment already set */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0; i < parms->num_elements; i++) {
+		/* Skip any non HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			req[i].type = parms->cfg[i].hcapi_type;
+			/* Check that we can get the full amount allocated */
+			if (parms->alloc_num[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[i].min = parms->alloc_num[i];
+				req[i].max = parms->alloc_num[i];
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_num[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		} else {
+			/* Skip the element */
+			req[i].type = CFA_RESOURCE_TYPE_INVALID;
+		}
+	}
+
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       parms->num_elements,
+				       req,
+				       resv);
+	if (rc)
+		return rc;
+
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db = (void *)cparms.mem_va;
+
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
+
+	db = rm_db->db;
+	for (i = 0; i < parms->num_elements; i++) {
+		/* If allocation failed for a single entry the DB
+		 * creation is considered a failure.
+		 */
+		if (parms->alloc_num[i] != resv[i].stride) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    i);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_num[i],
+				    resv[i].stride);
+			goto fail;
+		}
+
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+		db[i].alloc.entry.start = resv[i].start;
+		db[i].alloc.entry.stride = resv[i].stride;
+
+		/* Create pool */
+		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
+			     sizeof(struct bitalloc));
+		/* Alloc the pool, alignment already set */
+		cparms.nitems = pool_size;
+		cparms.size = sizeof(struct bitalloc);
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			return rc;
+		db[i].pool = (struct bitalloc *)cparms.mem_va;
+	}
+
+	rm_db->num_entries = i;
+	rm_db->dir = parms->dir;
+	parms->rm_db = (void *)rm_db;
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
 	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
 
 int
 tf_rm_free_db(struct tf *tfp __rte_unused,
-	      struct tf_rm_free_db_parms *parms __rte_unused)
+	      struct tf_rm_free_db_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int i;
+	struct tf_rm_new_db *rm_db;
+
+	/* Traverse the DB and clear each pool.
+	 * NOTE:
+	 *   Firmware is not cleared. It will be cleared on close only.
+	 */
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db->pool);
+
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
 
 int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int id;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -ENOMEM;
+	}
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				parms->index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -1;
+	}
+
+	return rc;
 }
 
 int
-tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; direction matters and that is not available here */
+	if (rc)
+		return rc;
+
+	return rc;
 }
 
 int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	parms->info = &rm_db->db[parms->db_index].alloc;
+
+	return rc;
 }
 
 int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 72dba0984..6d8234ddc 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
 #include "tf_core.h"
 #include "bitalloc.h"
@@ -32,13 +32,16 @@ struct tf;
  * MAX pool size of the Chip needs to be added to the tf_rm_elem_info
  * structure and several new APIs would need to be added to allow for
  * growth of a single TF resource type.
+ *
+ * The access functions do not check for NULL pointers as this is a
+ * support module, not called directly.
  */
 
 /**
  * Resource reservation single entry result. Used when accessing HCAPI
  * RM on the firmware.
  */
-struct tf_rm_entry {
+struct tf_rm_new_entry {
 	/** Starting index of the allocated resource */
 	uint16_t start;
 	/** Number of allocated elements */
@@ -52,12 +55,32 @@ struct tf_rm_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
-	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
-	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
 	TF_RM_TYPE_MAX
 };
 
+/**
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
+ */
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
+};
+
 /**
  * RM Element configuration structure, used by the Device to configure
  * how an individual TF type is configured in regard to the HCAPI RM
@@ -68,7 +91,7 @@ struct tf_rm_element_cfg {
 	 * RM Element config controls how the DB for that element is
 	 * processed.
 	 */
-	enum tf_rm_elem_cfg_type cfg;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/* If a HCAPI to TF type conversion is required then TF type
 	 * can be added here.
@@ -92,7 +115,7 @@ struct tf_rm_alloc_info {
 	 * In case of dynamic allocation support this would have
 	 * to be changed to linked list of tf_rm_entry instead.
 	 */
-	struct tf_rm_entry entry;
+	struct tf_rm_new_entry entry;
 };
 
 /**
@@ -104,17 +127,21 @@ struct tf_rm_create_db_parms {
 	 */
 	enum tf_dir dir;
 	/**
-	 * [in] Number of elements in the parameter structure
+	 * [in] Number of elements.
 	 */
 	uint16_t num_elements;
 	/**
-	 * [in] Parameter structure
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Allocation number array. Array size is num_elements.
 	 */
-	struct tf_rm_element_cfg *parms;
+	uint16_t *alloc_num;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -128,7 +155,7 @@ struct tf_rm_free_db_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -138,7 +165,7 @@ struct tf_rm_allocate_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -159,7 +186,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -168,7 +195,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t index;
+	uint16_t index;
 };
 
 /**
@@ -178,7 +205,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -191,7 +218,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] Pointer to flag that indicates the state of the query
 	 */
-	uint8_t *allocated;
+	int *allocated;
 };
 
 /**
@@ -201,7 +228,7 @@ struct tf_rm_get_alloc_info_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -221,7 +248,7 @@ struct tf_rm_get_hcapi_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -306,6 +333,7 @@ int tf_rm_free_db(struct tf *tfp,
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
 int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
@@ -317,7 +345,7 @@ int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
  *
  * Returns
  *   - (0) if successful.
- *   - (-EpINVAL) on failure.
+ *   - (-EINVAL) on failure.
  */
 int tf_rm_free(struct tf_rm_free_parms *parms);
 
@@ -365,4 +393,4 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index c74994546..1917f8100 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -3,29 +3,269 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
+#include <rte_common.h>
+
+#include "tf_session.h"
+#include "tf_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session *session = NULL;
+	struct tfp_calloc_parms cparms;
+	uint8_t fw_session_id;
+	union tf_session_id *session_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Open FW session and get a new session_id */
+	rc = tf_msg_session_open(tfp,
+				 parms->open_cfg->ctrl_chan_name,
+				 &fw_session_id);
+	if (rc) {
+		/* Log error */
+		if (rc == -EEXIST)
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
+		else
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
+
+		parms->open_cfg->session_id.id = TF_FW_SESSION_ID_INVALID;
+		return rc;
+	}
+
+	/* Allocate session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_info);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session = (struct tf_session_info *)cparms.mem_va;
+
+	/* Allocate core data for the session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session->core_data = cparms.mem_va;
+
+	/* Initialize Session and Device */
+	session = (struct tf_session *)tfp->session->core_data;
+	session->ver.major = 0;
+	session->ver.minor = 0;
+	session->ver.update = 0;
+
+	session_id = &parms->open_cfg->session_id;
+	session->session_id.internal.domain = session_id->internal.domain;
+	session->session_id.internal.bus = session_id->internal.bus;
+	session->session_id.internal.device = session_id->internal.device;
+	session->session_id.internal.fw_session_id = fw_session_id;
+	/* Return the allocated fw session id */
+	session_id->internal.fw_session_id = fw_session_id;
+
+	session->shadow_copy = parms->open_cfg->shadow_copy;
+
+	tfp_memcpy(session->ctrl_chan_name,
+		   parms->open_cfg->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	rc = dev_bind(tfp,
+		      parms->open_cfg->device_type,
+		      session->shadow_copy,
+		      &parms->open_cfg->resources,
+		      session->dev);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	/* Query for Session Config
+	 */
+	rc = tf_msg_session_qcfg(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup_close;
+	}
+
+	session->ref_count++;
+
+	return 0;
+
+ cleanup:
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
+	return rc;
+
+ cleanup_close:
+	tf_close_session(tfp);
+	return -EINVAL;
+}
+
+int
+tf_session_attach_session(struct tf *tfp __rte_unused,
+			  struct tf_session_attach_session_parms *parms __rte_unused)
+{
+	int rc = -EOPNOTSUPP;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	TFP_DRV_LOG(ERR,
+		    "Attach not yet supported, rc:%s\n",
+		    strerror(-rc));
+	return rc;
+}
+
+int
+tf_session_close_session(struct tf *tfp,
+			 struct tf_session_close_session_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs = NULL;
+	struct tf_dev_info *tfd;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (tfs->session_id.id == TF_SESSION_ID_INVALID) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid session id, unable to close, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Record the session we're closing so the caller knows the
+	 * details.
+	 */
+	*parms->session_id = tfs->session_id;
+
+	rc = tf_session_get_device(tfs, &tfd);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* In case we're attached, only the session client gets closed */
+	rc = tf_msg_session_close(tfp);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "FW Session close failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	tfs->ref_count--;
+
+	/* Final cleanup as we're last user of the session */
+	if (tfs->ref_count == 0) {
+		/* Unbind the device */
+		rc = dev_unbind(tfp, tfd);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "Device unbind failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		tfp_free(tfp->session->core_data);
+		tfp_free(tfp->session);
+		tfp->session = NULL;
+	}
+
+	return 0;
+}
+
 int
 tf_session_get_session(struct tf *tfp,
-		       struct tf_session *tfs)
+		       struct tf_session **tfs)
 {
+	int rc;
+
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		TFP_DRV_LOG(ERR, "Session not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	*tfs = (struct tf_session *)(tfp->session->core_data);
 
 	return 0;
 }
 
 int
 tf_session_get_device(struct tf_session *tfs,
-		      struct tf_device *tfd)
+		      struct tf_dev_info **tfd)
 {
+	int rc;
+
 	if (tfs->dev == NULL) {
-		TFP_DRV_LOG(ERR, "Device not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Device not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	*tfd = tfs->dev;
+
+	return 0;
+}
+
+int
+tf_session_get_fw_session_id(struct tf *tfp,
+			     uint8_t *fw_session_id)
+{
+	int rc;
+	struct tf_session *tfs = NULL;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
-	tfd = tfs->dev;
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*fw_session_id = tfs->session_id.internal.fw_session_id;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index b1cc7a4a7..92792518b 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -63,12 +63,7 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Device type, provided by tf_open_session().
-	 */
-	enum tf_device_type device_type;
-
-	/** Session ID, allocated by FW on tf_open_session().
-	 */
+	/** Session ID, allocated by FW on tf_open_session() */
 	union tf_session_id session_id;
 
 	/**
@@ -101,7 +96,7 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
-	/** Device */
+	/** Device handle */
 	struct tf_dev_info *dev;
 
 	/** Session HW and SRAM resources */
@@ -323,13 +318,97 @@ struct tf_session {
 	struct stack em_pool[TF_DIR_MAX];
 };
 
+/**
+ * Session open parameter definition
+ */
+struct tf_session_open_session_parms {
+	/**
+	 * [in] Pointer to the TF open session configuration
+	 */
+	struct tf_open_session_parms *open_cfg;
+};
+
+/**
+ * Session attach parameter definition
+ */
+struct tf_session_attach_session_parms {
+	/**
+	 * [in] Pointer to the TF attach session configuration
+	 */
+	struct tf_attach_session_parms *attach_cfg;
+};
+
+/**
+ * Session close parameter definition
+ */
+struct tf_session_close_session_parms {
+	uint8_t *ref_count;
+	union tf_session_id *session_id;
+};
+
 /**
  * @page session Session Management
  *
+ * @ref tf_session_open_session
+ *
+ * @ref tf_session_attach_session
+ *
+ * @ref tf_session_close_session
+ *
  * @ref tf_session_get_session
  *
  * @ref tf_session_get_device
+ *
+ * @ref tf_session_get_fw_session_id
+ */
+
+/**
+ * Creates a host session with a corresponding firmware session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
+int tf_session_open_session(struct tf *tfp,
+			    struct tf_session_open_session_parms *parms);
+
+/**
+ * Attaches a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session attach parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_attach_session(struct tf *tfp,
+			      struct tf_session_attach_session_parms *parms);
+
+/**
+ * Closes a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in/out] parms
+ *   Pointer to the session close parameters.
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_close_session(struct tf *tfp,
+			     struct tf_session_close_session_parms *parms);
 
 /**
  * Looks up the private session information from the TF session info.
@@ -338,14 +417,14 @@ struct tf_session {
  *   Pointer to TF handle
  *
  * [out] tfs
- *   Pointer to the session
+ *   Pointer to a pointer to the session
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_session(struct tf *tfp,
-			   struct tf_session *tfs);
+			   struct tf_session **tfs);
 
 /**
  * Looks up the device information from the TF Session.
@@ -354,13 +433,30 @@ int tf_session_get_session(struct tf *tfp,
  *   Pointer to TF handle
  *
  * [out] tfd
- *   Pointer to the device
+ *   Pointer to a pointer to the device
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_device(struct tf_session *tfs,
-			  struct tf_dev_info *tfd);
+			  struct tf_dev_info **tfd);
+
+/**
+ * Looks up the FW session id of the firmware connection for the
+ * requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] session_id
+ *   Pointer to the session_id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_fw_session_id(struct tf *tfp,
+				 uint8_t *fw_session_id);
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 7a5443678..a8bb0edab 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,8 +7,12 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+
+#include "tf_core.h"
 #include "stack.h"
 
+struct tf_session;
+
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
 	PT_LVL_1,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index a57a5ddf2..b79706f97 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -10,12 +10,12 @@
 struct tf;
 
 /**
- * Table Type DBs.
+ * Table DBs.
  */
 /* static void *tbl_db[TF_DIR_MAX]; */
 
 /**
- * Table Type Shadow DBs
+ * Table Shadow DBs
  */
 /* static void *shadow_tbl_db[TF_DIR_MAX]; */
 
@@ -30,49 +30,49 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_type_bind(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp __rte_unused,
+	    struct tf_tbl_cfg_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc(struct tf *tfp __rte_unused,
-		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_free(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_free_parms *parms __rte_unused)
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
-			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_set(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp __rte_unused,
+	   struct tf_tbl_set_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_get(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp __rte_unused,
+	   struct tf_tbl_get_parms *parms __rte_unused)
 {
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index c880b368b..11f2aa333 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -11,33 +11,39 @@
 struct tf;
 
 /**
- * The Table Type module provides processing of Internal TF table types.
+ * The Table module provides processing of Internal TF table types.
  */
 
 /**
- * Table Type configuration parameters
+ * Table configuration parameters
  */
-struct tf_tbl_type_cfg_parms {
+struct tf_tbl_cfg_parms {
 	/**
 	 * Number of table types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * Table Type element configuration array
 	 */
-	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling whether a shadow copy is requested.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Table Type allocation parameters
+ * Table allocation parameters
  */
-struct tf_tbl_type_alloc_parms {
+struct tf_tbl_alloc_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -53,9 +59,9 @@ struct tf_tbl_type_alloc_parms {
 };
 
 /**
- * Table Type free parameters
+ * Table free parameters
  */
-struct tf_tbl_type_free_parms {
+struct tf_tbl_free_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -75,7 +81,10 @@ struct tf_tbl_type_free_parms {
 	uint16_t ref_cnt;
 };
 
-struct tf_tbl_type_alloc_search_parms {
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -117,9 +126,9 @@ struct tf_tbl_type_alloc_search_parms {
 };
 
 /**
- * Table Type set parameters
+ * Table set parameters
  */
-struct tf_tbl_type_set_parms {
+struct tf_tbl_set_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -143,9 +152,9 @@ struct tf_tbl_type_set_parms {
 };
 
 /**
- * Table Type get parameters
+ * Table get parameters
  */
-struct tf_tbl_type_get_parms {
+struct tf_tbl_get_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -169,39 +178,39 @@ struct tf_tbl_type_get_parms {
 };
 
 /**
- * @page tbl_type Table Type
+ * @page tbl Table
  *
- * @ref tf_tbl_type_bind
+ * @ref tf_tbl_bind
  *
- * @ref tf_tbl_type_unbind
+ * @ref tf_tbl_unbind
  *
- * @ref tf_tbl_type_alloc
+ * @ref tf_tbl_alloc
  *
- * @ref tf_tbl_type_free
+ * @ref tf_tbl_free
  *
- * @ref tf_tbl_type_alloc_search
+ * @ref tf_tbl_alloc_search
  *
- * @ref tf_tbl_type_set
+ * @ref tf_tbl_set
  *
- * @ref tf_tbl_type_get
+ * @ref tf_tbl_get
  */
 
 /**
- * Initializes the Table Type module with the requested DBs. Must be
+ * Initializes the Table module with the requested DBs. Must be
  * invoked as the first thing before any of the access functions.
  *
  * [in] tfp
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table configuration parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_bind(struct tf *tfp,
-		     struct tf_tbl_type_cfg_parms *parms);
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
@@ -216,7 +225,7 @@ int tf_tbl_type_bind(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_unbind(struct tf *tfp);
+int tf_tbl_unbind(struct tf *tfp);
 
 /**
  * Allocates the requested table type from the internal RM DB.
@@ -225,14 +234,14 @@ int tf_tbl_type_unbind(struct tf *tfp);
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table allocation parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc(struct tf *tfp,
-		      struct tf_tbl_type_alloc_parms *parms);
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
 
 /**
  * Frees the requested table type and returns it to the DB. If shadow
@@ -244,14 +253,14 @@ int tf_tbl_type_alloc(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_free(struct tf *tfp,
-		     struct tf_tbl_type_free_parms *parms);
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
 
 /**
  * Supported if Shadow DB is configured. Searches the Shadow DB for
@@ -269,8 +278,8 @@ int tf_tbl_type_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc_search(struct tf *tfp,
-			     struct tf_tbl_type_alloc_search_parms *parms);
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
 
 /**
  * Configures the requested element by sending a firmware request which
@@ -280,14 +289,14 @@ int tf_tbl_type_alloc_search(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table set parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_set(struct tf *tfp,
-		    struct tf_tbl_type_set_parms *parms);
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
 
 /**
  * Retrieves the requested element by sending a firmware request to get
@@ -297,13 +306,13 @@ int tf_tbl_type_set(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table get parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_get(struct tf *tfp,
-		    struct tf_tbl_type_get_parms *parms);
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
 
 #endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 1420c9ed5..68c25eb1b 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -20,16 +20,22 @@ struct tf_tcam_cfg_parms {
 	 * Number of tcam types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * TCAM configuration array
 	 */
-	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tcam_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling whether a shadow copy is requested.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 14/51] net/bnxt: support two-level priority for TCAMs
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (12 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 13/51] net/bnxt: update multi device design support Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
                           ` (36 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Venkat Duvvuru, Randy Schacher

From: Shahaji Bhosle <sbhosle@broadcom.com>

Allow TCAM indexes to be allocated from either the top or the
bottom of the table. If the priority is 0, allocate from the
lowest TCAM indexes, i.e. from the top; for any other value,
allocate from the highest TCAM indexes, i.e. from the bottom.
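
For illustration only (not part of this patch), a minimal sketch of the
allocation policy described above, using the driver's existing bitalloc
helpers (ba_inuse()/ba_alloc_index()/ba_alloc()); the helper name and the
error handling are illustrative assumptions:

static int tcam_alloc_by_priority(struct bitalloc *pool, uint32_t priority)
{
	int index;

	if (priority) {
		/* Non-zero priority: take the highest free index,
		 * i.e. allocate from the bottom of the TCAM.
		 */
		for (index = pool->size - 1; index >= 0; index--)
			if (ba_inuse(pool, index) == BA_ENTRY_FREE)
				break;
		if (index < 0 || ba_alloc_index(pool, index) == BA_FAIL)
			return -ENOMEM;
		return index;
	}
	/* Priority 0: take the lowest free index, i.e. allocate from
	 * the top of the TCAM.
	 */
	index = ba_alloc(pool);
	return (index == BA_FAIL) ? -ENOMEM : index;
}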

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 36 ++++++++++++++++++++++++------
 drivers/net/bnxt/tf_core/tf_core.h |  4 +++-
 drivers/net/bnxt/tf_core/tf_em.c   |  6 ++---
 drivers/net/bnxt/tf_core/tf_tbl.c  |  2 +-
 4 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 81a88e211..eac57e7bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -893,7 +893,7 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
+	int index = 0;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
 
@@ -916,12 +916,34 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+	/*
+	 * priority  0: allocate from the top of the TCAM, i.e. highest priority
+	 * priority !0: allocate from the bottom of the TCAM, i.e. lowest priority
+	 */
+	if (parms->priority) {
+		for (index = session_pool->size - 1; index >= 0; index--) {
+			if (ba_inuse(session_pool,
+					  index) == BA_ENTRY_FREE) {
+				break;
+			}
+		}
+		if (ba_alloc_index(session_pool,
+				   index) == BA_FAIL) {
+			TFP_DRV_LOG(ERR,
+				    "%s: %s: ba_alloc index %d failed\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+				    index);
+			return -ENOMEM;
+		}
+	} else {
+		index = ba_alloc(session_pool);
+		if (index == BA_FAIL) {
+			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+			return -ENOMEM;
+		}
 	}
 
 	parms->idx = index;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 74ed24e5a..f1ef00b30 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -799,7 +799,9 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested (definition TBD)
+	 * [in] Priority of entry requested
+	 * 0: index from top, i.e. highest priority first
+	 * !0: index from bottom, i.e. lowest priority first
 	 */
 	uint32_t priority;
 	/**
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index fd1797e39..91cbc6299 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -479,8 +479,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG
-		   (ERR,
+		TFP_DRV_LOG(ERR,
 		   "dir:%d, EM entry index allocation failed\n",
 		   parms->dir);
 		return rc;
@@ -495,8 +494,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	PMD_DRV_LOG
-		   (ERR,
+	TFP_DRV_LOG(INFO,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 26313ed3c..4e236d56c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -1967,7 +1967,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
+		TFP_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 15/51] net/bnxt: add HCAPI interface support
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (13 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
                           ` (35 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

Add new hardware shim APIs to support multiple
device generations.
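
For illustration only (not part of this patch), a minimal sketch of how a
caller might use the layout-driven accessors added below
(hcapi_cfa_put_fields()/hcapi_cfa_get_field()); the layout pointer, field
IDs and values here are hypothetical placeholders; real layouts come from
the per-generation headers such as cfa_p40_hw.h:

#include "hcapi_cfa_defs.h"

static int example_program_entry(const struct hcapi_cfa_layout *layout)
{
	uint64_t obj[4] = { 0 };	/* HW entry image */
	struct hcapi_cfa_data_obj fields[] = {
		{ .field_id = 0, .val = 1 },	/* hypothetical field IDs */
		{ .field_id = 1, .val = 0x22 },
	};
	uint64_t val;
	int rc;

	/* Program the fields at the bit positions given by the layout */
	rc = hcapi_cfa_put_fields(obj, layout, fields, 2);
	if (rc)
		return rc;

	/* Read one field back from the entry image */
	return hcapi_cfa_get_field(obj, layout, 1, &val);
}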

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/Makefile           |  10 +
 drivers/net/bnxt/hcapi/hcapi_cfa.h        | 271 +++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 +++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h   | 672 ++++++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     | 399 +++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h     | 451 +++++++++++++++
 drivers/net/bnxt/meson.build              |   2 +
 drivers/net/bnxt/tf_core/tf_em.c          |  28 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  94 +--
 drivers/net/bnxt/tf_core/tf_tbl.h         |  24 +-
 10 files changed, 1970 insertions(+), 73 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h

diff --git a/drivers/net/bnxt/hcapi/Makefile b/drivers/net/bnxt/hcapi/Makefile
new file mode 100644
index 000000000..65cddd789
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/Makefile
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019-2020 Broadcom Limited.
+# All rights reserved.
+
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += hcapi/hcapi_cfa_p4.c
+
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_defs.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_p4.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/cfa_p40_hw.h
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
new file mode 100644
index 000000000..f60af4e56
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -0,0 +1,271 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _HCAPI_CFA_H_
+#define _HCAPI_CFA_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#include "hcapi_cfa_defs.h"
+
+#define SUPPORT_CFA_HW_P4  1
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL  1
+#endif
+
+/**
+ * Index used for the sram_entries field
+ */
+enum hcapi_cfa_resc_type_sram {
+	HCAPI_CFA_RESC_TYPE_SRAM_FULL_ACTION,
+	HCAPI_CFA_RESC_TYPE_SRAM_MCG,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_8B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_16B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	HCAPI_CFA_RESC_TYPE_SRAM_COUNTER_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_SPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_DPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_S_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_D_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_MAX
+};
+
+/**
+ * Index used for the hw_entries field in struct cfa_rm_db
+ */
+enum hcapi_cfa_resc_type_hw {
+	/* common HW resources for all chip variants */
+	HCAPI_CFA_RESC_TYPE_HW_L2_CTXT_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_EM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_EM_REC,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_METER_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_METER_INST,
+	HCAPI_CFA_RESC_TYPE_HW_MIRROR,
+	HCAPI_CFA_RESC_TYPE_HW_UPAR,
+	/* Wh+/SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_SP_TCAM,
+	/* Thor, SR2 common HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_FKB,
+	/* SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_TBL_SCOPE,
+	HCAPI_CFA_RESC_TYPE_HW_L2_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH0,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH1,
+	HCAPI_CFA_RESC_TYPE_HW_METADATA,
+	HCAPI_CFA_RESC_TYPE_HW_CT_STATE,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_LAG_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_MAX
+};
+
+struct hcapi_cfa_key_result {
+	uint64_t bucket_mem_ptr;
+	uint8_t bucket_idx;
+};
+
+/* common CFA register access macros */
+#define CFA_REG(x)		OFFSETOF(cfa_reg_t, cfa_##x)
+
+#ifndef REG_WR
+#define REG_WR(_p, x, y)  (*((uint32_t volatile *)(x)) = (y))
+#endif
+#ifndef REG_RD
+#define REG_RD(_p, x)  (*((uint32_t volatile *)(x)))
+#endif
+#define CFA_REG_RD(_p, x)	\
+	REG_RD(0, (uint32_t)(_p)->base_addr + CFA_REG(x))
+#define CFA_REG_WR(_p, x, y)	\
+	REG_WR(0, (uint32_t)(_p)->base_addr + CFA_REG(x), y)
+
+
+/* Constants used by Resource Manager Registration*/
+#define RM_CLIENT_NAME_MAX_LEN          32
+
+/**
+ *  Resource Manager Data Structures used for resource requests
+ */
+struct hcapi_cfa_resc_req_entry {
+	uint16_t min;
+	uint16_t max;
+};
+
+struct hcapi_cfa_resc_req {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_req_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of
+	 * resources in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_req_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_req_db {
+	struct hcapi_cfa_resc_req rx;
+	struct hcapi_cfa_resc_req tx;
+};
+
+struct hcapi_cfa_resc_entry {
+	uint16_t start;
+	uint16_t stride;
+	uint16_t tag;
+};
+
+struct hcapi_cfa_resc {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of resources
+	 * in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_db {
+	struct hcapi_cfa_resc rx;
+	struct hcapi_cfa_resc tx;
+};
+
+/**
+ * This is the main data structure used by the CFA Resource
+ * Manager.  This data structure holds all the state and table
+ * management information.
+ */
+typedef struct hcapi_cfa_rm_data {
+	uint32_t dummy_data;
+} hcapi_cfa_rm_data_t;
+
+/* End RM support */
+
+struct hcapi_cfa_devops;
+
+struct hcapi_cfa_devinfo {
+	uint8_t global_cfg_data[CFA_GLOBAL_CFG_DATA_SZ];
+	struct hcapi_cfa_layout_tbl layouts;
+	struct hcapi_cfa_devops *devops;
+};
+
+int hcapi_cfa_dev_bind(enum hcapi_cfa_ver hw_ver,
+		       struct hcapi_cfa_devinfo *dev_info);
+
+int hcapi_cfa_key_compile_layout(struct hcapi_cfa_key_template *key_template,
+				 struct hcapi_cfa_key_layout *key_layout);
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data, uint16_t bitlen);
+int
+hcapi_cfa_action_compile_layout(struct hcapi_cfa_action_template *act_template,
+				struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_init_obj(uint64_t *act_obj,
+			      struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_compute_ptr(uint64_t *act_obj,
+				 struct hcapi_cfa_action_layout *act_layout,
+				 uint32_t base_ptr);
+
+int hcapi_cfa_action_hw_op(struct hcapi_cfa_hwop *op,
+			   uint8_t *act_tbl,
+			   struct hcapi_cfa_data *act_obj);
+int hcapi_cfa_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_rm_register_client(hcapi_cfa_rm_data_t *data,
+				 const char *client_name,
+				 int *client_id);
+int hcapi_cfa_rm_unregister_client(hcapi_cfa_rm_data_t *data,
+				   int client_id);
+int hcapi_cfa_rm_query_resources(hcapi_cfa_rm_data_t *data,
+				 int client_id,
+				 uint16_t chnl_id,
+				 struct hcapi_cfa_resc_req_db *req_db);
+int hcapi_cfa_rm_query_resources_one(hcapi_cfa_rm_data_t *data,
+				     int client_id,
+				     struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_reserve_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_release_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_initialize(hcapi_cfa_rm_data_t *data);
+
+#if SUPPORT_CFA_HW_P4
+
+int hcapi_cfa_p4_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxt_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxtrmp_hwop(struct hcapi_cfa_hwop *op,
+				      struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcam_hwop(struct hcapi_cfa_hwop *op,
+				 struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcamrmp_hwop(struct hcapi_cfa_hwop *op,
+				    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
+			       struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+#endif /* SUPPORT_CFA_HW_P4 */
+/**
+ *  HCAPI CFA device HW operation function callback definition
+ *  This is standardized function callback hook to install different
+ *  CFA HW table programming function callback.
+ */
+
+struct hcapi_cfa_tbl_cb {
+	/**
+	 * This function callback provides the functionality to read/write
+	 * HW table entry from a HW table.
+	 *
+	 * @param[in] op
+	 *   A pointer to the Hardware operation parameter
+	 *
+	 * @param[in] obj_data
+	 *   A pointer to the HW data object for the hardware operation
+	 *
+	 * @return
+	 *   0 for SUCCESS, negative value for FAILURE
+	 */
+	int (*hwop_cb)(struct hcapi_cfa_hwop *op,
+		       struct hcapi_cfa_data *obj_data);
+};
+
+#endif  /* HCAPI_CFA_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
new file mode 100644
index 000000000..39afd4dbc
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
@@ -0,0 +1,92 @@
+/*
+ *   Copyright(c) 2019-2020 Broadcom Limited.
+ *   All rights reserved.
+ */
+
+#include "bitstring.h"
+#include "hcapi_cfa_defs.h"
+#include <errno.h>
+#include "assert.h"
+
+/* HCAPI CFA common PUT APIs */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val)
+{
+	assert(layout);
+
+	if (field_id > layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		bs_put_msb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	else
+		bs_put_lsb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	return 0;
+}
+
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz)
+{
+	int i;
+	uint16_t bitpos;
+	uint8_t bitlen;
+	uint16_t field_id;
+
+	assert(layout);
+	assert(field_tbl);
+
+	if (layout->is_msb_order) {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id > layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_msb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	} else {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id > layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_lsb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	}
+	return 0;
+}
+
+/* HCAPI CFA common GET APIs */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id,
+			uint64_t *val)
+{
+	assert(layout);
+	assert(val);
+
+	if (field_id > layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		*val = bs_get_msb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	else
+		*val = bs_get_lsb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	return 0;
+}
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
new file mode 100644
index 000000000..ea8d99d01
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -0,0 +1,672 @@
+
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+/*!
+ *   \file
+ *   \brief Exported functions for CFA HW programming
+ */
+#ifndef _HCAPI_CFA_DEFS_H_
+#define _HCAPI_CFA_DEFS_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#define SUPPORT_CFA_HW_ALL 0
+#define SUPPORT_CFA_HW_P4  1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+
+#define CFA_BITS_PER_BYTE (8)
+#define __CFA_ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
+#define CFA_ALIGN(x, a) __CFA_ALIGN_MASK(x, (a) - 1)
+#define CFA_ALIGN_128(x) CFA_ALIGN(x, 128)
+#define CFA_ALIGN_32(x) CFA_ALIGN(x, 32)
+
+#define NUM_WORDS_ALIGN_32BIT(x)                                               \
+	(CFA_ALIGN_32(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+#define NUM_WORDS_ALIGN_128BIT(x)                                              \
+	(CFA_ALIGN_128(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+
+#define CFA_GLOBAL_CFG_DATA_SZ (100)
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL (1)
+#endif
+
+#include "hcapi_cfa_p4.h"
+#define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+#define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
+#define CFA_PROF_MAX_KEY_CFG_SZ sizeof(struct cfa_p4_prof_key_cfg)
+#define CFA_KEY_MAX_FIELD_CNT 41
+#define CFA_ACT_MAX_TEMPLATE_SZ sizeof(struct cfa_p4_action_template)
+
+/**
+ * CFA HW version definition
+ */
+enum hcapi_cfa_ver {
+	HCAPI_CFA_P40 = 0, /**< CFA phase 4.0 */
+	HCAPI_CFA_P45 = 1, /**< CFA phase 4.5 */
+	HCAPI_CFA_P58 = 2, /**< CFA phase 5.8 */
+	HCAPI_CFA_P59 = 3, /**< CFA phase 5.9 */
+	HCAPI_CFA_PMAX = 4
+};
+
+/**
+ * CFA direction definition
+ */
+enum hcapi_cfa_dir {
+	HCAPI_CFA_DIR_RX = 0, /**< Receive */
+	HCAPI_CFA_DIR_TX = 1, /**< Transmit */
+	HCAPI_CFA_DIR_MAX = 2
+};
+
+/**
+ * CFA HW OPCODE definition
+ */
+enum hcapi_cfa_hwops {
+	HCAPI_CFA_HWOPS_PUT, /**< Write to HW operation */
+	HCAPI_CFA_HWOPS_GET, /**< Read from HW operation */
+	HCAPI_CFA_HWOPS_ADD, /**< For operations which require more than simple
+			      * writes to HW, this operation is used. The
+			      * distinction with this operation when compared
+			      * to the PUT ops is that this operation is used
+			      * in conjunction with the HCAPI_CFA_HWOPS_DEL
+			      * op to remove the operations issued by the
+			      * ADD OP.
+			      */
+	HCAPI_CFA_HWOPS_DEL, /**< This issues operations to clear the hardware.
+			      * This operation is used in conjunction
+			      * with the HCAPI_CFA_HWOPS_ADD op and is the
+			      * way to undo/clear the ADD op.
+			      */
+	HCAPI_CFA_HWOPS_MAX
+};
+
+/**
+ * CFA HW KEY CONTROL OPCODE definition
+ */
+enum hcapi_cfa_key_ctrlops {
+	HCAPI_CFA_KEY_CTRLOPS_INSERT, /**< insert control bits */
+	HCAPI_CFA_KEY_CTRLOPS_STRIP, /**< strip control bits */
+	HCAPI_CFA_KEY_CTRLOPS_MAX
+};
+
+/**
+ * CFA HW field structure definition
+ */
+struct hcapi_cfa_field {
+	/** [in] Starting bit position of the HW field within a HW table
+	 *  entry.
+	 */
+	uint16_t bitpos;
+	/** [in] Number of bits for the HW field. */
+	uint8_t bitlen;
+};
+
+/**
+ * CFA HW table entry layout structure definition
+ */
+struct hcapi_cfa_layout {
+	/** [out] Bit order of layout */
+	bool is_msb_order;
+	/** [out] Size in bits of entry */
+	uint32_t total_sz_in_bits;
+	/** [out] data pointer of the HW layout fields array */
+	const struct hcapi_cfa_field *field_array;
+	/** [out] number of HW field entries in the HW layout field array */
+	uint32_t array_sz;
+};
+
+/**
+ * CFA HW data object definition
+ */
+struct hcapi_cfa_data_obj {
+	/** [in] HW field identifier. Used as an index to a HW table layout */
+	uint16_t field_id;
+	/** [in] Value of the HW field */
+	uint64_t val;
+};
+
+/**
+ * CFA HW definition
+ */
+struct hcapi_cfa_hw {
+	/** [in] HW table base address for the operation with optional device
+	 *  handle. For on-chip HW table operation, this is the either the TX
+	 *  or RX CFA HW base address. For off-chip table, this field is the
+	 *  base memory address of the off-chip table.
+	 */
+	uint64_t base_addr;
+	/** [in] Optional opaque device handle. It is generally used to access
+	 *  an GRC register space through PCIE BAR and passed to the BAR memory
+	 *  accessor routine.
+	 */
+	void *handle;
+};
+
+/**
+ * CFA HW operation definition
+ *
+ */
+struct hcapi_cfa_hwop {
+	/** [in] HW opcode */
+	enum hcapi_cfa_hwops opcode;
+	/** [in] CFA HW information used by accessor routines.
+	 */
+	struct hcapi_cfa_hw hw;
+};
+
+/**
+ * CFA HW data structure definition
+ */
+struct hcapi_cfa_data {
+	/** [in] physical offset to the HW table for the data to be
+	 *  written to.  If this is an array of registers, this is the
+	 *  index into the array of registers.  For writing keys, this
+	 *  is the byte offset into the memory where the key should be
+	 *  written.
+	 */
+	union {
+		uint32_t index;
+		uint32_t byte_offset;
+	} u;
+	/** [in] HW data buffer pointer */
+	uint8_t *data;
+	/** [in] HW data mask buffer pointer */
+	uint8_t *data_mask;
+	/** [in] size of the HW data buffer in bytes */
+	uint16_t data_sz;
+};
+
+/*********************** Truflow start ***************************/
+enum hcapi_cfa_pg_tbl_lvl {
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
+};
+
+enum hcapi_cfa_em_table_type {
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
+};
+
+struct hcapi_cfa_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct hcapi_cfa_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct hcapi_cfa_em_page_tbl    pg_tbl[TF_PT_LVL_MAX];
+};
+
+struct hcapi_cfa_em_ctx_mem_info {
+	struct hcapi_cfa_em_table		em_tables[TF_MAX_TABLE];
+};
+
+/*********************** Truflow end ****************************/
+
+/**
+ * CFA HW key table definition
+ *
+ * Applicable to EEM and off-chip EM table only.
+ */
+struct hcapi_cfa_key_tbl {
+	/** [in] For EEM, this is the KEY0 base mem pointer. For off-chip EM,
+	 *  this is the base mem pointer of the key table.
+	 */
+	uint8_t *base0;
+	/** [in] total size of the key table in bytes. For EEM, this size is
+	 *  same for both KEY0 and KEY1 table.
+	 */
+	uint32_t size;
+	/** [in] number of key buckets, applicable for newer chips */
+	uint32_t num_buckets;
+	/** [in] For EEM, this is the KEY1 base mem pointer. For off-chip EM,
+	 *  this is the key record memory base pointer within the key table,
+	 *  applicable for newer chips
+	 */
+	uint8_t *base1;
+};
+
+/**
+ * CFA HW key buffer definition
+ */
+struct hcapi_cfa_key_obj {
+	/** [in] pointer to the key data buffer */
+	uint32_t *data;
+	/** [in] buffer len in bits */
+	uint32_t len;
+	/** [in] Pointer to the key layout */
+	struct hcapi_cfa_key_layout *layout;
+};
+
+/**
+ * CFA HW key data definition
+ */
+struct hcapi_cfa_key_data {
+	/** [in] For on-chip key table, it is the offset in units of the smallest
+	 *  key. For off-chip key table, it is the byte offset relative
+	 *  to the key record memory base.
+	 */
+	uint32_t offset;
+	/** [in] HW key data buffer pointer */
+	uint8_t *data;
+	/** [in] size of the key in bytes */
+	uint16_t size;
+};
+
+/**
+ * CFA HW key location definition
+ */
+struct hcapi_cfa_key_loc {
+	/** [out] on-chip EM bucket offset or off-chip EM bucket mem pointer */
+	uint64_t bucket_mem_ptr;
+	/** [out] index within the EM bucket */
+	uint8_t bucket_idx;
+};
+
+/**
+ * CFA HW layout table definition
+ */
+struct hcapi_cfa_layout_tbl {
+	/** [out] data pointer to an array of fixed-format layouts supported.
+	 *  The index to the array is the CFA HW table ID
+	 */
+	const struct hcapi_cfa_layout *tbl;
+	/** [out] number of fix formatted layouts in the layout array */
+	uint16_t num_layouts;
+};
+
+/**
+ * Key template consists of key fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_key_template {
+	/** [in] key field enable field array, set 1 to the corresponding
+	 *  field enable to make a field valid
+	 */
+	uint8_t field_en[CFA_KEY_MAX_FIELD_CNT];
+	/** [in] Identifies if the key template is for TCAM. If false, the
+	 *  key template is for EM. This field is mandatory for devices that
+	 *  only support fixed key formats.
+	 */
+	bool is_wc_tcam_key;
+};
+
+/**
+ * key layout consist of field array, key bitlen, key ID, and other meta data
+ * pertain to a key
+ */
+struct hcapi_cfa_key_layout {
+	/** [out] key layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual key size in number of bits */
+	uint16_t bitlen;
+	/** [out] key identifier; this field is only valid for devices
+	 *  that support fixed key formats
+	 */
+	uint16_t id;
+	/** [out] Identifies whether the key layout is a WC TCAM key */
+	bool is_wc_tcam_key;
+	/** [out] total slices size, valid for WC TCAM key only. It can be
+	 *  used by the user to determine the total size of WC TCAM key slices
+	 *  in bytes.
+	 */
+	uint16_t slices_size;
+};
+
+/**
+ * key layout memory contents
+ */
+struct hcapi_cfa_key_layout_contents {
+	/** key layouts */
+	struct hcapi_cfa_key_layout key_layout;
+
+	/** layout */
+	struct hcapi_cfa_layout layout;
+
+	/** fields */
+	struct hcapi_cfa_field field_array[CFA_KEY_MAX_FIELD_CNT];
+};
+
+/**
+ * Action template consists of action fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_action_template {
+	/** [in] CFA version for the action template */
+	enum hcapi_cfa_ver hw_ver;
+	/** [in] action field enable field array, set 1 to the corresponding
+	 *  field enable to make a field valid
+	 */
+	uint8_t data[CFA_ACT_MAX_TEMPLATE_SZ];
+};
+
+/**
+ * action layout consist of field array, action wordlen and action format ID
+ */
+struct hcapi_cfa_action_layout {
+	/** [in] action identifier */
+	uint16_t id;
+	/** [out] action layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual action record size in number of bits */
+	uint16_t wordlen;
+};
+
+/**
+ *  \defgroup CFA_HCAPI_PUT_API
+ *  HCAPI used for writing to the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to program a specified value to a
+ * HW field based on the provided programming layout.
+ *
+ * @param[in,out] obj_data
+ *   A data pointer to a CFA HW key/mask data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be programmed
+ *
+ * @param[in] val
+ *   Value of the HW field to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val);
+
+/**
+ * This API provides the functionality to program an array of field values
+ * with corresponding field IDs to a number of profiler sub-block fields
+ * based on the fixed profiler sub-block hardware programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * This API provides the functionality to write a value to a
+ * field within the bit position and bit length of a HW data
+ * object based on a provided programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to the HW data object to be written
+ *
+ * @param[in] layout
+ *   A pointer to the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[in] val
+ *   HW field value to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t val);
+
+/*@}*/
+
+/**
+ *  \defgroup CFA_HCAPI_GET_API
+ *  HCAPI used for writing to the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to get the word length of
+ * a layout object.
+ *
+ * @param[in] layout
+ *   A pointer to the HW layout
+ *
+ * @return
+ *   Word length of the layout object
+ */
+uint16_t hcapi_cfa_get_wordlen(const struct hcapi_cfa_layout *layout);
+
+/**
+ * The API provides the functionality to get bit offset and bit
+ * length information of a field from a programming layout.
+ *
+ * @param[in] layout
+ *   A pointer to the programming layout
+ *
+ * @param[out] slice
+ *   A pointer to the action offset info data structure
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_slice(const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, struct hcapi_cfa_field *slice);
+
+/**
+ * This API provides the functionality to read the value of a
+ * CFA HW field from CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA HW key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be programmed
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t *val);
+
+/**
+ * This API provides the functionality to read a number of
+ * HW fields from a CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in, out] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * Get a value to a specific location relative to a HW field
+ *
+ * This API provides the functionality to read HW field from
+ * a section of a HW data object identified by the bit position
+ * and bit length from a given programming layout in order to avoid
+ * reading the entire HW data object.
+ *
+ * @param[in] obj_data
+ *   A pointer to the data object to read from
+ *
+ * @param[in] layout
+ *   A pointer to the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t *val);
+
+/**
+ * This function is used to initialize a layout_contents structure
+ *
+ * The struct hcapi_cfa_key_layout is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * initialized.
+ *
+ * @param[in] layout_contents
+ *  A pointer to the layout contents to initialize
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int
+hcapi_cfa_init_key_layout_contents(struct hcapi_cfa_key_layout_contents *cont);
+
+/**
+ * This function is used to validate a key template
+ *
+ * The struct hcapi_cfa_key_template is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_template
+ *  A pointer to the key template contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int
+hcapi_cfa_is_valid_key_template(struct hcapi_cfa_key_template *key_template);
+
+/**
+ * This function is used to validate a key layout
+ *
+ * The struct hcapi_cfa_key_layout is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_layout
+ *  A pointer to the key layout contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_is_valid_key_layout(struct hcapi_cfa_key_layout *key_layout);
+
+/**
+ * This function is used to hash E/EM keys
+ *
+ *
+ * @param[in] key_data
+ *  A pointer to the key
+ *
+ * @param[in] bitlen
+ *  Number of bits in the key
+ *
+ * @return
+ *   CRC32 and Lookup3 hashes of the input key
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen);
+
+/**
+ * This function is used to execute an operation
+ *
+ *
+ * @param[in] op
+ *  Operation
+ *
+ * @param[in] key_tbl
+ *  Table
+ *
+ * @param[in] key_obj
+ *  Key data
+ *
+ * @param[in] key_loc
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc);
+
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset);
+#endif /* HCAPI_CFA_DEFS_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
new file mode 100644
index 000000000..ca0b1c923
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <string.h>
+#include "lookup3.h"
+#include "rand.h"
+
+#include "hcapi_cfa_defs.h"
+
+#define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
+#define TF_EM_PAGE_SIZE (1 << 21)
+uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
+uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
+bool hcapi_cfa_lkup_init;
+
+static inline uint32_t SWAP_WORDS32(uint32_t val32)
+{
+	return (((val32 & 0x0000ffff) << 16) |
+		((val32 & 0xffff0000) >> 16));
+}
+
+static void hcapi_cfa_seeds_init(void)
+{
+	int i;
+	uint32_t r;
+
+	if (hcapi_cfa_lkup_init)
+		return;
+
+	hcapi_cfa_lkup_init = true;
+
+	/* Initialize the lfsr */
+	rand_init();
+
+	/* RX and TX use the same seed values */
+	hcapi_cfa_lkup_lkup3_init_cfg = SWAP_WORDS32(rand32());
+
+	for (i = 0; i < HCAPI_CFA_LKUP_SEED_MEM_SIZE / 2; i++) {
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2] = r;
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2 + 1] = (r & 0x1);
+	}
+}
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t hcapi_cfa_crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "CRC2:");
+#endif
+	for (l = (len - 1); l >= 0; l--) {
+		crc = ucrc32(buf[l], crc);
+#ifdef TF_EEM_DEBUG
+		TFP_DRV_LOG(DEBUG,
+			    "%02X %08X %08X\n",
+			    (buf[l] & 0xff),
+			    crc,
+			    ~crc);
+#endif
+	}
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "\n");
+#endif
+
+	return ~crc;
+}
+
+static uint32_t hcapi_cfa_crc32_hash(uint8_t *key)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* Do byte-wise XOR of the 52-byte HASH key first. */
+	index = *key;
+	kptr--;
+
+	for (i = CFA_P4_EEM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = hcapi_cfa_lkup_em_seed_mem[index * 2];
+	val2 = hcapi_cfa_lkup_em_seed_mem[index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	val1 = hcapi_cfa_crc32i(~val1,
+		      (key - (CFA_P4_EEM_KEY_MAX_SIZE - 1)),
+		      CFA_P4_EEM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((const uint32_t *)(uintptr_t *)in_key) + 1,
+			 CFA_P4_EEM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 hcapi_cfa_lkup_lkup3_init_cfg);
+
+	return val1;
+}
+
+
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	uint64_t addr;
+
+	if (mem == NULL)
+		return 0;
+
+	/*
+	 * Use the level according to the num_level of page table
+	 */
+	level = mem->num_lvl - 1;
+
+	addr = (uintptr_t)mem->pg_tbl[level].pg_va_tbl[page];
+
+	return addr;
+}
+
+/** Approximation of the HCAPI hcapi_cfa_key_hash()
+ *
+ * Return: 64-bit hash with the CRC32 (key0) hash in the upper 32 bits
+ * and the Lookup3 (key1) hash in the lower 32 bits.
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen)
+{
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+
+	/*
+	 * Init the seeds if needed
+	 */
+	if (!hcapi_cfa_lkup_init)
+		hcapi_cfa_seeds_init();
+
+	key0_hash = hcapi_cfa_crc32_hash(((uint8_t *)key_data) +
+					      (bitlen / 8) - 1);
+
+	key1_hash = hcapi_cfa_lookup3_hash((uint8_t *)key_data);
+
+	return ((uint64_t)key0_hash) << 32 | (uint64_t)key1_hash;
+}
+
+static int hcapi_cfa_key_hw_op_put(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_get(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy(key_obj->data,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_add(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Is entry free?
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If this entry is already valid then report failure
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT))
+		return -1;
+
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_del(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Read entry
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If this is not a valid entry then report failure.
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT)) {
+		/*
+		 * If a key has been provided then verify the key matches
+		 * before deleting the entry.
+		 */
+		if (key_obj->data != NULL) {
+			if (memcmp(&table_entry,
+				   key_obj->data,
+				   key_obj->size) != 0)
+				return -1;
+		}
+	} else {
+		return -1;
+	}
+
+
+	/*
+	 * Delete entry
+	 */
+	memset((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       0,
+	       key_obj->size);
+
+	return rc;
+}
+
+
+/** Approximation of hcapi_cfa_key_hw_op()
+ *
+ *
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc)
+{
+	int rc = 0;
+
+	if (op == NULL ||
+	    key_tbl == NULL ||
+	    key_obj == NULL ||
+	    key_loc == NULL)
+		return -1;
+
+	op->hw.base_addr =
+		hcapi_get_table_page((struct hcapi_cfa_em_table *)
+				     key_tbl->base0,
+				     key_obj->offset);
+
+	if (op->hw.base_addr == 0)
+		return -1;
+
+	switch (op->opcode) {
+	case HCAPI_CFA_HWOPS_PUT: /**< Write to HW operation */
+		rc = hcapi_cfa_key_hw_op_put(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_GET: /**< Read from HW operation */
+		rc = hcapi_cfa_key_hw_op_get(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_ADD:
+		/**< For operations which require more than
+		 * simple writes to HW, this operation is used. The
+		 * distinction with this operation when compared
+		 * to the PUT ops is that this operation is used
+		 * in conjunction with the HCAPI_CFA_HWOPS_DEL
+		 * op to remove the operations issued by the
+		 * ADD OP.
+		 */
+
+		rc = hcapi_cfa_key_hw_op_add(op, key_obj);
+
+		break;
+	case HCAPI_CFA_HWOPS_DEL:
+		rc = hcapi_cfa_key_hw_op_del(op, key_obj);
+		break;
+	default:
+		rc = -1;
+		break;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
new file mode 100644
index 000000000..0661d6363
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -0,0 +1,451 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _HCAPI_CFA_P4_H_
+#define _HCAPI_CFA_P4_H_
+
+#include "cfa_p40_hw.h"
+
+/** CFA phase 4 fixed-format table (layout) ID definition
+ *
+ */
+enum cfa_p4_tbl_id {
+	CFA_P4_TBL_L2CTXT_TCAM = 0,
+	CFA_P4_TBL_L2CTXT_REMAP,
+	CFA_P4_TBL_PROF_TCAM,
+	CFA_P4_TBL_PROF_TCAM_REMAP,
+	CFA_P4_TBL_WC_TCAM,
+	CFA_P4_TBL_WC_TCAM_REC,
+	CFA_P4_TBL_WC_TCAM_REMAP,
+	CFA_P4_TBL_VEB_TCAM,
+	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_MAX
+};
+
+#define CFA_P4_PROF_MAX_KEYS 4
+enum cfa_p4_mac_sel_mode {
+	CFA_P4_MAC_SEL_MODE_FIRST = 0,
+	CFA_P4_MAC_SEL_MODE_LOWEST = 1,
+};
+
+struct cfa_p4_prof_key_cfg {
+	uint8_t mac_sel[CFA_P4_PROF_MAX_KEYS];
+#define CFA_P4_PROF_MAC_SEL_DMAC0 (1 << 0)
+#define CFA_P4_PROF_MAC_SEL_T_MAC0 (1 << 1)
+#define CFA_P4_PROF_MAC_SEL_OUTERMOST_MAC0 (1 << 2)
+#define CFA_P4_PROF_MAC_SEL_DMAC1 (1 << 3)
+#define CFA_P4_PROF_MAC_SEL_T_MAC1 (1 << 4)
+#define CFA_P4_PROF_MAC_OUTERMOST_MAC1 (1 << 5)
+	uint8_t pass_cnt;
+	enum cfa_p4_mac_sel_mode mode;
+};
+
+/**
+ * CFA action layout definition
+ */
+
+#define CFA_P4_ACTION_MAX_LAYOUT_SIZE 184
+
+/**
+ * Action object template structure
+ *
+ * Template structure presents data fields that are necessary to know
+ * at the beginning of Action Builder (AB) processing, i.e. before the
+ * AB compilation. One such example is a template that is
+ * flexible in size (Encap Record) and the presence of these fields
+ * allows for determining the template size as well as where the
+ * fields are located in the record.
+ *
+ * The template may also present fields that are not made visible to
+ * the caller by way of the action fields.
+ *
+ * Template fields also allow for additional checking on user visible
+ * fields. One such example could be the encap pointer behavior on a
+ * CFA_P4_ACT_OBJ_TYPE_ACT or CFA_P4_ACT_OBJ_TYPE_ACT_SRAM.
+ */
+struct cfa_p4_action_template {
+	/** Action Object type
+	 *
+	 * Controls the type of the Action Template
+	 */
+	enum {
+		/** Select this type to build an Action Record Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT,
+		/** Select this type to build an Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT,
+		/** Select this type to build a SRAM Action Record
+		 * Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT_SRAM,
+		/** Select this type to build a SRAM Action
+		 * Encapsulation Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ENCAP_SRAM,
+		/** Select this type to build a SRAM Action Modify
+		 * Object, with IPv4 capability.
+		 */
+		/* In case of Stingray the term Modify is used for the 'NAT
+		 * action'. Action builder is leveraged to fill in the NAT
+		 * object which then can be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_MODIFY_IPV4_SRAM,
+		/** Select this type to build a SRAM Action Source
+		 * Property Object.
+		 */
+		/* In case of Stingray this is not a 'pure' action record.
+		 * Action builder is leveraged to fill in the Source Property
+		 * object which can then be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM,
+		/** Select this type to build a SRAM Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT_SRAM,
+	} obj_type;
+
+	/** Action Control
+	 *
+	 * Controls the internals of the Action Template
+	 *
+	 * act is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_ACT)
+	 */
+	/*
+	 * Stat and encap are always inline for EEM as table scope
+	 * allocation does not allow for separate Stats allocation,
+	 * but has the xx_inline flags as to be forward compatible
+	 * with Stingray 2, always treated as TRUE.
+	 */
+	struct {
+		/** Set to CFA_HCAPI_TRUE to enable statistics
+		 */
+		uint8_t stat_enable;
+		/** Set to CFA_HCAPI_TRUE to enable statistics to be inlined
+		 */
+		uint8_t stat_inline;
+
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation
+		 */
+		uint8_t encap_enable;
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation to be inlined
+		 */
+		uint8_t encap_inline;
+	} act;
+
+	/** Modify Setting
+	 *
+	 * Controls the type of the Modify Action the template is
+	 * describing
+	 *
+	 * modify is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_MODIFY_SRAM)
+	 */
+	enum {
+		/** Set to enable Modify of Source IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_SOURCE_IPV4 = 0,
+		/** Set to enable Modify of Destination IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_DEST_IPV4
+	} modify;
+
+	/** Encap Control
+	 * Controls the type of encapsulation the template is
+	 * describing
+	 *
+	 * encap is valid when:
+	 * ((obj_type == CFA_P4_ACT_OBJ_TYPE_ACT) &&
+	 *   act.encap_enable) ||
+	 * ((obj_type == CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM)
+	 */
+	struct {
+		/* Direction is required as Stingray Encap on RX is
+		 * limited to l2 and VTAG only.
+		 */
+		/** Receive or Transmit direction
+		 */
+		uint8_t direction;
+		/** Set to CFA_HCAPI_TRUE to enable L2 capability in the
+		 *  template
+		 */
+		uint8_t l2_enable;
+		/** vtag controls the Encap Vector - VTAG Encoding, 4 bits
+		 *
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_0, default, no VLAN
+		 *      Tags applied
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_1, adds capability to
+		 *      set 1 VLAN Tag. Action Template compile adds
+		 *      the following field to the action object
+		 *      ::TF_ER_VLAN1
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_2, adds capability to
+		 *      set 2 VLAN Tags. Action Template compile adds
+		 *      the following fields to the action object
+		 *      ::TF_ER_VLAN1 and ::TF_ER_VLAN2
+		 * </ul>
+		 */
+		enum { CFA_P4_ACT_ENCAP_VTAGS_PUSH_0 = 0,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_1,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_2 } vtag;
+
+		/*
+		 * The remaining fields are NOT supported when
+		 * direction is RX and ((obj_type ==
+		 * CFA_P4_ACT_OBJ_TYPE_ACT) && act.encap_enable).
+		 * ab_compile_layout will perform the checking and
+		 * skip remaining fields.
+		 */
+		/** L3 Encap controls the Encap Vector - L3 Encoding,
+		 *  3 bits. Defines the type of L3 Encapsulation the
+		 *  template is describing.
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_L3_NONE, default, no L3
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV4, enables L3 IPv4
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV6, enables L3 IPv6
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8847, enables L3 MPLS
+		 *      8847 Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8848, enables L3 MPLS
+		 *      8848 Encapsulation.
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable any L3 encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_L3_NONE = 0,
+			/** Set to enable L3 IPv4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV4 = 4,
+			/** Set to enable L3 IPv6 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV6 = 5,
+			/** Set to enable L3 MPLS 8847 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8847 = 6,
+			/** Set to enable L3 MPLS 8848 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8848 = 7
+		} l3;
+
+#define CFA_P4_ACT_ENCAP_MAX_MPLS_LABELS 8
+		/** 1-8 labels, valid when
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8847) ||
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8848)
+		 *
+		 * MAX number of MPLS Labels 8.
+		 */
+		uint8_t l3_num_mpls_labels;
+
+		/** Set to CFA_HCAPI_TRUE to enable L4 capability in the
+		 * template.
+		 *
+		 * CFA_HCAPI_TRUE adds ::TF_EN_UDP_SRC_PORT and
+		 * ::TF_EN_UDP_DST_PORT to the template.
+		 */
+		uint8_t l4_enable;
+
+		/** Tunnel Encap controls the Encap Vector - Tunnel
+		 *  Encap, 3 bits. Defines the type of Tunnel
+		 *  encapsulation the template is describing
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NONE, default, no Tunnel
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL
+		 * <li> CFA_P4_ACT_ENCAP_TNL_VXLAN. NOTE: Expects
+		 *      l4_enable set to CFA_P4_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NGE. NOTE: Expects l4_enable
+		 *      set to CFA_P4_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NVGRE. NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GRE.NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable Tunnel header encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NONE = 0,
+			/** Set to enable Tunnel Generic Full header
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL,
+			/** Set to enable VXLAN header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_VXLAN,
+			/** Set to enable NGE (VXLAN2) header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NGE,
+			/** Set to enable NVGRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NVGRE,
+			/** Set to enable GRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GRE,
+			/** Set to enable Generic header after Tunnel
+			 * L4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4,
+			/** Set to enable Generic header after Tunnel
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		} tnl;
+
+		/** Number of bytes of generic tunnel header,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL)
+		 */
+		uint8_t tnl_generic_size;
+		/** Number of 32b words of nge options,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_NGE)
+		 */
+		uint8_t tnl_nge_op_len;
+		/* Currently not planned */
+		/* Custom Header */
+		/*	uint8_t custom_enable; */
+	} encap;
+};
+
+/**
+ * Enumeration of SRAM entry types, used for allocation of
+ * fixed SRAM entities. The memory model for CFA HCAPI
+ * determines if an SRAM entry type is supported.
+ */
+enum cfa_p4_action_sram_entry_type {
+	/* NOTE: Any additions to this enum must be reflected on FW
+	 * side as well.
+	 */
+
+	/** SRAM Action Record */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	/** SRAM Action Encap 8 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
+	/** SRAM Action Encap 16 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
+	/** SRAM Action Encap 64 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+	/** SRAM Action Modify IPv4 Source */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
+	/** SRAM Action Modify IPv4 Destination */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+	/** SRAM Action Source Properties SMAC */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
+	/** SRAM Action Source Properties SMAC IPv4 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV4,
+	/** SRAM Action Source Properties SMAC IPv6 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV6,
+	/** SRAM Action Statistics 64 Bits */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_STATS_64,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MAX
+};
+
+/**
+ * SRAM Action Record structure holding either an action index or an
+ * action ptr.
+ */
+union cfa_p4_action_sram_act_record {
+	/** SRAM Action idx specifies the offset of the SRAM
+	 * element within its SRAM Entry Type block. This
+	 * index can be written into, e.g., an L2 Context. Use
+	 * this field for all SRAM Action Record types except
+	 * SRAM Full Action records, which use act_ptr instead.
+	 */
+	uint16_t act_idx;
+	/** SRAM Full Action is special in that it needs an
+	 * action record pointer. This pointer can be written
+	 * into, e.g., a Wildcard TCAM entry.
+	 */
+	uint32_t act_ptr;
+};
+
+/**
+ * cfa_p4_action_param parameter definition
+ */
+struct cfa_p4_action_param {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	uint8_t dir;
+	/**
+	 * [in] type of the SRAM allocation
+	 */
+	enum cfa_p4_action_sram_entry_type type;
+	/**
+	 * [in] action record to set. The 'type' field selects the
+	 *	record definition to use for the passed-in record.
+	 */
+	union cfa_p4_action_sram_act_record record;
+	/**
+	 * [in] number of elements in act_data
+	 */
+	uint32_t act_size;
+	/**
+	 * [in] ptr to array of action data
+	 */
+	uint64_t *act_data;
+};
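
As an aside (not part of the patch), the parameter block above describes one
SRAM entry operation and would typically be filled per entry. A minimal sketch,
with a hypothetical index and a caller-owned buffer, and assuming a direction
value of 0 means receive:

    uint64_t stats_data[1] = { 0 };           /* caller-owned data for the entry  */
    struct cfa_p4_action_param parms = { 0 };

    parms.dir = 0;                            /* assumption: 0 == receive         */
    parms.type = CFA_P4_ACTION_SRAM_ENTRY_TYPE_STATS_64;
    parms.record.act_idx = 10;                /* hypothetical offset in the block */
    parms.act_size = 1;                       /* one 64-bit element in act_data   */
    parms.act_data = stats_data;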
+
+/**
+ * EEM Key entry sizes
+ */
+#define CFA_P4_EEM_KEY_MAX_SIZE 52
+#define CFA_P4_EEM_KEY_RECORD_SIZE 64
+
+/**
+ * cfa_p4_eem_entry_hdr
+ */
+struct cfa_p4_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is made up of two words;
+			  * this is the first word. The field holds multiple
+			  * subfields, so there is no suitable single name
+			  * for it; hence word1.
+			  */
+#define CFA_P4_EEM_ENTRY_VALID_SHIFT 31
+#define CFA_P4_EEM_ENTRY_VALID_MASK 0x80000000
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_SHIFT 30
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_MASK 0x40000000
+#define CFA_P4_EEM_ENTRY_STRENGTH_SHIFT 28
+#define CFA_P4_EEM_ENTRY_STRENGTH_MASK 0x30000000
+#define CFA_P4_EEM_ENTRY_RESERVED_SHIFT 17
+#define CFA_P4_EEM_ENTRY_RESERVED_MASK 0x0FFE0000
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT 8
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_MASK 0x0001FF00
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_SHIFT 3
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_MASK 0x000000F8
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_SHIFT 2
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK 0x00000004
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_SHIFT 1
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_MASK 0x00000002
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_SHIFT 0
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_MASK 0x00000001
+};
+
+/**
+ *  cfa_p4_eem_key_entry
+ */
+struct cfa_p4_eem_64b_entry {
+	/** Key is 448 bits (56 bytes) */
+	uint8_t key[CFA_P4_EEM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct cfa_p4_eem_entry_hdr hdr;
+};
+
+#endif /* _CFA_HW_P4_H_ */
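
One aside on the entry header just defined (not part of the patch): the word1
subfields are meant to be packed with the usual shift-and-mask idiom. A
hypothetical helper showing how the masks above line up:

    static uint32_t
    cfa_p4_eem_word1_sketch(uint32_t key_size, uint32_t act_rec_size,
                            int act_rec_internal)
    {
            uint32_t w1 = 0;

            w1 |= 1u << CFA_P4_EEM_ENTRY_VALID_SHIFT;
            w1 |= (key_size << CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT) &
                  CFA_P4_EEM_ENTRY_KEY_SIZE_MASK;
            w1 |= (act_rec_size << CFA_P4_EEM_ENTRY_ACT_REC_SIZE_SHIFT) &
                  CFA_P4_EEM_ENTRY_ACT_REC_SIZE_MASK;
            if (act_rec_internal)
                    w1 |= 1u << CFA_P4_EEM_ENTRY_ACT_REC_INT_SHIFT;
            return w1;
    }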
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 1f7df9d06..33e6ebd66 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,6 +43,8 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_rm_new.c',
 
+	'hcapi/hcapi_cfa_p4.c',
+
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
 	'tf_ulp/ulp_flow_db.c',
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index 91cbc6299..38f7fe419 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -189,7 +189,7 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
 		return NULL;
 
-	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
 		return NULL;
 
 	/*
@@ -325,7 +325,7 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key0_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key0_hash,
-					 KEY0_TABLE,
+					 TF_KEY0_TABLE,
 					 dir);
 
 	/*
@@ -334,23 +334,23 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key1_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key1_hash,
-					 KEY1_TABLE,
+					 TF_KEY1_TABLE,
 					 dir);
 
 	if (key0_entry == -EEXIST) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return -EEXIST;
 	} else if (key1_entry == -EEXIST) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return -EEXIST;
 	} else if (key0_entry == 0) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return 0;
 	} else if (key1_entry == 0) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return 0;
 	}
@@ -384,7 +384,7 @@ int tf_insert_eem_entry(struct tf_session *session,
 	int		   num_of_entry;
 
 	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[KEY0_TABLE].num_entries);
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
 
 	if (!mask)
 		return -EINVAL;
@@ -392,13 +392,13 @@ int tf_insert_eem_entry(struct tf_session *session,
 	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
 
 	key0_hash = tf_em_lkup_get_crc32_hash(session,
-				      &parms->key[num_of_entry] - 1,
-				      parms->dir);
+					      &parms->key[num_of_entry] - 1,
+					      parms->dir);
 	key0_index = key0_hash & mask;
 
 	key1_hash =
 	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
-					parms->key);
+				       parms->key);
 	key1_index = key1_hash & mask;
 
 	/*
@@ -420,14 +420,14 @@ int tf_insert_eem_entry(struct tf_session *session,
 				      key1_index,
 				      &index,
 				      &table_type) == 0) {
-		if (table_type == KEY0_TABLE) {
+		if (table_type == TF_KEY0_TABLE) {
 			TF_SET_GFID(gfid,
 				    key0_index,
-				    KEY0_TABLE);
+				    TF_KEY0_TABLE);
 		} else {
 			TF_SET_GFID(gfid,
 				    key1_index,
-				    KEY1_TABLE);
+				    TF_KEY1_TABLE);
 		}
 
 		/*
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 4e236d56c..35a7cfab5 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -285,8 +285,8 @@ tf_em_setup_page_table(struct tf_em_table *tbl)
 		tf_em_link_page_table(tp, tp_next, set_pte_last);
 	}
 
-	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
 }
 
 /**
@@ -317,7 +317,7 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 			uint64_t *num_data_pages)
 {
 	uint64_t lvl_data_size = page_size;
-	int lvl = PT_LVL_0;
+	int lvl = TF_PT_LVL_0;
 	uint64_t data_size;
 
 	*num_data_pages = 0;
@@ -326,10 +326,10 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 	while (lvl_data_size < data_size) {
 		lvl++;
 
-		if (lvl == PT_LVL_1)
+		if (lvl == TF_PT_LVL_1)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				page_size;
-		else if (lvl == PT_LVL_2)
+		else if (lvl == TF_PT_LVL_2)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				MAX_PAGE_PTRS(page_size) * page_size;
 		else
@@ -386,18 +386,18 @@ tf_em_size_page_tbls(int max_lvl,
 		     uint32_t page_size,
 		     uint32_t *page_cnt)
 {
-	if (max_lvl == PT_LVL_0) {
-		page_cnt[PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == PT_LVL_1) {
-		page_cnt[PT_LVL_1] = num_data_pages;
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
-	} else if (max_lvl == PT_LVL_2) {
-		page_cnt[PT_LVL_2] = num_data_pages;
-		page_cnt[PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
 	} else {
 		return;
 	}
@@ -434,7 +434,7 @@ tf_em_size_table(struct tf_em_table *tbl)
 	/* Determine number of page table levels and the number
 	 * of data pages needed to process the given eem table.
 	 */
-	if (tbl->type == RECORD_TABLE) {
+	if (tbl->type == TF_RECORD_TABLE) {
 		/*
 		 * For action records just a memory size is provided. Work
 		 * backwards to resolve to number of entries
@@ -480,9 +480,9 @@ tf_em_size_table(struct tf_em_table *tbl)
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
 		    num_data_pages,
-		    page_cnt[PT_LVL_0],
-		    page_cnt[PT_LVL_1],
-		    page_cnt[PT_LVL_2]);
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
 
 	return 0;
 }
@@ -508,7 +508,7 @@ tf_em_ctx_unreg(struct tf *tfp,
 	struct tf_em_table *tbl;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
@@ -544,7 +544,7 @@ tf_em_ctx_reg(struct tf *tfp,
 	int rc = 0;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
@@ -719,41 +719,41 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		return -EINVAL;
 	}
 	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
 		parms->rx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries =
 		0;
 
 	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
 		parms->tx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries =
 		0;
 
 	return 0;
@@ -1572,11 +1572,11 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 
 		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
 		rc = tf_msg_em_cfg(tfp,
-				   em_tables[KEY0_TABLE].num_entries,
-				   em_tables[KEY0_TABLE].ctx_id,
-				   em_tables[KEY1_TABLE].ctx_id,
-				   em_tables[RECORD_TABLE].ctx_id,
-				   em_tables[EFC_TABLE].ctx_id,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
@@ -1600,9 +1600,9 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 * actions related to a single table scope.
 		 */
 		rc = tf_create_tbl_pool_external(dir,
-					    tbl_scope_cb,
-					    em_tables[RECORD_TABLE].num_entries,
-					    em_tables[RECORD_TABLE].entry_size);
+				    tbl_scope_cb,
+				    em_tables[TF_RECORD_TABLE].num_entries,
+				    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
 			PMD_DRV_LOG(ERR,
 				    "%d TBL: Unable to allocate idx pools %s\n",
@@ -1672,7 +1672,7 @@ tf_set_tbl_entry(struct tf *tfp,
 		base_addr = tf_em_get_table_page(tbl_scope_cb,
 						 parms->dir,
 						 offset,
-						 RECORD_TABLE);
+						 TF_RECORD_TABLE);
 		if (base_addr == NULL) {
 			PMD_DRV_LOG(ERR,
 				    "dir:%d, Base address lookup failed\n",
@@ -1972,7 +1972,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
 
-		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
 			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
 			printf
 	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index a8bb0edab..ee8a14665 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -14,18 +14,18 @@
 struct tf_session;
 
 enum tf_pg_tbl_lvl {
-	PT_LVL_0,
-	PT_LVL_1,
-	PT_LVL_2,
-	PT_LVL_MAX
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
 };
 
 enum tf_em_table_type {
-	KEY0_TABLE,
-	KEY1_TABLE,
-	RECORD_TABLE,
-	EFC_TABLE,
-	MAX_TABLE
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
 };
 
 struct tf_em_page_tbl {
@@ -41,15 +41,15 @@ struct tf_em_table {
 	uint16_t			ctx_id;
 	uint32_t			entry_size;
 	int				num_lvl;
-	uint32_t			page_cnt[PT_LVL_MAX];
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
 	uint64_t			num_data_pages;
 	void				*l0_addr;
 	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
 };
 
 struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[MAX_TABLE];
+	struct tf_em_table		em_tables[TF_MAX_TABLE];
 };
 
 /** table scope control block content */
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 16/51] net/bnxt: add core changes for EM and EEM lookups
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (14 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
                           ` (34 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher, Venkat Duvvuru, Shahaji Bhosle

From: Randy Schacher <stuart.schacher@broadcom.com>

- Move External Exact Match and Exact Match to the device module,
  using HCAPI to add and delete entries.
- Make EM active through the device interface.

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Shahaji Bhosle <shahaji.bhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                 |   3 +-
 drivers/net/bnxt/hcapi/cfa_p40_hw.h       | 781 ++++++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 ---
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     |   2 +-
 drivers/net/bnxt/tf_core/Makefile         |   8 +
 drivers/net/bnxt/tf_core/hwrm_tf.h        |  24 +-
 drivers/net/bnxt/tf_core/tf_core.c        | 441 ++++++------
 drivers/net/bnxt/tf_core/tf_core.h        | 141 ++--
 drivers/net/bnxt/tf_core/tf_device.h      |  32 +
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   3 +
 drivers/net/bnxt/tf_core/tf_em.c          | 567 +++++-----------
 drivers/net/bnxt/tf_core/tf_em.h          |  72 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |  23 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |   4 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  25 +-
 drivers/net/bnxt/tf_core/tf_rm.c          | 156 +++--
 drivers/net/bnxt/tf_core/tf_tbl.c         | 437 +++++-------
 drivers/net/bnxt/tf_core/tf_tbl.h         |  49 +-
 18 files changed, 1627 insertions(+), 1233 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 delete mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 365627499..349b09c36 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -46,9 +46,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
-CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
+include $(SRCDIR)/hcapi/Makefile
 endif
 
 #
diff --git a/drivers/net/bnxt/hcapi/cfa_p40_hw.h b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
new file mode 100644
index 000000000..172706f12
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
@@ -0,0 +1,781 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_hw.h
+ *
+ * Description: header for SWE based on Truflow
+ *
+ * Date:  taken from 12/16/19 17:18:12
+ *
+ * Note:  This file was first generated using  tflib_decode.py.
+ *
+ *        Changes have been made due to lack of availability of XML for
+ *        additional tables at this time (EEM Record and union table fields).
+ *        Changes not autogenerated are noted in comments.
+ */
+
+#ifndef _CFA_P40_HW_H_
+#define _CFA_P40_HW_H_
+
+/**
+ * Valid TCAM entry. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Key type (pass). (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS 164
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS 2
+/**
+ * Tunnel HDR type. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS 160
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Number of VLAN tags in tunnel l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS 158
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Number of VLAN tags in l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS 156
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS    108
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS  48
+/**
+ * Tunnel Outer VLAN Tag ID. (for idx 3 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS  96
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS 12
+/**
+ * Tunnel Inner VLAN Tag ID. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS  84
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS 12
+/**
+ * Source Partition. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  80
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+/**
+ * Source Virtual I/F. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  8
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS    24
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS  48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS    12
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS  12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS    0
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS  12
+
+enum cfa_p40_prof_l2_ctxt_tcam_flds {
+	CFA_P40_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 1,
+	CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 2,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 3,
+	CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 4,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC1_FLD = 5,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_FLD = 6,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_FLD = 7,
+	CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_FLD = 8,
+	CFA_P40_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P40_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P40_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 167
+
+/**
+ * Valid entry. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_VALID_BITPOS        79
+#define CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS      1
+/**
+ * Reserved; program to 0. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS     78
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS   1
+/**
+ * PF Parif Number. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS     74
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS   4
+/**
+ * Number of VLAN Tags. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS    72
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS  2
+/**
+ * Dest. MAC Address.
+ */
+#define CFA_P40_ACT_VEB_TCAM_MAC_BITPOS          24
+#define CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS        48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_OVID_BITPOS         12
+#define CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS       12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_IVID_BITPOS         0
+#define CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS       12
+
+enum cfa_p40_act_veb_tcam_flds {
+	CFA_P40_ACT_VEB_TCAM_VALID_FLD = 0,
+	CFA_P40_ACT_VEB_TCAM_RESERVED_FLD = 1,
+	CFA_P40_ACT_VEB_TCAM_PARIF_IN_FLD = 2,
+	CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_FLD = 3,
+	CFA_P40_ACT_VEB_TCAM_MAC_FLD = 4,
+	CFA_P40_ACT_VEB_TCAM_OVID_FLD = 5,
+	CFA_P40_ACT_VEB_TCAM_IVID_FLD = 6,
+	CFA_P40_ACT_VEB_TCAM_MAX_FLD
+};
+
+#define CFA_P40_ACT_VEB_TCAM_TOTAL_NUM_BITS 80
+
+/**
+ * Entry is valid.
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS 18
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS 1
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS 2
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS 16
+/**
+ * for resolving TCAM/EM conflicts
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS 0
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS 2
+
+enum cfa_p40_lkup_tcam_record_mem_flds {
+	CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_FLD = 0,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_FLD = 1,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_FLD = 2,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_MAX_FLD
+};
+
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_TOTAL_NUM_BITS 19
+
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS 62
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_tpid_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_MAX = 0x3UL
+};
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS 60
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_pri_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_MAX = 0x3UL
+};
+/**
+ * Bypass Source Properties Lookup. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS 59
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS 1
+/**
+ * SP Record Pointer. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS 43
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS 16
+/**
+ * BD Action pointer passing enable. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS 42
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS 1
+/**
+ * Default VLAN TPID. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS 39
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS 3
+/**
+ * Allowed VLAN TPIDs. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS 33
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS 6
+/**
+ * Default VLAN PRI.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS 30
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS 3
+/**
+ * Allowed VLAN PRIs.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS 22
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS 8
+/**
+ * Partition.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS 18
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS 4
+/**
+ * Bypass Lookup.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS 17
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS 1
+
+/**
+ * L2 Context Remap Data:
+ *   Action bypass mode (1): {7'd0, prof_vnic[9:0]}
+ *     Note: byp_lkup_en should also be set.
+ *   Action bypass mode (0):
+ *     byp_lkup_en(0): {prof_func[6:0], l2_context[9:0]}
+ *     byp_lkup_en(1): {1'b0, act_rec_ptr[15:0]}
+ */
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS 12
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS 10
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS 7
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS 10
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS 16
+
+enum cfa_p40_prof_ctxt_remap_mem_flds {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_FLD = 0,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_FLD = 1,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_FLD = 2,
+	CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_FLD = 3,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_FLD = 4,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_FLD = 5,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_FLD = 6,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_FLD = 7,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_FLD = 8,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_FLD = 9,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_FLD = 10,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_FLD = 11,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_FLD = 12,
+	CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_FLD = 13,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ARP_FLD = 14,
+	CFA_P40_PROF_CTXT_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TOTAL_NUM_BITS 64
+
+/**
+ * Bypass action pointer look up (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS 37
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS 1
+/**
+ * Exact match search enable (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS 36
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS 1
+/**
+ * Exact match profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS 28
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS 8
+/**
+ * Exact match key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS 5
+/**
+ * Exact match key mask
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS 13
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS 10
+/**
+ * TCAM search enable
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS 12
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS 1
+/**
+ * TCAM profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS 4
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS 8
+/**
+ * TCAM key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS 4
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS 2
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS 16
+
+enum cfa_p40_prof_profile_tcam_remap_mem_flds {
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_FLD = 9,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TOTAL_NUM_BITS 38
+
+/**
+ * Valid TCAM entry (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS   80
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS 1
+/**
+ * Packet type (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS 76
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS 4
+/**
+ * Pass through CFA (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS 74
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS 2
+/**
+ * Aggregate error (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS 73
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS 1
+/**
+ * Profile function (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS 66
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS 7
+/**
+ * Reserved for future use. Set to 0.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS 57
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS 9
+/**
+ * non-tunnel(0)/tunneled(1) packet (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS 56
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS 1
+/**
+ * Tunnel L2 tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS 55
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L2 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS 53
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped tunnel L2 dest_type UC(0)/MC(2)/BC(3) (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS 51
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS 2
+/**
+ * Tunnel L2 1+ VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS 50
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * Tunnel L2 2 VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS 49
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS 1
+/**
+ * Tunnel L3 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS 48
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS 1
+/**
+ * Tunnel L3 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS 47
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS 1
+/**
+ * Tunnel L3 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS 43
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L3 header is IPV4 or IPV6. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS 42
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 src address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS 41
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 dest address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS 40
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * Tunnel L4 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS 39
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L4 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS 38
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS 1
+/**
+ * Tunnel L4 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS 34
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L4 header is UDP or TCP (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS 33
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS 1
+/**
+ * Tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS 32
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS 31
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS 1
+/**
+ * Tunnel header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS 27
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel header flags
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS 24
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS 3
+/**
+ * L2 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS 1
+/**
+ * L2 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS 22
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS 1
+/**
+ * L2 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS 20
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped L2 dest_type UC(0)/MC(2)/BC(3)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS 18
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS 2
+/**
+ * L2 header 1+ VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS 17
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * L2 header 2 VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS 1
+/**
+ * L3 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS 15
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS 1
+/**
+ * L3 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS 14
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS 1
+/**
+ * L3 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS 10
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS 4
+/**
+ * L3 header is IPV4 or IPV6.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS 9
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS 1
+/**
+ * L3 header IPV6 src address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS 8
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * L3 header IPV6 dest address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS 7
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * L4 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS 6
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS 1
+/**
+ * L4 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS 5
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS 1
+/**
+ * L4 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS 1
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS 4
+/**
+ * L4 header is UDP or TCP
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS 1
+
+enum cfa_p40_prof_profile_tcam_flds {
+	CFA_P40_PROF_PROFILE_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_RESERVED_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_FLD = 9,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_FLD = 10,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_FLD = 11,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_FLD = 12,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_FLD = 13,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_FLD = 14,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_FLD = 15,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_FLD = 16,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_FLD = 17,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_FLD = 18,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_FLD = 19,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_FLD = 20,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_FLD = 21,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_FLD = 22,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_FLD = 23,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_FLD = 24,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_FLD = 25,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_FLD = 26,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_FLD = 27,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_FLD = 28,
+	CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_FLD = 29,
+	CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_FLD = 30,
+	CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_FLD = 31,
+	CFA_P40_PROF_PROFILE_TCAM_L3_VALID_FLD = 32,
+	CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_FLD = 33,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_FLD = 34,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_FLD = 35,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_FLD = 36,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_FLD = 37,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_FLD = 38,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_FLD = 39,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_FLD = 40,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_FLD = 41,
+	CFA_P40_PROF_PROFILE_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_TOTAL_NUM_BITS 81
+
+/**
+ * CFA flexible key layout definition
+ */
+enum cfa_p40_key_fld_id {
+	CFA_P40_KEY_FLD_ID_MAX
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+/**
+ * Valid
+ */
+#define CFA_P40_EEM_KEY_TBL_VALID_BITPOS 0
+#define CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS 1
+
+/**
+ * L1 Cacheable
+ */
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS 1
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS 1
+
+/**
+ * Strength
+ */
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS 2
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS 2
+
+/**
+ * Key Size
+ */
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS 15
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS 9
+
+/**
+ * Record Size
+ */
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS 24
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS 5
+
+/**
+ * Action Record Internal
+ */
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS 29
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS 1
+
+/**
+ * External Flow Counter
+ */
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS 30
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS 31
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS 33
+
+/**
+ * EEM Key omitted - create using keybuilder
+ * Fields here cannot be larger than a uint64_t
+ */
+
+#define CFA_P40_EEM_KEY_TBL_TOTAL_NUM_BITS 64
+
+enum cfa_p40_eem_key_tbl_flds {
+	CFA_P40_EEM_KEY_TBL_VALID_FLD = 0,
+	CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_FLD = 1,
+	CFA_P40_EEM_KEY_TBL_STRENGTH_FLD = 2,
+	CFA_P40_EEM_KEY_TBL_KEY_SZ_FLD = 3,
+	CFA_P40_EEM_KEY_TBL_REC_SZ_FLD = 4,
+	CFA_P40_EEM_KEY_TBL_ACT_REC_INT_FLD = 5,
+	CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_FLD = 6,
+	CFA_P40_EEM_KEY_TBL_AR_PTR_FLD = 7,
+	CFA_P40_EEM_KEY_TBL_MAX_FLD
+};
+
+/**
+ * Mirror Destination 0 Source Property Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_SP_PTR_BITPOS 0
+#define CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS 11
+
+/**
+ * Ignore or honor drop.
+ */
+#define CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS 13
+#define CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS 1
+
+/**
+ * ingress or egress copy
+ */
+#define CFA_P40_MIRROR_TBL_COPY_BITPOS 14
+#define CFA_P40_MIRROR_TBL_COPY_NUM_BITS 1
+
+/**
+ * Mirror Destination enable.
+ */
+#define CFA_P40_MIRROR_TBL_EN_BITPOS 15
+#define CFA_P40_MIRROR_TBL_EN_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_AR_PTR_BITPOS 16
+#define CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS 16
+
+#define CFA_P40_MIRROR_TBL_TOTAL_NUM_BITS 32
+
+enum cfa_p40_mirror_tbl_flds {
+	CFA_P40_MIRROR_TBL_SP_PTR_FLD = 0,
+	CFA_P40_MIRROR_TBL_IGN_DROP_FLD = 1,
+	CFA_P40_MIRROR_TBL_COPY_FLD = 2,
+	CFA_P40_MIRROR_TBL_EN_FLD = 3,
+	CFA_P40_MIRROR_TBL_AR_PTR_FLD = 4,
+	CFA_P40_MIRROR_TBL_MAX_FLD
+};
+
+/**
+ * P45 Specific Updates (SR) - Non-autogenerated
+ */
+/**
+ * Valid TCAM entry.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Source Partition.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  166
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+
+/**
+ * Source Virtual I/F.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  12
+
+
+/* The SR layout of the L2 context key is different from the Wh+. Switch to
+ * the cfa_p45_hw.h definition when available.
+ */
+enum cfa_p45_prof_l2_ctxt_tcam_flds {
+	CFA_P45_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_FLD = 1,
+	CFA_P45_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 2,
+	CFA_P45_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 3,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 4,
+	CFA_P45_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 5,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC1_FLD = 6,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_OVID_FLD = 7,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_IVID_FLD = 8,
+	CFA_P45_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P45_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P45_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P45_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 171
+
+#endif /* _CFA_P40_HW_H_ */
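
A short aside on usage (not part of the patch): the BITPOS/NUM_BITS pairs above
are meant to feed a generic bit-field accessor rather than per-field macros. A
simplified sketch, assuming LSB-first bit numbering within a uint64_t-array
image of the entry and fields no wider than 64 bits (true for everything in
this header):

    static uint64_t
    cfa_get_bits_sketch(const uint64_t *data, uint16_t bitpos, uint8_t nbits)
    {
            uint16_t word = bitpos / 64;
            uint16_t off = bitpos % 64;
            uint64_t val = data[word] >> off;

            if (off + nbits > 64)   /* field straddles a 64-bit word boundary */
                    val |= data[word + 1] << (64 - off);
            return nbits == 64 ? val : (val & ((1ULL << nbits) - 1));
    }

    /* usage, with key_words being a caller-provided key image:
     *   svif = cfa_get_bits_sketch(key_words,
     *                              CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
     *                              CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS);
     */

The driver's own bitstring helpers also handle MSB-first layouts; this sketch
only illustrates the LSB-first case.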
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
deleted file mode 100644
index 39afd4dbc..000000000
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
+++ /dev/null
@@ -1,92 +0,0 @@
-/*
- *   Copyright(c) 2019-2020 Broadcom Limited.
- *   All rights reserved.
- */
-
-#include "bitstring.h"
-#include "hcapi_cfa_defs.h"
-#include <errno.h>
-#include "assert.h"
-
-/* HCAPI CFA common PUT APIs */
-int hcapi_cfa_put_field(uint64_t *data_buf,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id, uint64_t val)
-{
-	assert(layout);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		bs_put_msb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	else
-		bs_put_lsb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	return 0;
-}
-
-int hcapi_cfa_put_fields(uint64_t *obj_data,
-			 const struct hcapi_cfa_layout *layout,
-			 struct hcapi_cfa_data_obj *field_tbl,
-			 uint16_t field_tbl_sz)
-{
-	int i;
-	uint16_t bitpos;
-	uint8_t bitlen;
-	uint16_t field_id;
-
-	assert(layout);
-	assert(field_tbl);
-
-	if (layout->is_msb_order) {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_msb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	} else {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_lsb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	}
-	return 0;
-}
-
-/* HCAPI CFA common GET APIs */
-int hcapi_cfa_get_field(uint64_t *obj_data,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id,
-			uint64_t *val)
-{
-	assert(layout);
-	assert(val);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		*val = bs_get_msb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	else
-		*val = bs_get_lsb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	return 0;
-}
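
For context (not part of the patch): the deleted helpers take a struct
hcapi_cfa_layout describing bitpos/bitlen per field plus an array of
field_id/val pairs. Based only on the signatures visible above, a hypothetical
caller would have looked roughly like this; the layout symbol is a placeholder,
not a real identifier from the driver:

    int rc;
    uint64_t obj[4] = { 0 };
    struct hcapi_cfa_data_obj fields[] = {
            { .field_id = CFA_P40_MIRROR_TBL_EN_FLD,     .val = 1 },
            { .field_id = CFA_P40_MIRROR_TBL_AR_PTR_FLD, .val = 0x123 },
    };

    /* 'mirror_tbl_layout' is a placeholder; the real layout tables are not
     * shown in this patch.
     */
    rc = hcapi_cfa_put_fields(obj, &mirror_tbl_layout, fields, RTE_DIM(fields));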
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index ca0b1c923..42b37da0f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2019-2020 Broadcom
  * All rights reserved.
  */
-
+#include <inttypes.h>
 #include <stdint.h>
 #include <stdlib.h>
 #include <stdbool.h>
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 2c02e29e7..5ed32f12a 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -24,3 +24,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
+
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_device.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index c04d1034a..1e78296c6 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -27,8 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
+	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -82,8 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_tbl_type_get_bulk_input;
-struct tf_tbl_type_get_bulk_output;
+struct tf_tbl_type_bulk_get_input;
+struct tf_tbl_type_bulk_get_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -905,8 +905,6 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -922,17 +920,17 @@ typedef struct tf_tbl_type_get_output {
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
-typedef struct tf_tbl_type_get_bulk_input {
+typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
 	uint16_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_TX	   (0x1)
 	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Starting index to get from */
@@ -941,12 +939,12 @@ typedef struct tf_tbl_type_get_bulk_input {
 	uint32_t			 num_entries;
 	/* Host memory where data will be stored */
 	uint64_t			 host_addr;
-} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+} tf_tbl_type_bulk_get_input_t, *ptf_tbl_type_bulk_get_input_t;
 
 /* Output params for table type get */
-typedef struct tf_tbl_type_get_bulk_output {
+typedef struct tf_tbl_type_bulk_get_output {
 	/* Size of the total data read in bytes */
 	uint16_t			 size;
-} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+} tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
 #endif /* _HWRM_TF_H_ */
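
A brief aside on the renamed bulk-get request (not part of the patch): a caller
fills the input with the session, direction flag, table type, entry count and a
DMA-able host address. A hedged sketch using only members visible in this hunk;
fw_session_id, type and dma_iova stand in for caller-side values, and the
starting-index member is elided because it is not shown here:

    tf_tbl_type_bulk_get_input_t req = { 0 };

    req.fw_session_id = fw_session_id;                    /* from session open  */
    req.flags = TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX;  /* Rx direction       */
    req.type = type;                                      /* table type to read */
    req.num_entries = 64;                                 /* entries to fetch   */
    req.host_addr = dma_iova;       /* host memory where data will be stored    */
    /* ...plus the starting index, omitted in this excerpt. */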
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index eac57e7bd..648d0d1bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,33 +19,41 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static inline uint32_t SWAP_WORDS32(uint32_t val32)
+static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
+			       enum tf_device_type device,
+			       uint16_t key_sz_in_bits,
+			       uint16_t *num_slice_per_row)
 {
-	return (((val32 & 0x0000ffff) << 16) |
-		((val32 & 0xffff0000) >> 16));
-}
+	uint16_t key_bytes;
+	uint16_t slice_sz = 0;
+
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
+
+	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
+		if (device == TF_DEVICE_TYPE_WH) {
+			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
+			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		} else {
+			TFP_DRV_LOG(ERR,
+				    "Unsupported device type %d\n",
+				    device);
+			return -ENOTSUP;
+		}
 
-static void tf_seeds_init(struct tf_session *session)
-{
-	int i;
-	uint32_t r;
-
-	/* Initialize the lfsr */
-	rand_init();
-
-	/* RX and TX use the same seed values */
-	session->lkup_lkup3_init_cfg[TF_DIR_RX] =
-		session->lkup_lkup3_init_cfg[TF_DIR_TX] =
-						SWAP_WORDS32(rand32());
-
-	for (i = 0; i < TF_LKUP_SEED_MEM_SIZE / 2; i++) {
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2] = r;
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2] = r;
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2 + 1] = (r & 0x1);
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2 + 1] = (r & 0x1);
+		if (key_bytes > *num_slice_per_row * slice_sz) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Key size %d is not supported\n",
+				    tf_tcam_tbl_2_str(tcam_tbl_type),
+				    key_bytes);
+			return -ENOTSUP;
+		}
+	} else { /* for other TCAM types */
+		*num_slice_per_row = 1;
 	}
+
+	return 0;
 }
 
 /**
@@ -153,15 +161,18 @@ tf_open_session(struct tf                    *tfp,
 	uint8_t fw_session_id;
 	int dir;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
 	 * firmware open session succeeds.
 	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH)
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
 		return -ENOTSUP;
+	}
 
 	/* Build the beginning of session_id */
 	rc = sscanf(parms->ctrl_chan_name,
@@ -171,7 +182,7 @@ tf_open_session(struct tf                    *tfp,
 		    &slot,
 		    &device);
 	if (rc != 4) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "Failed to scan device ctrl_chan_name\n");
 		return -EINVAL;
 	}
@@ -183,13 +194,13 @@ tf_open_session(struct tf                    *tfp,
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
-			PMD_DRV_LOG(ERR,
-				    "Session is already open, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
 		else
-			PMD_DRV_LOG(ERR,
-				    "Open message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
 
 		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
 		return rc;
@@ -202,13 +213,13 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
-	tfp->session = alloc_parms.mem_va;
+	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -217,9 +228,9 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -240,12 +251,13 @@ tf_open_session(struct tf                    *tfp,
 	session->session_id.internal.device = device;
 	session->session_id.internal.fw_session_id = fw_session_id;
 
+	/* Query for Session Config */
 	rc = tf_msg_session_qcfg(tfp);
 	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
@@ -256,9 +268,9 @@ tf_open_session(struct tf                    *tfp,
 #if (TF_SHADOW == 1)
 		rc = tf_rm_shadow_db_init(tfs);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%d",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Shadow DB Initialization failed, rc:%s\n",
+				    strerror(-rc));
 		/* Add additional processing */
 #endif /* TF_SHADOW */
 	}
@@ -266,13 +278,12 @@ tf_open_session(struct tf                    *tfp,
 	/* Adjust the Session with what firmware allowed us to get */
 	rc = tf_rm_allocate_validate(tfp);
 	if (rc) {
-		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Rm allocate validate failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
-	/* Setup hash seeds */
-	tf_seeds_init(session);
-
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		rc = tf_create_em_pool(session,
@@ -290,11 +301,11 @@ tf_open_session(struct tf                    *tfp,
 	/* Return session ID */
 	parms->session_id = session->session_id;
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session created, session_id:%d\n",
 		    parms->session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
@@ -379,8 +390,7 @@ tf_attach_session(struct tf *tfp __rte_unused,
 #if (TF_SHARED == 1)
 	int rc;
 
-	if (tfp == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	/* - Open the shared memory for the attach_chan_name
 	 * - Point to the shared session for this Device instance
@@ -389,12 +399,10 @@ tf_attach_session(struct tf *tfp __rte_unused,
 	 *   than one client of the session.
 	 */
 
-	if (tfp->session) {
-		if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-			rc = tf_msg_session_attach(tfp,
-						   parms->ctrl_chan_name,
-						   parms->session_id);
-		}
+	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_attach(tfp,
+					   parms->ctrl_chan_name,
+					   parms->session_id);
 	}
 #endif /* TF_SHARED */
 	return -1;
@@ -472,8 +480,7 @@ tf_close_session(struct tf *tfp)
 	union tf_session_id session_id;
 	int dir;
 
-	if (tfp == NULL || tfp->session == NULL)
-		return -EINVAL;
+	TF_CHECK_TFP_SESSION(tfp);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -487,9 +494,9 @@ tf_close_session(struct tf *tfp)
 		rc = tf_msg_session_close(tfp);
 		if (rc) {
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "Message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Message send failed, rc:%s\n",
+				    strerror(-rc));
 		}
 
 		/* Update the ref_count */
@@ -509,11 +516,11 @@ tf_close_session(struct tf *tfp)
 		tfp->session = NULL;
 	}
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session closed, session_id:%d\n",
 		    session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    session_id.internal.domain,
 		    session_id.internal.bus,
@@ -565,27 +572,39 @@ tf_close_session_new(struct tf *tfp)
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
-					 (tfp->session->core_data),
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL) {
-		/* External EEM */
-		return tf_insert_eem_entry((struct tf_session *)
-					   (tfp->session->core_data),
-					   tbl_scope_cb,
-					   parms);
-	} else if (parms->mem == TF_MEM_INTERNAL) {
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM insert failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
-	return -EINVAL;
+	return 0;
@@ -600,27 +619,44 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
-					 (tfp->session->core_data),
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tfp, parms);
-	else
-		return tf_delete_em_internal_entry(tfp, parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM delete failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
 }
 
-/** allocate identifier resource
- *
- * Returns success or failure code.
- */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms)
 {
@@ -629,14 +665,7 @@ int tf_alloc_identifier(struct tf *tfp,
 	int id;
 	int rc;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -662,30 +691,31 @@ int tf_alloc_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: %s\n",
+		TFP_DRV_LOG(ERR, "%s: %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: identifier pool %s failure\n",
+		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	id = ba_alloc(session_pool);
 
 	if (id == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		return -ENOMEM;
@@ -763,14 +793,7 @@ int tf_free_identifier(struct tf *tfp,
 	int ba_rc;
 	struct tf_session *tfs;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -796,29 +819,31 @@ int tf_free_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: %s Identifier pool access failed\n",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s Identifier pool access failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	ba_rc = ba_inuse(session_pool, (int)parms->id);
 
 	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type),
 			    parms->id);
@@ -893,21 +918,30 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index = 0;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
+	/* TEMP, due to device design. Once the TCAM is modularized, the
+	 * device type should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -916,36 +950,16 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority) {
-		for (index = session_pool->size - 1; index >= 0; index--) {
-			if (ba_inuse(session_pool,
-					  index) == BA_ENTRY_FREE) {
-				break;
-			}
-		}
-		if (ba_alloc_index(session_pool,
-				   index) == BA_FAIL) {
-			TFP_DRV_LOG(ERR,
-				    "%s: %s: ba_alloc index %d failed\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-				    index);
-			return -ENOMEM;
-		}
-	} else {
-		index = ba_alloc(session_pool);
-		if (index == BA_FAIL) {
-			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-			return -ENOMEM;
-		}
+	index = ba_alloc(session_pool);
+	if (index == BA_FAIL) {
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+		return -ENOMEM;
 	}
 
+	index *= num_slice_per_row;
+
 	parms->idx = index;
 	return 0;
 }
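For reference, a minimal standalone sketch (not part of the patch) of the
row/slice translation introduced above: ba_alloc() hands back a row, the
external idx is expressed in slices, and set/free recover the row again by
dividing. The slice count matches the Whitney+ defines.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint16_t num_slice_per_row = 2;	/* WC TCAM on Whitney+ */
	int row = 37;				/* what ba_alloc() could return */

	uint16_t idx = row * num_slice_per_row;	/* reported in parms->idx */
	int back = idx / num_slice_per_row;	/* recovered in set/free */

	printf("row %d -> idx %u -> row %d\n", row, (unsigned)idx, back);
	return 0;
}
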
@@ -956,26 +970,29 @@ tf_set_tcam_entry(struct tf *tfp,
 {
 	int rc;
 	int id;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. Once the TCAM is modularized, the
+	 * device type should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
-	/*
-	 * Each tcam send msg function should check for key sizes range
-	 */
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
 
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
@@ -985,11 +1002,12 @@ tf_set_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 		   "%s: %s: Invalid or not allocated index, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
@@ -1006,21 +1024,8 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	int rc = -EOPNOTSUPP;
-
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	return rc;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+	return -EOPNOTSUPP;
 }
 
 int
@@ -1028,20 +1033,29 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row = 1;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. Once the TCAM is modularized, the
+	 * device type should be retrieved from the session.
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 0,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -1050,24 +1064,27 @@ tf_free_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	rc = ba_inuse(session_pool, (int)parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	rc = ba_inuse(session_pool, index);
 	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    index);
 		return -EINVAL;
 	}
 
-	ba_free(session_pool, (int)parms->idx);
+	ba_free(session_pool, index);
 
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d free failed",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    parms->idx,
+			    strerror(-rc));
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index f1ef00b30..bb456bba7 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -10,7 +10,7 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <stdio.h>
-
+#include "hcapi/hcapi_cfa.h"
 #include "tf_project.h"
 
 /**
@@ -54,6 +54,7 @@ enum tf_mem {
 #define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
 #define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
 
+
 /*
  * Helper Macros
  */
@@ -132,34 +133,40 @@ struct tf_session_version {
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
-	TF_DEVICE_TYPE_BRD2,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD3,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD4,   /**< TBD       */
+	TF_DEVICE_TYPE_SR,     /**< Stingray  */
+	TF_DEVICE_TYPE_THOR,   /**< Thor      */
+	TF_DEVICE_TYPE_SR2,    /**< Stingray2 */
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
-/** Identifier resource types
+/**
+ * Identifier resource types
  */
 enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The L2 Context is returned from the L2 Ctxt TCAM lookup
 	 *  and can be used in WC TCAM or EM keys to virtualize further
 	 *  lookups.
 	 */
 	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The WC profile func is returned from the L2 Ctxt TCAM lookup
 	 *  to enable virtualization of the profile TCAM.
 	 */
 	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
+	/**
+	 *  The WC profile ID is included in the WC lookup key
 	 *  to enable virtualization of the WC TCAM hardware.
 	 */
 	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
+	/**
+	 *  The EM profile ID is included in the EM lookup key
 	 *  to enable virtualization of the EM hardware. (not required for SR2
 	 *  as it has table scope)
 	 */
 	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
+	/**
+	 *  The L2 func is included in the ILT result and from recycling to
 	 *  enable virtualization of further lookups.
 	 */
 	TF_IDENT_TYPE_L2_FUNC,
@@ -239,7 +246,8 @@ enum tf_tbl_type {
 
 	/* External */
 
-	/** External table type - initially 1 poolsize entries.
+	/**
+	 * External table type - initially 1 poolsize entries.
 	 * All External table types are associated with a table
 	 * scope. Internal types are not.
 	 */
@@ -279,13 +287,17 @@ enum tf_em_tbl_type {
 	TF_EM_TBL_TYPE_MAX
 };
 
-/** TruFlow Session Information
+/**
+ * TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
  * session. This structure is initialized at time of
  * tf_open_session(). It is passed to all of the TruFlow APIs as way
  * to prescribe and isolate resources between different TruFlow ULP
  * Applications.
+ *
+ * Ownership of the elements is split between ULP and TruFlow. Please
+ * see the individual elements.
  */
 struct tf_session_info {
 	/**
@@ -355,7 +367,8 @@ struct tf_session_info {
 	uint32_t              core_data_sz_bytes;
 };
 
-/** TruFlow handle
+/**
+ * TruFlow handle
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
@@ -405,7 +418,8 @@ struct tf_session_resources {
  * tf_open_session parameters definition.
  */
 struct tf_open_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -417,7 +431,8 @@ struct tf_open_session_parms {
 	 * shared memory allocation.
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
-	/** [in] shadow_copy
+	/**
+	 * [in] shadow_copy
 	 *
 	 * Boolean controlling the use and availability of shadow
 	 * copy. Shadow copy will allow the TruFlow to keep track of
@@ -430,7 +445,8 @@ struct tf_open_session_parms {
 	 * control channel.
 	 */
 	bool shadow_copy;
-	/** [in/out] session_id
+	/**
+	 * [in/out] session_id
 	 *
 	 * Session_id is unique per session.
 	 *
@@ -441,7 +457,8 @@ struct tf_open_session_parms {
 	 * The session_id allows a session to be shared between devices.
 	 */
 	union tf_session_id session_id;
-	/** [in] device type
+	/**
+	 * [in] device type
 	 *
 	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
@@ -484,7 +501,8 @@ int tf_open_session_new(struct tf *tfp,
 			struct tf_open_session_parms *parms);
 
 struct tf_attach_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -497,7 +515,8 @@ struct tf_attach_session_parms {
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] attach_chan_name
+	/**
+	 * [in] attach_chan_name
 	 *
 	 * String containing name of attach channel interface to be
 	 * used for this session.
@@ -510,7 +529,8 @@ struct tf_attach_session_parms {
 	 */
 	char attach_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] session_id
+	/**
+	 * [in] session_id
 	 *
 	 * Session_id is unique per session. For Attach the session_id
 	 * should be the session_id that was returned on the first
@@ -565,7 +585,8 @@ int tf_close_session_new(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-/** tf_alloc_identifier parameter definition
+/**
+ * tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
 	/**
@@ -582,7 +603,8 @@ struct tf_alloc_identifier_parms {
 	uint16_t id;
 };
 
-/** tf_free_identifier parameter definition
+/**
+ * tf_free_identifier parameter definition
  */
 struct tf_free_identifier_parms {
 	/**
@@ -599,7 +621,8 @@ struct tf_free_identifier_parms {
 	uint16_t id;
 };
 
-/** allocate identifier resource
+/**
+ * allocate identifier resource
  *
  * TruFlow core will allocate a free id from the per identifier resource type
  * pool reserved for the session during tf_open().  No firmware is involved.
@@ -611,7 +634,8 @@ int tf_alloc_identifier(struct tf *tfp,
 int tf_alloc_identifier_new(struct tf *tfp,
 			    struct tf_alloc_identifier_parms *parms);
 
-/** free identifier resource
+/**
+ * free identifier resource
  *
  * TruFlow core will return an id back to the per identifier resource type pool
  * reserved for the session.  No firmware is involved.  During tf_close, the
@@ -639,7 +663,8 @@ int tf_free_identifier_new(struct tf *tfp,
  */
 
 
-/** tf_alloc_tbl_scope_parms definition
+/**
+ * tf_alloc_tbl_scope_parms definition
  */
 struct tf_alloc_tbl_scope_parms {
 	/**
@@ -662,7 +687,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -684,7 +709,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -709,7 +734,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On Brd4 Firmware will allocate a scope ID.  On other devices, the scope
+ * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
  * divide the hash memory/buckets and records according to the device
  * device constraints based upon calculations using either the number of flows
@@ -719,7 +744,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -750,7 +775,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remains
- * (Brd4 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -758,7 +783,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
 
-
 /**
  * @page tcam TCAM Access
  *
@@ -771,7 +795,9 @@ int tf_free_tbl_scope(struct tf *tfp,
  * @ref tf_free_tcam_entry
  */
 
-/** tf_alloc_tcam_entry parameter definition
+
+/**
+ * tf_alloc_tcam_entry parameter definition
  */
 struct tf_alloc_tcam_entry_parms {
 	/**
@@ -799,9 +825,7 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested
-	 * 0: index from top i.e. highest priority first
-	 * !0: index from bottom i.e lowest priority first
+	 * [in] Priority of entry requested (definition TBD)
 	 */
 	uint32_t priority;
 	/**
@@ -819,7 +843,8 @@ struct tf_alloc_tcam_entry_parms {
 	uint16_t idx;
 };
 
-/** allocate TCAM entry
+/**
+ * allocate TCAM entry
  *
  * Allocate a TCAM entry - one of these types:
  *
@@ -844,7 +869,8 @@ struct tf_alloc_tcam_entry_parms {
 int tf_alloc_tcam_entry(struct tf *tfp,
 			struct tf_alloc_tcam_entry_parms *parms);
 
-/** tf_set_tcam_entry parameter definition
+/**
+ * tf_set_tcam_entry parameter definition
  */
 struct	tf_set_tcam_entry_parms {
 	/**
@@ -881,7 +907,8 @@ struct	tf_set_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** set TCAM entry
+/**
+ * set TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -892,7 +919,8 @@ struct	tf_set_tcam_entry_parms {
 int tf_set_tcam_entry(struct tf	*tfp,
 		      struct tf_set_tcam_entry_parms *parms);
 
-/** tf_get_tcam_entry parameter definition
+/**
+ * tf_get_tcam_entry parameter definition
  */
 struct tf_get_tcam_entry_parms {
 	/**
@@ -929,7 +957,7 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/*
+/**
  * get TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
@@ -941,7 +969,7 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/*
+/**
  * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
@@ -963,7 +991,9 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/*
+/**
+ * free TCAM entry
+ *
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -989,6 +1019,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_get_tbl_entry
  */
 
+
 /**
  * tf_alloc_tbl_entry parameter definition
  */
@@ -1201,9 +1232,9 @@ int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
 /**
- * tf_get_bulk_tbl_entry parameter definition
+ * tf_bulk_get_tbl_entry parameter definition
  */
-struct tf_get_bulk_tbl_entry_parms {
+struct tf_bulk_get_tbl_entry_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -1212,11 +1243,6 @@ struct tf_get_bulk_tbl_entry_parms {
 	 * [in] Type of object to get
 	 */
 	enum tf_tbl_type type;
-	/**
-	 * [in] Clear hardware entries on reads only
-	 * supported for TF_TBL_TYPE_ACT_STATS_64
-	 */
-	bool clear_on_read;
 	/**
 	 * [in] Starting index to read from
 	 */
@@ -1250,8 +1276,8 @@ struct tf_get_bulk_tbl_entry_parms {
  * Returns success or failure code. Failure will be returned if the
  * provided data buffer is too small for the data type requested.
  */
-int tf_get_bulk_tbl_entry(struct tf *tfp,
-		     struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_bulk_get_tbl_entry(struct tf *tfp,
+		     struct tf_bulk_get_tbl_entry_parms *parms);
 
 /**
  * @page exact_match Exact Match Table
@@ -1280,7 +1306,7 @@ struct tf_insert_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1332,12 +1358,12 @@ struct tf_delete_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
 	 * [in] epoch group IDs of entry to delete
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * 2 element array with 2 ids. (SR2 only)
 	 */
 	uint16_t *epochs;
 	/**
@@ -1366,7 +1392,7 @@ struct tf_search_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1387,7 +1413,7 @@ struct tf_search_em_entry_parms {
 	uint16_t em_record_sz_in_bits;
 	/**
 	 * [in] epoch group IDs of entry to lookup
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * 2 element array with 2 ids. (SR2 only)
 	 */
 	uint16_t *epochs;
 	/**
@@ -1415,7 +1441,7 @@ struct tf_search_em_entry_parms {
  * specified direction and table scope.
  *
  * When inserting an entry into an exact match table, the TruFlow library may
- * need to allocate a dynamic bucket for the entry (Brd4 only).
+ * need to allocate a dynamic bucket for the entry (SR2 only).
  *
  * The insertion of duplicate entries in an EM table is not permitted.	If a
  * TruFlow application can guarantee that it will never insert duplicates, it
@@ -1490,4 +1516,5 @@ int tf_delete_em_entry(struct tf *tfp,
  */
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 6aeb6fedb..1501b20d9 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -366,6 +366,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_tcam)(struct tf *tfp,
 			       struct tf_tcam_get_parms *parms);
+
+	/**
+	 * Insert EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_em_entry)(struct tf *tfp,
+				      struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM delete parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_em_entry)(struct tf *tfp,
+				      struct tf_delete_em_entry_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c235976fe..f4bd95f1c 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
+#include "tf_em.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -89,4 +90,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_insert_em_entry = tf_em_insert_entry,
+	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
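For illustration, a minimal standalone sketch (not part of the patch) of the
device-ops dispatch pattern these two hooks plug into. The struct and
function names below are simplified stand-ins, not driver symbols.

#include <stdio.h>

struct example_parms { int dummy; };

struct example_dev_ops {
	int (*insert_em_entry)(struct example_parms *parms);
};

/* device-specific implementation, analogous to tf_em_insert_entry() */
static int example_p4_insert_em_entry(struct example_parms *parms)
{
	(void)parms;
	printf("P4 (Whitney+) EM insert\n");
	return 0;
}

static const struct example_dev_ops example_dev_ops_p4 = {
	.insert_em_entry = example_p4_insert_em_entry,
};

int main(void)
{
	struct example_parms parms = { 0 };
	const struct example_dev_ops *ops = &example_dev_ops_p4;

	/* the core API resolves the device and calls through its ops */
	return ops->insert_em_entry(&parms);
}
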
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index 38f7fe419..fcbbd7eca 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -17,11 +17,6 @@
 
 #include "bnxt.h"
 
-/* Enable EEM table dump
- */
-#define TF_EEM_DUMP
-
-static struct tf_eem_64b_entry zero_key_entry;
 
 static uint32_t tf_em_get_key_mask(int num_entries)
 {
@@ -36,326 +31,22 @@ static uint32_t tf_em_get_key_mask(int num_entries)
 	return mask;
 }
 
-/* CRC32i support for Key0 hash */
-#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
-#define crc32(x, y) crc32i(~0, x, y)
-
-static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
-0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
-0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
-0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
-0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
-0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
-0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
-0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
-0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
-0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
-0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
-0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
-0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
-0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
-0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
-0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
-0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
-0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
-0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
-0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
-0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
-0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
-0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
-0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
-0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
-0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
-0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
-0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
-0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
-0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
-0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
-0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
-0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
-0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
-0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
-0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
-0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
-0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
-0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
-0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
-0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
-0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
-0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
-0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
-0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
-0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
-0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
-0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
-0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
-0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
-0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
-0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
-0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
-0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
-0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
-0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
-0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
-0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
-0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
-0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
-0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
-0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
-0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
-0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
-0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
-};
-
-static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
-{
-	int l;
-
-	for (l = (len - 1); l >= 0; l--)
-		crc = ucrc32(buf[l], crc);
-
-	return ~crc;
-}
-
-static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
-					  uint8_t *key,
-					  enum tf_dir dir)
-{
-	int i;
-	uint32_t index;
-	uint32_t val1, val2;
-	uint8_t temp[4];
-	uint8_t *kptr = key;
-
-	/* Do byte-wise XOR of the 52-byte HASH key first. */
-	index = *key;
-	kptr--;
-
-	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
-		index = index ^ *kptr;
-		kptr--;
-	}
-
-	/* Get seeds */
-	val1 = session->lkup_em_seed_mem[dir][index * 2];
-	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
-
-	temp[3] = (uint8_t)(val1 >> 24);
-	temp[2] = (uint8_t)(val1 >> 16);
-	temp[1] = (uint8_t)(val1 >> 8);
-	temp[0] = (uint8_t)(val1 & 0xff);
-	val1 = 0;
-
-	/* Start with seed */
-	if (!(val2 & 0x1))
-		val1 = crc32i(~val1, temp, 4);
-
-	val1 = crc32i(~val1,
-		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
-		      TF_HW_EM_KEY_MAX_SIZE);
-
-	/* End with seed */
-	if (val2 & 0x1)
-		val1 = crc32i(~val1, temp, 4);
-
-	return val1;
-}
-
-static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
-					    uint8_t *in_key)
-{
-	uint32_t val1;
-
-	val1 = hashword(((uint32_t *)in_key) + 1,
-			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
-			 lookup3_init_value);
-
-	return val1;
-}
-
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum tf_em_table_type table_type)
-{
-	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
-	void *addr = NULL;
-	struct tf_em_ctx_mem_info *ctx = &tbl_scope_cb->em_ctx_info[dir];
-
-	if (ctx == NULL)
-		return NULL;
-
-	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
-		return NULL;
-
-	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
-		return NULL;
-
-	/*
-	 * Use the level according to the num_level of page table
-	 */
-	level = ctx->em_tables[table_type].num_lvl - 1;
-
-	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
-
-	return addr;
-}
-
-/** Read Key table entry
- *
- * Entry is read in to entry
- */
-static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
-	return 0;
-}
-
-static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
-
-	return 0;
-}
-
-static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_eem_64b_entry *entry,
-			       uint32_t index,
-			       enum tf_em_table_type table_type,
-			       enum tf_dir dir)
-{
-	int rc;
-	struct tf_eem_64b_entry table_entry;
-
-	rc = tf_em_read_entry(tbl_scope_cb,
-			      &table_entry,
-			      TF_EM_KEY_RECORD_SIZE,
-			      index,
-			      table_type,
-			      dir);
-
-	if (rc != 0)
-		return -EINVAL;
-
-	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
-		if (entry != NULL) {
-			if (memcmp(&table_entry,
-				   entry,
-				   TF_EM_KEY_RECORD_SIZE) == 0)
-				return -EEXIST;
-		} else {
-			return -EEXIST;
-		}
-
-		return -EBUSY;
-	}
-
-	return 0;
-}
-
-static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t *in_key,
-				    struct tf_eem_64b_entry *key_entry)
+static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+				   uint8_t	       *in_key,
+				   struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
 
-	if (result->word1 & TF_LKUP_RECORD_ACT_REC_INT_MASK)
+	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
 		key_entry->hdr.pointer = result->pointer;
 	else
 		key_entry->hdr.pointer = result->pointer;
 
 	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-}
-
-/* tf_em_select_inject_table
- *
- * Returns:
- * 0 - Key does not exist in either table and can be inserted
- *		at "index" in table "table".
- * EEXIST  - Key does exist in table at "index" in table "table".
- * TF_ERR     - Something went horribly wrong.
- */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
-					  enum tf_dir dir,
-					  struct tf_eem_64b_entry *entry,
-					  uint32_t key0_hash,
-					  uint32_t key1_hash,
-					  uint32_t *index,
-					  enum tf_em_table_type *table)
-{
-	int key0_entry;
-	int key1_entry;
-
-	/*
-	 * Check KEY0 table.
-	 */
-	key0_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key0_hash,
-					 TF_KEY0_TABLE,
-					 dir);
 
-	/*
-	 * Check KEY1 table.
-	 */
-	key1_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key1_hash,
-					 TF_KEY1_TABLE,
-					 dir);
-
-	if (key0_entry == -EEXIST) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return -EEXIST;
-	} else if (key1_entry == -EEXIST) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return -EEXIST;
-	} else if (key0_entry == 0) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return 0;
-	} else if (key1_entry == 0) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return 0;
-	}
-
-	return -EINVAL;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
 }
 
 /** insert EEM entry API
@@ -368,20 +59,24 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms)
+static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			       struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
 	uint32_t	   key0_hash;
 	uint32_t	   key1_hash;
 	uint32_t	   key0_index;
 	uint32_t	   key1_index;
-	struct tf_eem_64b_entry key_entry;
+	struct cfa_p4_eem_64b_entry key_entry;
 	uint32_t	   index;
-	enum tf_em_table_type table_type;
+	enum hcapi_cfa_em_table_type table_type;
 	uint32_t	   gfid;
-	int		   num_of_entry;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
 
 	/* Get mask to use on hash */
 	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
@@ -389,72 +84,84 @@ int tf_insert_eem_entry(struct tf_session *session,
 	if (!mask)
 		return -EINVAL;
 
-	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
 
-	key0_hash = tf_em_lkup_get_crc32_hash(session,
-					      &parms->key[num_of_entry] - 1,
-					      parms->dir);
-	key0_index = key0_hash & mask;
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
 
-	key1_hash =
-	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
-				       parms->key);
+	key0_index = key0_hash & mask;
 	key1_index = key1_hash & mask;
 
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
 	/*
 	 * Use the "result" arg to populate all of the key entry then
 	 * store the byte swapped "raw" entry in a local copy ready
 	 * for insertion in to the table.
 	 */
-	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
 				((uint8_t *)parms->key),
 				&key_entry);
 
 	/*
-	 * Find which table to use
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
 	 */
-	if (tf_em_select_inject_table(tbl_scope_cb,
-				      parms->dir,
-				      &key_entry,
-				      key0_index,
-				      key1_index,
-				      &index,
-				      &table_type) == 0) {
-		if (table_type == TF_KEY0_TABLE) {
-			TF_SET_GFID(gfid,
-				    key0_index,
-				    TF_KEY0_TABLE);
-		} else {
-			TF_SET_GFID(gfid,
-				    key1_index,
-				    TF_KEY1_TABLE);
-		}
-
-		/*
-		 * Inject
-		 */
-		if (tf_em_write_entry(tbl_scope_cb,
-				      &key_entry,
-				      TF_EM_KEY_RECORD_SIZE,
-				      index,
-				      table_type,
-				      parms->dir) == 0) {
-			TF_SET_FLOW_ID(parms->flow_id,
-				       gfid,
-				       TF_GFID_TABLE_EXTERNAL,
-				       parms->dir);
-			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-						     0,
-						     0,
-						     0,
-						     index,
-						     0,
-						     table_type);
-			return 0;
-		}
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
 	}
 
-	return -EINVAL;
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
 }
 
 /**
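For illustration, a minimal standalone sketch (not part of the patch) of how
the 64-bit hash is split into the KEY0/KEY1 bucket indexes above.
Power-of-two table sizing is assumed for the mask, and the hash value is
just an example.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t num_entries = 1 << 20;		/* example KEY0/KEY1 table size */
	uint32_t mask = num_entries - 1;	/* assumes power-of-two sizing */
	uint64_t big_hash = 0x1234abcd9876ef01ULL;	/* example hash value */

	uint32_t key0_index = (uint32_t)(big_hash >> 32) & mask;
	uint32_t key1_index = (uint32_t)(big_hash & 0xFFFFFFFF) & mask;

	printf("KEY0 bucket %u, KEY1 bucket %u\n",
	       (unsigned)key0_index, (unsigned)key1_index);
	return 0;
}
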
@@ -463,8 +170,8 @@ int tf_insert_eem_entry(struct tf_session *session,
  *  returns:
  *     0 - Success
  */
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms)
+static int tf_insert_em_internal_entry(struct tf                       *tfp,
+				       struct tf_insert_em_entry_parms *parms)
 {
 	int       rc;
 	uint32_t  gfid;
@@ -494,7 +201,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	TFP_DRV_LOG(INFO,
+	PMD_DRV_LOG(ERR,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
@@ -527,8 +234,8 @@ int tf_insert_em_internal_entry(struct tf *tfp,
  * 0
  * -EINVAL
  */
-int tf_delete_em_internal_entry(struct tf *tfp,
-				struct tf_delete_em_entry_parms *parms)
+static int tf_delete_em_internal_entry(struct tf                       *tfp,
+				       struct tf_delete_em_entry_parms *parms)
 {
 	int rc;
 	struct tf_session *session =
@@ -558,46 +265,96 @@ int tf_delete_em_internal_entry(struct tf *tfp,
  *   0
  *   TF_NO_EM_MATCH - entry not found
  */
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms)
+static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_session	   *session;
-	struct tf_tbl_scope_cb	   *tbl_scope_cb;
-	enum tf_em_table_type hash_type;
+	enum hcapi_cfa_em_table_type hash_type;
 	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
 
-	if (parms == NULL)
+	if (parms->flow_handle == 0)
 		return -EINVAL;
 
-	session = (struct tf_session *)tfp->session->core_data;
-	if (session == NULL)
-		return -EINVAL;
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
 
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
+							  TF_KEY0_TABLE :
+							  TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc)
+		return rc;
 
-	if (parms->flow_handle == 0)
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
+	}
 
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+	/* Process the EM entry per Table Scope type */
+	if (parms->mem == TF_MEM_EXTERNAL)
+		/* External EEM */
+		return tf_insert_eem_entry
+			(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp,	parms);
 
-	if (tf_em_entry_exists(tbl_scope_cb,
-			       NULL,
-			       index,
-			       hash_type,
-			       parms->dir) == -EEXIST) {
-		tf_em_write_entry(tbl_scope_cb,
-				  &zero_key_entry,
-				  TF_EM_KEY_RECORD_SIZE,
-				  index,
-				  hash_type,
-				  parms->dir);
+	return -EINVAL;
+}
 
-		return 0;
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
 	}
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		return tf_delete_em_internal_entry(tfp, parms);
 
 	return -EINVAL;
 }
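For reference, a minimal standalone sketch (not part of the patch) of the
index-to-page-offset math used for key_obj.offset above. The page size is
an example value, not the driver's TF_EM_PAGE_SIZE.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint32_t record_size = 64;	/* TF_EM_KEY_RECORD_SIZE */
	const uint32_t page_size = 4096;	/* example page size */
	uint32_t index = 70;			/* example entry index */

	uint32_t page = (index * record_size) / page_size;
	uint32_t offset = (index * record_size) % page_size;

	printf("entry %u -> page %u, offset %u\n",
	       (unsigned)index, (unsigned)page, (unsigned)offset);
	return 0;
}
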
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index c1805df73..2262ae7cc 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,13 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+#define SUPPORT_CFA_HW_P4 1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+#define SUPPORT_CFA_HW_ALL 0
+
+#include "hcapi/hcapi_cfa_defs.h"
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -26,56 +33,15 @@
 #define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
 #define TF_EM_INTERNAL_ENTRY_MASK  0x3
 
-/** EEM Entry header
- *
- */
-struct tf_eem_entry_hdr {
-	uint32_t pointer;
-	uint32_t word1;  /*
-			  * The header is made up of two words,
-			  * this is the first word. This field has multiple
-			  * subfields, there is no suitable single name for
-			  * it so just going with word1.
-			  */
-#define TF_LKUP_RECORD_VALID_SHIFT 31
-#define TF_LKUP_RECORD_VALID_MASK 0x80000000
-#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
-#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
-#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
-#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
-#define TF_LKUP_RECORD_RESERVED_SHIFT 17
-#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
-#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
-#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
-#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
-#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
-#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
-#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
-#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
-#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
-};
-
-/** EEM Entry
- *  Each EEM entry is 512-bit (64-bytes)
- */
-struct tf_eem_64b_entry {
-	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
-	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
-};
-
 /** EM Entry
  *  Each EM entry is 512-bit (64-bytes) but ordered differently to
  *  EEM.
  */
 struct tf_em_64b_entry {
 	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
+	struct cfa_p4_eem_entry_hdr hdr;
 	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
 /**
@@ -127,22 +93,14 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
 					  uint32_t tbl_scope_id);
 
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms);
-
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms);
-
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms);
-
-int tf_delete_em_internal_entry(struct tf                       *tfp,
-				struct tf_delete_em_entry_parms *parms);
-
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
-			   enum tf_em_table_type table_type);
+			   enum hcapi_cfa_em_table_type table_type);
+
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
 
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
 #endif /* _TF_EM_H_ */
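For illustration, a minimal standalone sketch (not part of the patch) of the
64-byte record split implied by the key sizing above. The header struct is a
stand-in for cfa_p4_eem_entry_hdr, used only for the size arithmetic.

#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_EM_KEY_RECORD_SIZE 64	/* TF_EM_KEY_RECORD_SIZE */

/* stand-in for cfa_p4_eem_entry_hdr: two 32-bit words */
struct example_eem_entry_hdr {
	uint32_t pointer;
	uint32_t word1;
};

struct example_em_64b_entry {
	struct example_eem_entry_hdr hdr;
	uint8_t key[EXAMPLE_EM_KEY_RECORD_SIZE -
		    sizeof(struct example_eem_entry_hdr)];
};

int main(void)
{
	printf("hdr %zu bytes + key %zu bytes = %zu byte record\n",
	       sizeof(struct example_eem_entry_hdr),
	       sizeof(((struct example_em_64b_entry *)0)->key),
	       sizeof(struct example_em_64b_entry));
	return 0;
}
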
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index e08a96f23..60274eb35 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -183,6 +183,10 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 	tfp_free(buf->va_addr);
 }
 
+/**
+ * NEW HWRM direct messages
+ */
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -1259,8 +1263,9 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
 	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
-		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.strength =
+		(em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
@@ -1436,22 +1441,20 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 }
 
 int
-tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *params)
+tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *params)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_bulk_input req = { 0 };
-	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_tbl_type_bulk_get_input req = { 0 };
+	struct tf_tbl_type_bulk_get_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	int data_size = 0;
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16((params->dir) |
-		((params->clear_on_read) ?
-		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.flags = tfp_cpu_to_le_16(params->dir);
 	req.type = tfp_cpu_to_le_32(params->type);
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
@@ -1462,7 +1465,7 @@ tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 	MSG_PREP(parms,
 		 TF_KONG_MB,
 		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 HWRM_TFT_TBL_TYPE_BULK_GET,
 		 req,
 		 resp);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 06f52ef00..1dad2b9fb 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -338,7 +338,7 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  * Returns:
  *  0 on Success else internal Truflow error
  */
-int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 9b7f5a069..b7b445102 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -23,29 +23,27 @@
 					    * IDs
 					    */
 #define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        256      /*  Number slices per row in WC
-					    * TCAM. A slices is a WC TCAM entry.
-					    */
+#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
 #define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
 #define TF_NUM_METER             1024      /* < Number of meter instances */
 #define TF_NUM_MIRROR               2      /* < Number of mirror instances */
 #define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
 
-/* Wh+/Brd2 specific HW resources */
+/* Wh+/SR specific HW resources */
 #define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
 					    * entries
 					    */
 
-/* Brd2/Brd4 specific HW resources */
+/* SR/SR2 specific HW resources */
 #define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
 
 
-/* Brd3, Brd4 common HW resources */
+/* Thor, SR2 common HW resources */
 #define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
 					    * templates
 					    */
 
-/* Brd4 specific HW resources */
+/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
 #define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
 #define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
@@ -149,10 +147,11 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-#define TF_RSVD_MIRROR_RX                         1
+/* Not yet supported fully in the infra */
+#define TF_RSVD_MIRROR_RX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         1
+#define TF_RSVD_MIRROR_TX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
@@ -501,13 +500,13 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_METER_INST,
 	TF_RESC_TYPE_HW_MIRROR,
 	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/Brd2 specific HW resources */
+	/* Wh+/SR specific HW resources */
 	TF_RESC_TYPE_HW_SP_TCAM,
-	/* Brd2/Brd4 specific HW resources */
+	/* SR/SR2 specific HW resources */
 	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Brd3, Brd4 common HW resources */
+	/* Thor, SR2 common HW resources */
 	TF_RESC_TYPE_HW_FKB,
-	/* Brd4 specific HW resources */
+	/* SR2 specific HW resources */
 	TF_RESC_TYPE_HW_TBL_SCOPE,
 	TF_RESC_TYPE_HW_EPOCH0,
 	TF_RESC_TYPE_HW_EPOCH1,
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 2264704d2..b6fe2f1ad 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -14,6 +14,7 @@
 #include "tf_resources.h"
 #include "tf_msg.h"
 #include "bnxt.h"
+#include "tfp.h"
 
 /**
  * Internal macro to perform HW resource allocation check between what
@@ -329,13 +330,13 @@ tf_rm_print_hw_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors HW\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_hw_2_str(i),
 				    hw_query->hw_query[i].max,
 				    tf_rm_rsvd_hw_value(dir, i));
@@ -359,13 +360,13 @@ tf_rm_print_sram_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_sram_2_str(i),
 				    sram_query->sram_query[i].max,
 				    tf_rm_rsvd_sram_value(dir, i));
@@ -1700,7 +1701,7 @@ tf_rm_hw_alloc_validate(enum tf_dir dir,
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed id:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1727,7 +1728,7 @@ tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed idx:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1820,19 +1821,22 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, HW QCAPS validation failed,"
+			" error_flag:0x%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
 		goto cleanup;
 	}
@@ -1845,9 +1849,10 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1857,15 +1862,17 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW Resource validation failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
 
@@ -1903,19 +1910,22 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed,"
+			" error_flag:%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
 		goto cleanup;
 	}
@@ -1931,9 +1941,10 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 					    sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1943,15 +1954,18 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed,"
+			    " rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
 
@@ -2177,7 +2191,7 @@ tf_rm_hw_to_flush(struct tf_session *tfs,
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
 	} else {
-		PMD_DRV_LOG(ERR, "%s: TBL_SCOPE free_cnt:%d, entries:%d\n",
+		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
 			    tf_dir_2_str(dir),
 			    free_cnt,
 			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
@@ -2538,8 +2552,8 @@ tf_rm_log_hw_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_hw_2_str(i));
 	}
@@ -2564,8 +2578,8 @@ tf_rm_log_sram_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_sram_2_str(i));
 	}
@@ -2777,9 +2791,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering HW resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering HW resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
@@ -2789,9 +2804,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, HW flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2805,9 +2821,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering SRAM resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
@@ -2818,9 +2835,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, HW flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2828,18 +2846,20 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, HW free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, HW free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 
 		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, SRAM free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, SRAM free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 	}
 
@@ -2890,14 +2910,14 @@ tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Tcam type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "%s:, Tcam type lookup failed, type:%d\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type lookup failed, type:%d\n",
 			    tf_dir_2_str(dir),
 			    type);
 		return rc;
@@ -3057,15 +3077,15 @@ tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type lookup failed, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type lookup failed, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 35a7cfab5..a68335304 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "stack.h"
 #include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
@@ -53,14 +54,14 @@
  *   Pointer to the page table to free
  */
 static void
-tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
 {
 	uint32_t i;
 
 	for (i = 0; i < tp->pg_count; i++) {
 		if (!tp->pg_va_tbl[i]) {
-			PMD_DRV_LOG(WARNING,
-				    "No map for page %d table %016" PRIu64 "\n",
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
 				    i,
 				    (uint64_t)(uintptr_t)tp);
 			continue;
@@ -84,15 +85,14 @@ tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
  *   Pointer to the EM table to free
  */
 static void
-tf_em_free_page_table(struct tf_em_table *tbl)
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int i;
 
 	for (i = 0; i < tbl->num_lvl; i++) {
 		tp = &tbl->pg_tbl[i];
-
-		PMD_DRV_LOG(INFO,
+		TFP_DRV_LOG(INFO,
 			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
 			   TF_EM_PAGE_SIZE,
 			    i,
@@ -124,7 +124,7 @@ tf_em_free_page_table(struct tf_em_table *tbl)
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
 		   uint32_t pg_count,
 		   uint32_t pg_size)
 {
@@ -183,9 +183,9 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_page_table(struct tf_em_table *tbl)
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int rc = 0;
 	int i;
 	uint32_t j;
@@ -197,14 +197,15 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
 					tbl->page_cnt[i],
 					TF_EM_PAGE_SIZE);
 		if (rc) {
-			PMD_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d\n",
-				i);
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
 			goto cleanup;
 		}
 
 		for (j = 0; j < tp->pg_count; j++) {
-			PMD_DRV_LOG(INFO,
+			TFP_DRV_LOG(INFO,
 				"EEM: Allocated page table: size %u lvl %d cnt"
 				" %u VA:%p PA:%p\n",
 				TF_EM_PAGE_SIZE,
@@ -234,8 +235,8 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
  *   Flag controlling if the page table is last
  */
 static void
-tf_em_link_page_table(struct tf_em_page_tbl *tp,
-		      struct tf_em_page_tbl *tp_next,
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
 		      bool set_pte_last)
 {
 	uint64_t *pg_pa = tp_next->pg_pa_tbl;
@@ -270,10 +271,10 @@ tf_em_link_page_table(struct tf_em_page_tbl *tp,
  *   Pointer to EM page table
  */
 static void
-tf_em_setup_page_table(struct tf_em_table *tbl)
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
 	bool set_pte_last = 0;
 	int i;
 
@@ -415,7 +416,7 @@ tf_em_size_page_tbls(int max_lvl,
  *   - ENOMEM - Out of memory
  */
 static int
-tf_em_size_table(struct tf_em_table *tbl)
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
 {
 	uint64_t num_data_pages;
 	uint32_t *page_cnt;
@@ -456,11 +457,10 @@ tf_em_size_table(struct tf_em_table *tbl)
 					  tbl->num_entries,
 					  &num_data_pages);
 	if (max_lvl < 0) {
-		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		PMD_DRV_LOG(WARNING,
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
 			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type,
-			    (uint64_t)num_entries * tbl->entry_size,
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
 			    TF_EM_PAGE_SIZE);
 		return -ENOMEM;
 	}
@@ -474,8 +474,8 @@ tf_em_size_table(struct tf_em_table *tbl)
 	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
 				page_cnt);
 
-	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
 		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
@@ -504,8 +504,9 @@ tf_em_ctx_unreg(struct tf *tfp,
 		struct tf_tbl_scope_cb *tbl_scope_cb,
 		int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 
 	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
@@ -539,8 +540,9 @@ tf_em_ctx_reg(struct tf *tfp,
 	      struct tf_tbl_scope_cb *tbl_scope_cb,
 	      int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int rc = 0;
 	int i;
 
@@ -601,7 +603,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 					TF_MEGABYTE) / (key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
 				    "%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
@@ -613,7 +615,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
 				    "%u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -625,7 +627,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx flows "
 				    "requested:%u max:%u\n",
 				    parms->rx_num_flows_in_k * TF_KILOBYTE,
@@ -642,7 +644,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx requested: %u\n",
 				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -658,7 +660,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			(key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Insufficient memory requested:%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
@@ -670,7 +672,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -682,7 +684,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx flows "
 				    "requested:%u max:%u\n",
 				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
@@ -696,7 +698,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -705,7 +707,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 	if (parms->rx_num_flows_in_k != 0 &&
 	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Rx key size required: %u\n",
 			    (parms->rx_max_key_sz_in_bits));
 		return -EINVAL;
@@ -713,7 +715,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 	if (parms->tx_num_flows_in_k != 0 &&
 	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Tx key size required: %u\n",
 			    (parms->tx_max_key_sz_in_bits));
 		return -EINVAL;
@@ -795,11 +797,10 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -817,9 +818,9 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -833,11 +834,11 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Set failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -891,9 +892,9 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -907,11 +908,11 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Get failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -932,8 +933,8 @@ tf_get_tbl_entry_internal(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 static int
-tf_get_bulk_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc;
 	int id;
@@ -975,7 +976,7 @@ tf_get_bulk_tbl_entry_internal(struct tf *tfp,
 	}
 
 	/* Get the entry */
-	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1006,10 +1007,9 @@ static int
 tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
 			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Alloc with search not supported\n",
-		    parms->dir);
-
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Alloc with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1032,9 +1032,9 @@ static int
 tf_free_tbl_entry_shadow(struct tf_session *tfs,
 			 struct tf_free_tbl_entry_parms *parms)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Free with search not supported\n",
-		    parms->dir);
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Free with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1074,8 +1074,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	parms.alignment = 0;
 
 	if (tfp_calloc(&parms) != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
-			    dir, strerror(-ENOMEM));
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
 		return -ENOMEM;
 	}
 
@@ -1084,8 +1084,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	rc = stack_init(num_entries, parms.mem_va, pool);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1101,13 +1101,13 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	for (i = 0; i < num_entries; i++) {
 		rc = stack_push(pool, j);
 		if (rc != 0) {
-			PMD_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
 				    tf_dir_2_str(dir), strerror(-rc));
 			goto cleanup;
 		}
 
 		if (j < 0) {
-			PMD_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
+			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
 				    dir, j);
 			goto cleanup;
 		}
@@ -1116,8 +1116,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 	return 0;
@@ -1168,18 +1168,7 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1188,9 +1177,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-					"%s, table scope not allocated\n",
-					tf_dir_2_str(parms->dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1200,9 +1189,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type);
 		return rc;
 	}
@@ -1233,18 +1222,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	struct bitalloc *session_pool;
 	struct tf_session *tfs;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1254,11 +1232,10 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1276,9 +1253,9 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	if (id == -1) {
 		free_cnt = ba_free_count(session_pool);
 
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d, free:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d, free:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   free_cnt);
 		return -ENOMEM;
@@ -1323,18 +1300,7 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1343,9 +1309,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1355,9 +1321,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_push(pool, index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 	}
@@ -1386,18 +1352,7 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	struct tf_session *tfs;
 	uint32_t index;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1408,9 +1363,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1439,9 +1394,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	/* Check if element was indeed allocated */
 	id = ba_inuse_free(session_pool, index);
 	if (id == -1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -ENOMEM;
@@ -1485,8 +1440,10 @@ tf_free_eem_tbl_scope_cb(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 parms->tbl_scope_id);
 
-	if (tbl_scope_cb == NULL)
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
 		return -EINVAL;
+	}
 
 	/* Free Table control block */
 	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
@@ -1516,23 +1473,17 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 	int rc;
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_table *em_tables;
+	struct hcapi_cfa_em_table *em_tables;
 	int index;
 	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 
-	/* check parameters */
-	if (parms == NULL || tfp->session == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
-
 	session = (struct tf_session *)tfp->session->core_data;
 
 	/* Get Table Scope control block from the session pool */
 	index = ba_alloc(session->tbl_scope_pool_rx);
 	if (index == -1) {
-		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
 			    "Control Block\n");
 		return -ENOMEM;
 	}
@@ -1547,8 +1498,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				     dir,
 				     &tbl_scope_cb->em_caps[dir]);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"EEM: Unable to query for EEM capability\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 	}
@@ -1565,8 +1518,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 */
 		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 
@@ -1580,8 +1535,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"TBL: Unable to configure EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1590,8 +1547,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
 
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1604,9 +1563,9 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				    em_tables[TF_RECORD_TABLE].num_entries,
 				    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "%d TBL: Unable to allocate idx pools %s\n",
-				    dir,
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup_full;
 		}
@@ -1634,13 +1593,12 @@ tf_set_tbl_entry(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct tf_session *session;
 
-	if (tfp == NULL || parms == NULL || parms->data == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 
@@ -1654,9 +1612,9 @@ tf_set_tbl_entry(struct tf *tfp,
 		tbl_scope_id = parms->tbl_scope_id;
 
 		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Table scope not allocated\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Table scope not allocated\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1665,18 +1623,21 @@ tf_set_tbl_entry(struct tf *tfp,
 		 */
 		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
 
-		if (tbl_scope_cb == NULL)
-			return -EINVAL;
+		if (tbl_scope_cb == NULL) {
+			TFP_DRV_LOG(ERR,
+				    "%s, table scope error\n",
+				    tf_dir_2_str(parms->dir));
+			return -EINVAL;
+		}
 
 		/* External table, implicitly the Action table */
-		base_addr = tf_em_get_table_page(tbl_scope_cb,
-						 parms->dir,
-						 offset,
-						 TF_RECORD_TABLE);
+		base_addr = (void *)(uintptr_t)
+		hcapi_get_table_page(&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE], offset);
+
 		if (base_addr == NULL) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Base address lookup failed\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Base address lookup failed\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1688,11 +1649,11 @@ tf_set_tbl_entry(struct tf *tfp,
 		/* Internal table type processing */
 		rc = tf_set_tbl_entry_internal(tfp, parms);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Set failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Set failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 		}
 	}
 
@@ -1706,31 +1667,24 @@ tf_get_tbl_entry(struct tf *tfp,
 {
 	int rc = 0;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	if (parms->type == TF_TBL_TYPE_EXT) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, External table type not supported\n",
-			    parms->dir);
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
 
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
 		rc = tf_get_tbl_entry_internal(tfp, parms);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Get failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 	}
 
 	return rc;
@@ -1738,8 +1692,8 @@ tf_get_tbl_entry(struct tf *tfp,
 
 /* API defined in tf_core.h */
 int
-tf_get_bulk_tbl_entry(struct tf *tfp,
-		 struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc = 0;
 
@@ -1754,7 +1708,7 @@ tf_get_bulk_tbl_entry(struct tf *tfp,
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
-		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
 		if (rc)
 			TFP_DRV_LOG(ERR,
 				    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1773,11 +1727,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	rc = tf_alloc_eem_tbl_scope(tfp, parms);
 
@@ -1791,11 +1741,7 @@ tf_free_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	/* free table scope and all associated resources */
 	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
@@ -1813,11 +1759,7 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	/*
 	 * No shadow copy support for external tables, allocate and return
 	 */
@@ -1827,13 +1769,6 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1849,9 +1784,9 @@ tf_alloc_tbl_entry(struct tf *tfp,
 
 	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 
 	return rc;
 }
@@ -1866,11 +1801,8 @@ tf_free_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
 	/*
 	 * No shadow of external tables so just free the entry
 	 */
@@ -1880,13 +1812,6 @@ tf_free_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1903,16 +1828,16 @@ tf_free_tbl_entry(struct tf *tfp,
 	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
 
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 	return rc;
 }
 
 
 static void
-tf_dump_link_page_table(struct tf_em_page_tbl *tp,
-			struct tf_em_page_tbl *tp_next)
+tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+			struct hcapi_cfa_em_page_tbl *tp_next)
 {
 	uint64_t *pg_va;
 	uint32_t i;
@@ -1951,9 +1876,9 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 {
 	struct tf_session      *session;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_page_tbl *tp;
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 	int j;
 	int dir;
@@ -1967,7 +1892,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		TFP_DRV_LOG(ERR, "No table scope\n");
+		PMD_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index ee8a14665..b17557345 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -13,45 +13,6 @@
 
 struct tf_session;
 
-enum tf_pg_tbl_lvl {
-	TF_PT_LVL_0,
-	TF_PT_LVL_1,
-	TF_PT_LVL_2,
-	TF_PT_LVL_MAX
-};
-
-enum tf_em_table_type {
-	TF_KEY0_TABLE,
-	TF_KEY1_TABLE,
-	TF_RECORD_TABLE,
-	TF_EFC_TABLE,
-	TF_MAX_TABLE
-};
-
-struct tf_em_page_tbl {
-	uint32_t	pg_count;
-	uint32_t	pg_size;
-	void		**pg_va_tbl;
-	uint64_t	*pg_pa_tbl;
-};
-
-struct tf_em_table {
-	int				type;
-	uint32_t			num_entries;
-	uint16_t			ctx_id;
-	uint32_t			entry_size;
-	int				num_lvl;
-	uint32_t			page_cnt[TF_PT_LVL_MAX];
-	uint64_t			num_data_pages;
-	void				*l0_addr;
-	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
-};
-
-struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[TF_MAX_TABLE];
-};
-
 /** table scope control block content */
 struct tf_em_caps {
 	uint32_t flags;
@@ -74,18 +35,14 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps          em_caps[TF_DIR_MAX];
 	struct stack               ext_act_pool[TF_DIR_MAX];
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/**
- * Hardware Page sizes supported for EEM:
- *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- *
- * Round-down other page sizes to the lower hardware page
- * size supported.
+/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ * Round-down other page sizes to the lower hardware page size supported.
  */
 #define TF_EM_PAGE_SIZE_4K 12
 #define TF_EM_PAGE_SIZE_8K 13
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 17/51] net/bnxt: implement support for TCAM access
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (15 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 18/51] net/bnxt: multiple device implementation Ajit Khaparde
                           ` (33 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

Implement TCAM alloc, free, bind, and unbind functions, exposed
through the per-device operation table. Update tf_core to dispatch
TCAM requests via these device ops instead of the session pools, and
rework tf_msg to take the new tf_tcam parameter structures.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c      | 258 ++++++++++-----------
 drivers/net/bnxt/tf_core/tf_device.h    |  14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |  25 ++-
 drivers/net/bnxt/tf_core/tf_msg.c       |  31 +--
 drivers/net/bnxt/tf_core/tf_msg.h       |   4 +-
 drivers/net/bnxt/tf_core/tf_tcam.c      | 285 +++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tcam.h      |  66 ++++--
 7 files changed, 480 insertions(+), 203 deletions(-)
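
For context (not part of the diff below), a minimal sketch of how a caller
exercises the reworked TCAM path through the tf_core API. The entry points
and parameter structures are the ones touched by this patch; the helper
name, direction, key width and the key/mask/result buffers are illustrative
assumptions only, not taken from any real flow template.

#include <stdint.h>
#include "tf_core.h"

static int example_wc_tcam_entry(struct tf *tfp)
{
	struct tf_alloc_tcam_entry_parms aparms = { 0 };
	struct tf_set_tcam_entry_parms sparms = { 0 };
	struct tf_free_tcam_entry_parms fparms = { 0 };
	uint8_t key[24] = { 0 };
	uint8_t mask[24] = { 0 };
	uint8_t result[8] = { 0 };
	int rc;

	/* Allocate a WC TCAM entry; tf_core now dispatches this through
	 * dev->ops->tf_dev_alloc_tcam rather than the session pools.
	 */
	aparms.dir = TF_DIR_RX;
	aparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	aparms.key_sz_in_bits = 160;	/* fits two 12B slices on Wh+ */
	rc = tf_alloc_tcam_entry(tfp, &aparms);
	if (rc)
		return rc;

	/* Program key/mask/result at the allocated index. */
	sparms.dir = TF_DIR_RX;
	sparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	sparms.idx = aparms.idx;
	sparms.key = key;
	sparms.mask = mask;
	sparms.key_sz_in_bits = 160;
	sparms.result = result;
	sparms.result_sz_in_bits = 64;
	rc = tf_set_tcam_entry(tfp, &sparms);
	if (rc) {
		/* On error, return the row via dev->ops->tf_dev_free_tcam. */
		fparms.dir = TF_DIR_RX;
		fparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
		fparms.idx = aparms.idx;
		tf_free_tcam_entry(tfp, &fparms);
	}

	return rc;
}
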

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 648d0d1bd..29522c66e 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,43 +19,6 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
-			       enum tf_device_type device,
-			       uint16_t key_sz_in_bits,
-			       uint16_t *num_slice_per_row)
-{
-	uint16_t key_bytes;
-	uint16_t slice_sz = 0;
-
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
-#define CFA_P4_WC_TCAM_SLICE_SIZE     12
-
-	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
-		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
-		if (device == TF_DEVICE_TYPE_WH) {
-			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
-			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
-		} else {
-			TFP_DRV_LOG(ERR,
-				    "Unsupported device type %d\n",
-				    device);
-			return -ENOTSUP;
-		}
-
-		if (key_bytes > *num_slice_per_row * slice_sz) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Key size %d is not supported\n",
-				    tf_tcam_tbl_2_str(tcam_tbl_type),
-				    key_bytes);
-			return -ENOTSUP;
-		}
-	} else { /* for other type of tcam */
-		*num_slice_per_row = 1;
-	}
-
-	return 0;
-}
-
 /**
  * Create EM Tbl pool of memory indexes.
  *
@@ -918,49 +881,56 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_alloc_parms aparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_alloc_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+	aparms.dir = parms->dir;
+	aparms.type = parms->tcam_tbl_type;
+	aparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	aparms.priority = parms->priority;
+	rc = dev->ops->tf_dev_alloc_tcam(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+			    strerror(-rc));
+		return rc;
 	}
 
-	index *= num_slice_per_row;
+	parms->idx = aparms.idx;
 
-	parms->idx = index;
 	return 0;
 }
 
@@ -969,55 +939,60 @@ tf_set_tcam_entry(struct tf *tfp,
 		  struct tf_set_tcam_entry_parms *parms)
 {
 	int rc;
-	int id;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_set_parms sparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_set_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	/* Verify that the entry has been previously allocated */
-	index = parms->idx / num_slice_per_row;
+	sparms.dir = parms->dir;
+	sparms.type = parms->tcam_tbl_type;
+	sparms.idx = parms->idx;
+	sparms.key = parms->key;
+	sparms.mask = parms->mask;
+	sparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	sparms.result = parms->result;
+	sparms.result_size = TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	rc = dev->ops->tf_dev_set_tcam(tfp, &sparms);
+	if (rc) {
 		TFP_DRV_LOG(ERR,
-		   "%s: %s: Invalid or not allocated index, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-		   parms->idx);
-		return -EINVAL;
+			    "%s: TCAM set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
-	rc = tf_msg_tcam_entry_set(tfp, parms);
-
-	return rc;
+	return 0;
 }
 
 int
@@ -1033,59 +1008,52 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row = 1;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_free_parms fparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 0,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = parms->idx / num_slice_per_row;
-
-	rc = ba_inuse(session_pool, index);
-	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+	if (dev->ops->tf_dev_free_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    index);
-		return -EINVAL;
+			    strerror(-rc));
+		return rc;
 	}
 
-	ba_free(session_pool, index);
-
-	rc = tf_msg_tcam_entry_free(tfp, parms);
+	fparms.dir = parms->dir;
+	fparms.type = parms->tcam_tbl_type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 1501b20d9..32d9a5442 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -116,8 +116,11 @@ struct tf_dev_ops {
 	 * [in] tfp
 	 *   Pointer to TF handle
 	 *
-	 * [out] slice_size
-	 *   Pointer to slice size the device supports
+	 * [in] type
+	 *   TCAM table type
+	 *
+	 * [in] key_sz
+	 *   Key size
 	 *
 	 * [out] num_slices_per_row
 	 *   Pointer to number of slices per row the device supports
@@ -126,9 +129,10 @@ struct tf_dev_ops {
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
-					 uint16_t *slice_size,
-					 uint16_t *num_slices_per_row);
+	int (*tf_dev_get_tcam_slice_info)(struct tf *tfp,
+					  enum tf_tcam_tbl_type type,
+					  uint16_t key_sz,
+					  uint16_t *num_slices_per_row);
 
 	/**
 	 * Allocation of an identifier element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index f4bd95f1c..77fb693dd 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -56,18 +56,21 @@ tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
  *   - (-EINVAL) on failure.
  */
 static int
-tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
-			     uint16_t *slice_size,
-			     uint16_t *num_slices_per_row)
+tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
+			      enum tf_tcam_tbl_type type,
+			      uint16_t key_sz,
+			      uint16_t *num_slices_per_row)
 {
-#define CFA_P4_WC_TCAM_SLICE_SIZE       12
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
 
-	if (slice_size == NULL || num_slices_per_row == NULL)
-		return -EINVAL;
-
-	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
-	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+	if (type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
+			return -ENOTSUP;
+	} else { /* for other type of tcam */
+		*num_slices_per_row = 1;
+	}
 
 	return 0;
 }
@@ -77,7 +80,7 @@ tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
  */
 const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
-	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 60274eb35..b50e1d48c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -9,6 +9,7 @@
 #include <string.h>
 
 #include "tf_msg_common.h"
+#include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
 #include "tf_session.h"
@@ -1480,27 +1481,19 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 int
 tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_set_tcam_entry_parms *parms)
+		      struct tf_tcam_set_parms *parms)
 {
 	int rc;
 	struct tfp_send_msg_parms mparms = { 0 };
 	struct hwrm_tf_tcam_set_input req = { 0 };
 	struct hwrm_tf_tcam_set_output resp = { 0 };
-	uint16_t key_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
-	uint16_t result_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
 	if (rc != 0)
 		return rc;
 
@@ -1508,11 +1501,11 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	if (parms->dir == TF_DIR_TX)
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	req.key_size = key_bytes;
-	req.mask_offset = key_bytes;
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
 	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * key_bytes;
-	req.result_size = result_bytes;
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
 	data_size = 2 * req.key_size + req.result_size;
 
 	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
@@ -1530,9 +1523,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 			   sizeof(buf.pa_addr));
 	}
 
-	tfp_memcpy(&data[0], parms->key, key_bytes);
-	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
-	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1554,7 +1547,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 int
 tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_free_tcam_entry_parms *in_parms)
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
 	struct hwrm_tf_tcam_free_input req =  { 0 };
@@ -1562,7 +1555,7 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
 	if (rc != 0)
 		return rc;
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1dad2b9fb..a3e0f7bba 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -247,7 +247,7 @@ int tf_msg_em_op(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_set(struct tf *tfp,
-			  struct tf_set_tcam_entry_parms *parms);
+			  struct tf_tcam_set_parms *parms);
 
 /**
  * Sends tcam entry 'free' to the Firmware.
@@ -262,7 +262,7 @@ int tf_msg_tcam_entry_set(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_free(struct tf *tfp,
-			   struct tf_free_tcam_entry_parms *parms);
+			   struct tf_tcam_free_parms *parms);
 
 /**
  * Sends Set message of a Table Type element to the firmware.
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 3ad99dd0d..b9dba5323 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -3,16 +3,24 @@
  * All rights reserved.
  */
 
+#include <string.h>
 #include <rte_common.h>
 
 #include "tf_tcam.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_rm_new.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_session.h"
+#include "tf_msg.h"
 
 struct tf;
 
 /**
  * TCAM DBs.
  */
-/* static void *tcam_db[TF_DIR_MAX]; */
+static void *tcam_db[TF_DIR_MAX];
 
 /**
  * TCAM Shadow DBs
@@ -22,7 +30,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -33,19 +41,131 @@ int
 tf_tcam_bind(struct tf *tfp __rte_unused,
 	     struct tf_tcam_cfg_parms *parms __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
+		db_cfg.rm_db = tcam_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: TCAM DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_tcam_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; done silently so that
+	 * cleanup of a partial creation can still proceed.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tcam_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tcam_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
-tf_tcam_alloc(struct tf *tfp __rte_unused,
-	      struct tf_tcam_alloc_parms *parms __rte_unused)
+tf_tcam_alloc(struct tf *tfp,
+	      struct tf_tcam_alloc_parms *parms)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Allocate requested element */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = (uint32_t *)&parms->idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	parms->idx *= num_slice_per_row;
+
 	return 0;
 }
 
@@ -53,6 +173,92 @@ int
 tf_tcam_free(struct tf *tfp __rte_unused,
 	     struct tf_tcam_free_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  0,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tcam_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx / num_slice_per_row;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	rc = tf_msg_tcam_entry_free(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
@@ -67,6 +273,77 @@ int
 tf_tcam_set(struct tf *tfp __rte_unused,
 	    struct tf_tcam_set_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry is not allocated, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	rc = tf_msg_tcam_entry_set(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d set failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 68c25eb1b..67c3bcb49 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -50,10 +50,18 @@ struct tf_tcam_alloc_parms {
 	 * [in] Type of the allocation
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint16_t idx;
 };
 
 /**
@@ -71,7 +79,7 @@ struct tf_tcam_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t idx;
+	uint16_t idx;
 	/**
 	 * [out] Reference count after free, only valid if session has been
 	 * created with shadow_copy.
@@ -90,7 +98,7 @@ struct tf_tcam_alloc_search_parms {
 	/**
 	 * [in] TCAM table type
 	 */
-	enum tf_tcam_tbl_type tcam_tbl_type;
+	enum tf_tcam_tbl_type type;
 	/**
 	 * [in] Enable search for matching entry
 	 */
@@ -100,9 +108,9 @@ struct tf_tcam_alloc_search_parms {
 	 */
 	uint8_t *key;
 	/**
-	 * [in] key size in bits (if search)
+	 * [in] key size (if search)
 	 */
-	uint16_t key_sz_in_bits;
+	uint16_t key_size;
 	/**
 	 * [in] Mask data to match on (if search)
 	 */
@@ -139,17 +147,29 @@ struct tf_tcam_set_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [in] Entry data
+	 * [in] Entry index to write to
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [in] Entry size
+	 * [in] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to write to
+	 * [in] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [in] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
@@ -165,17 +185,29 @@ struct tf_tcam_get_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [out] Entry data
+	 * [in] Entry index to read
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [out] Entry size
+	 * [out] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to read
+	 * [out] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [out] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [out] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [out] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 18/51] net/bnxt: multiple device implementation
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (16 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
                           ` (32 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

Implement the Identifier, Table Type and Resource Manager modules.
Integrate the Resource Manager with HCAPI.
Update session open/close handling.
Move to direct messages for the qcaps and resv requests.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c       | 751 ++++++++---------------
 drivers/net/bnxt/tf_core/tf_core.h       |  97 ++-
 drivers/net/bnxt/tf_core/tf_device.c     |  10 +-
 drivers/net/bnxt/tf_core/tf_device.h     |   1 +
 drivers/net/bnxt/tf_core/tf_device_p4.c  |  26 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |  29 +-
 drivers/net/bnxt/tf_core/tf_identifier.h |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  45 +-
 drivers/net/bnxt/tf_core/tf_msg.h        |   1 +
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 225 +++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  11 +-
 drivers/net/bnxt/tf_core/tf_session.c    |  28 +-
 drivers/net/bnxt/tf_core/tf_session.h    |   2 +-
 drivers/net/bnxt/tf_core/tf_tbl.c        | 611 +-----------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c   | 252 +++++++-
 drivers/net/bnxt/tf_core/tf_tbl_type.h   |   2 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |  12 +-
 drivers/net/bnxt/tf_core/tf_util.h       |  45 +-
 drivers/net/bnxt/tf_core/tfp.c           |   4 +-
 19 files changed, 880 insertions(+), 1276 deletions(-)
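
Illustrative sketch (not part of the diff below): the tf_core.h hunk reworks
struct tf_session_resources into per-direction sub-structures. The fragment
below shows how a caller might fill the new layout; the counts are arbitrary,
the function name is made up, and where the structure is passed into the
session open path is an assumption here - only the structure layout itself
comes from this patch.

    #include <string.h>
    #include "tf_core.h"

    static void example_fill_resources(struct tf_session_resources *res)
    {
        memset(res, 0, sizeof(*res));

        /* Identifiers are now grouped per direction, indexed by type */
        res->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
        res->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;

        /* The TCAM, table and EM counts follow the same pattern */
        res->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 64;
    }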

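Illustrative sketch (not part of the diff below): tf_rm_create_db now trims
the firmware reservation request down to HCAPI-backed elements that request
a non-zero count. The standalone fragment below mirrors that compaction
loop; the function name is made up for illustration and the include is
assumed, while the types (tf_rm_element_cfg, tf_rm_resc_req_entry,
TF_RM_ELEM_CFG_HCAPI) are the ones used in tf_rm_new.c.

    #include "tf_rm_new.h"

    /* Pack only HCAPI elements with a non-zero request into req[];
     * returns the number of entries written, i.e. the size passed to
     * the resource-allocation message.
     */
    static uint16_t
    example_build_resc_req(struct tf_rm_element_cfg *cfg,
                           uint16_t *alloc_cnt,
                           uint16_t num_elements,
                           struct tf_rm_resc_req_entry *req)
    {
        uint16_t i, j = 0;

        for (i = 0; i < num_elements; i++) {
            if (cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI ||
                alloc_cnt[i] == 0)
                continue;
            req[j].type = cfg[i].hcapi_type;
            req[j].min = alloc_cnt[i];
            req[j].max = alloc_cnt[i];
            j++;
        }
        return j;
    }
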
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 29522c66e..3e23d0513 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,284 +19,15 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- * [in] num_entries
- *   number of entries to write
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i, j;
-	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
-			    strerror(-ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Fill pool with indexes
-	 */
-	j = num_entries - 1;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-		j--;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
-
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- *
- * Return:
- */
-static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
-{
-	struct stack *pool = &session->em_pool[dir];
-	uint32_t *ptr;
-
-	ptr = stack_items(pool);
-
-	tfp_free(ptr);
-}
-
 int
-tf_open_session(struct tf                    *tfp,
+tf_open_session(struct tf *tfp,
 		struct tf_open_session_parms *parms)
-{
-	int rc;
-	struct tf_session *session;
-	struct tfp_calloc_parms alloc_parms;
-	unsigned int domain, bus, slot, device;
-	uint8_t fw_session_id;
-	int dir;
-
-	TF_CHECK_PARMS(tfp, parms);
-
-	/* Filter out any non-supported device types on the Core
-	 * side. It is assumed that the Firmware will be supported if
-	 * firmware open session succeeds.
-	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH) {
-		TFP_DRV_LOG(ERR,
-			    "Unsupported device type %d\n",
-			    parms->device_type);
-		return -ENOTSUP;
-	}
-
-	/* Build the beginning of session_id */
-	rc = sscanf(parms->ctrl_chan_name,
-		    "%x:%x:%x.%d",
-		    &domain,
-		    &bus,
-		    &slot,
-		    &device);
-	if (rc != 4) {
-		TFP_DRV_LOG(ERR,
-			    "Failed to scan device ctrl_chan_name\n");
-		return -EINVAL;
-	}
-
-	/* open FW session and get a new session_id */
-	rc = tf_msg_session_open(tfp,
-				 parms->ctrl_chan_name,
-				 &fw_session_id);
-	if (rc) {
-		/* Log error */
-		if (rc == -EEXIST)
-			TFP_DRV_LOG(ERR,
-				    "Session is already open, rc:%s\n",
-				    strerror(-rc));
-		else
-			TFP_DRV_LOG(ERR,
-				    "Open message send failed, rc:%s\n",
-				    strerror(-rc));
-
-		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
-		return rc;
-	}
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session_info);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
-
-	/* Allocate core data for the session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session->core_data = alloc_parms.mem_va;
-
-	session = (struct tf_session *)tfp->session->core_data;
-	tfp_memcpy(session->ctrl_chan_name,
-		   parms->ctrl_chan_name,
-		   TF_SESSION_NAME_MAX);
-
-	/* Initialize Session */
-	session->dev = NULL;
-	tf_rm_init(tfp);
-
-	/* Construct the Session ID */
-	session->session_id.internal.domain = domain;
-	session->session_id.internal.bus = bus;
-	session->session_id.internal.device = device;
-	session->session_id.internal.fw_session_id = fw_session_id;
-
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Shadow DB configuration */
-	if (parms->shadow_copy) {
-		/* Ignore shadow_copy setting */
-		session->shadow_copy = 0;/* parms->shadow_copy; */
-#if (TF_SHADOW == 1)
-		rc = tf_rm_shadow_db_init(tfs);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%s",
-				    strerror(-rc));
-		/* Add additional processing */
-#endif /* TF_SHADOW */
-	}
-
-	/* Adjust the Session with what firmware allowed us to get */
-	rc = tf_rm_allocate_validate(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Rm allocate validate failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Initialize EM pool */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session,
-				       (enum tf_dir)dir,
-				       TF_SESSION_EM_POOL_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EM Pool initialization failed\n");
-			goto cleanup_close;
-		}
-	}
-
-	session->ref_count++;
-
-	/* Return session ID */
-	parms->session_id = session->session_id;
-
-	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    parms->session_id.internal.domain,
-		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
-
-	return 0;
-
- cleanup:
-	tfp_free(tfp->session->core_data);
-	tfp_free(tfp->session);
-	tfp->session = NULL;
-	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
-}
-
-int
-tf_open_session_new(struct tf *tfp,
-		    struct tf_open_session_parms *parms)
 {
 	int rc;
 	unsigned int domain, bus, slot, device;
 	struct tf_session_open_session_parms oparms;
 
-	TF_CHECK_PARMS(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
@@ -347,33 +78,8 @@ tf_open_session_new(struct tf *tfp,
 }
 
 int
-tf_attach_session(struct tf *tfp __rte_unused,
-		  struct tf_attach_session_parms *parms __rte_unused)
-{
-#if (TF_SHARED == 1)
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/* - Open the shared memory for the attach_chan_name
-	 * - Point to the shared session for this Device instance
-	 * - Check that session is valid
-	 * - Attach to the firmware so it can record there is more
-	 *   than one client of the session.
-	 */
-
-	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_attach(tfp,
-					   parms->ctrl_chan_name,
-					   parms->session_id);
-	}
-#endif /* TF_SHARED */
-	return -1;
-}
-
-int
-tf_attach_session_new(struct tf *tfp,
-		      struct tf_attach_session_parms *parms)
+tf_attach_session(struct tf *tfp,
+		  struct tf_attach_session_parms *parms)
 {
 	int rc;
 	unsigned int domain, bus, slot, device;
@@ -436,65 +142,6 @@ tf_attach_session_new(struct tf *tfp,
 
 int
 tf_close_session(struct tf *tfp)
-{
-	int rc;
-	int rc_close = 0;
-	struct tf_session *tfs;
-	union tf_session_id session_id;
-	int dir;
-
-	TF_CHECK_TFP_SESSION(tfp);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Cleanup if we're last user of the session */
-	if (tfs->ref_count == 1) {
-		/* Cleanup any outstanding resources */
-		rc_close = tf_rm_close(tfp);
-	}
-
-	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Message send failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		/* Update the ref_count */
-		tfs->ref_count--;
-	}
-
-	session_id = tfs->session_id;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Free EM pool */
-		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, (enum tf_dir)dir);
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
-
-	TFP_DRV_LOG(INFO,
-		    "Session closed, session_id:%d\n",
-		    session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    session_id.internal.domain,
-		    session_id.internal.bus,
-		    session_id.internal.device,
-		    session_id.internal.fw_session_id);
-
-	return rc_close;
-}
-
-int
-tf_close_session_new(struct tf *tfp)
 {
 	int rc;
 	struct tf_session_close_session_parms cparms = { 0 };
@@ -620,76 +267,9 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
-int tf_alloc_identifier(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	int id;
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	if (rc) {
-		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	id = ba_alloc(session_pool);
-
-	if (id == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		return -ENOMEM;
-	}
-	parms->id = id;
-	return 0;
-}
-
 int
-tf_alloc_identifier_new(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
+tf_alloc_identifier(struct tf *tfp,
+		    struct tf_alloc_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -732,7 +312,7 @@ tf_alloc_identifier_new(struct tf *tfp,
 	}
 
 	aparms.dir = parms->dir;
-	aparms.ident_type = parms->ident_type;
+	aparms.type = parms->ident_type;
 	aparms.id = &id;
 	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
 	if (rc) {
@@ -748,79 +328,9 @@ tf_alloc_identifier_new(struct tf *tfp,
 	return 0;
 }
 
-int tf_free_identifier(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	int rc;
-	int ba_rc;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: %s Identifier pool access failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	ba_rc = ba_inuse(session_pool, (int)parms->id);
-
-	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    parms->id);
-		return -EINVAL;
-	}
-
-	ba_free(session_pool, (int)parms->id);
-
-	return 0;
-}
-
 int
-tf_free_identifier_new(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
+tf_free_identifier(struct tf *tfp,
+		   struct tf_free_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -862,12 +372,12 @@ tf_free_identifier_new(struct tf *tfp,
 	}
 
 	fparms.dir = parms->dir;
-	fparms.ident_type = parms->ident_type;
+	fparms.type = parms->ident_type;
 	fparms.id = parms->id;
 	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Identifier allocation failed, rc:%s\n",
+			    "%s: Identifier free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
@@ -1057,3 +567,242 @@ tf_free_tcam_entry(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_alloc_parms aparms;
+	uint32_t idx;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_tbl_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.type = parms->type;
+	aparms.idx = &idx;
+	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_tbl_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.type = parms->type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_set_parms sparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&sparms, 0, sizeof(struct tf_tbl_set_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.data = parms->data;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_parms gparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&gparms, 0, sizeof(struct tf_tbl_get_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.data = parms->data;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_get_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index bb456bba7..a7a7bd38a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -384,34 +384,87 @@ struct tf {
 	struct tf_session_info *session;
 };
 
+/**
+ * Identifier resource definition
+ */
+struct tf_identifier_resources {
+	/**
+	 * Array of TF Identifiers where each entry is expected to be
+	 * set to the requested resource number of that specific type.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t cnt[TF_IDENT_TYPE_MAX];
+};
+
+/**
+ * Table type resource definition
+ */
+struct tf_tbl_resources {
+	/**
+	 * Array of TF Table types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tbl_type.
+	 */
+	uint16_t cnt[TF_TBL_TYPE_MAX];
+};
+
+/**
+ * TCAM type resource definition
+ */
+struct tf_tcam_resources {
+	/**
+	 * Array of TF TCAM types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t cnt[TF_TCAM_TBL_TYPE_MAX];
+};
+
+/**
+ * EM type resource definition
+ */
+struct tf_em_resources {
+	/**
+	 * Array of TF EM table types where each entry is expected to
+	 * be set to the requested resource number of that specific
+	 * type. The index used is tf_em_tbl_type.
+	 */
+	uint16_t cnt[TF_EM_TBL_TYPE_MAX];
+};
+
 /**
  * tf_session_resources parameter definition.
  */
 struct tf_session_resources {
-	/** [in] Requested Identifier Resources
+	/**
+	 * [in] Requested Identifier Resources
 	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
+	 * Number of identifier resources requested for the
+	 * session.
 	 */
-	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested Index Table resource counts
+	struct tf_identifier_resources ident_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested Index Table resource counts
 	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
+	 * The number of index table resources requested for the
+	 * session.
 	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested TCAM Table resource counts
+	struct tf_tbl_resources tbl_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested TCAM Table resource counts
 	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
+	 * The number of TCAM table resources requested for the
+	 * session.
 	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested EM resource counts
+
+	struct tf_tcam_resources tcam_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested EM resource counts
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * The number of internal EM table resources requested for the
+	 * session.
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+	struct tf_em_resources em_cnt[TF_DIR_MAX];
 };
 
 /**
@@ -497,9 +550,6 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
-int tf_open_session_new(struct tf *tfp,
-			struct tf_open_session_parms *parms);
-
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -565,8 +615,6 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
-int tf_attach_session_new(struct tf *tfp,
-			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -576,7 +624,6 @@ int tf_attach_session_new(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
-int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -631,8 +678,6 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
-int tf_alloc_identifier_new(struct tf *tfp,
-			    struct tf_alloc_identifier_parms *parms);
 
 /**
  * free identifier resource
@@ -645,8 +690,6 @@ int tf_alloc_identifier_new(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
-int tf_free_identifier_new(struct tf *tfp,
-			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
@@ -1277,7 +1320,7 @@ struct tf_bulk_get_tbl_entry_parms {
  * provided data buffer is too small for the data type requested.
  */
 int tf_bulk_get_tbl_entry(struct tf *tfp,
-		     struct tf_bulk_get_tbl_entry_parms *parms);
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 /**
  * @page exact_match Exact Match Table
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 4c46cadc6..b474e8c25 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -43,6 +43,10 @@ dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Initial function initialization */
+	dev_handle->ops = &tf_dev_ops_p4_init;
+
 	/* Initialize the modules */
 
 	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
@@ -78,7 +86,7 @@ dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
-	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 32d9a5442..c31bf2357 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -407,6 +407,7 @@ struct tf_dev_ops {
 /**
  * Supported device operation structures
  */
+extern const struct tf_dev_ops tf_dev_ops_p4_init;
 extern const struct tf_dev_ops tf_dev_ops_p4;
 
 #endif /* _TF_DEVICE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 77fb693dd..9e332c594 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -75,6 +75,26 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 	return 0;
 }
 
+/**
+ * Truflow P4 device specific functions
+ */
+const struct tf_dev_ops tf_dev_ops_p4_init = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
+	.tf_dev_alloc_ident = NULL,
+	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_tbl = NULL,
+	.tf_dev_alloc_search_tbl = NULL,
+	.tf_dev_set_tbl = NULL,
+	.tf_dev_get_tbl = NULL,
+	.tf_dev_alloc_tcam = NULL,
+	.tf_dev_free_tcam = NULL,
+	.tf_dev_alloc_search_tcam = NULL,
+	.tf_dev_set_tcam = NULL,
+	.tf_dev_get_tcam = NULL,
+};
+
 /**
  * Truflow P4 device specific functions
  */
@@ -85,14 +105,14 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
-	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
-	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
-	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_get_tcam = NULL,
 	.tf_dev_insert_em_entry = tf_em_insert_entry,
 	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index e89f9768b..ee07a6aea 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -45,19 +45,22 @@ tf_ident_bind(struct tf *tfp,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
-		db_cfg.rm_db = ident_db[i];
+		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
+		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "%s: Identifier DB creation failed\n",
 				    tf_dir_2_str(i));
+
 			return rc;
 		}
 	}
 
 	init = 1;
 
+	printf("Identifier - initialized\n");
+
 	return 0;
 }
 
@@ -73,8 +76,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 	/* Bail if nothing has been initialized done silent as to
 	 * allow for creation cleanup.
 	 */
-	if (!init)
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Identifier DBs created\n");
 		return -EINVAL;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
@@ -96,6 +102,7 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 	       struct tf_ident_alloc_parms *parms)
 {
 	int rc;
+	uint32_t id;
 	struct tf_rm_allocate_parms aparms = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -109,17 +116,19 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 
 	/* Allocate requested element */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
-	aparms.index = (uint32_t *)&parms->id;
+	aparms.db_index = parms->type;
+	aparms.index = &id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Failed allocate, type:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type);
+			    parms->type);
 		return rc;
 	}
 
+	*parms->id = id;
+
 	return 0;
 }
 
@@ -143,7 +152,7 @@ tf_ident_free(struct tf *tfp __rte_unused,
 
 	/* Check if element is in use */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
+	aparms.db_index = parms->type;
 	aparms.index = parms->id;
 	aparms.allocated = &allocated;
 	rc = tf_rm_is_allocated(&aparms);
@@ -154,21 +163,21 @@ tf_ident_free(struct tf *tfp __rte_unused,
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
 
 	/* Free requested element */
 	fparms.rm_db = ident_db[parms->dir];
-	fparms.db_index = parms->ident_type;
+	fparms.db_index = parms->type;
 	fparms.index = parms->id;
 	rc = tf_rm_free(&fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Free failed, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index 1c5319b5e..6e36c525f 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -43,7 +43,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [out] Identifier allocated
 	 */
@@ -61,7 +61,7 @@ struct tf_ident_free_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [in] ID to free
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index b50e1d48c..a2e3840f0 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -12,6 +12,7 @@
 #include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
+#include "tf_common.h"
 #include "tf_session.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -935,13 +936,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	struct tf_rm_resc_req_entry *data;
 	int dma_size;
 
-	if (size == 0 || query == NULL || resv_strategy == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Resource QCAPS parameter error, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-EINVAL));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS3(tfp, query, resv_strategy);
 
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
@@ -962,7 +957,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.qcaps_size = size;
-	req.qcaps_addr = qcaps_buf.pa_addr;
+	req.qcaps_addr = tfp_cpu_to_le_64(qcaps_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
 	parms.req_data = (uint32_t *)&req;
@@ -980,18 +975,29 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: QCAPS message error, rc:%s\n",
+			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+
+	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_cpu_to_le_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
+
+		printf("type: %d(0x%x) %d %d\n",
+		       query[i].type,
+		       query[i].type,
+		       query[i].min,
+		       query[i].max);
+
 	}
 
 	*resv_strategy = resp.flags &
@@ -1021,6 +1027,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	struct tf_rm_resc_entry *resv_data;
 	int dma_size;
 
+	TF_CHECK_PARMS3(tfp, request, resv);
+
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1053,8 +1061,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
 	}
 
-	req.req_addr = req_buf.pa_addr;
-	req.resp_addr = resv_buf.pa_addr;
+	req.req_addr = tfp_cpu_to_le_64(req_buf.pa_addr);
+	req.resc_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
 	parms.req_data = (uint32_t *)&req;
@@ -1072,18 +1080,28 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Alloc message error, rc:%s\n",
+			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("\nRESV\n");
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
 		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
 		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
 		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+
+		printf("%d type: %d(0x%x) %d %d\n",
+		       i,
+		       resv[i].type,
+		       resv[i].type,
+		       resv[i].start,
+		       resv[i].stride);
 	}
 
 	tf_msg_free_dma_buf(&req_buf);
@@ -1460,7 +1478,8 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
 
-	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	data_size = params->num_entries * params->entry_sz_in_bytes;
+
 	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
 
 	MSG_PREP(parms,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index a3e0f7bba..fb635f6dc 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_rm_new.h"
+#include "tf_tcam.h"
 
 struct tf;
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 7cadb231f..6abf79aa1 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -10,6 +10,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_rm_new.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
 #include "tf_device.h"
@@ -65,6 +66,46 @@ struct tf_rm_new_db {
 	struct tf_rm_element *db;
 };
 
+/**
+ * Count the number of HCAPI-configured elements that carry a
+ * non-zero reservation request.
+ *
+ * The firmware reservation request only needs to cover HCAPI elements
+ * that were actually asked for, so this count is used to size the
+ * request array before it is built.
+ *
+ * [in] cfg
+ *   Pointer to the DB configuration
+ *
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
+ *
+ * [in] count
+ *   Number of DB configuration elements
+ *
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
+ *
+ * Returns:
+ *   Nothing; the count is returned through valid_count.
+ */
+static void
+tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
+{
+	int i;
+	uint16_t cnt = 0;
+
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
+	}
+
+	*valid_count = cnt;
+}
 
 /**
  * Resource Manager Adjust of base index definitions.
@@ -132,6 +173,7 @@ tf_rm_create_db(struct tf *tfp,
 {
 	int rc;
 	int i;
+	int j;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	uint16_t max_types;
@@ -143,6 +185,9 @@ tf_rm_create_db(struct tf *tfp,
 	struct tf_rm_new_db *rm_db;
 	struct tf_rm_element *db;
 	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -177,10 +222,19 @@ tf_rm_create_db(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/* Process capabilities against db requirements */
+	/* Process capabilities against DB requirements. A DB can hold
+	 * elements that are not HCAPI, so those are trimmed from the
+	 * request message while the DB keeps the full set for fast
+	 * lookup. Elements with no requested allocation are dropped
+	 * from the request as well.
+	 */
+	tf_rm_count_hcapi_reservations(parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
 
 	/* Alloc request, alignment already set */
-	cparms.nitems = parms->num_elements;
+	cparms.nitems = (size_t)hcapi_items;
 	cparms.size = sizeof(struct tf_rm_resc_req_entry);
 	rc = tfp_calloc(&cparms);
 	if (rc)
@@ -195,15 +249,24 @@ tf_rm_create_db(struct tf *tfp,
 	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
 
 	/* Build the request */
-	for (i = 0; i < parms->num_elements; i++) {
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
 		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			req[i].type = parms->cfg[i].hcapi_type;
-			/* Check that we can get the full amount allocated */
-			if (parms->alloc_num[i] <=
+			/* Only perform reservation for entries that
+			 * have been requested.
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
 			    query[parms->cfg[i].hcapi_type].max) {
-				req[i].min = parms->alloc_num[i];
-				req[i].max = parms->alloc_num[i];
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
 			} else {
 				TFP_DRV_LOG(ERR,
 					    "%s: Resource failure, type:%d\n",
@@ -211,19 +274,16 @@ tf_rm_create_db(struct tf *tfp,
 					    parms->cfg[i].hcapi_type);
 				TFP_DRV_LOG(ERR,
 					"req:%d, avail:%d\n",
-					parms->alloc_num[i],
+					parms->alloc_cnt[i],
 					query[parms->cfg[i].hcapi_type].max);
 				return -EINVAL;
 			}
-		} else {
-			/* Skip the element */
-			req[i].type = CFA_RESOURCE_TYPE_INVALID;
 		}
 	}
 
 	rc = tf_msg_session_resc_alloc(tfp,
 				       parms->dir,
-				       parms->num_elements,
+				       hcapi_items,
 				       req,
 				       resv);
 	if (rc)
@@ -246,42 +306,74 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
 	db = rm_db->db;
-	for (i = 0; i < parms->num_elements; i++) {
-		/* If allocation failed for a single entry the DB
-		 * creation is considered a failure.
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+
+		/* Skip any non HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
+
+		/* If the element didn't request an allocation there is
+		 * no need to create a pool nor to verify the reservation.
 		 */
-		if (parms->alloc_num[i] != resv[i].stride) {
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element had requested an allocation and that
+		 * allocation was a success (full amount) then
+		 * allocate the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
 				    "%s: Alloc failed, type:%d\n",
 				    tf_dir_2_str(parms->dir),
-				    i);
+				    db[i].cfg_type);
 			TFP_DRV_LOG(ERR,
 				    "req:%d, alloc:%d\n",
-				    parms->alloc_num[i],
-				    resv[i].stride);
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
 			goto fail;
 		}
-
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-		db[i].alloc.entry.start = resv[i].start;
-		db[i].alloc.entry.stride = resv[i].stride;
-
-		/* Create pool */
-		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
-			     sizeof(struct bitalloc));
-		/* Alloc request, alignment already set */
-		cparms.nitems = pool_size;
-		cparms.size = sizeof(struct bitalloc);
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-		db[i].pool = (struct bitalloc *)cparms.mem_va;
 	}
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
-	parms->rm_db = (void *)rm_db;
+	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
@@ -307,13 +399,15 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 	int i;
 	struct tf_rm_new_db *rm_db;
 
+	TF_CHECK_PARMS1(parms);
+
 	/* Traverse the DB and clear each pool.
 	 * NOTE:
 	 *   Firmware is not cleared. It will be cleared on close only.
 	 */
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db->pool);
+		tfp_free((void *)rm_db->db[i].pool);
 
 	tfp_free((void *)parms->rm_db);
 
@@ -325,11 +419,11 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc = 0;
 	int id;
+	uint32_t index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -339,6 +433,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		TFP_DRV_LOG(ERR,
@@ -353,15 +458,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 				TF_RM_ADJUST_ADD_BASE,
 				parms->db_index,
 				id,
-				parms->index);
+				&index);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc adjust of base index failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -1;
+		return -EINVAL;
 	}
 
+	*parms->index = index;
+
 	return rc;
 }
 
@@ -373,8 +480,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -384,6 +490,17 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -409,8 +526,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -420,6 +536,17 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -442,8 +569,7 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -465,8 +591,7 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
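
For reference, the reservation compression in tf_rm_create_db() above only
forwards HCAPI-backed elements with a non-zero request to the firmware,
while the DB keeps one slot per element for direct lookup. A minimal sketch
of the counting step is shown below; tf_rm_count_hcapi_reservations() is
defined earlier in this file and may differ in detail, so the example_
helper here is an illustration only:

	/* Sketch of the counting idea; assumes tf_rm_new.h definitions */
	static void
	example_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
					 uint16_t *alloc_cnt,
					 uint16_t num_elements,
					 uint16_t *hcapi_items)
	{
		uint16_t i;
		uint16_t cnt = 0;

		for (i = 0; i < num_elements; i++) {
			/* Count only HCAPI elements that actually request
			 * a reservation; everything else stays out of the
			 * firmware message.
			 */
			if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
			    alloc_cnt[i] > 0)
				cnt++;
		}

		*hcapi_items = cnt;
	}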
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 6d8234ddc..ebf38c411 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -135,13 +135,16 @@ struct tf_rm_create_db_parms {
 	 */
 	struct tf_rm_element_cfg *cfg;
 	/**
-	 * Allocation number array. Array size is num_elements.
+	 * Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
 	 */
-	uint16_t *alloc_num;
+	uint16_t *alloc_cnt;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *rm_db;
+	void **rm_db;
 };
 
 /**
@@ -382,7 +385,7 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
  * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI type.
+ * requested HCAPI RM type.
  *
  * [in] parms
  *   Pointer to get hcapi parameters
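
With alloc_cnt and the double-pointer rm_db handle above, a module bind
fills the create parameters roughly as follows. This mirrors how
tf_tbl_bind() and tf_tcam_bind() use the API later in this patch; the
example_ names and the single count array (per-direction in the real
driver) are simplifications:

	static void *example_db[TF_DIR_MAX];

	static int
	example_module_bind(struct tf *tfp,
			    struct tf_rm_element_cfg *cfg,
			    uint16_t num_elements,
			    uint16_t *cnt)	/* from tf_session_resources */
	{
		struct tf_rm_create_db_parms db_cfg = { 0 };
		int rc;
		int i;

		for (i = 0; i < TF_DIR_MAX; i++) {
			db_cfg.dir = i;
			db_cfg.num_elements = num_elements;
			db_cfg.cfg = cfg;
			db_cfg.alloc_cnt = cnt;
			/* The DB handle is now returned through a double
			 * pointer rather than written into the parms.
			 */
			db_cfg.rm_db = &example_db[i];
			rc = tf_rm_create_db(tfp, &db_cfg);
			if (rc)
				return rc;
		}

		return 0;
	}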
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 1917f8100..3a602618c 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -95,21 +95,11 @@ tf_session_open_session(struct tf *tfp,
 		      parms->open_cfg->device_type,
 		      session->shadow_copy,
 		      &parms->open_cfg->resources,
-		      session->dev);
+		      &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
 
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
 	session->ref_count++;
 
 	return 0;
@@ -119,10 +109,6 @@ tf_session_open_session(struct tf *tfp,
 	tfp_free(tfp->session);
 	tfp->session = NULL;
 	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
 }
 
 int
@@ -231,17 +217,7 @@ int
 tf_session_get_device(struct tf_session *tfs,
 		      struct tf_dev_info **tfd)
 {
-	int rc;
-
-	if (tfs->dev == NULL) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR,
-			    "Device not created, rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	*tfd = tfs->dev;
+	*tfd = &tfs->dev;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 92792518b..705bb0955 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -97,7 +97,7 @@ struct tf_session {
 	uint8_t ref_count;
 
 	/** Device handle */
-	struct tf_dev_info *dev;
+	struct tf_dev_info dev;
 
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index a68335304..e594f0248 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -761,163 +761,6 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 	return 0;
 }
 
-/**
- * Internal function to set a Table Entry. Supports all internal Table Types
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_set_tbl_entry_internal(struct tf *tfp,
-			  struct tf_set_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
 /**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
@@ -1145,266 +988,6 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
-/**
- * Allocate External Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_external(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	uint32_t index;
-	struct tf_session *tfs;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	/* Allocate an element
-	 */
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type);
-		return rc;
-	}
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Allocate Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	int free_cnt;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	id = ba_alloc(session_pool);
-	if (id == -1) {
-		free_cnt = ba_free_count(session_pool);
-
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d, free:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   free_cnt);
-		return -ENOMEM;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_ADD_BASE,
-			    id,
-			    &index);
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Free External Tbl entry to the session pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry freed
- *
- * - Failure, entry not successfully freed for these reasons
- *  -ENOMEM
- *  -EOPNOTSUPP
- *  -EINVAL
- */
-static int
-tf_free_tbl_entry_pool_external(struct tf *tfp,
-				struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_session *tfs;
-	uint32_t index;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	index = parms->idx;
-
-	rc = stack_push(pool, index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, consistency error, stack full, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-	}
-	return rc;
-}
-
-/**
- * Free Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_free_tbl_entry_pool_internal(struct tf *tfp,
-		       struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	int id;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	uint32_t index;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Check if element was indeed allocated */
-	id = ba_inuse_free(session_pool, index);
-	if (id == -1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -ENOMEM;
-	}
-
-	return rc;
-}
-
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
 tbl_scope_cb_find(struct tf_session *session,
@@ -1584,113 +1167,7 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 	return -EINVAL;
 }
 
-/* API defined in tf_core.h */
-int
-tf_set_tbl_entry(struct tf *tfp,
-		 struct tf_set_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		void *base_addr;
-		uint32_t offset = parms->idx;
-		uint32_t tbl_scope_id;
-
-		session = (struct tf_session *)(tfp->session->core_data);
-
-		tbl_scope_id = parms->tbl_scope_id;
-
-		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			TFP_DRV_LOG(ERR,
-				    "%s, Table scope not allocated\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		/* Get the table scope control block associated with the
-		 * external pool
-		 */
-		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
-
-		if (tbl_scope_cb == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, table scope error\n",
-				    tf_dir_2_str(parms->dir));
-				return -EINVAL;
-		}
-
-		/* External table, implicitly the Action table */
-		base_addr = (void *)(uintptr_t)
-		hcapi_get_table_page(&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE], offset);
-
-		if (base_addr == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Base address lookup failed\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		offset %= TF_EM_PAGE_SIZE;
-		rte_memcpy((char *)base_addr + offset,
-			   parms->data,
-			   parms->data_sz_in_bytes);
-	} else {
-		/* Internal table type processing */
-		rc = tf_set_tbl_entry_internal(tfp, parms);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Set failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-		}
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_get_tbl_entry(struct tf *tfp,
-		 struct tf_get_tbl_entry_parms *parms)
-{
-	int rc = 0;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
-		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
-			    tf_dir_2_str(parms->dir));
-
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_get_tbl_entry_internal(tfp, parms);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s, Get failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
+/* API defined in tf_core.h */
 int
 tf_bulk_get_tbl_entry(struct tf *tfp,
 		 struct tf_bulk_get_tbl_entry_parms *parms)
@@ -1749,92 +1226,6 @@ tf_free_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_entry(struct tf *tfp,
-		   struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	/*
-	 * No shadow copy support for external tables, allocate and return
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
-		/* Entry found and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_entry(struct tf *tfp,
-		  struct tf_free_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/*
-	 * No shadow of external tables so just free the entry
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_free_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_free_tbl_entry_shadow(tfs, parms);
-		/* Entry free'ed and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
-
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-	return rc;
-}
-
-
 static void
 tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
 			struct hcapi_cfa_em_page_tbl *tp_next)
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index b79706f97..51f8f0740 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -6,13 +6,18 @@
 #include <rte_common.h>
 
 #include "tf_tbl_type.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Table DBs.
  */
-/* static void *tbl_db[TF_DIR_MAX]; */
+static void *tbl_db[TF_DIR_MAX];
 
 /**
  * Table Shadow DBs
@@ -22,7 +27,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -30,29 +35,164 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_bind(struct tf *tfp __rte_unused,
-	    struct tf_tbl_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
 	return 0;
 }
 
 int
 tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; this allows unbind to
+	 * be called as cleanup after a failed creation.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Table DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms __rte_unused)
+	     struct tf_tbl_alloc_parms *parms)
 {
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
 	return 0;
 }
 
 int
 tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms __rte_unused)
+	    struct tf_tbl_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
 	return 0;
 }
 
@@ -64,15 +204,107 @@ tf_tbl_alloc_search(struct tf *tfp __rte_unused,
 }
 
 int
-tf_tbl_set(struct tf *tfp __rte_unused,
-	   struct tf_tbl_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
 int
-tf_tbl_get(struct tf *tfp __rte_unused,
-	   struct tf_tbl_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index 11f2aa333..3474489a6 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -55,7 +55,7 @@ struct tf_tbl_alloc_parms {
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint32_t *idx;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b9dba5323..e0fac31f2 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -38,8 +38,8 @@ static uint8_t init;
 /* static uint8_t shadow_init; */
 
 int
-tf_tcam_bind(struct tf *tfp __rte_unused,
-	     struct tf_tcam_cfg_parms *parms __rte_unused)
+tf_tcam_bind(struct tf *tfp,
+	     struct tf_tcam_cfg_parms *parms)
 {
 	int rc;
 	int i;
@@ -59,8 +59,8 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
-		db_cfg.rm_db = tcam_db[i];
+		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
+		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
@@ -72,11 +72,13 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 
 	init = 1;
 
+	printf("TCAM - initialized\n");
+
 	return 0;
 }
 
 int
-tf_tcam_unbind(struct tf *tfp __rte_unused)
+tf_tcam_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index 4099629ea..ad8edaf30 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -10,32 +10,57 @@
 
 /**
  * Helper function converting direction to text string
+ *
+ * [in] dir
+ *   Receive or transmit direction identifier
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the direction
  */
-const char
-*tf_dir_2_str(enum tf_dir dir);
+const char *tf_dir_2_str(enum tf_dir dir);
 
 /**
  * Helper function converting identifier to text string
+ *
+ * [in] id_type
+ *   Identifier type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the identifier
  */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
+const char *tf_ident_2_str(enum tf_identifier_type id_type);
 
 /**
  * Helper function converting tcam type to text string
+ *
+ * [in] tcam_type
+ *   TCAM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the tcam
  */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+const char *tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
 
 /**
  * Helper function converting tbl type to text string
+ *
+ * [in] tbl_type
+ *   Table type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the table type
  */
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
 
 /**
  * Helper function converting em tbl type to text string
+ *
+ * [in] em_type
+ *   EM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the EM type
  */
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
 #endif /* _TF_UTIL_H_ */
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 3bce3ade1..69d1c9a1f 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -102,13 +102,13 @@ tfp_calloc(struct tfp_calloc_parms *parms)
 				    (parms->nitems * parms->size),
 				    parms->alignment);
 	if (parms->mem_va == NULL) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_va\n");
 		return -ENOMEM;
 	}
 
 	parms->mem_pa = (void *)((uintptr_t)rte_mem_virt2iova(parms->mem_va));
 	if (parms->mem_pa == (void *)((uintptr_t)RTE_BAD_IOVA)) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_pa\n");
 		return -ENOMEM;
 	}
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 19/51] net/bnxt: update identifier with remap support
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (17 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 18/51] net/bnxt: multiple device implementation Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
                           ` (31 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add Identifier L2 CTXT Remap to the P4 device and update
  cfa_resource_types.h to pick up the new resource type (an
  illustrative excerpt of the resulting identifier cfg entry follows
  the diffstat below).

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 110 ++++++++++--------
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   2 +-
 2 files changed, 60 insertions(+), 52 deletions(-)
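
The net effect on the per-device identifier configuration is that element 0
(the L2 context) becomes an HCAPI-managed resource mapped to the new remap
type. A short excerpt mirroring the tf_device_p4.h hunk at the end of this
patch (the array name below is illustrative, not from the tree):

	struct tf_rm_element_cfg tf_ident_p4_excerpt[] = {
		/* L2 CTXT: now HCAPI-managed, maps to the new remap type */
		{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
		{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
		/* ... remaining entries unchanged ... */
	};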

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 11e8892f4..058d8cc88 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -20,46 +20,48 @@
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_REMAP   0x1UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x2UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x3UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x4UL
 /* Wildcard TCAM Profile Id */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x5UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x6UL
 /* Meter Profile */
-#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x7UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+#define CFA_RESOURCE_TYPE_P59_METER           0x8UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x9UL
 /* Source Properties TCAM */
-#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0xaUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xbUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xcUL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
-#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
 /* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
 /* Link Aggrigation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
@@ -105,30 +107,32 @@
 #define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
 #define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
@@ -176,26 +180,28 @@
 #define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
 #define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
@@ -243,24 +249,26 @@
 #define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
 #define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 5cd02b298..235d81f96 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,7 +12,7 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 20/51] net/bnxt: update RM with residual checker
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (18 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
                           ` (30 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add a residual checker to the TF Host RM as well as new RM APIs. On
  close the RM scans the DB and checks for any remaining elements. If
  any are found they are logged and a FW message is sent so FW can
  scrub that specific type of resource (a simplified sketch of this
  close-time flow follows the diffstat below).
- Update the module bind to be aware of the module type, for each of
  the modules.
- Add additional type-to-string util functions.
- Fix the device naming to be in compliance with TF.
- Update the device unbind order to ensure TCAMs get flushed first.
- Update the close functionality such that the session gets closed
  after the device is unbound.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device.c     |  53 +++--
 drivers/net/bnxt/tf_core/tf_device.h     |  25 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   1 -
 drivers/net/bnxt/tf_core/tf_identifier.c |  10 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  67 +++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |   7 +
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 287 +++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  45 +++-
 drivers/net/bnxt/tf_core/tf_session.c    |  58 +++--
 drivers/net/bnxt/tf_core/tf_tbl_type.c   |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.h       |   4 +
 drivers/net/bnxt/tf_core/tf_util.c       |  55 ++++-
 drivers/net/bnxt/tf_core/tf_util.h       |  32 +++
 14 files changed, 561 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index b474e8c25..441d0c678 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -10,7 +10,7 @@
 struct tf;
 
 /* Forward declarations */
-static int dev_unbind_p4(struct tf *tfp);
+static int tf_dev_unbind_p4(struct tf *tfp);
 
 /**
  * Device specific bind function, WH+
@@ -32,10 +32,10 @@ static int dev_unbind_p4(struct tf *tfp);
  *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp,
-	    bool shadow_copy,
-	    struct tf_session_resources *resources,
-	    struct tf_dev_info *dev_handle)
+tf_dev_bind_p4(struct tf *tfp,
+	       bool shadow_copy,
+	       struct tf_session_resources *resources,
+	       struct tf_dev_info *dev_handle)
 {
 	int rc;
 	int frc;
@@ -93,7 +93,7 @@ dev_bind_p4(struct tf *tfp,
 
  fail:
 	/* Cleanup of already created modules */
-	frc = dev_unbind_p4(tfp);
+	frc = tf_dev_unbind_p4(tfp);
 	if (frc)
 		return frc;
 
@@ -111,7 +111,7 @@ dev_bind_p4(struct tf *tfp,
  *   - (-EINVAL) on failure.
  */
 static int
-dev_unbind_p4(struct tf *tfp)
+tf_dev_unbind_p4(struct tf *tfp)
 {
 	int rc = 0;
 	bool fail = false;
@@ -119,25 +119,28 @@ dev_unbind_p4(struct tf *tfp)
 	/* Unbind all the support modules. As this is only done on
 	 * close we only report errors as everything has to be cleaned
 	 * up regardless.
+	 *
+	 * In case of residuals, TCAMs are cleaned up first so as to
+	 * invalidate the pipeline in a clean manner.
 	 */
-	rc = tf_ident_unbind(tfp);
+	rc = tf_tcam_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Identifier\n");
+			    "Device unbind failed, TCAM\n");
 		fail = true;
 	}
 
-	rc = tf_tbl_unbind(tfp);
+	rc = tf_ident_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Table Type\n");
+			    "Device unbind failed, Identifier\n");
 		fail = true;
 	}
 
-	rc = tf_tcam_unbind(tfp);
+	rc = tf_tbl_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, TCAM\n");
+			    "Device unbind failed, Table Type\n");
 		fail = true;
 	}
 
@@ -148,18 +151,18 @@ dev_unbind_p4(struct tf *tfp)
 }
 
 int
-dev_bind(struct tf *tfp __rte_unused,
-	 enum tf_device_type type,
-	 bool shadow_copy,
-	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_handle)
+tf_dev_bind(struct tf *tfp __rte_unused,
+	    enum tf_device_type type,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_bind_p4(tfp,
-				   shadow_copy,
-				   resources,
-				   dev_handle);
+		return tf_dev_bind_p4(tfp,
+				      shadow_copy,
+				      resources,
+				      dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
@@ -168,12 +171,12 @@ dev_bind(struct tf *tfp __rte_unused,
 }
 
 int
-dev_unbind(struct tf *tfp,
-	   struct tf_dev_info *dev_handle)
+tf_dev_unbind(struct tf *tfp,
+	      struct tf_dev_info *dev_handle)
 {
 	switch (dev_handle->type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_unbind_p4(tfp);
+		return tf_dev_unbind_p4(tfp);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c31bf2357..c8feac55d 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -14,6 +14,17 @@
 struct tf;
 struct tf_session;
 
+/**
+ * Device module types. Used to tag RM DBs so that logging and
+ * residual handling can identify which module a resource belongs to.
+ */
+enum tf_device_module_type {
+	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	TF_DEVICE_MODULE_TYPE_TABLE,
+	TF_DEVICE_MODULE_TYPE_TCAM,
+	TF_DEVICE_MODULE_TYPE_EM,
+	TF_DEVICE_MODULE_TYPE_MAX
+};
+
 /**
  * The Device module provides a general device template. A supported
  * device type should implement one or more of the listed function
@@ -60,11 +71,11 @@ struct tf_dev_info {
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_bind(struct tf *tfp,
-	     enum tf_device_type type,
-	     bool shadow_copy,
-	     struct tf_session_resources *resources,
-	     struct tf_dev_info *dev_handle);
+int tf_dev_bind(struct tf *tfp,
+		enum tf_device_type type,
+		bool shadow_copy,
+		struct tf_session_resources *resources,
+		struct tf_dev_info *dev_handle);
 
 /**
  * Device release handles cleanup of the device specific information.
@@ -80,8 +91,8 @@ int dev_bind(struct tf *tfp,
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_unbind(struct tf *tfp,
-	       struct tf_dev_info *dev_handle);
+int tf_dev_unbind(struct tf *tfp,
+		  struct tf_dev_info *dev_handle);
 
 /**
  * Truflow device specific function hooks structure
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 235d81f96..411e21637 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -77,5 +77,4 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
-
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index ee07a6aea..b197bb271 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -39,12 +39,12 @@ tf_ident_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_IDENTIFIER;
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
 		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
@@ -86,8 +86,10 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 		fparms.dir = i;
 		fparms.rm_db = ident_db[i];
 		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "rm free failed on unbind\n");
+		}
 
 		ident_db[i] = NULL;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index a2e3840f0..c015b0ce2 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1110,6 +1110,69 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	return rc;
 }
 
+int
+tf_msg_session_resc_flush(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_flush_input req = { 0 };
+	struct hwrm_tf_session_resc_flush_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	TF_CHECK_PARMS2(tfp, resv);
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.flush_size = size;
+
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv_data[i].type = tfp_cpu_to_le_32(resv[i].type);
+		resv_data[i].start = tfp_cpu_to_le_16(resv[i].start);
+		resv_data[i].stride = tfp_cpu_to_le_16(resv[i].stride);
+	}
+
+	req.flush_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_FLUSH;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1512,9 +1575,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
-	if (rc != 0)
-		return rc;
+	req.type = parms->type;
 
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index fb635f6dc..1ff1044e8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -181,6 +181,13 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 			      struct tf_rm_resc_req_entry *request,
 			      struct tf_rm_resc_entry *resv);
 
+/**
+ * Sends session resource flush request to TF Firmware
+ */
+int tf_msg_session_resc_flush(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_entry *resv);
 /**
  * Sends EM internal insert request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 6abf79aa1..02b4b5c8f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -60,6 +60,11 @@ struct tf_rm_new_db {
 	 */
 	enum tf_dir dir;
 
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+
 	/**
 	 * The DB consists of an array of elements
 	 */
@@ -167,6 +172,178 @@ tf_rm_adjust_index(struct tf_rm_element *db,
 	return rc;
 }
 
+/**
+ * Logs an array of found residual entries to the console.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] type
+ *   Type of Device Module
+ *
+ * [in] count
+ *   Number of entries in the residual array
+ *
+ * [in] residuals
+ *   Pointer to an array of residual entries. The array is indexed
+ *   the same as the DB in which this function is used. Each entry
+ *   holds the residual value for that entry.
+ */
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
+{
+	int i;
+
+	/* Walk the residual array and log to the console the types
+	 * that were not cleaned up.
+	 */
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
+	}
+}
+
+/**
+ * Performs a check of the passed in DB for any lingering elements. If
+ * a resource type was found to not have been cleaned up by the caller
+ * then its residual values are recorded, logged and passed back in an
+ * allocate reservation array that the caller can pass to the FW for
+ * cleanup.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
+ *
+ * [in/out] resv
+ *   Pointer to a pointer to a reservation array. The reservation
+ *   array is allocated after the residual scan and holds any found
+ *   residual entries. Thus it can be smaller than the DB that the
+ *   check was performed on. The array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating if residual was present in the
+ *   DB
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
+{
+	int rc;
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
+	}
+
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			goto cleanup_residuals;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
+		}
+		*resv_size = found;
+	}
+
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
+
 int
 tf_rm_create_db(struct tf *tfp,
 		struct tf_rm_create_db_parms *parms)
@@ -373,6 +550,7 @@ tf_rm_create_db(struct tf *tfp,
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
@@ -392,20 +570,69 @@ tf_rm_create_db(struct tf *tfp,
 }
 
 int
-tf_rm_free_db(struct tf *tfp __rte_unused,
+tf_rm_free_db(struct tf *tfp,
 	      struct tf_rm_free_db_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
 	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
 
-	TF_CHECK_PARMS1(parms);
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	/* Traverse the DB and clear each pool.
-	 * NOTE:
-	 *   Firmware is not cleared. It will be cleared on close only.
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will clean up each of
+	 * its support modules, e.g. Identifier, which is how we end up
+	 * here to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in this cleanup checking, the DB is checked for
+	 * any remaining elements; any that are found are logged.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previously allocated elements to the RM and then close the
+	 * HCAPI RM registration. That then saves several 'free' msgs
+	 * from being required.
 	 */
+
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
+	 */
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to clean up, so we can
+		 * only log that the FW flush failed.
+		 */
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
+	}
+
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -417,7 +644,7 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 int
 tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int id;
 	uint32_t index;
 	struct tf_rm_new_db *rm_db;
@@ -446,11 +673,12 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
+		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
 			    "%s: Allocation failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -ENOMEM;
+		return rc;
 	}
 
 	/* Adjust for any non zero start value */
@@ -475,7 +703,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 int
 tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -521,7 +749,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 int
 tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -565,7 +793,6 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 int
 tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -579,15 +806,16 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
-	parms->info = &rm_db->db[parms->db_index].alloc;
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
-	return rc;
+	return 0;
 }
 
 int
 tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -603,5 +831,36 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
 
+	return 0;
+}
+
+int
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
+{
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out silently (no logging); if the pool is not valid,
+	 * no elements were allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
+	}
+
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
 	return rc;
+
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index ebf38c411..a40296ed2 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -8,6 +8,7 @@
 
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
 
@@ -57,9 +58,9 @@ struct tf_rm_new_entry {
 enum tf_rm_elem_cfg_type {
 	/** No configuration */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled' */
+	/** HCAPI 'controlled', uses a Pool for internal storage */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled' */
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
 	TF_RM_ELEM_CFG_PRIVATE,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
@@ -123,7 +124,11 @@ struct tf_rm_alloc_info {
  */
 struct tf_rm_create_db_parms {
 	/**
-	 * [in] Receive or transmit direction
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
 	 */
 	enum tf_dir dir;
 	/**
@@ -263,6 +268,25 @@ struct tf_rm_get_hcapi_parms {
 	uint16_t *hcapi_type;
 };
 
+/**
+ * Get InUse count parameters for single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
 /**
  * @page rm Resource Manager
  *
@@ -279,6 +303,8 @@ struct tf_rm_get_hcapi_parms {
  * @ref tf_rm_get_info
  *
  * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
 
 /**
@@ -396,4 +422,17 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
+/**
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type inuse count.
+ *
+ * [in] parms
+ *   Pointer to get inuse parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
+
 #endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 3a602618c..b08d06306 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -91,11 +91,11 @@ tf_session_open_session(struct tf *tfp,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
-	rc = dev_bind(tfp,
-		      parms->open_cfg->device_type,
-		      session->shadow_copy,
-		      &parms->open_cfg->resources,
-		      &session->dev);
+	rc = tf_dev_bind(tfp,
+			 parms->open_cfg->device_type,
+			 session->shadow_copy,
+			 &parms->open_cfg->resources,
+			 &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
@@ -151,6 +151,8 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	tfs->ref_count--;
+
 	/* Record the session we're closing so the caller knows the
 	 * details.
 	 */
@@ -164,6 +166,32 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	if (tfs->ref_count > 0) {
+		/* In case we're attached, only the session client
+		 * gets closed.
+		 */
+		rc = tf_msg_session_close(tfp);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "FW Session close failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		return 0;
+	}
+
+	/* Final cleanup as we're last user of the session */
+
+	/* Unbind the device */
+	rc = tf_dev_unbind(tfp, tfd);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
 	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
@@ -173,23 +201,9 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	tfs->ref_count--;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Unbind the device */
-		rc = dev_unbind(tfp, tfd);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Device unbind failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index 51f8f0740..bdf7d2089 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -51,11 +51,12 @@ tf_tbl_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
 		db_cfg.rm_db = &tbl_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index e0fac31f2..2f4441de8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -54,11 +54,12 @@ tf_tcam_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
 		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 67c3bcb49..5090dfd9f 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -146,6 +146,10 @@ struct tf_tcam_set_parms {
 	 * [in] Type of object to set
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
 	/**
 	 * [in] Entry index to write to
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index a9010543d..16c43eb67 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -7,8 +7,8 @@
 
 #include "tf_util.h"
 
-const char
-*tf_dir_2_str(enum tf_dir dir)
+const char *
+tf_dir_2_str(enum tf_dir dir)
 {
 	switch (dir) {
 	case TF_DIR_RX:
@@ -20,8 +20,8 @@ const char
 	}
 }
 
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
+const char *
+tf_ident_2_str(enum tf_identifier_type id_type)
 {
 	switch (id_type) {
 	case TF_IDENT_TYPE_L2_CTXT:
@@ -39,8 +39,8 @@ const char
 	}
 }
 
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+const char *
+tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
 {
 	switch (tcam_type) {
 	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
@@ -60,8 +60,8 @@ const char
 	}
 }
 
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+const char *
+tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 {
 	switch (tbl_type) {
 	case TF_TBL_TYPE_FULL_ACT_RECORD:
@@ -131,8 +131,8 @@ const char
 	}
 }
 
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+const char *
+tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
 {
 	switch (em_type) {
 	case TF_EM_TBL_TYPE_EM_RECORD:
@@ -143,3 +143,38 @@ const char
 		return "Invalid EM type";
 	}
 }
+
+const char *
+tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
+				    uint16_t mod_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return tf_ident_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tcam_tbl_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return tf_em_tbl_type_2_str(mod_type);
+	default:
+		return "Invalid Device Module type";
+	}
+}
+
+const char *
+tf_device_module_type_2_str(enum tf_device_module_type dm_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return "Identifer";
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return "Table";
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return "TCAM";
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return "EM";
+	default:
+		return "Invalid Device Module type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index ad8edaf30..c97e2a66a 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -7,6 +7,7 @@
 #define _TF_UTIL_H_
 
 #include "tf_core.h"
+#include "tf_device.h"
 
 /**
  * Helper function converting direction to text string
@@ -63,4 +64,35 @@ const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
  */
 const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
+/**
+ * Helper function converting device module type and module type to
+ * text string.
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * [in] mod_type
+ *   Module specific type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the module subtype
+ */
+const char *tf_device_module_type_subtype_2_str
+					(enum tf_device_module_type dm_type,
+					 uint16_t mod_type);
+
+/**
+ * Helper function converting device module type to text string
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the device
+ *   module type, e.g. "Identifier", "Table", "TCAM" or "EM";
+ *   "Invalid Device Module type" is returned for an unknown
+ *   value
+ */
+const char *tf_device_module_type_2_str(enum tf_device_module_type dm_type);
+
 #endif /* _TF_UTIL_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 21/51] net/bnxt: support two level priority for TCAMs
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (19 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
                           ` (29 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

Allow TCAM indexes to be allocated from the top or the bottom of
the table. If the priority is set to 0, allocate from the lowest
TCAM indexes, i.e. from the top. For any other value, allocate
from the highest TCAM indexes, i.e. from the bottom.
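
For illustration only (this sketch is not part of the patch and the
helper name is hypothetical), a caller requests bottom-up allocation
simply by passing a non-zero priority in tf_rm_allocate_parms;
tf_rm_allocate() then uses ba_alloc_reverse() instead of ba_alloc():

    /* Hypothetical usage sketch, assuming tf_rm_new.h is in scope.
     * priority 0 returns the lowest free index (ba_alloc); any other
     * value returns the highest free index (ba_alloc_reverse).
     */
    static int
    example_tcam_alloc(void *rm_db, uint16_t db_index, uint32_t priority)
    {
        struct tf_rm_allocate_parms aparms = { 0 };
        uint32_t idx;
        int rc;

        aparms.rm_db = rm_db;         /* per-direction RM DB */
        aparms.db_index = db_index;   /* TCAM table type     */
        aparms.index = &idx;
        aparms.priority = priority;   /* 0: top, !0: bottom  */

        rc = tf_rm_allocate(&aparms);
        if (rc)
            return rc;

        return (int)idx;
    }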

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/bitalloc.c  | 107 +++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/bitalloc.h  |   5 ++
 drivers/net/bnxt/tf_core/tf_rm_new.c |   9 ++-
 drivers/net/bnxt/tf_core/tf_rm_new.h |   8 ++
 drivers/net/bnxt/tf_core/tf_tcam.c   |   1 +
 5 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
index fb4df9a19..918cabf19 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.c
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -7,6 +7,40 @@
 
 #define BITALLOC_MAX_LEVELS 6
 
+
+/* Finds the last bit set plus 1, equivalent to gcc __builtin_fls */
+static int
+ba_fls(bitalloc_word_t v)
+{
+	int c = 32;
+
+	if (!v)
+		return 0;
+
+	if (!(v & 0xFFFF0000u)) {
+		v <<= 16;
+		c -= 16;
+	}
+	if (!(v & 0xFF000000u)) {
+		v <<= 8;
+		c -= 8;
+	}
+	if (!(v & 0xF0000000u)) {
+		v <<= 4;
+		c -= 4;
+	}
+	if (!(v & 0xC0000000u)) {
+		v <<= 2;
+		c -= 2;
+	}
+	if (!(v & 0x80000000u)) {
+		v <<= 1;
+		c -= 1;
+	}
+
+	return c;
+}
+
 /* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */
 static int
 ba_ffs(bitalloc_word_t v)
@@ -120,6 +154,79 @@ ba_alloc(struct bitalloc *pool)
 	return ba_alloc_helper(pool, 0, 1, 32, 0, &clear);
 }
 
+/**
+ * Helper function to allocate an entry from the highest available index
+ *
+ * Searches the pool from the highest index down for an empty entry.
+ *
+ * [in] pool
+ *   Pointer to the resource pool
+ *
+ * [in] offset
+ *   Offset of the storage in the pool
+ *
+ * [in] words
+ *   Number of words in this level
+ *
+ * [in] size
+ *   Number of entries in this level
+ *
+ * [in] index
+ *   Index of the word that holds the entry
+ *
+ * [in] clear
+ *   Indicates if a bit needs to be cleared because the entry was allocated
+ *
+ * Returns:
+ *     >= 0 - Index of the allocated entry
+ *       -1 - Failure
+ */
+static int
+ba_alloc_reverse_helper(struct bitalloc *pool,
+			int offset,
+			int words,
+			unsigned int size,
+			int index,
+			int *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int loc = ba_fls(storage[index]);
+	int r;
+
+	if (loc == 0)
+		return -1;
+
+	loc--;
+
+	if (pool->size > size) {
+		r = ba_alloc_reverse_helper(pool,
+					    offset + words + 1,
+					    storage[words],
+					    size * 32,
+					    index * 32 + loc,
+					    clear);
+	} else {
+		r = index * 32 + loc;
+		*clear = 1;
+		pool->free_count--;
+	}
+
+	if (*clear) {
+		storage[index] &= ~(1 << loc);
+		*clear = (storage[index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc_reverse(struct bitalloc *pool)
+{
+	int clear = 0;
+
+	return ba_alloc_reverse_helper(pool, 0, 1, 32, 0, &clear);
+}
+
 static int
 ba_alloc_index_helper(struct bitalloc *pool,
 		      int              offset,
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
index 563c8531a..2825bb37e 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.h
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -72,6 +72,11 @@ int ba_init(struct bitalloc *pool, int size);
 int ba_alloc(struct bitalloc *pool);
 int ba_alloc_index(struct bitalloc *pool, int index);
 
+/**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc_reverse(struct bitalloc *pool);
+
 /**
  * Query a particular index in a pool to check if its in use.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 02b4b5c8f..de8f11955 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -671,7 +671,14 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 		return rc;
 	}
 
-	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	/*
+	 * priority  0: allocate from the top of the TCAM (lowest index)
+	 * priority !0: allocate from the bottom of the TCAM (highest index)
+	 */
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index a40296ed2..5cb68892a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -185,6 +185,14 @@ struct tf_rm_allocate_parms {
 	 * i.e. Full Action Record offsets.
 	 */
 	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the priority of the entry:
+	 * priority  0: allocate from top of the tcam (from index 0
+	 *              or lowest available index)
+	 * priority !0: allocate from bottom of the tcam (from highest
+	 *              available index)
+	 */
+	uint32_t priority;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 2f4441de8..260fb15a6 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -157,6 +157,7 @@ tf_tcam_alloc(struct tf *tfp,
 	/* Allocate requested element */
 	aparms.rm_db = tcam_db[parms->dir];
 	aparms.db_index = parms->type;
+	aparms.priority = parms->priority;
 	aparms.index = (uint32_t *)&parms->idx;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 22/51] net/bnxt: support EM and TCAM lookup with table scope
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (20 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 23/51] net/bnxt: update table get to use new design Ajit Khaparde
                           ` (28 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Support for table scope within the EM module
- Support for host and system memory
- Update TCAM set/free
- Replace TF device type by HCAPI RM type
- Update TCAM set and free for HCAPI RM type (see the sketch after
  this list)
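
As a sketch of what the last two items mean in practice (illustrative
only, not part of the patch; the helper name is hypothetical and
tf_rm_new.h/tf_tcam.h are assumed to be in scope), the TCAM module now
resolves the HCAPI RM type from its RM DB and carries it in
tf_tcam_set_parms.hcapi_type, instead of converting the TF device
type in the message layer:

    /* Hypothetical fragment: look up the HCAPI RM type for a TCAM
     * table and place it in the set parameters used by the firmware
     * message path.
     */
    static int
    example_fill_hcapi_type(void *tcam_rm_db, struct tf_tcam_set_parms *sparms)
    {
        struct tf_rm_get_hcapi_parms hparms = { 0 };
        uint16_t hcapi_type;
        int rc;

        hparms.rm_db = tcam_rm_db;
        hparms.db_index = sparms->type;   /* TCAM table type */
        hparms.hcapi_type = &hcapi_type;

        rc = tf_rm_get_hcapi_type(&hparms);
        if (rc)
            return rc;

        sparms->hcapi_type = hcapi_type;
        return 0;
    }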

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |    5 +-
 drivers/net/bnxt/tf_core/Makefile             |    5 +-
 drivers/net/bnxt/tf_core/cfa_resource_types.h |    8 +-
 drivers/net/bnxt/tf_core/hwrm_tf.h            |  864 +-----------
 drivers/net/bnxt/tf_core/tf_core.c            |  100 +-
 drivers/net/bnxt/tf_core/tf_device.c          |   50 +-
 drivers/net/bnxt/tf_core/tf_device.h          |   86 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |   14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   20 +-
 drivers/net/bnxt/tf_core/tf_em.c              |  360 -----
 drivers/net/bnxt/tf_core/tf_em.h              |  310 +++-
 drivers/net/bnxt/tf_core/tf_em_common.c       |  281 ++++
 drivers/net/bnxt/tf_core/tf_em_common.h       |  107 ++
 drivers/net/bnxt/tf_core/tf_em_host.c         | 1146 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_em_internal.c     |  312 +++++
 drivers/net/bnxt/tf_core/tf_em_system.c       |  118 ++
 drivers/net/bnxt/tf_core/tf_msg.c             | 1248 ++++-------------
 drivers/net/bnxt/tf_core/tf_msg.h             |  233 +--
 drivers/net/bnxt/tf_core/tf_rm.c              |   89 +-
 drivers/net/bnxt/tf_core/tf_rm_new.c          |   40 +-
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1134 ---------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |   39 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |   25 +-
 drivers/net/bnxt/tf_core/tf_tcam.h            |    4 +
 drivers/net/bnxt/tf_core/tf_util.c            |    4 +-
 25 files changed, 3030 insertions(+), 3572 deletions(-)
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 33e6ebd66..35038dc8b 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -28,7 +28,10 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_msg.c',
 	'tf_core/rand.c',
 	'tf_core/stack.c',
-	'tf_core/tf_em.c',
+	'tf_core/tf_em_common.c',
+	'tf_core/tf_em_host.c',
+	'tf_core/tf_em_internal.c',
+	'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 5ed32f12a..f186741e4 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -12,8 +12,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 058d8cc88..6e79facec 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -202,7 +202,9 @@
 #define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
 #define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
@@ -269,7 +271,9 @@
 #define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
 #define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 1e78296c6..26836e488 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -13,20 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_SESSION_ATTACH = 712,
-	HWRM_TFT_SESSION_HW_RESC_QCAPS = 721,
-	HWRM_TFT_SESSION_HW_RESC_ALLOC = 722,
-	HWRM_TFT_SESSION_HW_RESC_FREE = 723,
-	HWRM_TFT_SESSION_HW_RESC_FLUSH = 724,
-	HWRM_TFT_SESSION_SRAM_RESC_QCAPS = 725,
-	HWRM_TFT_SESSION_SRAM_RESC_ALLOC = 726,
-	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
-	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
-	HWRM_TFT_TBL_SCOPE_CFG = 731,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
-	HWRM_TFT_TBL_TYPE_SET = 823,
-	HWRM_TFT_TBL_TYPE_GET = 824,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
@@ -66,858 +54,8 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
-struct tf_session_attach_input;
-struct tf_session_hw_resc_qcaps_input;
-struct tf_session_hw_resc_qcaps_output;
-struct tf_session_hw_resc_alloc_input;
-struct tf_session_hw_resc_alloc_output;
-struct tf_session_hw_resc_free_input;
-struct tf_session_hw_resc_flush_input;
-struct tf_session_sram_resc_qcaps_input;
-struct tf_session_sram_resc_qcaps_output;
-struct tf_session_sram_resc_alloc_input;
-struct tf_session_sram_resc_alloc_output;
-struct tf_session_sram_resc_free_input;
-struct tf_session_sram_resc_flush_input;
-struct tf_tbl_type_set_input;
-struct tf_tbl_type_get_input;
-struct tf_tbl_type_get_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
-/* Input params for session attach */
-typedef struct tf_session_attach_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* Session Name */
-	char				 session_name[TF_SESSION_NAME_MAX];
-} tf_session_attach_input_t, *ptf_session_attach_input_t;
-
-/* Input params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_hw_resc_qcaps_input_t, *ptf_session_hw_resc_qcaps_input_t;
-
-/* Output params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_output {
-	/* Control Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Unused */
-	uint8_t			  unused[4];
-	/* Minimum guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_min;
-	/* Maximum non-guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_max;
-	/* Minimum guaranteed number of profile functions */
-	uint16_t			 prof_func_min;
-	/* Maximum non-guaranteed number of profile functions */
-	uint16_t			 prof_func_max;
-	/* Minimum guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_min;
-	/* Maximum non-guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_max;
-	/* Minimum guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_min;
-	/* Maximum non-guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_max;
-	/* Minimum guaranteed number of EM records entries */
-	uint16_t			 em_record_entries_min;
-	/* Maximum non-guaranteed number of EM record entries */
-	uint16_t			 em_record_entries_max;
-	/* Minimum guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_min;
-	/* Maximum non-guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_max;
-	/* Minimum guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_min;
-	/* Maximum non-guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_max;
-	/* Minimum guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_min;
-	/* Maximum non-guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_max;
-	/* Minimum guaranteed number of meter instances */
-	uint16_t			 meter_inst_min;
-	/* Maximum non-guaranteed number of meter instances */
-	uint16_t			 meter_inst_max;
-	/* Minimum guaranteed number of mirrors */
-	uint16_t			 mirrors_min;
-	/* Maximum non-guaranteed number of mirrors */
-	uint16_t			 mirrors_max;
-	/* Minimum guaranteed number of UPAR */
-	uint16_t			 upar_min;
-	/* Maximum non-guaranteed number of UPAR */
-	uint16_t			 upar_max;
-	/* Minimum guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_min;
-	/* Maximum non-guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_max;
-	/* Minimum guaranteed number of L2 Functions */
-	uint16_t			 l2_func_min;
-	/* Maximum non-guaranteed number of L2 Functions */
-	uint16_t			 l2_func_max;
-	/* Minimum guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_min;
-	/* Maximum non-guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_max;
-	/* Minimum guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_min;
-	/* Maximum non-guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_max;
-	/* Minimum guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_min;
-	/* Maximum non-guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_max;
-	/* Minimum guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_min;
-	/* Maximum non-guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_max;
-	/* Minimum guaranteed number of metadata */
-	uint16_t			 metadata_min;
-	/* Maximum non-guaranteed number of metadata */
-	uint16_t			 metadata_max;
-	/* Minimum guaranteed number of CT states */
-	uint16_t			 ct_state_min;
-	/* Maximum non-guaranteed number of CT states */
-	uint16_t			 ct_state_max;
-	/* Minimum guaranteed number of range profiles */
-	uint16_t			 range_prof_min;
-	/* Maximum non-guaranteed number range profiles */
-	uint16_t			 range_prof_max;
-	/* Minimum guaranteed number of range entries */
-	uint16_t			 range_entries_min;
-	/* Maximum non-guaranteed number of range entries */
-	uint16_t			 range_entries_max;
-	/* Minimum guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_min;
-	/* Maximum non-guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_max;
-} tf_session_hw_resc_qcaps_output_t, *ptf_session_hw_resc_qcaps_output_t;
-
-/* Input params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of L2 CTX TCAM entries to be allocated */
-	uint16_t			 num_l2_ctx_tcam_entries;
-	/* Number of profile functions to be allocated */
-	uint16_t			 num_prof_func_entries;
-	/* Number of profile TCAM entries to be allocated */
-	uint16_t			 num_prof_tcam_entries;
-	/* Number of EM profile ids to be allocated */
-	uint16_t			 num_em_prof_id;
-	/* Number of EM records entries to be allocated */
-	uint16_t			 num_em_record_entries;
-	/* Number of WC profiles ids to be allocated */
-	uint16_t			 num_wc_tcam_prof_id;
-	/* Number of WC TCAM entries to be allocated */
-	uint16_t			 num_wc_tcam_entries;
-	/* Number of meter profiles to be allocated */
-	uint16_t			 num_meter_profiles;
-	/* Number of meter instances to be allocated */
-	uint16_t			 num_meter_inst;
-	/* Number of mirrors to be allocated */
-	uint16_t			 num_mirrors;
-	/* Number of UPAR to be allocated */
-	uint16_t			 num_upar;
-	/* Number of SP TCAM entries to be allocated */
-	uint16_t			 num_sp_tcam_entries;
-	/* Number of L2 functions to be allocated */
-	uint16_t			 num_l2_func;
-	/* Number of flexible key templates to be allocated */
-	uint16_t			 num_flex_key_templ;
-	/* Number of table scopes to be allocated */
-	uint16_t			 num_tbl_scope;
-	/* Number of epoch0 entries to be allocated */
-	uint16_t			 num_epoch0_entries;
-	/* Number of epoch1 entries to be allocated */
-	uint16_t			 num_epoch1_entries;
-	/* Number of metadata to be allocated */
-	uint16_t			 num_metadata;
-	/* Number of CT states to be allocated */
-	uint16_t			 num_ct_state;
-	/* Number of range profiles to be allocated */
-	uint16_t			 num_range_prof;
-	/* Number of range Entries to be allocated */
-	uint16_t			 num_range_entries;
-	/* Number of LAG table entries to be allocated */
-	uint16_t			 num_lag_tbl_entries;
-} tf_session_hw_resc_alloc_input_t, *ptf_session_hw_resc_alloc_input_t;
-
-/* Output params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_output {
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_alloc_output_t, *ptf_session_hw_resc_alloc_output_t;
-
-/* Input params for session resource HW free */
-typedef struct tf_session_hw_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_free_input_t, *ptf_session_hw_resc_free_input_t;
-
-/* Input params for session resource HW flush */
-typedef struct tf_session_hw_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_flush_input_t, *ptf_session_hw_resc_flush_input_t;
-
-/* Input params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_sram_resc_qcaps_input_t, *ptf_session_sram_resc_qcaps_input_t;
-
-/* Output params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_output {
-	/* Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Minimum guaranteed number of Full Action */
-	uint16_t			 full_action_min;
-	/* Maximum non-guaranteed number of Full Action */
-	uint16_t			 full_action_max;
-	/* Minimum guaranteed number of MCG */
-	uint16_t			 mcg_min;
-	/* Maximum non-guaranteed number of MCG */
-	uint16_t			 mcg_max;
-	/* Minimum guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_min;
-	/* Maximum non-guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_max;
-	/* Minimum guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_min;
-	/* Maximum non-guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_max;
-	/* Minimum guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_min;
-	/* Maximum non-guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_max;
-	/* Minimum guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_min;
-	/* Maximum non-guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_max;
-	/* Minimum guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_max;
-	/* Minimum guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_max;
-	/* Minimum guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_min;
-	/* Maximum non-guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_max;
-	/* Minimum guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_min;
-	/* Maximum non-guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_max;
-	/* Minimum guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_min;
-	/* Maximum non-guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_max;
-	/* Minimum guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_min;
-	/* Maximum non-guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_max;
-	/* Minimum guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_min;
-	/* Maximum non-guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_max;
-} tf_session_sram_resc_qcaps_output_t, *ptf_session_sram_resc_qcaps_output_t;
-
-/* Input params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_input {
-	/* FW Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of full action SRAM entries to be allocated */
-	uint16_t			 num_full_action;
-	/* Number of multicast groups to be allocated */
-	uint16_t			 num_mcg;
-	/* Number of Encap 8B entries to be allocated */
-	uint16_t			 num_encap_8b;
-	/* Number of Encap 16B entries to be allocated */
-	uint16_t			 num_encap_16b;
-	/* Number of Encap 64B entries to be allocated */
-	uint16_t			 num_encap_64b;
-	/* Number of SP SMAC entries to be allocated */
-	uint16_t			 num_sp_smac;
-	/* Number of SP SMAC IPv4 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv4;
-	/* Number of SP SMAC IPv6 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv6;
-	/* Number of Counter 64B entries to be allocated */
-	uint16_t			 num_counter_64b;
-	/* Number of NAT source ports to be allocated */
-	uint16_t			 num_nat_sport;
-	/* Number of NAT destination ports to be allocated */
-	uint16_t			 num_nat_dport;
-	/* Number of NAT source iPV4 addresses to be allocated */
-	uint16_t			 num_nat_s_ipv4;
-	/* Number of NAT destination IPV4 addresses to be allocated */
-	uint16_t			 num_nat_d_ipv4;
-} tf_session_sram_resc_alloc_input_t, *ptf_session_sram_resc_alloc_input_t;
-
-/* Output params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_output {
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_alloc_output_t, *ptf_session_sram_resc_alloc_output_t;
-
-/* Input params for session resource SRAM free */
-typedef struct tf_session_sram_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_free_input_t, *ptf_session_sram_resc_free_input_t;
-
-/* Input params for session resource SRAM flush */
-typedef struct tf_session_sram_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
-
-/* Input params for table type set */
-typedef struct tf_tbl_type_set_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Size of the data to set in bytes */
-	uint16_t			 size;
-	/* Data to set */
-	uint8_t			  data[TF_BULK_SEND];
-	/* Index to set */
-	uint32_t			 index;
-} tf_tbl_type_set_input_t, *ptf_tbl_type_set_input_t;
-
-/* Input params for table type get */
-typedef struct tf_tbl_type_get_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Index to get */
-	uint32_t			 index;
-} tf_tbl_type_get_input_t, *ptf_tbl_type_get_input_t;
-
-/* Output params for table type get */
-typedef struct tf_tbl_type_get_output {
-	/* Size of the data read in bytes */
-	uint16_t			 size;
-	/* Data read */
-	uint8_t			  data[TF_BULK_RECV];
-} tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 3e23d0513..8b3e15c8a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -208,7 +208,15 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL &&
+		dev->ops->tf_dev_insert_ext_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL &&
+		dev->ops->tf_dev_insert_int_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM insert failed, rc:%s\n",
@@ -217,7 +225,7 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	return -EINVAL;
+	return 0;
 }
 
 /** Delete EM hash entry API
@@ -255,7 +263,13 @@ int tf_delete_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		rc = dev->ops->tf_dev_delete_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		rc = dev->ops->tf_dev_delete_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM delete failed, rc:%s\n",
@@ -806,3 +820,83 @@ tf_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl_scope != NULL) {
+		rc = dev->ops->tf_dev_alloc_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Alloc table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl_scope) {
+		rc = dev->ops->tf_dev_free_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Free table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
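
For reference, a caller-side sketch of the reworked EM path (illustrative only, not part of the patch): tf_insert_em_entry() now routes on parms->mem, so TF_MEM_INTERNAL reaches the device's tf_dev_insert_int_em_entry op, TF_MEM_EXTERNAL reaches tf_dev_insert_ext_em_entry, and a missing op yields -EINVAL. Only fields that appear in this series are referenced; the rest of the parms structure is assumed to be filled in by the caller.

#include <stdbool.h>
#include "tf_core.h"

/* Illustrative sketch only; not part of the patch. */
static int
example_em_insert(struct tf *tfp,
		  struct tf_insert_em_entry_parms *parms,
		  bool use_external)
{
	int rc;

	/* TF_MEM_EXTERNAL -> tf_dev_insert_ext_em_entry (EEM),
	 * TF_MEM_INTERNAL -> tf_dev_insert_int_em_entry (EM).
	 */
	parms->mem = use_external ? TF_MEM_EXTERNAL : TF_MEM_INTERNAL;

	rc = tf_insert_em_entry(tfp, parms);
	if (rc)
		return rc; /* includes -EINVAL when the selected op is NULL */

	/* On success parms->flow_id and parms->flow_handle identify the
	 * entry for a later tf_delete_em_entry() call.
	 */
	return 0;
}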
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 441d0c678..20b0c5948 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,6 +6,7 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
+#include "tf_em.h"
 
 struct tf;
 
@@ -42,10 +43,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_ident_cfg_parms ident_cfg;
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
-
-	dev_handle->type = TF_DEVICE_TYPE_WH;
-	/* Initial function initialization */
-	dev_handle->ops = &tf_dev_ops_p4_init;
+	struct tf_em_cfg_parms em_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -86,6 +84,36 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * EEM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_ext_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
+
+	rc = tf_em_ext_common_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EEM initialization failure\n");
+		goto fail;
+	}
+
+	/*
+	 * EM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_int_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = 0; /* Not used by EM */
+
+	rc = tf_em_int_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EM initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -144,6 +172,20 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_em_ext_common_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EEM\n");
+		fail = true;
+	}
+
+	rc = tf_em_int_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EM\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c8feac55d..2712d1039 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -15,12 +15,24 @@ struct tf;
 struct tf_session;
 
 /**
- *
+ * Device module types
  */
 enum tf_device_module_type {
+	/**
+	 * Identifier module
+	 */
 	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	/**
+	 * Table type module
+	 */
 	TF_DEVICE_MODULE_TYPE_TABLE,
+	/**
+	 * TCAM module
+	 */
 	TF_DEVICE_MODULE_TYPE_TCAM,
+	/**
+	 * EM module
+	 */
 	TF_DEVICE_MODULE_TYPE_EM,
 	TF_DEVICE_MODULE_TYPE_MAX
 };
@@ -395,8 +407,8 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_insert_em_entry)(struct tf *tfp,
-				      struct tf_insert_em_entry_parms *parms);
+	int (*tf_dev_insert_int_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
 
 	/**
 	 * Delete EM hash entry API
@@ -411,8 +423,72 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_delete_em_entry)(struct tf *tfp,
-				      struct tf_delete_em_entry_parms *parms);
+	int (*tf_dev_delete_int_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Insert EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_ext_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM delete parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_ext_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Allocate EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope alloc parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_alloc_tbl_scope)(struct tf *tfp,
+				      struct tf_alloc_tbl_scope_parms *parms);
+
+	/**
+	 * Free EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope free parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
+				     struct tf_free_tbl_scope_parms *parms);
 };
 
 /**
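
Since the EM callback is now split into internal and external variants and the table-scope hooks are optional, a device advertises only what it supports; tf_core treats a NULL op as unsupported and returns -EINVAL. A hypothetical wiring for a device with internal EM only might look as follows (sketch only, not part of the patch; ops omitted here default to NULL):

#include "tf_device.h"
#include "tf_em.h"

/* Illustrative sketch only: a device supporting internal EM but no
 * EEM or table scopes; tf_core fails external requests with -EINVAL.
 */
static const struct tf_dev_ops example_dev_ops_int_em_only = {
	.tf_dev_insert_int_em_entry = tf_em_insert_int_entry,
	.tf_dev_delete_int_em_entry = tf_em_delete_int_entry,
	.tf_dev_insert_ext_em_entry = NULL,
	.tf_dev_delete_ext_em_entry = NULL,
	.tf_dev_alloc_tbl_scope = NULL,
	.tf_dev_free_tbl_scope = NULL,
};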
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9e332c594..127c655a6 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -93,6 +93,12 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = NULL,
 	.tf_dev_get_tcam = NULL,
+	.tf_dev_insert_int_em_entry = NULL,
+	.tf_dev_delete_int_em_entry = NULL,
+	.tf_dev_insert_ext_em_entry = NULL,
+	.tf_dev_delete_ext_em_entry = NULL,
+	.tf_dev_alloc_tbl_scope = NULL,
+	.tf_dev_free_tbl_scope = NULL,
 };
 
 /**
@@ -113,6 +119,10 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = NULL,
-	.tf_dev_insert_em_entry = tf_em_insert_entry,
-	.tf_dev_delete_em_entry = tf_em_delete_entry,
+	.tf_dev_insert_int_em_entry = tf_em_insert_int_entry,
+	.tf_dev_delete_int_em_entry = tf_em_delete_int_entry,
+	.tf_dev_insert_ext_em_entry = tf_em_insert_ext_entry,
+	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
+	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
+	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 411e21637..da6dd65a3 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -36,13 +36,12 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
@@ -77,4 +76,17 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
+
+struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
+	/* CFA_RESOURCE_TYPE_P4_EM_REC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+};
+
+struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_REC },
+	/* CFA_RESOURCE_TYPE_P4_TBL_SCOPE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
deleted file mode 100644
index fcbbd7eca..000000000
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ /dev/null
@@ -1,360 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-#include <rte_common.h>
-#include <rte_errno.h>
-#include <rte_log.h>
-
-#include "tf_core.h"
-#include "tf_em.h"
-#include "tf_msg.h"
-#include "tfp.h"
-#include "lookup3.h"
-#include "tf_ext_flow_handle.h"
-
-#include "bnxt.h"
-
-
-static uint32_t tf_em_get_key_mask(int num_entries)
-{
-	uint32_t mask = num_entries - 1;
-
-	if (num_entries & 0x7FFF)
-		return 0;
-
-	if (num_entries > (128 * 1024 * 1024))
-		return 0;
-
-	return mask;
-}
-
-static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
-				   uint8_t	       *in_key,
-				   struct cfa_p4_eem_64b_entry *key_entry)
-{
-	key_entry->hdr.word1 = result->word1;
-
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
-
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
-#endif
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
-			       struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t	   mask;
-	uint32_t	   key0_hash;
-	uint32_t	   key1_hash;
-	uint32_t	   key0_index;
-	uint32_t	   key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t	   index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t	   gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/**
- * Insert EM internal entry API
- *
- *  returns:
- *     0 - Success
- */
-static int tf_insert_em_internal_entry(struct tf                       *tfp,
-				       struct tf_insert_em_entry_parms *parms)
-{
-	int       rc;
-	uint32_t  gfid;
-	uint16_t  rptr_index = 0;
-	uint8_t   rptr_entry = 0;
-	uint8_t   num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-	uint32_t index;
-
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
-		return rc;
-	}
-
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
-	rc = tf_msg_insert_em_internal_entry(tfp,
-					     parms,
-					     &rptr_index,
-					     &rptr_entry,
-					     &num_of_entries);
-	if (rc != 0)
-		return -1;
-
-	PMD_DRV_LOG(ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
-		   rptr_index,
-		   rptr_entry,
-		   num_of_entries);
-
-	TF_SET_GFID(gfid,
-		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
-		     rptr_entry),
-		    0); /* N/A for internal table */
-
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_INTERNAL,
-		       parms->dir);
-
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     num_of_entries,
-				     0,
-				     0,
-				     rptr_index,
-				     rptr_entry,
-				     0);
-	return 0;
-}
-
-/** Delete EM internal entry API
- *
- * returns:
- * 0
- * -EINVAL
- */
-static int tf_delete_em_internal_entry(struct tf                       *tfp,
-				       struct tf_delete_em_entry_parms *parms)
-{
-	int rc;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-
-	rc = tf_msg_delete_em_entry(tfp, parms);
-
-	/* Return resource to pool */
-	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
-
-	return rc;
-}
-
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
-							  TF_KEY0_TABLE :
-							  TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL)
-		/* External EEM */
-		return tf_insert_eem_entry
-			(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
-
-	return -EINVAL;
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		return tf_delete_em_internal_entry(tfp, parms);
-
-	return -EINVAL;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 2262ae7cc..cf799c200 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
@@ -19,6 +20,9 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+#define TF_EM_MAX_MASK 0x7FFF
+#define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
+
 /*
  * Used to build GFID:
  *
@@ -44,6 +48,47 @@ struct tf_em_64b_entry {
 	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
+/** EEM Memory Type
+ *
+ */
+enum tf_mem_type {
+	TF_EEM_MEM_TYPE_INVALID,
+	TF_EEM_MEM_TYPE_HOST,
+	TF_EEM_MEM_TYPE_SYSTEM
+};
+
+/**
+ * tf_em_cfg_parms definition
+ */
+struct tf_em_cfg_parms {
+	/**
+	 * [in] Num entries in resource config
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Resource config
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+	/**
+	 * [in] Memory type.
+	 */
+	enum tf_mem_type mem_type;
+};
+
+/**
+ * @page table Table
+ *
+ * @ref tf_alloc_eem_tbl_scope
+ *
+ * @ref tf_free_eem_tbl_scope_cb
+ *
+ * @ref tbl_scope_cb_find
+ */
+
 /**
  * Allocates EEM Table scope
  *
@@ -78,29 +123,258 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 			     struct tf_free_tbl_scope_parms *parms);
 
 /**
- * Function to search for table scope control block structure
- * with specified table scope ID.
+ * Insert record into internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_int_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_int_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
+
+/**
+ * Insert record into external EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external EEM table
  *
- * [in] session
- *   Session to use for the search of the table scope control block
- * [in] tbl_scope_id
- *   Table scope ID to search for
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
  *
  * Returns:
- *  Pointer to the found table scope control block struct or NULL if
- *  table scope control block struct not found
+ *   0       - Success
+ *   -EINVAL - Parameter error
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+int tf_em_delete_ext_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
 
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum hcapi_cfa_em_table_type table_type);
+/**
+ * Insert record into external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_sys_entry(struct tf *tfp,
+			       struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_ext_sys_entry(struct tf *tfp,
+			       struct tf_delete_em_entry_parms *parms);
 
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms);
+/**
+ * Bind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_bind(struct tf *tfp,
+		   struct tf_em_cfg_parms *parms);
 
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms);
+/**
+ * Unbind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_unbind(struct tf *tfp);
+
+/**
+ * Common bind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_bind(struct tf *tfp,
+			  struct tf_em_cfg_parms *parms);
+
+/**
+ * Common unbind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_unbind(struct tf *tfp);
+
+/**
+ * Alloc for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_alloc(struct tf *tfp,
+			 struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_free(struct tf *tfp,
+			struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Alloc for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_alloc(struct tf *tfp,
+			 struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_free(struct tf *tfp,
+			struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common free for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_free(struct tf *tfp,
+			  struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common alloc for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_alloc(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
new file mode 100644
index 000000000..ba6aa7ac1
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -0,0 +1,281 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_device.h"
+#include "tf_ext_flow_handle.h"
+#include "cfa_resource_types.h"
+
+#include "bnxt.h"
+
+
+/**
+ * EM DBs.
+ */
+void *eem_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Host or system
+ */
+static enum tf_mem_type mem_type;
+
+/* API defined in tf_em.h */
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+	struct tf_rm_is_allocated_parms parms;
+	int allocated;
+
+	/* Check that id is valid */
+	parms.rm_db = eem_db[TF_DIR_RX];
+	parms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.allocated = &allocated;
+
+	i = tf_rm_is_allocated(&parms);
+
+	if (i < 0 || !allocated)
+		return NULL;
+
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+int
+tf_create_tbl_pool_external(enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i;
+	int32_t j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the  malloced memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
+
+	/* Fill pool with indexes in reverse
+	 */
+	j = (num_entries - 1) * entry_sz_bytes;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+				    tf_dir_2_str(dir), strerror(-rc));
+			goto cleanup;
+		}
+
+		if (j < 0) {
+			TFP_DRV_LOG(ERR, "%s TBL: invalid offset (%d)\n",
+				    tf_dir_2_str(dir), j);
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ */
+void
+tf_destroy_tbl_pool_external(enum tf_dir dir,
+			     struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_act_pool_mem =
+		tbl_scope_cb->ext_act_pool_mem[dir];
+
+	tfp_free(ext_act_pool_mem);
+}
+
+uint32_t
+tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
+	if (num_entries & TF_EM_MAX_MASK)
+		return 0;
+
+	if (num_entries > TF_EM_MAX_ENTRY)
+		return 0;
+
+	return mask;
+}
+
+void
+tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+		       uint8_t *in_key,
+		       struct cfa_p4_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
+		key_entry->hdr.pointer = result->pointer;
+	else
+		key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
+}
+
+int
+tf_em_ext_common_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "EM Ext DB already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+		db_cfg.rm_db = &eem_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	mem_type = parms->mem_type;
+	init = 1;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; this allows unbind to
+	 * be called from creation cleanup paths.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = eem_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		eem_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_alloc(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_alloc(tfp, parms);
+	else
+		return tf_em_ext_system_alloc(tfp, parms);
+}
+
+int
+tf_em_ext_common_free(struct tf *tfp,
+		      struct tf_free_tbl_scope_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_free(tfp, parms);
+	else
+		return tf_em_ext_system_free(tfp, parms);
+}
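
The relocated hashing helper keeps the existing sizing constraints: tf_em_get_key_mask() returns a usable mask only when the entry count has the low 15 bits clear (a multiple of 32K) and stays at or below the 128M-entry cap; the table sizing elsewhere rounds to powers of two, so the mask simply selects the low bits of the key0/key1 hash pair. A small illustrative self-check (not part of the patch) makes this concrete:

#include <assert.h>
#include "tf_em.h"
#include "tf_em_common.h"

/* Illustrative sketch only; exercises the documented constraints. */
static void
example_key_mask_check(void)
{
	/* 32K entries: smallest supported table, 15-bit mask */
	assert(tf_em_get_key_mask(32 * 1024) == 0x7FFF);
	/* 1M entries: 20-bit mask */
	assert(tf_em_get_key_mask(1024 * 1024) == 0xFFFFF);
	/* Not a multiple of 32K: rejected with a zero mask */
	assert(tf_em_get_key_mask(1000) == 0);
	/* Above the 128M entry cap: rejected with a zero mask */
	assert(tf_em_get_key_mask(256 * 1024 * 1024) == 0);
}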
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
new file mode 100644
index 000000000..45699a7c3
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_COMMON_H_
+#define _TF_EM_COMMON_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *   table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+/**
+ * Create and initialize a stack to use for action entries
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_cb
+ *   Pointer to the table scope control block
+ * [in] num_entries
+ *   Number of EEM entries
+ * [in] entry_sz_bytes
+ *   Size of the entry
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ *   -EINVAL - Failure
+ */
+int tf_create_tbl_pool_external(enum tf_dir dir,
+				struct tf_tbl_scope_cb *tbl_scope_cb,
+				uint32_t num_entries,
+				uint32_t entry_sz_bytes);
+
+/**
+ * Delete and cleanup action record allocation stack
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_cb
+ *   Pointer to the table scope control block
+ *
+ */
+void tf_destroy_tbl_pool_external(enum tf_dir dir,
+				  struct tf_tbl_scope_cb *tbl_scope_cb);
+
+/**
+ * Get hash mask for current EEM table size
+ *
+ * [in] num_entries
+ *   Number of EEM entries
+ */
+uint32_t tf_em_get_key_mask(int num_entries);
+
+/**
+ * Populate key_entry
+ *
+ * [in] result
+ *   Entry data
+ * [in] in_key
+ *   Key data
+ * [out] key_entry
+ *   Completed key record
+ */
+void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+			    uint8_t	       *in_key,
+			    struct cfa_p4_eem_64b_entry *key_entry);
+
+/**
+ * Find base page address for offset into specified table type
+ *
+ * [in] tbl_scope_cb
+ *   Table scope
+ * [in] dir
+ *   Direction
+ * [in] offset
+ *   Offset into the table
+ * [in] table_type
+ *   Table type
+ *
+ * Returns:
+ *
+ * 0                                 - Failure
+ * Void pointer to page base address - Success
+ */
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum hcapi_cfa_em_table_type table_type);
+
+#endif /* _TF_EM_COMMON_H_ */
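
tf_create_tbl_pool_external() above seeds a per-direction stack with byte offsets into the external action record table, pushing from the top of the table down to offset 0 so that pops hand out offsets starting at 0. Consumers treat allocation as a stack_pop() and free as a stack_push() of the same offset, as in the sketch below (illustrative only, not part of the patch; the include list is indicative and error handling is trimmed):

#include "tf_core.h"
#include "tf_em_common.h"
#include "stack.h"

/* Illustrative sketch only: allocate and free an external action
 * record offset from the pool built by tf_create_tbl_pool_external().
 */
static int
example_ext_act_rec_alloc(struct tf_tbl_scope_cb *tbl_scope_cb,
			  enum tf_dir dir,
			  uint32_t *offset)
{
	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];

	/* Byte offset of a free record in the external table */
	return stack_pop(pool, offset);
}

static int
example_ext_act_rec_free(struct tf_tbl_scope_cb *tbl_scope_cb,
			 enum tf_dir dir,
			 uint32_t offset)
{
	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];

	/* Return the offset to the pool for reuse */
	return stack_push(pool, offset);
}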
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
new file mode 100644
index 000000000..8be39afdd
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -0,0 +1,1146 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+#define PTU_PTE_VALID          0x1UL
+#define PTU_PTE_LAST           0x2UL
+#define PTU_PTE_NEXT_TO_LAST   0x4UL
+
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
+
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
+/**
+ * EM DBs.
+ */
+extern void *eem_db[TF_DIR_MAX];
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
+				    i,
+				    (uint64_t)(uintptr_t)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+		TFP_DRV_LOG(INFO,
+			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			   TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocation of page tables
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			TFP_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(void *)(uintptr_t)tp->pg_va_tbl[j],
+				(void *)(uintptr_t)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
+
+/**
+ * Setup a EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
+	bool set_pte_last = 0;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = 1;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
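+
+/*
+ * Continuing the assumed example above (4KB pages, 512 page pointers
+ * per page, 16384 data pages): page_cnt[TF_PT_LVL_2] = 16384,
+ * page_cnt[TF_PT_LVL_1] = roundup(16384 / 512) = 32 and
+ * page_cnt[TF_PT_LVL_0] = roundup(32 / 512) = 1, i.e. a single
+ * level-0 page anchors the whole table.
+ */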
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   -EINVAL  - Parameter error
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int rc = 0;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Rx requested: "
+				    "%u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->rx_num_flows_in_k != 0 &&
+	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if (parms->tx_num_flows_in_k != 0 &&
+	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	return 0;
+}
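+
+/*
+ * Illustrative example of the sizing above (numbers assumed): with
+ * rx_mem_size_in_mb = 64, a 448-bit max key and a 256-bit max action
+ * entry, key_b = 2 * (448 / 8 + 1) = 114 and action_b = 256 / 8 + 1 = 33,
+ * so num_entries = 64MB / 147 ~= 456K.  The loop then rounds this up to
+ * the next supported power of two (512K, assuming TF_EM_MIN_ENTRIES is a
+ * power of two below that and TF_EM_MAX_ENTRIES is above it), which is
+ * reported back as rx_num_flows_in_k = 512.
+ */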
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
+							  TF_KEY0_TABLE :
+							  TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc != 0)
+		return rc;
+
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb =
+	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
+			  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb =
+	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
+			  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_host_alloc(struct tf *tfp,
+		     struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *em_tables;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* Get Table Scope control block from the session pool */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1; /* TODO: resolve to a proper table-scope DB type */
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
+	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
+				   parms->hw_flow_cache_flush_timer,
+				   dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(dir,
+					    tbl_scope_cb,
+					    em_tables[TF_RECORD_TABLE].num_entries,
+					    em_tables[TF_RECORD_TABLE].entry_size);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_host_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = 1; /* TODO: resolve to a proper table-scope DB type */
+	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_host_free(struct tf *tfp,
+		    struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
+		return -EINVAL;
+	}
+
+	/* Free Table control block */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1; /* TODO: resolve to a proper table-scope DB type */
+	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	/* free table scope locks */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
new file mode 100644
index 000000000..9be91ad5d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -0,0 +1,312 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/**
+ * EM DBs.
+ */
+static void *em_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   Number of index entries the pool holds
+ *
+ * Return:
+ *  0        - Success, pool created and seeded with indexes
+ *  -ENOMEM  - Failure, unable to allocate pool memory
+ *  -EINVAL  - Failure, pool could not be initialized or filled
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	rc = tfp_calloc(&parms);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
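+
+/*
+ * Pool ordering note: the loop above pushes indexes num_entries - 1
+ * down to 0, so with num_entries = 4 the pushes are 3, 2, 1, 0 and the
+ * first stack_pop() hands out index 0.  Allocation therefore proceeds
+ * from the lowest record index upwards.
+ */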
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	if (ptr != NULL)
+		tfp_free(ptr);
+}
+
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success
+ */
+int
+tf_em_insert_int_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	int rc;
+	uint32_t gfid;
+	uint16_t rptr_index = 0;
+	uint8_t rptr_entry = 0;
+	uint8_t num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc) {
+		PMD_DRV_LOG
+		  (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc)
+		return rc;
+
+	PMD_DRV_LOG
+		  (DEBUG,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     (uint32_t)num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0
+ * -EINVAL
+ */
+int
+tf_em_delete_int_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
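+
+/*
+ * Index bookkeeping sketch: tf_em_insert_int_entry() pops index N from
+ * the session pool and programs firmware at record index
+ * N * TF_SESSION_EM_ENTRY_SIZE; on delete, parms->index is divided by
+ * TF_SESSION_EM_ENTRY_SIZE to recover N and push it back.  For example,
+ * assuming TF_SESSION_EM_ENTRY_SIZE is 2, N = 5 maps to record index 10
+ * on insert and back to 5 on delete.
+ */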
+
+int
+tf_em_int_bind(struct tf *tfp,
+	       struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "EM Int DBs already initialized\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		tf_create_em_pool(session,
+				  i,
+				  TF_SESSION_EM_POOL_SIZE);
+	}
+
+	/*
+	 * TODO: It is unclear whether the DB creation below is needed;
+	 * keep it until the question is resolved.
+	 */
+	if (parms->num_elements) {
+		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+
+		for (i = 0; i < TF_DIR_MAX; i++) {
+			db_cfg.dir = i;
+			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+			db_cfg.rm_db = &em_db[i];
+			rc = tf_rm_create_db(tfp, &db_cfg);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: EM DB creation failed\n",
+					    tf_dir_2_str(i));
+
+				return rc;
+			}
+		}
+	}
+
+	init = 1;
+	return 0;
+}
+
+int
+tf_em_int_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; this allows callers
+	 * to clean up after a failed creation.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		tf_free_em_pool(session, i);
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = em_db[i];
+		if (em_db[i] != NULL) {
+			rc = tf_rm_free_db(tfp, &fparms);
+			if (rc)
+				return rc;
+		}
+
+		em_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
new file mode 100644
index 000000000..ee18a0c70
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_insert_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_delete_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_sys_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry
+		(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_sys_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
+		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_em_ext_system_free(struct tf *tfp __rte_unused,
+		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c015b0ce2..d8b80bc84 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,82 +18,6 @@
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
-/**
- * Endian converts min and max values from the HW response to the query
- */
-#define TF_HW_RESP_TO_QUERY(query, index, response, element) do {            \
-	(query)->hw_query[index].min =                                       \
-		tfp_le_to_cpu_16(response. element ## _min);                 \
-	(query)->hw_query[index].max =                                       \
-		tfp_le_to_cpu_16(response. element ## _max);                 \
-} while (0)
-
-/**
- * Endian converts the number of entries from the alloc to the request
- */
-#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element)                   \
-	(request. num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do {            \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(hw_entry[index].start);                     \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(hw_entry[index].stride);                    \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do {         \
-	hw_entry[index].start =                                              \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	hw_entry[index].stride =                                             \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
-/**
- * Endian converts min and max values from the SRAM response to the
- * query
- */
-#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do {          \
-	(query)->sram_query[index].min =                                     \
-		tfp_le_to_cpu_16(response.element ## _min);                  \
-	(query)->sram_query[index].max =                                     \
-		tfp_le_to_cpu_16(response.element ## _max);                  \
-} while (0)
-
-/**
- * Endian converts the number of entries from the action (alloc) to
- * the request
- */
-#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element)                \
-	(request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do {        \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(sram_entry[index].start);                   \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(sram_entry[index].stride);                  \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do {     \
-	sram_entry[index].start =                                            \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	sram_entry[index].stride =                                           \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -107,39 +31,6 @@ struct tf_msg_dma_buf {
 	uint64_t pa_addr;
 };
 
-static int
-tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
-		   uint32_t *hwrm_type)
-{
-	int rc = 0;
-
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_WC_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	return rc;
-}
-
 /**
  * Allocates a DMA buffer that can be used for message transfer.
  *
@@ -185,13 +76,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 	tfp_free(buf->va_addr);
 }
 
-/**
- * NEW HWRM direct messages
- */
+/* HWRM Direct messages */
 
-/**
- * Sends session open request to TF Firmware
- */
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
@@ -222,9 +108,6 @@ tf_msg_session_open(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends session attach request to TF Firmware
- */
 int
 tf_msg_session_attach(struct tf *tfp __rte_unused,
 		      char *ctrl_chan_name __rte_unused,
@@ -233,9 +116,6 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 	return -1;
 }
 
-/**
- * Sends session close request to TF Firmware
- */
 int
 tf_msg_session_close(struct tf *tfp)
 {
@@ -261,14 +141,11 @@ tf_msg_session_close(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session query config request to TF Firmware
- */
 int
 tf_msg_session_qcfg(struct tf *tfp)
 {
 	int rc;
-	struct hwrm_tf_session_qcfg_input  req = { 0 };
+	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
@@ -289,636 +166,6 @@ tf_msg_session_qcfg(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session HW resource query capability request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_query *query)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_qcaps_input req = { 0 };
-	struct tf_session_hw_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(query, 0, sizeof(*query));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST,
-			    resp, meter_inst);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_alloc(struct tf *tfp __rte_unused,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_alloc *hw_alloc __rte_unused,
-			     struct tf_rm_entry *hw_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_alloc_input req = { 0 };
-	struct tf_session_hw_resc_alloc_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			   l2_ctx_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			   prof_func_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			   prof_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			   em_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req,
-			   em_record_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			   wc_tcam_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req,
-			   wc_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req,
-			   meter_profiles);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req,
-			   meter_inst);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req,
-			   mirrors);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req,
-			   upar);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req,
-			   sp_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req,
-			   l2_func);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req,
-			   flex_key_templ);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			   tbl_scope);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req,
-			   epoch0_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req,
-			   epoch1_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req,
-			   metadata);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req,
-			   ct_state);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			   range_prof);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			   range_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			   lag_tbl_entries);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp,
-			    meter_inst);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_free(struct tf *tfp,
-			    enum tf_dir dir,
-			    struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_HW_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_flush(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_HW_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_qcaps(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_query *query __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_qcaps_input req = { 0 };
-	struct tf_session_sram_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_FULL_ACTION, resp,
-			      full_action);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, resp,
-			      sp_smac_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, resp,
-			      sp_smac_ipv6);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_alloc(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_alloc *sram_alloc __rte_unused,
-			       struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_alloc_input req = { 0 };
-	struct tf_session_sram_resc_alloc_output resp;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(&resp, 0, sizeof(resp));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			     full_action);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_MCG, req,
-			     mcg);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			     encap_8b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			     encap_16b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			     encap_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			     sp_smac);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			     req, sp_smac_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			     req, sp_smac_ipv6);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_COUNTER_64B,
-			     req, counter_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			     nat_sport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			     nat_dport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			     nat_s_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			     nat_d_ipv4);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION,
-			      resp, full_action);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			      resp, sp_smac_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			      resp, sp_smac_ipv6);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_free(struct tf *tfp __rte_unused,
-			      enum tf_dir dir,
-			      struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_SRAM_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_flush(struct tf *tfp,
-			       enum tf_dir dir,
-			       struct tf_rm_entry *sram_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_SRAM_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
 int
 tf_msg_session_resc_qcaps(struct tf *tfp,
 			  enum tf_dir dir,
@@ -973,7 +220,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -981,14 +228,14 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
 	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
-		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
@@ -1078,7 +325,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -1087,14 +334,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	}
 
 	printf("\nRESV\n");
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
-		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
-		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
-		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+		resv[i].type = tfp_le_to_cpu_32(resv_data[i].type);
+		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
+		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
@@ -1173,24 +420,112 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM mem register request to Firmware
- */
-int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id)
+int
+tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
 {
 	int rc;
-	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
-	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_insert_input req = { 0 };
+	struct hwrm_tf_em_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.strength = (em_result->hdr.word1 &
+			CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+			CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return 0;
+}
+
+int
+tf_msg_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_delete_input req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return 0;
+}
+
+int
+tf_msg_em_mem_rgtr(struct tf *tfp,
+		   int page_lvl,
+		   int page_size,
+		   uint64_t dma_addr,
+		   uint16_t *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
 
-	req.page_level = page_lvl;
-	req.page_size = page_size;
-	req.page_dir = tfp_cpu_to_le_64(dma_addr);
-
 	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
@@ -1208,11 +543,9 @@ int tf_msg_em_mem_rgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM mem unregister request to Firmware
- */
-int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t  *ctx_id)
+int
+tf_msg_em_mem_unrgtr(struct tf *tfp,
+		     uint16_t *ctx_id)
 {
 	int rc;
 	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
@@ -1233,12 +566,10 @@ int tf_msg_em_mem_unrgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM qcaps request to Firmware
- */
-int tf_msg_em_qcaps(struct tf *tfp,
-		    int dir,
-		    struct tf_em_caps *em_caps)
+int
+tf_msg_em_qcaps(struct tf *tfp,
+		int dir,
+		struct tf_em_caps *em_caps)
 {
 	int rc;
 	struct hwrm_tf_ext_em_qcaps_input  req = {0};
@@ -1273,17 +604,15 @@ int tf_msg_em_qcaps(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM config request to Firmware
- */
-int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t   num_entries,
-		  uint16_t   key0_ctx_id,
-		  uint16_t   key1_ctx_id,
-		  uint16_t   record_ctx_id,
-		  uint16_t   efc_ctx_id,
-		  uint8_t    flush_interval,
-		  int        dir)
+int
+tf_msg_em_cfg(struct tf *tfp,
+	      uint32_t num_entries,
+	      uint16_t key0_ctx_id,
+	      uint16_t key1_ctx_id,
+	      uint16_t record_ctx_id,
+	      uint16_t efc_ctx_id,
+	      uint8_t flush_interval,
+	      int dir)
 {
 	int rc;
 	struct hwrm_tf_ext_em_cfg_input  req = {0};
@@ -1317,42 +646,23 @@ int tf_msg_em_cfg(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM internal insert request to Firmware
- */
-int tf_msg_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *em_parms,
-				uint16_t *rptr_index,
-				uint8_t *rptr_entry,
-				uint8_t *num_of_entries)
+int
+tf_msg_em_op(struct tf *tfp,
+	     int dir,
+	     uint16_t op)
 {
-	int                         rc;
-	struct tfp_send_msg_parms        parms = { 0 };
-	struct hwrm_tf_em_insert_input   req = { 0 };
-	struct hwrm_tf_em_insert_output  resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_em_64b_entry *em_result =
-		(struct tf_em_64b_entry *)em_parms->em_record;
+	int rc;
+	struct hwrm_tf_ext_em_op_input req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	tfp_memcpy(req.em_key,
-		   em_parms->key,
-		   ((em_parms->key_sz_in_bits + 7) / 8));
-
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength =
-		(em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
-		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
-	req.em_key_bitlen = em_parms->key_sz_in_bits;
-	req.action_ptr = em_result->hdr.pointer;
-	req.em_record_idx = *rptr_index;
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
 
-	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1361,75 +671,86 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &parms);
-	if (rc)
-		return rc;
-
-	*rptr_entry = resp.rptr_entry;
-	*rptr_index = resp.rptr_index;
-	*num_of_entries = resp.num_of_entries;
-
-	return 0;
+	return rc;
 }
 
-/**
- * Sends EM delete insert request to Firmware
- */
-int tf_msg_delete_em_entry(struct tf *tfp,
-			   struct tf_delete_em_entry_parms *em_parms)
+int
+tf_msg_tcam_entry_set(struct tf *tfp,
+		      struct tf_tcam_set_parms *parms)
 {
-	int                             rc;
-	struct tfp_send_msg_parms       parms = { 0 };
-	struct hwrm_tf_em_delete_input  req = { 0 };
-	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	int rc;
+	struct tfp_send_msg_parms mparms = { 0 };
+	struct hwrm_tf_tcam_set_input req = { 0 };
+	struct hwrm_tf_tcam_set_output resp = { 0 };
+	struct tf_msg_dma_buf buf = { 0 };
+	uint8_t *data = NULL;
+	int data_size = 0;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.type = parms->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(parms->idx);
+	if (parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
+	/* Result follows after key and mask, thus multiply by 2 */
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
+	data_size = 2 * req.key_size + req.result_size;
 
-	parms.tf_type = HWRM_TF_EM_DELETE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
+	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
+		/* use pci buffer */
+		data = &req.dev_data[0];
+	} else {
+		/* use dma buffer */
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
+		data = buf.va_addr;
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
+	}
+
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
+
+	mparms.tf_type = HWRM_TF_TCAM_SET;
+	mparms.req_data = (uint32_t *)&req;
+	mparms.req_size = sizeof(req);
+	mparms.resp_data = (uint32_t *)&resp;
+	mparms.resp_size = sizeof(resp);
+	mparms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp,
-				 &parms);
+				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
-	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+cleanup:
+	tf_msg_free_dma_buf(&buf);
 
-	return 0;
+	return rc;
 }
 
-/**
- * Sends EM operation request to Firmware
- */
-int tf_msg_em_op(struct tf *tfp,
-		 int dir,
-		 uint16_t op)
+int
+tf_msg_tcam_entry_free(struct tf *tfp,
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input req = {0};
-	struct hwrm_tf_ext_em_op_output resp = {0};
-	uint32_t flags;
+	struct hwrm_tf_tcam_free_input req =  { 0 };
+	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
 
-	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_32(flags);
-	req.op = tfp_cpu_to_le_16(op);
+	req.type = in_parms->hcapi_type;
+	req.count = 1;
+	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
+	if (in_parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
 
-	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.tf_type = HWRM_TF_TCAM_FREE;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1444,21 +765,32 @@ int tf_msg_em_op(struct tf *tfp,
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_set_input req = { 0 };
+	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_set_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
 	req.index = tfp_cpu_to_le_32(index);
 
@@ -1466,13 +798,15 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 		   data,
 		   size);
 
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_TBL_TYPE_SET,
-			 req);
+	parms.tf_type = HWRM_TF_TBL_TYPE_SET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1482,32 +816,43 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 int
 tf_msg_get_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_get_input req = { 0 };
+	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_input req = { 0 };
-	struct tf_tbl_type_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_TBL_TYPE_GET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1522,6 +867,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/* HWRM Tunneled messages */
+
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  struct tf_bulk_get_tbl_entry_parms *params)
@@ -1562,96 +909,3 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
-
-int
-tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_tcam_set_parms *parms)
-{
-	int rc;
-	struct tfp_send_msg_parms mparms = { 0 };
-	struct hwrm_tf_tcam_set_input req = { 0 };
-	struct hwrm_tf_tcam_set_output resp = { 0 };
-	struct tf_msg_dma_buf buf = { 0 };
-	uint8_t *data = NULL;
-	int data_size = 0;
-
-	req.type = parms->type;
-
-	req.idx = tfp_cpu_to_le_16(parms->idx);
-	if (parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
-
-	req.key_size = parms->key_size;
-	req.mask_offset = parms->key_size;
-	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * parms->key_size;
-	req.result_size = parms->result_size;
-	data_size = 2 * req.key_size + req.result_size;
-
-	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
-		/* use pci buffer */
-		data = &req.dev_data[0];
-	} else {
-		/* use dma buffer */
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_alloc_dma_buf(&buf, data_size);
-		if (rc)
-			goto cleanup;
-		data = buf.va_addr;
-		tfp_memcpy(&req.dev_data[0],
-			   &buf.pa_addr,
-			   sizeof(buf.pa_addr));
-	}
-
-	tfp_memcpy(&data[0], parms->key, parms->key_size);
-	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
-	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
-
-	mparms.tf_type = HWRM_TF_TCAM_SET;
-	mparms.req_data = (uint32_t *)&req;
-	mparms.req_size = sizeof(req);
-	mparms.resp_data = (uint32_t *)&resp;
-	mparms.resp_size = sizeof(resp);
-	mparms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &mparms);
-	if (rc)
-		goto cleanup;
-
-cleanup:
-	tf_msg_free_dma_buf(&buf);
-
-	return rc;
-}
-
-int
-tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_tcam_free_parms *in_parms)
-{
-	int rc;
-	struct hwrm_tf_tcam_free_input req =  { 0 };
-	struct hwrm_tf_tcam_free_output resp = { 0 };
-	struct tfp_send_msg_parms parms = { 0 };
-
-	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
-	if (rc != 0)
-		return rc;
-
-	req.count = 1;
-	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
-	if (in_parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
-
-	parms.tf_type = HWRM_TF_TCAM_FREE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &parms);
-	return rc;
-}
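
A note on the TCAM set path above: key, mask and result are packed back to back into a single payload, placed either inline in req.dev_data or in a DMA buffer once the total exceeds TF_PCI_BUF_SIZE_MAX. The sketch below only mirrors that packing; tcam_pack_payload is a hypothetical helper, not part of the patch, and simply reuses the offsets chosen by tf_msg_tcam_entry_set (mask at key_size, result at 2 * key_size).

#include <stdint.h>
#include <string.h>

/* Hypothetical helper mirroring the payload layout used above:
 * key at offset 0, mask at key_size, result at 2 * key_size.
 */
static void
tcam_pack_payload(uint8_t *buf,
		  const uint8_t *key, uint16_t key_size,
		  const uint8_t *mask,
		  const uint8_t *result, uint16_t result_size)
{
	memcpy(&buf[0], key, key_size);
	memcpy(&buf[key_size], mask, key_size);
	memcpy(&buf[2 * key_size], result, result_size);
}
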
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1ff1044e8..8e276d4c0 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -16,6 +16,8 @@
 
 struct tf;
 
+/* HWRM Direct messages */
+
 /**
  * Sends session open request to Firmware
  *
@@ -29,7 +31,7 @@ struct tf;
  *   Pointer to the fw_session_id that is allocated on firmware side
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
@@ -46,7 +48,7 @@ int tf_msg_session_open(struct tf *tfp,
  *   time of session open
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_attach(struct tf *tfp,
 			  char *ctrl_channel_name,
@@ -59,73 +61,21 @@ int tf_msg_session_attach(struct tf *tfp,
  *   Pointer to session handle
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_close(struct tf *tfp);
 
 /**
  * Sends session query config request to TF Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_qcfg(struct tf *tfp);
 
-/**
- * Sends session HW resource query capability request to TF Firmware
- */
-int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_query *hw_query);
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int tf_msg_session_hw_resc_alloc(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_alloc *hw_alloc,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int tf_msg_session_hw_resc_free(struct tf *tfp,
-				enum tf_dir dir,
-				struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int tf_msg_session_hw_resc_flush(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int tf_msg_session_sram_resc_qcaps(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_query *sram_query);
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int tf_msg_session_sram_resc_alloc(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_alloc *sram_alloc,
-				   struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int tf_msg_session_sram_resc_free(struct tf *tfp,
-				  enum tf_dir dir,
-				  struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int tf_msg_session_sram_resc_flush(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_entry *sram_entry);
-
 /**
  * Sends session HW resource query capability request to TF Firmware
  *
@@ -183,6 +133,21 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 
 /**
  * Sends session resource flush request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the resv array
+ *
+ * [in] resv
+ *   Pointer to an array of reserved elements that need to be flushed
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_resc_flush(struct tf *tfp,
 			      enum tf_dir dir,
@@ -190,6 +155,24 @@ int tf_msg_session_resc_flush(struct tf *tfp,
 			      struct tf_rm_resc_entry *resv);
 /**
  * Sends EM internal insert request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to em insert parameter list
+ *
+ * [in/out] rptr_index
+ *   Record ptr index; updated with the index reported by firmware
+ *
+ * [out] rptr_entry
+ *   Record ptr entry returned by firmware
+ *
+ * [out] num_of_entries
+ *   Number of entries that were inserted
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    struct tf_insert_em_entry_parms *params,
@@ -198,26 +181,75 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    uint8_t *num_of_entries);
 /**
  * Sends EM internal delete request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] em_parms
+ *   Pointer to em delete parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms);
+
 /**
  * Sends EM mem register request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] page_lvl
+ *   Page level
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] dma_addr
+ *   DMA Address for the memory page
+ *
+ * [in] ctx_id
+ *   Context id
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id);
+		       int page_lvl,
+		       int page_size,
+		       uint64_t dma_addr,
+		       uint16_t *ctx_id);
 
 /**
  * Sends EM mem unregister request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctx_id
+ *   Context id
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t     *ctx_id);
+			 uint16_t *ctx_id);
 
 /**
  * Sends EM qcaps request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] em_caps
+ *   Pointer to EM capabilities
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_qcaps(struct tf *tfp,
 		    int dir,
@@ -225,22 +257,63 @@ int tf_msg_em_qcaps(struct tf *tfp,
 
 /**
  * Sends EM config request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] num_entries
+ *   EM Table, key 0, number of entries to configure
+ *
+ * [in] key0_ctx_id
+ *   EM Table, Key 0 context id
+ *
+ * [in] key1_ctx_id
+ *   EM Table, Key 1 context id
+ *
+ * [in] record_ctx_id
+ *   EM Table, Record context id
+ *
+ * [in] efc_ctx_id
+ *   EM Table, EFC Table context id
+ *
+ * [in] flush_interval
+ *   Flush pending HW cached flows every 1/10th of value set in
+ *   seconds; both idle and active flows are flushed from the HW
+ *   cache. If set to 0, this feature will be disabled.
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t      num_entries,
-		  uint16_t      key0_ctx_id,
-		  uint16_t      key1_ctx_id,
-		  uint16_t      record_ctx_id,
-		  uint16_t      efc_ctx_id,
-		  uint8_t       flush_interval,
-		  int           dir);
+		  uint32_t num_entries,
+		  uint16_t key0_ctx_id,
+		  uint16_t key1_ctx_id,
+		  uint16_t record_ctx_id,
+		  uint16_t efc_ctx_id,
+		  uint8_t flush_interval,
+		  int dir);
 
 /**
  * Sends EM operation request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] op
+ *   CFA Operator
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op);
+		 int dir,
+		 uint16_t op);
 
 /**
  * Sends tcam entry 'set' to the Firmware.
@@ -281,7 +354,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to set
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to set
  *
  * [in] size
@@ -298,7 +371,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  */
 int tf_msg_set_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
@@ -312,7 +385,7 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to get
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to get
  *
  * [in] size
@@ -329,11 +402,13 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  */
 int tf_msg_get_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
 
+/* HWRM Tunneled messages */
+
 /**
  * Sends bulk get message of a Table Type element to the firmware.
  *
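
For reference, a minimal usage sketch of the EM insert and delete messages declared above, assuming the TruFlow headers (tf_core.h/tf_msg.h) and an already opened session; the wrapper name, the reduced error handling and the parameter variable names are illustrative only. It shows that rptr_index carries the requested record index in and the firmware-reported index out, while rptr_entry and num_of_entries are pure outputs.

/* Illustrative wrapper only, not part of the patch. */
static int
em_insert_then_delete(struct tf *tfp,
		      struct tf_insert_em_entry_parms *iparms,
		      struct tf_delete_em_entry_parms *dparms)
{
	uint16_t rptr_index = 0;    /* in: requested index, out: firmware index */
	uint8_t rptr_entry = 0;     /* out: record pointer entry */
	uint8_t num_of_entries = 0; /* out: count reported by firmware */
	int rc;

	rc = tf_msg_insert_em_internal_entry(tfp, iparms, &rptr_index,
					     &rptr_entry, &num_of_entries);
	if (rc)
		return rc;

	/* dparms->flow_handle selects the entry to remove;
	 * dparms->index is filled from the delete response.
	 */
	return tf_msg_delete_em_entry(tfp, dparms);
}
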
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index b6fe2f1ad..e0a84e64d 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -1818,16 +1818,8 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_entries = tfs->resc.tx.hw_entry;
 
 	/* Query for Session HW Resources */
-	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
 
+	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1846,16 +1838,6 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
 
 	/* Allocate Session HW Resources */
-	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform HW allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -1906,17 +1888,7 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	else
 		sram_entries = tfs->resc.tx.sram_entry;
 
-	/* Query for Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
+	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1934,20 +1906,6 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
 		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
 
-	/* Allocate Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_alloc(tfp,
-					    dir,
-					    &sram_alloc,
-					    sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform SRAM allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -2798,17 +2756,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
-			rc = tf_msg_session_hw_resc_flush(tfp,
-							  i,
-							  hw_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
 		}
 
 		/* Check for any not previously freed SRAM resources
@@ -2828,38 +2775,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
-
-			rc = tf_msg_session_sram_resc_flush(tfp,
-							    i,
-							    sram_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
-		}
-
-		rc = tf_msg_session_hw_resc_free(tfp, i, hw_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, HW free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
-		}
-
-		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, SRAM free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
 		}
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index de8f11955..2d9be654a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -95,7 +95,9 @@ struct tf_rm_new_db {
  *   - EOPNOTSUPP - Operation not supported
  */
 static void
-tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
 			       uint16_t *reservations,
 			       uint16_t count,
 			       uint16_t *valid_count)
@@ -107,6 +109,26 @@ tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
 		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
 		    reservations[i] > 0)
 			cnt++;
+
+		/* Only log a message if a type is requested for
+		 * reservation but not supported. The EM module is
+		 * ignored as it uses a split configuration array and
+		 * would otherwise fail this check.
+		 */
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
+			TFP_DRV_LOG(ERR,
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+		}
 	}
 
 	*valid_count = cnt;
@@ -405,7 +427,9 @@ tf_rm_create_db(struct tf *tfp,
 	 * the DB holds them all as to give a fast lookup. We can also
 	 * remove entries where there are no request for elements.
 	 */
-	tf_rm_count_hcapi_reservations(parms->cfg,
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
 				       parms->alloc_cnt,
 				       parms->num_elements,
 				       &hcapi_items);
@@ -507,6 +531,11 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
 			/* Create pool */
 			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
 				     sizeof(struct bitalloc));
@@ -548,11 +577,16 @@ tf_rm_create_db(struct tf *tfp,
 		}
 	}
 
-	rm_db->num_entries = i;
+	rm_db->num_entries = parms->num_elements;
 	rm_db->dir = parms->dir;
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
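
The reservation-count logic extended above boils down to counting the elements whose configuration is HCAPI managed and whose requested allocation is non-zero. A reduced sketch follows; the enum, struct and function names here are invented for illustration and stand in for the tf_rm_element_cfg machinery in the patch.

#include <stdint.h>

enum cfg_type { ELEM_CFG_NULL = 0, ELEM_CFG_HCAPI };

struct elem_cfg {
	enum cfg_type cfg_type;
};

/* Count elements that are HCAPI managed and actually requested. */
static uint16_t
count_hcapi_reservations(const struct elem_cfg *cfg,
			 const uint16_t *reservations,
			 uint16_t count)
{
	uint16_t i, cnt = 0;

	for (i = 0; i < count; i++)
		if (cfg[i].cfg_type == ELEM_CFG_HCAPI && reservations[i] > 0)
			cnt++;

	return cnt;
}
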
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index e594f0248..d7f5de4c4 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -26,741 +26,6 @@
 #include "stack.h"
 #include "tf_common.h"
 
-#define PTU_PTE_VALID          0x1UL
-#define PTU_PTE_LAST           0x2UL
-#define PTU_PTE_NEXT_TO_LAST   0x4UL
-
-/* Number of pointers per page_size */
-#define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
-
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define	TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
-/**
- * Function to free a page table
- *
- * [in] tp
- *   Pointer to the page table to free
- */
-static void
-tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
-{
-	uint32_t i;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		if (!tp->pg_va_tbl[i]) {
-			TFP_DRV_LOG(WARNING,
-				    "No mapping for page: %d table: %016" PRIu64 "\n",
-				    i,
-				    (uint64_t)(uintptr_t)tp);
-			continue;
-		}
-
-		tfp_free(tp->pg_va_tbl[i]);
-		tp->pg_va_tbl[i] = NULL;
-	}
-
-	tp->pg_count = 0;
-	tfp_free(tp->pg_va_tbl);
-	tp->pg_va_tbl = NULL;
-	tfp_free(tp->pg_pa_tbl);
-	tp->pg_pa_tbl = NULL;
-}
-
-/**
- * Function to free an EM table
- *
- * [in] tbl
- *   Pointer to the EM table to free
- */
-static void
-tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-		TFP_DRV_LOG(INFO,
-			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
-			   TF_EM_PAGE_SIZE,
-			    i,
-			    tp->pg_count);
-
-		tf_em_free_pg_tbl(tp);
-	}
-
-	tbl->l0_addr = NULL;
-	tbl->l0_dma_addr = 0;
-	tbl->num_lvl = 0;
-	tbl->num_data_pages = 0;
-}
-
-/**
- * Allocation of page tables
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] pg_count
- *   Page count to allocate
- *
- * [in] pg_size
- *   Size of each page
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
-		   uint32_t pg_count,
-		   uint32_t pg_size)
-{
-	uint32_t i;
-	struct tfp_calloc_parms parms;
-
-	parms.nitems = pg_count;
-	parms.size = sizeof(void *);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0)
-		return -ENOMEM;
-
-	tp->pg_va_tbl = parms.mem_va;
-
-	if (tfp_calloc(&parms) != 0) {
-		tfp_free(tp->pg_va_tbl);
-		return -ENOMEM;
-	}
-
-	tp->pg_pa_tbl = parms.mem_va;
-
-	tp->pg_count = 0;
-	tp->pg_size = pg_size;
-
-	for (i = 0; i < pg_count; i++) {
-		parms.nitems = 1;
-		parms.size = pg_size;
-		parms.alignment = TF_EM_PAGE_ALIGNMENT;
-
-		if (tfp_calloc(&parms) != 0)
-			goto cleanup;
-
-		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
-		tp->pg_va_tbl[i] = parms.mem_va;
-
-		memset(tp->pg_va_tbl[i], 0, pg_size);
-		tp->pg_count++;
-	}
-
-	return 0;
-
-cleanup:
-	tf_em_free_pg_tbl(tp);
-	return -ENOMEM;
-}
-
-/**
- * Allocates EM page tables
- *
- * [in] tbl
- *   Table to allocate pages for
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int rc = 0;
-	int i;
-	uint32_t j;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-
-		rc = tf_em_alloc_pg_tbl(tp,
-					tbl->page_cnt[i],
-					TF_EM_PAGE_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d, rc:%s\n",
-				i,
-				strerror(-rc));
-			goto cleanup;
-		}
-
-		for (j = 0; j < tp->pg_count; j++) {
-			TFP_DRV_LOG(INFO,
-				"EEM: Allocated page table: size %u lvl %d cnt"
-				" %u VA:%p PA:%p\n",
-				TF_EM_PAGE_SIZE,
-				i,
-				tp->pg_count,
-				(uint32_t *)tp->pg_va_tbl[j],
-				(uint32_t *)(uintptr_t)tp->pg_pa_tbl[j]);
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_free_page_table(tbl);
-	return rc;
-}
-
-/**
- * Links EM page tables
- *
- * [in] tp
- *   Pointer to page table
- *
- * [in] tp_next
- *   Pointer to the next page table
- *
- * [in] set_pte_last
- *   Flag controlling if the page table is last
- */
-static void
-tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-		      struct hcapi_cfa_em_page_tbl *tp_next,
-		      bool set_pte_last)
-{
-	uint64_t *pg_pa = tp_next->pg_pa_tbl;
-	uint64_t *pg_va;
-	uint64_t valid;
-	uint32_t k = 0;
-	uint32_t i;
-	uint32_t j;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			if (k == tp_next->pg_count - 2 && set_pte_last)
-				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
-			else if (k == tp_next->pg_count - 1 && set_pte_last)
-				valid = PTU_PTE_LAST | PTU_PTE_VALID;
-			else
-				valid = PTU_PTE_VALID;
-
-			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
-			if (++k >= tp_next->pg_count)
-				return;
-		}
-	}
-}
-
-/**
- * Setup a EM page table
- *
- * [in] tbl
- *   Pointer to EM page table
- */
-static void
-tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_page_tbl *tp;
-	bool set_pte_last = 0;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl - 1; i++) {
-		tp = &tbl->pg_tbl[i];
-		tp_next = &tbl->pg_tbl[i + 1];
-		if (i == tbl->num_lvl - 2)
-			set_pte_last = 1;
-		tf_em_link_page_table(tp, tp_next, set_pte_last);
-	}
-
-	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
-}
-
-/**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
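/*
 * Worked example (illustrative note, not part of the patch) for the
 * sizing math above, assuming 4 KiB pages and 8-byte page pointers,
 * i.e. MAX_PAGE_PTRS(4096) == 512:
 *
 *   level 0 covers             4 KiB of data
 *   level 1 covers 512 *       4 KiB = 2 MiB
 *   level 2 covers 512 * 512 * 4 KiB = 1 GiB
 *
 * For 1M entries of 16 bytes (16 MiB of data) the loop stops at
 * TF_PT_LVL_2 and num_data_pages = roundup(16 MiB, 4 KiB) / 4 KiB
 * = 4096.
 */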
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
-/**
- * Unregisters EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- */
-static void
-tf_em_ctx_unreg(struct tf *tfp,
-		struct tf_tbl_scope_cb *tbl_scope_cb,
-		int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp =
-		&tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
-			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
-			tf_em_free_page_table(tbl);
-		}
-	}
-}
-
-/**
- * Registers EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of Memory
- */
-static int
-tf_em_ctx_reg(struct tf *tfp,
-	      struct tf_tbl_scope_cb *tbl_scope_cb,
-	      int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp =
-		&tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int rc = 0;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
-
-			if (rc)
-				goto cleanup;
-
-			rc = tf_em_alloc_page_table(tbl);
-			if (rc)
-				goto cleanup;
-
-			tf_em_setup_page_table(tbl);
-			rc = tf_msg_em_mem_rgtr(tfp,
-						tbl->num_lvl - 1,
-						TF_EM_PAGE_SIZE_ENUM,
-						tbl->l0_dma_addr,
-						&tbl->ctx_id);
-			if (rc)
-				goto cleanup;
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	return rc;
-}
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->rx_num_flows_in_k != 0 &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if (parms->tx_num_flows_in_k != 0 &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries =
-		0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries =
-		0;
-
-	return 0;
-}
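/*
 * Illustrative sketch, not part of the patch: the validation above
 * only accepts entry counts that are powers of two in the range
 * TF_EM_MIN_ENTRIES (32K) to TF_EM_MAX_ENTRIES (128M).  When sizing
 * from a memory budget it derives an entry count and rounds it up to
 * the next such power of two, e.g. a derived count of 100K entries
 * becomes 128K.  The helper below is a hypothetical reduction of that
 * rounding step.
 */
#include <stdint.h>

static uint32_t
em_round_up_entries(uint32_t num_entries)
{
	uint32_t cnt = 1 << 15;				/* 32K, TF_EM_MIN_ENTRIES */

	while (num_entries > cnt && cnt <= (1 << 27))	/* 128M cap */
		cnt *= 2;

	return cnt;	/* callers reject results above 128M */
}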
-
 /**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
@@ -883,289 +148,6 @@ tf_free_tbl_entry_shadow(struct tf_session *tfs,
 }
 #endif /* TF_SHADOW */
 
-/**
- * Create External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- * [in] num_entries
- *   number of entries to write
- * [in] entry_sz_bytes
- *   size of each entry
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_tbl_pool_external(enum tf_dir dir,
-			    struct tf_tbl_scope_cb *tbl_scope_cb,
-			    uint32_t num_entries,
-			    uint32_t entry_sz_bytes)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i;
-	int32_t j;
-	int rc = 0;
-	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
-			    tf_dir_2_str(dir), strerror(ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Save the  malloced memory address so that it can
-	 * be freed when the table scope is freed.
-	 */
-	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
-
-	/* Fill pool with indexes in reverse
-	 */
-	j = (num_entries - 1) * entry_sz_bytes;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-				    tf_dir_2_str(dir), strerror(-rc));
-			goto cleanup;
-		}
-
-		if (j < 0) {
-			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
-				    dir, j);
-			goto cleanup;
-		}
-		j -= entry_sz_bytes;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
-
-/**
- * Destroy External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- *
- */
-static void
-tf_destroy_tbl_pool_external(enum tf_dir dir,
-			     struct tf_tbl_scope_cb *tbl_scope_cb)
-{
-	uint32_t *ext_act_pool_mem =
-		tbl_scope_cb->ext_act_pool_mem[dir];
-
-	tfp_free(ext_act_pool_mem);
-}
-
-/* API defined in tf_em.h */
-struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
-{
-	int i;
-
-	/* Check that id is valid */
-	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
-	if (i < 0)
-		return NULL;
-
-	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
-	}
-
-	return NULL;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			 struct tf_free_tbl_scope_parms *parms)
-{
-	int rc = 0;
-	enum tf_dir  dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Table scope error\n");
-		return -EINVAL;
-	}
-
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-
-	/* free table scope locks */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/* Free associated external pools
-		 */
-		tf_destroy_tbl_pool_external(dir,
-					     tbl_scope_cb);
-		tf_msg_em_op(tfp,
-			     dir,
-			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
-
-		/* free table scope and all associated resources */
-		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	}
-
-	return rc;
-}
-
-/* API defined in tf_em.h */
-int
-tf_alloc_eem_tbl_scope(struct tf *tfp,
-		       struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-	enum tf_dir dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_table *em_tables;
-	int index;
-	struct tf_session *session;
-	struct tf_free_tbl_scope_parms free_parms;
-
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* Get Table Scope control block from the session pool */
-	index = ba_alloc(session->tbl_scope_pool_rx);
-	if (index == -1) {
-		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
-			    "Control Block\n");
-		return -ENOMEM;
-	}
-
-	tbl_scope_cb = &session->tbl_scopes[index];
-	tbl_scope_cb->index = index;
-	tbl_scope_cb->tbl_scope_id = index;
-	parms->tbl_scope_id = index;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_msg_em_qcaps(tfp,
-				     dir,
-				     &tbl_scope_cb->em_caps[dir]);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to query for EEM capability,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-	}
-
-	/*
-	 * Validate and setup table sizes
-	 */
-	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
-		goto cleanup;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/*
-		 * Allocate tables and signal configuration to FW
-		 */
-		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-
-		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
-		rc = tf_msg_em_cfg(tfp,
-				   em_tables[TF_KEY0_TABLE].num_entries,
-				   em_tables[TF_KEY0_TABLE].ctx_id,
-				   em_tables[TF_KEY1_TABLE].ctx_id,
-				   em_tables[TF_RECORD_TABLE].ctx_id,
-				   em_tables[TF_EFC_TABLE].ctx_id,
-				   parms->hw_flow_cache_flush_timer,
-				   dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "TBL: Unable to configure EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		rc = tf_msg_em_op(tfp,
-				  dir,
-				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
-
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		/* Allocate the pool of offsets of the external memory.
-		 * Initially, this is a single fixed size pool for all external
-		 * actions related to a single table scope.
-		 */
-		rc = tf_create_tbl_pool_external(dir,
-				    tbl_scope_cb,
-				    em_tables[TF_RECORD_TABLE].num_entries,
-				    em_tables[TF_RECORD_TABLE].entry_size);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s TBL: Unable to allocate idx pools %s\n",
-				    tf_dir_2_str(dir),
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-	}
-
-	return 0;
-
-cleanup_full:
-	free_parms.tbl_scope_id = index;
-	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
-	return -EINVAL;
-
-cleanup:
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-	return -EINVAL;
-}
 
  /* API defined in tf_core.h */
 int
@@ -1196,119 +178,3 @@ tf_bulk_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
-
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_scope(struct tf *tfp,
-		   struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	rc = tf_alloc_eem_tbl_scope(tfp, parms);
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_scope(struct tf *tfp,
-		  struct tf_free_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	/* free table scope and all associated resources */
-	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
-
-	return rc;
-}
-
-static void
-tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-			struct hcapi_cfa_em_page_tbl *tp_next)
-{
-	uint64_t *pg_va;
-	uint32_t i;
-	uint32_t j;
-	uint32_t k = 0;
-
-	printf("pg_count:%d pg_size:0x%x\n",
-	       tp->pg_count,
-	       tp->pg_size);
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-		printf("\t%p\n", (void *)pg_va);
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
-			if (((pg_va[j] & 0x7) ==
-			     tfp_cpu_to_le_64(PTU_PTE_LAST |
-					      PTU_PTE_VALID)))
-				return;
-
-			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
-				printf("** Invalid entry **\n");
-				return;
-			}
-
-			if (++k >= tp_next->pg_count) {
-				printf("** Shouldn't get here **\n");
-				return;
-			}
-		}
-	}
-}
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
-{
-	struct tf_session      *session;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_page_tbl *tp;
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-	int j;
-	int dir;
-
-	printf("called %s\n", __func__);
-
-	/* find session struct */
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* find control block for table scope */
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-
-		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
-			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
-			printf
-	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
-			       j,
-			       tbl->type,
-			       tbl->num_entries,
-			       tbl->entry_size,
-			       tbl->num_lvl);
-			if (tbl->pg_tbl[0].pg_va_tbl &&
-			    tbl->pg_tbl[0].pg_pa_tbl)
-				printf("%p %p\n",
-			       tbl->pg_tbl[0].pg_va_tbl[0],
-			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
-			for (i = 0; i < tbl->num_lvl - 1; i++) {
-				printf("Level:%d\n", i);
-				tp = &tbl->pg_tbl[i];
-				tp_next = &tbl->pg_tbl[i + 1];
-				tf_dump_link_page_table(tp, tp_next);
-			}
-			printf("\n");
-		}
-	}
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index bdf7d2089..2f5af6060 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -209,8 +209,10 @@ tf_tbl_set(struct tf *tfp,
 	   struct tf_tbl_set_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
 	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -240,9 +242,22 @@ tf_tbl_set(struct tf *tfp,
 	}
 
 	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	rc = tf_msg_set_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
@@ -262,8 +277,10 @@ tf_tbl_get(struct tf *tfp,
 	   struct tf_tbl_get_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
+	uint16_t hcapi_type;
 	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -292,10 +309,24 @@ tf_tbl_get(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Get the entry */
 	rc = tf_msg_get_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 260fb15a6..a1761ad56 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -53,7 +53,6 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	db_cfg.num_elements = parms->num_elements;
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -174,14 +173,15 @@ tf_tcam_alloc(struct tf *tfp,
 }
 
 int
-tf_tcam_free(struct tf *tfp __rte_unused,
-	     struct tf_tcam_free_parms *parms __rte_unused)
+tf_tcam_free(struct tf *tfp,
+	     struct tf_tcam_free_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -253,6 +253,15 @@ tf_tcam_free(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
@@ -281,6 +290,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -338,6 +348,15 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 5090dfd9f..ee5bacc09 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -76,6 +76,10 @@ struct tf_tcam_free_parms {
 	 * [in] Type of the allocation type
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
 	/**
 	 * [in] Index to free
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 16c43eb67..5472a9aac 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -152,9 +152,9 @@ tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
 	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
 		return tf_ident_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_TABLE:
-		return tf_tcam_tbl_2_str(mod_type);
-	case TF_DEVICE_MODULE_TYPE_TCAM:
 		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tcam_tbl_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_EM:
 		return tf_em_tbl_type_2_str(mod_type);
 	default:
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 23/51] net/bnxt: update table get to use new design
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (21 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
                           ` (27 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Move the bulk table get implementation to the new Tbl Module design.
- Update the messages used for bulk table get.
- Retrieve the specified table element using the bulk mechanism
  (see the sketch below).
- Remove deprecated resource definitions.
- Update the device type configuration for P4.
- Update the RM DB HCAPI count check and fix the EM internal and host
  code so that EM DBs can be created correctly.
- Downgrade the unbind logging from error to info in the different
  modules.
- Move RTE RSVD out of tf_resources.h.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
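A minimal caller-side sketch of the reworked bulk get path (illustrative
only, not part of the diff below): the parms field names match the new
tf_bulk_get_tbl_entry() hunk in tf_core.c, while the wrapper itself, the
enum spelling and the DMA buffer handling are assumptions.

/* Sketch: drive the new bulk get for an internal table type.  'buf_pa'
 * is assumed to be the IOVA of a buffer of at least
 * num_entries * entry_sz bytes; tf_core.h and string.h are assumed
 * to be included.
 */
static int
example_bulk_get(struct tf *tfp, enum tf_tbl_type type,
		 uint32_t starting_idx, uint16_t num_entries,
		 uint16_t entry_sz, uint64_t buf_pa)
{
	struct tf_bulk_get_tbl_entry_parms bparms;

	memset(&bparms, 0, sizeof(bparms));	/* zero-init, as in tf_core.c */
	bparms.dir = TF_DIR_RX;
	bparms.type = type;			/* TF_TBL_TYPE_EXT is rejected */
	bparms.starting_idx = starting_idx;
	bparms.num_entries = num_entries;
	bparms.entry_sz_in_bytes = entry_sz;
	bparms.physical_mem_addr = buf_pa;

	return tf_bulk_get_tbl_entry(tfp, &bparms);
}

Both TF_TBL_TYPE_EXT and a device whose ops leave tf_dev_get_bulk_tbl at
NULL return -EOPNOTSUPP, so callers can use the return code to probe for
bulk support.
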
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h      |  250 ++
 drivers/net/bnxt/hcapi/hcapi_cfa.h        |    2 +
 drivers/net/bnxt/meson.build              |    3 +-
 drivers/net/bnxt/tf_core/Makefile         |    2 -
 drivers/net/bnxt/tf_core/tf_common.h      |   55 +-
 drivers/net/bnxt/tf_core/tf_core.c        |   86 +-
 drivers/net/bnxt/tf_core/tf_device.h      |   24 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c   |    4 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h   |    5 +-
 drivers/net/bnxt/tf_core/tf_em.h          |   88 +-
 drivers/net/bnxt/tf_core/tf_em_common.c   |   29 +-
 drivers/net/bnxt/tf_core/tf_em_internal.c |   59 +-
 drivers/net/bnxt/tf_core/tf_identifier.c  |   14 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |   31 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |    8 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  529 ---
 drivers/net/bnxt/tf_core/tf_rm.c          | 3695 ++++-----------------
 drivers/net/bnxt/tf_core/tf_rm.h          |  539 +--
 drivers/net/bnxt/tf_core/tf_rm_new.c      |  907 -----
 drivers/net/bnxt/tf_core/tf_rm_new.h      |  446 ---
 drivers/net/bnxt/tf_core/tf_session.h     |  214 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  478 ++-
 drivers/net/bnxt/tf_core/tf_tbl.h         |  436 ++-
 drivers/net/bnxt/tf_core/tf_tbl_type.c    |  342 --
 drivers/net/bnxt/tf_core/tf_tbl_type.h    |  318 --
 drivers/net/bnxt/tf_core/tf_tcam.c        |   15 +-
 26 files changed, 2337 insertions(+), 6242 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
new file mode 100644
index 000000000..c30e4f49c
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_tbl.h
+ *
+ * Description: header for SWE based on Truflow
+ *
+ * Date:  12/16/19 17:18:12
+ *
+ * Note:  This file was originally generated by tflib_decode.py.
+ *        Remainder is hand coded due to lack of availability of xml for
+ *        addtional tables at this time (EEM Record and union fields)
+ *        additional tables at this time (EEM Record and union fields)
+ **/
+#ifndef _CFA_P40_TBL_H_
+#define _CFA_P40_TBL_H_
+
+#include "cfa_p40_hw.h"
+
+#include "hcapi_cfa_defs.h"
+
+const struct hcapi_cfa_field cfa_p40_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_act_veb_tcam_layout[] = {
+	{CFA_P40_ACT_VEB_TCAM_VALID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_MAC_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_OVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_IVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_lkup_tcam_record_mem_layout[] = {
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_ctxt_remap_mem_layout[] = {
+	{CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS},
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
+	{CFA_P40_EEM_KEY_TBL_VALID_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
+
+};
+#endif /* _CFA_P40_TBL_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index f60af4e56..7a67493bd 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -243,6 +243,8 @@ int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
 			       struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
+			     struct hcapi_cfa_data *mirror);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 35038dc8b..7f3ec6204 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -41,10 +41,9 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
 	'tf_core/tf_shadow_tcam.c',
-	'tf_core/tf_tbl_type.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm_new.c',
+	'tf_core/tf_rm.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index f186741e4..9ba60e1c2 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -23,10 +23,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index ec3bca835..b982203db 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -6,52 +6,11 @@
 #ifndef _TF_COMMON_H_
 #define _TF_COMMON_H_
 
-/* Helper to check the parms */
-#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "%s: session error\n", \
-				    tf_dir_2_str((parms)->dir)); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_TFP_SESSION(tfp) do { \
-		if ((tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
+/* Helpers to perform parameter checks */
 
+/**
+ * Checks 1 parameter against NULL.
+ */
 #define TF_CHECK_PARMS1(parms) do {					\
 		if ((parms) == NULL) {					\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -59,6 +18,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 2 parameters against NULL.
+ */
 #define TF_CHECK_PARMS2(parms1, parms2) do {				\
 		if ((parms1) == NULL || (parms2) == NULL) {		\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -66,6 +28,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 3 parameters against NULL.
+ */
 #define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
 		if ((parms1) == NULL ||					\
 		    (parms2) == NULL ||					\
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8b3e15c8a..8727900c4 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -186,7 +186,7 @@ int tf_insert_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -241,7 +241,7 @@ int tf_delete_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -523,7 +523,7 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 	return -EOPNOTSUPP;
 }
 
@@ -821,7 +821,80 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
+int
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_bulk_parms bparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&bparms, 0, sizeof(struct tf_tbl_get_bulk_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+
+		return rc;
+	}
+
+	/* Internal table type processing */
+
+	if (dev->ops->tf_dev_get_bulk_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	bparms.dir = parms->dir;
+	bparms.type = parms->type;
+	bparms.starting_idx = parms->starting_idx;
+	bparms.num_entries = parms->num_entries;
+	bparms.entry_sz_in_bytes = parms->entry_sz_in_bytes;
+	bparms.physical_mem_addr = parms->physical_mem_addr;
+	rc = dev->ops->tf_dev_get_bulk_tbl(tfp, &bparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get bulk failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_tbl_scope(struct tf *tfp,
 		   struct tf_alloc_tbl_scope_parms *parms)
@@ -830,7 +903,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -861,7 +934,6 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
 int
 tf_free_tbl_scope(struct tf *tfp,
 		  struct tf_free_tbl_scope_parms *parms)
@@ -870,7 +942,7 @@ tf_free_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 2712d1039..93f3627d4 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -8,7 +8,7 @@
 
 #include "tf_core.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -293,7 +293,27 @@ struct tf_dev_ops {
 	 *   - (-EINVAL) on failure.
 	 */
 	int (*tf_dev_get_tbl)(struct tf *tfp,
-			       struct tf_tbl_get_parms *parms);
+			      struct tf_tbl_get_parms *parms);
+
+	/**
+	 * Retrieves the specified table type element using 'bulk'
+	 * mechanism.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get bulk parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_bulk_tbl)(struct tf *tfp,
+				   struct tf_tbl_get_bulk_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 127c655a6..e3526672f 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -8,7 +8,7 @@
 
 #include "tf_device.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
 
@@ -88,6 +88,7 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
+	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
 	.tf_dev_free_tcam = NULL,
 	.tf_dev_alloc_search_tcam = NULL,
@@ -114,6 +115,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
+	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = NULL,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index da6dd65a3..473e4eae5 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -9,7 +9,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_core.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -41,8 +41,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index cf799c200..6bfcbd59e 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -23,6 +23,56 @@
 #define TF_EM_MAX_MASK 0x7FFF
 #define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
 
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Other page sizes are rounded down to the next lower supported
+ * hardware page size.
+ */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
+
+#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+
 /*
  * Used to build GFID:
  *
@@ -80,13 +130,43 @@ struct tf_em_cfg_parms {
 };
 
 /**
- * @page table Table
+ * @page em EM
  *
  * @ref tf_alloc_eem_tbl_scope
  *
  * @ref tf_free_eem_tbl_scope_cb
  *
- * @ref tbl_scope_cb_find
+ * @ref tf_em_insert_int_entry
+ *
+ * @ref tf_em_delete_int_entry
+ *
+ * @ref tf_em_insert_ext_entry
+ *
+ * @ref tf_em_delete_ext_entry
+ *
+ * @ref tf_em_insert_ext_sys_entry
+ *
+ * @ref tf_em_delete_ext_sys_entry
+ *
+ * @ref tf_em_int_bind
+ *
+ * @ref tf_em_int_unbind
+ *
+ * @ref tf_em_ext_common_bind
+ *
+ * @ref tf_em_ext_common_unbind
+ *
+ * @ref tf_em_ext_host_alloc
+ *
+ * @ref tf_em_ext_host_free
+ *
+ * @ref tf_em_ext_system_alloc
+ *
+ * @ref tf_em_ext_system_free
+ *
+ * @ref tf_em_ext_common_free
+ *
+ * @ref tf_em_ext_common_alloc
  */
 
 /**
@@ -328,7 +408,7 @@ int tf_em_ext_host_free(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+			   struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using system memory
@@ -344,7 +424,7 @@ int tf_em_ext_system_alloc(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
+			  struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
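
A quick check on the page-size macros added to tf_em.h above: with the
default BNXT_TF_PAGE_SIZE of TF_EM_PAGE_SIZE_2M the shift is 21, so the
derived size and alignment are 2 MiB. A standalone sketch of that
arithmetic (the EXAMPLE_ names are local to the sketch, not driver code):

#include <assert.h>

/* Mirrors the tf_em.h macro chain for the default 2M selection. */
#define EXAMPLE_PAGE_SHIFT	21	/* TF_EM_PAGE_SIZE_2M */
#define EXAMPLE_PAGE_SIZE	(1 << EXAMPLE_PAGE_SHIFT)
#define EXAMPLE_PAGE_ALIGNMENT	(1 << EXAMPLE_PAGE_SHIFT)

int main(void)
{
	/* 1 << 21 == 2 MiB; unsupported sizes are rounded down to the
	 * next smaller supported shift (12, 13, 16, 18, 20, 21, 22, 30).
	 */
	assert(EXAMPLE_PAGE_SIZE == 2 * 1024 * 1024);
	assert(EXAMPLE_PAGE_ALIGNMENT == EXAMPLE_PAGE_SIZE);
	return 0;
}
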
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index ba6aa7ac1..d0d80daeb 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -194,12 +194,13 @@ tf_em_ext_common_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Ext DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -210,19 +211,29 @@ tf_em_ext_common_bind(struct tf *tfp,
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
 		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Check if we got any request to support EEM; if so,
+		 * we build an EM Ext DB holding Table Scopes.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_TBL_SCOPE] == 0)
+			continue;
+
 		db_cfg.rm_db = &eem_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
-				    "%s: EM DB creation failed\n",
+				    "%s: EM Ext DB creation failed\n",
 				    tf_dir_2_str(i));
 
 			return rc;
 		}
+		db_exists = 1;
 	}
 
-	mem_type = parms->mem_type;
-	init = 1;
+	if (db_exists) {
+		mem_type = parms->mem_type;
+		init = 1;
+	}
 
 	return 0;
 }
@@ -236,13 +247,11 @@ tf_em_ext_common_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Ext DBs created\n");
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 9be91ad5d..1c514747d 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -225,12 +225,13 @@ tf_em_int_bind(struct tf *tfp,
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 	struct tf_session *session;
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Int DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -242,31 +243,35 @@ tf_em_int_bind(struct tf *tfp,
 				  TF_SESSION_EM_POOL_SIZE);
 	}
 
-	/*
-	 * I'm not sure that this code is needed.
-	 * leaving for now until resolved
-	 */
-	if (parms->num_elements) {
-		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
-
-		for (i = 0; i < TF_DIR_MAX; i++) {
-			db_cfg.dir = i;
-			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
-			db_cfg.rm_db = &em_db[i];
-			rc = tf_rm_create_db(tfp, &db_cfg);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: EM DB creation failed\n",
-					    tf_dir_2_str(i));
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
-				return rc;
-			}
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* Check if we got any request for internal EM records;
+		 * if so, we build an EM Int DB for them.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
+			continue;
+
+		db_cfg.rm_db = &em_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM Int DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
 		}
+		db_exists = 1;
 	}
 
-	init = 1;
+	if (db_exists)
+		init = 1;
+
 	return 0;
 }
 
@@ -280,13 +285,11 @@ tf_em_int_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Int DBs created\n");
+		return 0;
 	}
 
 	session = (struct tf_session *)tfp->session->core_data;
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index b197bb271..211371081 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -7,7 +7,7 @@
 
 #include "tf_identifier.h"
 #include "tf_common.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_util.h"
 #include "tfp.h"
 
@@ -35,7 +35,7 @@ tf_ident_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "Identifier DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -65,7 +65,7 @@ tf_ident_bind(struct tf *tfp,
 }
 
 int
-tf_ident_unbind(struct tf *tfp __rte_unused)
+tf_ident_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
@@ -73,13 +73,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
+		TFP_DRV_LOG(INFO,
 			    "No Identifier DBs created\n");
-		return -EINVAL;
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index d8b80bc84..02d8a4971 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -871,26 +871,41 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *params)
+			  enum tf_dir dir,
+			  uint16_t hcapi_type,
+			  uint32_t starting_idx,
+			  uint16_t num_entries,
+			  uint16_t entry_sz_in_bytes,
+			  uint64_t physical_mem_addr)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
 	int data_size = 0;
 
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(params->dir);
-	req.type = tfp_cpu_to_le_32(params->type);
-	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
-	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
+	req.start_index = tfp_cpu_to_le_32(starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
 
-	data_size = params->num_entries * params->entry_sz_in_bytes;
+	data_size = num_entries * entry_sz_in_bytes;
 
-	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+	req.host_addr = tfp_cpu_to_le_64(physical_mem_addr);
 
 	MSG_PREP(parms,
 		 TF_KONG_MB,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8e276d4c0..7432873d7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -11,7 +11,6 @@
 
 #include "tf_tbl.h"
 #include "tf_rm.h"
-#include "tf_rm_new.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -422,6 +421,11 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms);
+			      enum tf_dir dir,
+			      uint16_t hcapi_type,
+			      uint32_t starting_idx,
+			      uint16_t num_entries,
+			      uint16_t entry_sz_in_bytes,
+			      uint64_t physical_mem_addr);
 
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index b7b445102..4688514fc 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,535 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
-/* Common HW resources for all chip variants */
-#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
-					    * entries
-					    */
-#define TF_NUM_PROF_FUNC          128      /* < Number prof_func ID */
-#define TF_NUM_PROF_TCAM         1024      /* < Number entries in profile
-					    * TCAM
-					    */
-#define TF_NUM_EM_PROF_ID          64      /* < Number software EM Profile
-					    * IDs
-					    */
-#define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
-#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
-#define TF_NUM_METER             1024      /* < Number of meter instances */
-#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
-#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
-
-/* Wh+/SR specific HW resources */
-#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
-					    * entries
-					    */
-
-/* SR/SR2 specific HW resources */
-#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
-
-
-/* Thor, SR2 common HW resources */
-#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
-					    * templates
-					    */
-
-/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
-#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
-#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
-#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
-#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
-					    * States
-					    */
-#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
-#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
-#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
-
-/*
- * Common for the Reserved Resource defines below:
- *
- * - HW Resources
- *   For resources where a priority level plays a role, i.e. l2 ctx
- *   tcam entries, both a number of resources and a begin/end pair is
- *   required. The begin/end is used to assure TFLIB gets the correct
- *   priority setting for that resource.
- *
- *   For EM records there is no priority required thus a number of
- *   resources is sufficient.
- *
- *   Example, TCAM:
- *     64 L2 CTXT TCAM entries would in a max 1024 pool be entry
- *     0-63 as HW presents 0 as the highest priority entry.
- *
- * - SRAM Resources
- *   Handled as regular resources as there is no priority required.
- *
- * Common for these resources is that they are handled per direction,
- * rx/tx.
- */
-
-/* HW Resources */
-
-/* L2 CTX */
-#define TF_RSVD_L2_CTXT_TCAM_RX                   64
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_RX - 1)
-#define TF_RSVD_L2_CTXT_TCAM_TX                   960
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TX - 1)
-
-/* Profiler */
-#define TF_RSVD_PROF_FUNC_RX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
-#define TF_RSVD_PROF_FUNC_TX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
-
-#define TF_RSVD_PROF_TCAM_RX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
-#define TF_RSVD_PROF_TCAM_TX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
-
-/* EM Profiles IDs */
-#define TF_RSVD_EM_PROF_ID_RX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ then SR */
-#define TF_RSVD_EM_PROF_ID_TX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ then SR */
-
-/* EM Records */
-#define TF_RSVD_EM_REC_RX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
-#define TF_RSVD_EM_REC_TX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
-
-/* Wildcard */
-#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
-#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
-
-#define TF_RSVD_WC_TCAM_RX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_WC_TCAM_END_IDX_RX                63
-#define TF_RSVD_WC_TCAM_TX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_WC_TCAM_END_IDX_TX                63
-
-#define TF_RSVD_METER_PROF_RX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_PROF_END_IDX_RX             0
-#define TF_RSVD_METER_PROF_TX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_PROF_END_IDX_TX             0
-
-#define TF_RSVD_METER_INST_RX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_INST_END_IDX_RX             0
-#define TF_RSVD_METER_INST_TX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_INST_END_IDX_TX             0
-
-/* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
-#define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
-#define TF_RSVD_MIRROR_END_IDX_TX                 0
-
-/* UPAR */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_UPAR_RX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
-#define TF_RSVD_UPAR_END_IDX_RX                   0
-#define TF_RSVD_UPAR_TX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
-#define TF_RSVD_UPAR_END_IDX_TX                   0
-
-/* Source Properties */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SP_TCAM_RX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_SP_TCAM_END_IDX_RX                0
-#define TF_RSVD_SP_TCAM_TX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_SP_TCAM_END_IDX_TX                0
-
-/* L2 Func */
-#define TF_RSVD_L2_FUNC_RX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
-#define TF_RSVD_L2_FUNC_END_IDX_RX                0
-#define TF_RSVD_L2_FUNC_TX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
-#define TF_RSVD_L2_FUNC_END_IDX_TX                0
-
-/* FKB */
-#define TF_RSVD_FKB_RX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
-#define TF_RSVD_FKB_END_IDX_RX                    0
-#define TF_RSVD_FKB_TX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
-#define TF_RSVD_FKB_END_IDX_TX                    0
-
-/* TBL Scope */
-#define TF_RSVD_TBL_SCOPE_RX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
-#define TF_RSVD_TBL_SCOPE_TX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
-
-/* EPOCH0 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH0_RX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH0_END_IDX_RX                 0
-#define TF_RSVD_EPOCH0_TX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH0_END_IDX_TX                 0
-
-/* EPOCH1 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH1_RX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH1_END_IDX_RX                 0
-#define TF_RSVD_EPOCH1_TX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH1_END_IDX_TX                 0
-
-/* METADATA */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_METADATA_RX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
-#define TF_RSVD_METADATA_END_IDX_RX               0
-#define TF_RSVD_METADATA_TX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
-#define TF_RSVD_METADATA_END_IDX_TX               0
-
-/* CT_STATE */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_CT_STATE_RX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
-#define TF_RSVD_CT_STATE_END_IDX_RX               0
-#define TF_RSVD_CT_STATE_TX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
-#define TF_RSVD_CT_STATE_END_IDX_TX               0
-
-/* RANGE_PROF */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_PROF_RX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
-#define TF_RSVD_RANGE_PROF_TX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
-
-/* RANGE_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_ENTRY_RX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
-#define TF_RSVD_RANGE_ENTRY_TX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
-
-/* LAG_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_LAG_ENTRY_RX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
-#define TF_RSVD_LAG_ENTRY_TX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
-
-
-/* SRAM - Resources
- * Limited to the types that CFA provides.
- */
-#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
-
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SRAM_MCG_RX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
-/* Multicast Group on TX is not supported */
-#define TF_RSVD_SRAM_MCG_TX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
-
-/* First encap of 8B RX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
-/* First encap of 8B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
-
-#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
-/* First encap of 16B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
-
-/* Encap of 64B on RX is not supported */
-#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
-/* First encap of 64B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_SP_SMAC_RX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
-#define TF_RSVD_SRAM_SP_SMAC_TX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
-
-/* SRAM SP IPV4 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
-
-/* SRAM SP IPV6 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
-/* Not yet supported fully in infra */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
-
-#define TF_RSVD_SRAM_COUNTER_64B_RX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_COUNTER_64B_TX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
-
-#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
-
-#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
-
-/* HW Resource Pool names */
-
-#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
-#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
-#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
-
-#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
-#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
-#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
-
-#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
-#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
-#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
-
-#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
-#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
-#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
-
-#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
-
-#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
-#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
-#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
-
-#define TF_METER_PROF_POOL_NAME           meter_prof_pool
-#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
-#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
-
-#define TF_METER_INST_POOL_NAME           meter_inst_pool
-#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
-#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
-
-#define TF_MIRROR_POOL_NAME               mirror_pool
-#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
-#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
-
-#define TF_UPAR_POOL_NAME                 upar_pool
-#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
-#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
-
-#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
-#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
-#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
-
-#define TF_FKB_POOL_NAME                  fkb_pool
-#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
-#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
-
-#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
-#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
-#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
-
-#define TF_L2_FUNC_POOL_NAME              l2_func_pool
-#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
-#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
-
-#define TF_EPOCH0_POOL_NAME               epoch0_pool
-#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
-#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
-
-#define TF_EPOCH1_POOL_NAME               epoch1_pool
-#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
-#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
-
-#define TF_METADATA_POOL_NAME             metadata_pool
-#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
-#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
-
-#define TF_CT_STATE_POOL_NAME             ct_state_pool
-#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
-#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
-
-#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
-#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
-#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
-
-#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
-#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
-#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
-
-#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
-#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
-#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
-
-/* SRAM Resource Pool names */
-#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
-#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
-#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
-
-#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
-#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
-#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
-
-#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
-#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
-#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
-
-#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
-#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
-#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
-
-#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
-#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
-#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
-
-#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
-#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
-#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
-
-#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
-#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
-#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
-
-#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
-#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
-#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
-
-#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
-#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
-#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
-
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
-
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
-
-/* Sw Resource Pool Names */
-
-#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
-#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
-#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
-
-
-/** HW Resource types
- */
-enum tf_resource_type_hw {
-	/* Common HW resources for all chip variants */
-	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-	TF_RESC_TYPE_HW_PROF_FUNC,
-	TF_RESC_TYPE_HW_PROF_TCAM,
-	TF_RESC_TYPE_HW_EM_PROF_ID,
-	TF_RESC_TYPE_HW_EM_REC,
-	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-	TF_RESC_TYPE_HW_WC_TCAM,
-	TF_RESC_TYPE_HW_METER_PROF,
-	TF_RESC_TYPE_HW_METER_INST,
-	TF_RESC_TYPE_HW_MIRROR,
-	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/SR specific HW resources */
-	TF_RESC_TYPE_HW_SP_TCAM,
-	/* SR/SR2 specific HW resources */
-	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Thor, SR2 common HW resources */
-	TF_RESC_TYPE_HW_FKB,
-	/* SR2 specific HW resources */
-	TF_RESC_TYPE_HW_TBL_SCOPE,
-	TF_RESC_TYPE_HW_EPOCH0,
-	TF_RESC_TYPE_HW_EPOCH1,
-	TF_RESC_TYPE_HW_METADATA,
-	TF_RESC_TYPE_HW_CT_STATE,
-	TF_RESC_TYPE_HW_RANGE_PROF,
-	TF_RESC_TYPE_HW_RANGE_ENTRY,
-	TF_RESC_TYPE_HW_LAG_ENTRY,
-	TF_RESC_TYPE_HW_MAX
-};
-
-/** HW Resource types
- */
-enum tf_resource_type_sram {
-	TF_RESC_TYPE_SRAM_FULL_ACTION,
-	TF_RESC_TYPE_SRAM_MCG,
-	TF_RESC_TYPE_SRAM_ENCAP_8B,
-	TF_RESC_TYPE_SRAM_ENCAP_16B,
-	TF_RESC_TYPE_SRAM_ENCAP_64B,
-	TF_RESC_TYPE_SRAM_SP_SMAC,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-	TF_RESC_TYPE_SRAM_COUNTER_64B,
-	TF_RESC_TYPE_SRAM_NAT_SPORT,
-	TF_RESC_TYPE_SRAM_NAT_DPORT,
-	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-	TF_RESC_TYPE_SRAM_MAX
-};
 
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0a84e64d..e0469b653 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -7,3171 +7,916 @@
 
 #include <rte_common.h>
 
+#include <cfa_resource_types.h>
+
 #include "tf_rm.h"
-#include "tf_core.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
-#include "tf_resources.h"
-#include "tf_msg.h"
-#include "bnxt.h"
+#include "tf_device.h"
 #include "tfp.h"
+#include "tf_msg.h"
 
 /**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
- *   enum tf_dir               dir         - Direction to process
- *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
- *                                           in the hw query structure
- *   define                    def_value   - Define value to check against
- *   uint32_t                 *eflag       - Result of the check
- */
-#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
-	if ((dir) == TF_DIR_RX) {					      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	} else {							      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	}								      \
-} while (0)
-
-/**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
- *   enum tf_dir                dir         - Direction to process
- *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
- *                                            in the hw query structure
- *   define                     def_value   - Define value to check against
- *   uint32_t                  *eflag       - Result of the check
- */
-#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
-	if ((dir) == TF_DIR_RX) {					       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	} else {							       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	}								       \
-} while (0)
-
-/**
- * Internal macro to convert a reserved resource define name to be
- * direction specific.
- *
- * Parameters:
- *   enum tf_dir    dir         - Direction to process
- *   string         type        - Type name to append RX or TX to
- *   string         dtype       - Direction specific type
- *
- *
+ * Generic RM Element data type that an RM DB is built upon.
  */
-#define TF_RESC_RSVD(dir, type, dtype) do {	\
-		if ((dir) == TF_DIR_RX)		\
-			(dtype) = type ## _RX;	\
-		else				\
-			(dtype) = type ## _TX;	\
-	} while (0)
-
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
-{
-	switch (hw_type) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		return "L2 ctxt tcam";
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		return "Profile Func";
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		return "Profile tcam";
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		return "EM profile id";
-	case TF_RESC_TYPE_HW_EM_REC:
-		return "EM record";
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		return "WC tcam profile id";
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		return "WC tcam";
-	case TF_RESC_TYPE_HW_METER_PROF:
-		return "Meter profile";
-	case TF_RESC_TYPE_HW_METER_INST:
-		return "Meter instance";
-	case TF_RESC_TYPE_HW_MIRROR:
-		return "Mirror";
-	case TF_RESC_TYPE_HW_UPAR:
-		return "UPAR";
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		return "Source properties tcam";
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		return "L2 Function";
-	case TF_RESC_TYPE_HW_FKB:
-		return "FKB";
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		return "Table scope";
-	case TF_RESC_TYPE_HW_EPOCH0:
-		return "EPOCH0";
-	case TF_RESC_TYPE_HW_EPOCH1:
-		return "EPOCH1";
-	case TF_RESC_TYPE_HW_METADATA:
-		return "Metadata";
-	case TF_RESC_TYPE_HW_CT_STATE:
-		return "Connection tracking state";
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		return "Range profile";
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		return "Range entry";
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		return "LAG";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
-{
-	switch (sram_type) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		return "Full action";
-	case TF_RESC_TYPE_SRAM_MCG:
-		return "MCG";
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		return "Encap 8B";
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		return "Encap 16B";
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		return "Encap 64B";
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		return "Source properties SMAC";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		return "Source properties SMAC IPv4";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		return "Source properties IPv6";
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		return "Counter 64B";
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		return "NAT source port";
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		return "NAT destination port";
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		return "NAT source IPv4";
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		return "NAT destination IPv4";
-	default:
-		return "Invalid identifier";
-	}
-}
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
 
-/**
- * Helper function to perform a HW HCAPI resource type lookup against
- * the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
- */
-static int
-tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
-{
-	uint32_t value = -EOPNOTSUPP;
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
 
-	switch (index) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_REC:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_INST:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
-		break;
-	case TF_RESC_TYPE_HW_MIRROR:
-		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
-		break;
-	case TF_RESC_TYPE_HW_UPAR:
-		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
-		break;
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_FKB:
-		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
-		break;
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH0:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH1:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
-		break;
-	case TF_RESC_TYPE_HW_METADATA:
-		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
-		break;
-	case TF_RESC_TYPE_HW_CT_STATE:
-		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
-		break;
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
-		break;
-	default:
-		break;
-	}
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
 
-	return value;
-}
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
 
 /**
- * Helper function to perform a SRAM HCAPI resource type lookup
- * against the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
+ * TF RM DB definition
  */
-static int
-tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
-{
-	uint32_t value = -EOPNOTSUPP;
-
-	switch (index) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
-		break;
-	case TF_RESC_TYPE_SRAM_MCG:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
-		break;
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
-		break;
-	default:
-		break;
-	}
-
-	return value;
-}
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
 
-/**
- * Helper function to print all the HW resource qcaps errors reported
- * in the error_flag.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] error_flag
- *   Pointer to the hw error flags created at time of the query check
- */
-static void
-tf_rm_print_hw_qcaps_error(enum tf_dir dir,
-			   struct tf_rm_hw_query *hw_query,
-			   uint32_t *error_flag)
-{
-	int i;
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
 
-	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
 
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_hw_2_str(i),
-				    hw_query->hw_query[i].max,
-				    tf_rm_rsvd_hw_value(dir, i));
-	}
-}
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
 
 /**
- * Helper function to print all the SRAM resource qcaps errors
- * reported in the error_flag.
+ * Adjust an index according to the allocation information.
  *
- * [in] dir
- *   Receive or transmit direction
+ * All resources are controlled in a 0 based pool. Some resources, by
+ * design, are not 0 based, e.g. Full Action Records (SRAM); thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] error_flag
- *   Pointer to the sram error flags created at time of the query check
- */
-static void
-tf_rm_print_sram_qcaps_error(enum tf_dir dir,
-			     struct tf_rm_sram_query *sram_query,
-			     uint32_t *error_flag)
-{
-	int i;
-
-	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_sram_2_str(i),
-				    sram_query->sram_query[i].max,
-				    tf_rm_rsvd_sram_value(dir, i));
-	}
-}
-
-/**
- * Performs a HW resource check between what firmware capability
- * reports and what the core expects is available.
+ * [in] cfg
+ *   Pointer to the DB configuration
  *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
  *
- * [in] query
- *   Pointer to HW Query data structure. Query holds what the firmware
- *   offers of the HW resources.
+ * [in] count
+ *   Number of DB configuration elements
  *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
  *
  * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
-			    enum tf_dir dir,
-			    uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-			     TF_RSVD_L2_CTXT_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_FUNC,
-			     TF_RSVD_PROF_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_TCAM,
-			     TF_RSVD_PROF_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_PROF_ID,
-			     TF_RSVD_EM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_REC,
-			     TF_RSVD_EM_REC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-			     TF_RSVD_WC_TCAM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM,
-			     TF_RSVD_WC_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_PROF,
-			     TF_RSVD_METER_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_INST,
-			     TF_RSVD_METER_INST,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_MIRROR,
-			     TF_RSVD_MIRROR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_UPAR,
-			     TF_RSVD_UPAR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_SP_TCAM,
-			     TF_RSVD_SP_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_FUNC,
-			     TF_RSVD_L2_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_FKB,
-			     TF_RSVD_FKB,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_TBL_SCOPE,
-			     TF_RSVD_TBL_SCOPE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH0,
-			     TF_RSVD_EPOCH0,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH1,
-			     TF_RSVD_EPOCH1,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METADATA,
-			     TF_RSVD_METADATA,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_CT_STATE,
-			     TF_RSVD_CT_STATE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_PROF,
-			     TF_RSVD_RANGE_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_ENTRY,
-			     TF_RSVD_RANGE_ENTRY,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_LAG_ENTRY,
-			     TF_RSVD_LAG_ENTRY,
-			     error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Performs a SRAM resource check between what firmware capability
- * reports and what the core expects is available.
- *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
- *
- * [in] query
- *   Pointer to SRAM Query data structure. Query holds what the
- *   firmware offers of the SRAM resources.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
- *
- * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
-			      enum tf_dir dir,
-			      uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_FULL_ACTION,
-			       TF_RSVD_SRAM_FULL_ACTION,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_MCG,
-			       TF_RSVD_SRAM_MCG,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_8B,
-			       TF_RSVD_SRAM_ENCAP_8B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_16B,
-			       TF_RSVD_SRAM_ENCAP_16B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_64B,
-			       TF_RSVD_SRAM_ENCAP_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC,
-			       TF_RSVD_SRAM_SP_SMAC,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			       TF_RSVD_SRAM_SP_SMAC_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			       TF_RSVD_SRAM_SP_SMAC_IPV6,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_COUNTER_64B,
-			       TF_RSVD_SRAM_COUNTER_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_SPORT,
-			       TF_RSVD_SRAM_NAT_SPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_DPORT,
-			       TF_RSVD_SRAM_NAT_DPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-			       TF_RSVD_SRAM_NAT_S_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-			       TF_RSVD_SRAM_NAT_D_IPV4,
-			       error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Internal function to mark pool entries used.
+ *     0          - Success
+ *   -EOPNOTSUPP - Operation not supported
  */
 static void
-tf_rm_reserve_range(uint32_t count,
-		    uint32_t rsv_begin,
-		    uint32_t rsv_end,
-		    uint32_t max,
-		    struct bitalloc *pool)
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
 {
-	uint32_t i;
+	int i;
+	uint16_t cnt = 0;
 
-	/* If no resources has been requested we mark everything
-	 * 'used'
-	 */
-	if (count == 0)	{
-		for (i = 0; i < max; i++)
-			ba_alloc_index(pool, i);
-	} else {
-		/* Support 2 main modes
-		 * Reserved range starts from bottom up (with
-		 * pre-reserved value or not)
-		 * - begin = 0 to end xx
-		 * - begin = 1 to end xx
-		 *
-		 * Reserved range starts from top down
-		 * - begin = yy to end max
-		 */
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
 
-		/* Bottom up check, start from 0 */
-		if (rsv_begin == 0) {
-			for (i = rsv_end + 1; i < max; i++)
-				ba_alloc_index(pool, i);
-		}
-
-		/* Bottom up check, start from 1 or higher OR
-		 * Top Down
+		/* Only log a msg if a reservation is attempted for a
+		 * type that is not supported. We ignore the EM module
+		 * as it uses a split configuration array and would
+		 * thus fail this type of check.
 		 */
-		if (rsv_begin >= 1) {
-			/* Allocate from 0 until start */
-			for (i = 0; i < rsv_begin; i++)
-				ba_alloc_index(pool, i);
-
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
-}
-
-/**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
- */
-static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
-	uint32_t end = 0;
-
-	/* l2 ctxt rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
-
-	/* l2 ctxt tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the profile tcam and profile func
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
-	uint32_t end = 0;
-
-	/* profile func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
-
-	/* profile func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_PROF_TCAM;
-
-	/* profile tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
-
-	/* profile tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the em profile id allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_em_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
-	uint32_t end = 0;
-
-	/* em prof id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
-
-	/* em prof id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the wildcard tcam and profile id
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_wc(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
-	uint32_t end = 0;
-
-	/* wc profile id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
-
-	/* wc profile id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_WC_TCAM;
-
-	/* wc tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_RX);
-
-	/* wc tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the meter resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_meter(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
-	uint32_t end = 0;
-
-	/* meter profiles rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_RX);
-
-	/* meter profiles tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_METER_INST;
-
-	/* meter rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_RX);
-
-	/* meter tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the mirror resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_mirror(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
-	uint32_t end = 0;
-
-	/* mirror rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_RX);
-
-	/* mirror tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the upar resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_upar(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_UPAR;
-	uint32_t end = 0;
-
-	/* upar rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_RX);
-
-	/* upar tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp tcam resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
-	uint32_t end = 0;
-
-	/* sp tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_RX);
-
-	/* sp tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 func resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
-	uint32_t end = 0;
-
-	/* l2 func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
-
-	/* l2 func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the fkb resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_fkb(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_FKB;
-	uint32_t end = 0;
-
-	/* fkb rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_RX);
-
-	/* fkb tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the tbld scope resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
-	uint32_t end = 0;
-
-	/* tbl scope rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
-
-	/* tbl scope tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 epoch resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_epoch(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
-	uint32_t end = 0;
-
-	/* epoch0 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_RX);
-
-	/* epoch0 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_EPOCH1;
-
-	/* epoch1 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_RX);
-
-	/* epoch1 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the metadata resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_metadata(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METADATA;
-	uint32_t end = 0;
-
-	/* metadata rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_RX);
-
-	/* metadata tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the ct state resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_ct_state(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
-	uint32_t end = 0;
-
-	/* ct state rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_RX);
-
-	/* ct state tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the range resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_range(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
-	uint32_t end = 0;
-
-	/* range profile rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
-
-	/* range profile tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
-
-	/* range entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
-
-	/* range entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the lag resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_lag_entry(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
-	uint32_t end = 0;
-
-	/* lag entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
-
-	/* lag entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the full action resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
-	uint16_t end = 0;
-
-	/* full action rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_RX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
-
-	/* full action tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_TX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the multicast group resources
- * allocated that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
-	uint16_t end = 0;
-
-	/* multicast group rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_MCG_RX,
-			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
-
-	/* Multicast Group on TX is not supported */
-}
-
-/**
- * Internal function to mark all the encap resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_encap(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
-	uint16_t end = 0;
-
-	/* encap 8b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_RX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
-
-	/* encap 8b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_TX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
-
-	/* encap 16b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_RX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
-
-	/* encap 16b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_TX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
-
-	/* Encap 64B not supported on RX */
-
-	/* Encap 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_64B_TX,
-			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_sp(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
-	uint16_t end = 0;
-
-	/* sp smac rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_RX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
-
-	/* sp smac tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_TX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-
-	/* SP SMAC IPv4 not supported on RX */
-
-	/* sp smac ipv4 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-
-	/* SP SMAC IPv6 not supported on RX */
-
-	/* sp smac ipv6 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the stat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_stats(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
-	uint16_t end = 0;
-
-	/* counter 64b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_RX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
-
-	/* counter 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_TX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the nat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_nat(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
-	uint16_t end = 0;
-
-	/* nat source port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_RX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
-
-	/* nat source port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_TX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
-
-	/* nat destination port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_RX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
-
-	/* nat destination port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_TX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-
-	/* nat source port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
-
-	/* nat source ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-
-	/* nat destination port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
-
-	/* nat destination ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
-}
-
-/**
- * Internal function used to validate the HW allocated resources
- * against the requested values.
- */
-static int
-tf_rm_hw_alloc_validate(enum tf_dir dir,
-			struct tf_rm_hw_alloc *hw_alloc,
-			struct tf_rm_entry *hw_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				hw_alloc->hw_num[i],
-				hw_entry[i].stride);
-			error = -1;
-		}
-	}
-
-	return error;
-}
-
-/**
- * Internal function used to validate the SRAM allocated resources
- * against the requested values.
- */
-static int
-tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
-			  struct tf_rm_sram_alloc *sram_alloc,
-			  struct tf_rm_entry *sram_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				sram_alloc->sram_num[i],
-				sram_entry[i].stride);
-			error = -1;
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+
 		}
 	}
 
-	return error;
+	*valid_count = cnt;
 }
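
For reference, a minimal standalone sketch (not part of the patch) of the
counting step above: only HCAPI-managed elements with a non-zero request
are counted, and a request against an unsupported (NULL-configured) element
is reported. All names below (count_hcapi_reservations, enum cfg_type) are
simplified stand-ins, not the driver's structures, and the EM special case
is omitted.

#include <stdint.h>
#include <stdio.h>

enum cfg_type { CFG_NULL, CFG_HCAPI, CFG_PRIVATE };

/* Count elements that will actually be part of the FW request */
static uint16_t
count_hcapi_reservations(const enum cfg_type *cfg,
			 const uint16_t *req, int num)
{
	uint16_t cnt = 0;
	int i;

	for (i = 0; i < num; i++) {
		if (cfg[i] == CFG_HCAPI && req[i] > 0)
			cnt++;
		else if (cfg[i] == CFG_NULL && req[i] > 0)
			printf("element %d: allocation not supported\n", i);
	}
	return cnt;
}

int main(void)
{
	enum cfg_type cfg[] = { CFG_HCAPI, CFG_NULL, CFG_HCAPI };
	uint16_t req[] = { 4, 2, 0 };

	/* Expect 1: only the first element is HCAPI with a request */
	printf("hcapi items: %u\n",
	       (unsigned)count_hcapi_reservations(cfg, req, 3));
	return 0;
}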
 
 /**
- * Internal function used to mark all the HW resources allocated that
- * Truflow does not own.
+ * Resource Manager base index adjustment definitions.
  */
-static void
-tf_rm_reserve_hw(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_l2_ctxt(tfs);
-	tf_rm_rsvd_prof(tfs);
-	tf_rm_rsvd_em_prof(tfs);
-	tf_rm_rsvd_wc(tfs);
-	tf_rm_rsvd_mirror(tfs);
-	tf_rm_rsvd_meter(tfs);
-	tf_rm_rsvd_upar(tfs);
-	tf_rm_rsvd_sp_tcam(tfs);
-	tf_rm_rsvd_l2_func(tfs);
-	tf_rm_rsvd_fkb(tfs);
-	tf_rm_rsvd_tbl_scope(tfs);
-	tf_rm_rsvd_epoch(tfs);
-	tf_rm_rsvd_metadata(tfs);
-	tf_rm_rsvd_ct_state(tfs);
-	tf_rm_rsvd_range(tfs);
-	tf_rm_rsvd_lag_entry(tfs);
-}
-
-/**
- * Internal function used to mark all the SRAM resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_reserve_sram(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_sram_full_action(tfs);
-	tf_rm_rsvd_sram_mcg(tfs);
-	tf_rm_rsvd_sram_encap(tfs);
-	tf_rm_rsvd_sram_sp(tfs);
-	tf_rm_rsvd_sram_stats(tfs);
-	tf_rm_rsvd_sram_nat(tfs);
-}
-
-/**
- * Internal function used to allocate and validate all HW resources.
- */
-static int
-tf_rm_allocate_validate_hw(struct tf *tfp,
-			   enum tf_dir dir)
-{
-	int rc;
-	int i;
-	struct tf_rm_hw_query hw_query;
-	struct tf_rm_hw_alloc hw_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *hw_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		hw_entries = tfs->resc.rx.hw_entry;
-	else
-		hw_entries = tfs->resc.tx.hw_entry;
-
-	/* Query for Session HW Resources */
-
-	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
-	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed,"
-			"error_flag:0x%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
-		goto cleanup;
-	}
-
-	/* Post process HW capability */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
-		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
-
-	/* Allocate Session HW Resources */
-	/* Perform HW allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-
- cleanup:
-
-	return -1;
-}
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
 
 /**
- * Internal function used to allocate and validate all SRAM resources.
+ * Adjust an index according to the allocation information.
  *
- * [in] tfp
- *   Pointer to TF handle
+ * All resources are controlled in a 0 based pool. Some resources, by
+ * design, are not 0 based, e.g. Full Action Records (SRAM); thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
  *
  * Returns:
- *   0  - Success
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_allocate_validate_sram(struct tf *tfp,
-			     enum tf_dir dir)
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
 {
-	int rc;
-	int i;
-	struct tf_rm_sram_query sram_query;
-	struct tf_rm_sram_alloc sram_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *sram_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
-
-	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed,"
-			"error_flag:%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
-	}
+	int rc = 0;
+	uint32_t base_index;
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	base_index = db[db_index].alloc.entry.start;
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed,"
-			    " rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
 	}
 
-	return 0;
-
- cleanup:
-
-	return -1;
+	return rc;
 }
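
A minimal standalone sketch of the adjustment tf_rm_adjust_index() performs:
the pool hands out 0 based ids, so the element's allocation start is added
when an index goes out and subtracted when it comes back. The names below
(elem_alloc, add_base, rm_base) are illustrative stand-ins only, with an
assumed start value of 8192.

#include <assert.h>
#include <stdint.h>

struct elem_alloc {
	uint32_t start;	/* first HW index owned by this element */
};

static uint32_t add_base(const struct elem_alloc *e, uint32_t idx)
{
	return idx + e->start;	/* pool id -> HW index */
}

static uint32_t rm_base(const struct elem_alloc *e, uint32_t idx)
{
	return idx - e->start;	/* HW index -> pool id */
}

int main(void)
{
	struct elem_alloc full_act = { .start = 8192 };

	/* Pool id 5 maps to HW index 8197 and back again */
	assert(add_base(&full_act, 5) == 8197);
	assert(rm_base(&full_act, 8197) == 5);
	return 0;
}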
 
 /**
- * Helper function used to prune a HW resource array to only hold
- * elements that needs to be flushed.
- *
- * [in] tfs
- *   Session handle
+ * Logs an array of found residual entries to the console.
  *
  * [in] dir
  *   Receive or transmit direction
  *
- * [in] hw_entries
- *   Master HW Resource database
+ * [in] type
+ *   Type of Device Module
  *
- * [in/out] flush_entries
- *   Pruned HW Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master HW
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in] count
+ *   Number of entries in the residual array
  *
- * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ * [in] residuals
+ *   Pointer to an array of residual entries. The array is indexed
+ *   the same as the DB in which this function is used. Each entry
+ *   holds the residual value for that entry.
  */
-static int
-tf_rm_hw_to_flush(struct tf_session *tfs,
-		  enum tf_dir dir,
-		  struct tf_rm_entry *hw_entries,
-		  struct tf_rm_entry *flush_entries)
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
 {
-	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
+	int i;
 
-	/* Check all the hw resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
+	/* Walk the residual array and log to the console the types
+	 * that weren't cleaned up.
 	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_CTXT_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_INST_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_MIRROR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_UPAR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SP_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_FKB_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
-		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_TBL_SCOPE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
-	} else {
-		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
-			    tf_dir_2_str(dir),
-			    free_cnt,
-			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH0_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH1_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METADATA_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_CT_STATE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_LAG_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
 	}
-
-	return flush_rc;
 }
 
 /**
- * Helper function used to prune a SRAM resource array to only hold
- * elements that needs to be flushed.
+ * Performs a check of the passed in DB for any lingering elements. If
+ * a resource type was found to not have been cleaned up by the caller
+ * then its residual values are recorded, logged and passed back in an
+ * allocated reservation array that the caller can pass to the FW for
+ * cleanup.
  *
- * [in] tfs
- *   Session handle
- *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
  *
- * [in] hw_entries
- *   Master SRAM Resource data base
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
  *
- * [in/out] flush_entries
- *   Pruned SRAM Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master SRAM
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in/out] resv
+ *   Pointer to a pointer to a reservation array. The reservation
+ *   array is allocated after the residual scan and holds any found
+ *   residual entries. Thus it can be smaller than the DB that the
+ *   check was performed on. The array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating whether residuals were present
+ *   in the DB
  *
  * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_sram_to_flush(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    struct tf_rm_entry *sram_entries,
-		    struct tf_rm_entry *flush_entries)
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
 {
 	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
-
-	/* Check all the sram resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
-	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_FULL_ACTION_POOL_NAME,
-			rc);
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
-	} else {
-		flush_rc = 1;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
 	}
 
-	/* Only pools for RX direction */
-	if (dir == TF_DIR_RX) {
-		TF_RM_GET_POOLS_RX(tfs, &pool,
-				   TF_SRAM_MCG_POOL_NAME);
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
 		if (rc)
 			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
-		} else {
-			flush_rc = 1;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
 		}
-	} else {
-		/* Always prune TX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		*resv_size = found;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_8B_POOL_NAME,
-			rc);
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
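
A minimal standalone sketch of the residual scan above: per-type in-use
counts are collected, logged, and condensed into a compact reservation-style
array holding only the types with outstanding entries. The names below
(check_residuals, resv_entry) are simplified stand-ins; the driver walks the
RM DB and uses tfp_calloc() rather than plain calloc().

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct resv_entry {
	uint16_t type;
	uint16_t count;
};

static int
check_residuals(const uint16_t *inuse, int num,
		struct resv_entry **resv, uint16_t *resv_size)
{
	int i, f = 0;
	uint16_t found = 0;

	/* First pass: how many types still have entries in use? */
	for (i = 0; i < num; i++)
		if (inuse[i])
			found++;

	*resv_size = found;
	*resv = NULL;
	if (!found)
		return 0;

	/* Second pass: build the reduced array and log each residual */
	*resv = calloc(found, sizeof(**resv));
	if (*resv == NULL)
		return -1;

	for (i = 0; i < num; i++) {
		if (!inuse[i])
			continue;
		(*resv)[f].type = (uint16_t)i;
		(*resv)[f].count = inuse[i];
		printf("type %d was not cleaned up, %u outstanding\n",
		       i, (unsigned)inuse[i]);
		f++;
	}
	return 0;
}

int main(void)
{
	uint16_t inuse[] = { 0, 3, 0, 1 };
	struct resv_entry *resv;
	uint16_t size;

	if (check_residuals(inuse, 4, &resv, &size) == 0) {
		printf("residual types: %u\n", (unsigned)size);
		free(resv);
	}
	return 0;
}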
+
+int
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
+{
+	int rc;
+	int i;
+	int j;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_16B_POOL_NAME,
-			rc);
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-	}
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_SP_SMAC_POOL_NAME,
-			rc);
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
-	}
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against DB requirements. However, as a
+	 * DB can hold elements that are not HCAPI we can reduce the
+	 * req msg content by removing those from the request, while
+	 * the DB still holds them all so as to give fast lookups. We
+	 * can also remove entries for which no elements are requested.
+	 */
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
+
+	/* Handle the case where a DB create request really ends up
+	 * being empty. This is unsupported, though it is possible that
+	 * no resources are necessary for a 'direction'.
+	 */
+	if (hcapi_items == 0) {
+		TFP_DRV_LOG(ERR,
+			"%s: DB create request for Zero elements, DB Type:%s\n",
+			tf_dir_2_str(parms->dir),
+			tf_device_module_type_2_str(parms->type));
+
+		parms->rm_db = NULL;
+		return -ENOMEM;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_STATS_64B_POOL_NAME,
-			rc);
+	/* Alloc request, alignment already set */
+	cparms.nitems = (size_t)hcapi_items;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_SPORT_POOL_NAME,
-			rc);
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
-	} else {
-		flush_rc = 1;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		/* Skip any non HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			/* Only perform reservation for entries that
+			 * have been requested
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_cnt[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		}
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_DPORT_POOL_NAME,
-			rc);
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       hcapi_items,
+				       req,
+				       resv);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_S_IPV4_POOL_NAME,
-			rc);
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db = (void *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_D_IPV4_POOL_NAME,
-			rc);
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
-	return flush_rc;
-}
+	db = rm_db->db;
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
 
-/**
- * Helper function used to generate an error log for the HW types that
- * needs to be flushed. The types should have been cleaned up ahead of
- * invoking tf_close_session.
- *
- * [in] hw_entries
- *   HW Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_hw_flush(enum tf_dir dir,
-		   struct tf_rm_entry *hw_entries)
-{
-	int i;
+		/* Skip any non HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
 
-	/* Walk the hw flush array and log the types that wasn't
-	 * cleaned up.
-	 */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entries[i].stride != 0)
+		/* If the element didn't request an allocation, there is
+		 * no need to create a pool or to verify the reservation.
+		 */
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element had requested an allocation and that
+		 * allocation was a success (full amount) then
+		 * allocate the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_hw_2_str(i));
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    db[i].cfg_type);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
+			goto fail;
+		}
 	}
+
+	rm_db->num_entries = parms->num_elements;
+	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
+	*parms->rm_db = (void *)rm_db;
+
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
+	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
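
A minimal standalone sketch of the request-building step in
tf_rm_create_db(): every element with a non-zero request is validated
against the queried maximum and copied into a reduced request array with
min == max (all or nothing). Types below are simplified stand-ins and, for
brevity, the capability array is indexed per element rather than per
hcapi_type as in the driver.

#include <stdint.h>
#include <stdio.h>

struct req_entry {
	uint16_t type;
	uint16_t min;
	uint16_t max;
};

static int
build_request(const uint16_t *want, const uint16_t *qcaps_max, int num,
	      struct req_entry *req, uint16_t *req_cnt)
{
	int i, j = 0;

	for (i = 0; i < num; i++) {
		/* Skip elements with no request */
		if (want[i] == 0)
			continue;
		/* Bail out if the full amount cannot be granted */
		if (want[i] > qcaps_max[i]) {
			printf("type %d: req:%u, avail:%u\n", i,
			       (unsigned)want[i], (unsigned)qcaps_max[i]);
			return -1;
		}
		req[j].type = (uint16_t)i;
		req[j].min = want[i];
		req[j].max = want[i];
		j++;
	}
	*req_cnt = (uint16_t)j;
	return 0;
}

int main(void)
{
	uint16_t want[] = { 16, 0, 8 };
	uint16_t caps[] = { 64, 4, 8 };
	struct req_entry req[3];
	uint16_t cnt;

	/* Expect 2 entries: elements 0 and 2 */
	if (build_request(want, caps, 3, req, &cnt) == 0)
		printf("request entries: %u\n", (unsigned)cnt);
	return 0;
}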
 
-/**
- * Helper function used to generate an error log for the SRAM types
- * that needs to be flushed. The types should have been cleaned up
- * ahead of invoking tf_close_session.
- *
- * [in] sram_entries
- *   SRAM Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_sram_flush(enum tf_dir dir,
-		     struct tf_rm_entry *sram_entries)
+int
+tf_rm_free_db(struct tf *tfp,
+	      struct tf_rm_free_db_parms *parms)
 {
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will clean up each of
+	 * its support modules, e.g. Identifier; thus we end up here
+	 * to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in the 'cleanup checking', the DB is checked for
+	 * any remaining elements, which are logged if found.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previous allocated elements to the RM and then close the
+	 * HCAPI RM registration. That then saves several 'free' msgs
+	 * from being required.
+	 */
 
-	/* Walk the sram flush array and log the types that wasn't
-	 * cleaned up.
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
 	 */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entries[i].stride != 0)
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to clean up, so all we can
+		 * do is log that FW failed.
+		 */
+		if (rc)
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_sram_2_str(i));
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
 	}
-}
 
-void
-tf_rm_init(struct tf *tfp __rte_unused)
-{
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db[i].pool);
 
-	/* This version is host specific and should be checked against
-	 * when attaching as there is no guarantee that a secondary
-	 * would run from same image version.
-	 */
-	tfs->ver.major = TF_SESSION_VER_MAJOR;
-	tfs->ver.minor = TF_SESSION_VER_MINOR;
-	tfs->ver.update = TF_SESSION_VER_UPDATE;
-
-	tfs->session_id.id = 0;
-	tfs->ref_count = 0;
-
-	/* Initialization of Table Scopes */
-	/* ll_init(&tfs->tbl_scope_ll); */
-
-	/* Initialization of HW and SRAM resource DB */
-	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
-
-	/* Initialization of HW Resource Pools */
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records should not be controlled by way of a pool */
-
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Initialization of SRAM Resource Pools
-	 * These pools are set to the TFLIB defined MAX sizes not
-	 * AFM's HW max as to limit the memory consumption
-	 */
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		TF_RSVD_SRAM_FULL_ACTION_RX);
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		TF_RSVD_SRAM_FULL_ACTION_TX);
-	/* Only Multicast Group on RX is supported */
-	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
-		TF_RSVD_SRAM_MCG_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_8B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_8B_TX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_16B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_16B_TX);
-	/* Only Encap 64B on TX is supported */
-	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_64B_TX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		TF_RSVD_SRAM_SP_SMAC_RX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_TX);
-	/* Only SP SMAC IPv4 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-	/* Only SP SMAC IPv6 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
-		TF_RSVD_SRAM_COUNTER_64B_RX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_COUNTER_64B_TX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_SPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_SPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_DPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_DPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_S_IPV4_TX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/* Initialization of pools local to TF Core */
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
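
A minimal standalone sketch of the close ordering described in the comment
above: report (flush) residuals first, then free every per-element pool,
then the DB itself. Everything below (simple_db, db_close) is an
illustrative stand-in; the driver signals FW with
tf_msg_session_resc_flush() and releases memory with tfp_free().

#include <stdio.h>
#include <stdlib.h>

struct simple_db {
	int num_entries;
	void **pool;	/* one pool per element, entries may be NULL */
};

static void db_close(struct simple_db *db, int residuals_found)
{
	int i;

	/* Residuals are flushed (here merely reported) before teardown */
	if (residuals_found)
		printf("flushing residual resources to FW\n");

	for (i = 0; i < db->num_entries; i++)
		free(db->pool[i]);
	free(db->pool);
	free(db);
}

int main(void)
{
	struct simple_db *db = calloc(1, sizeof(*db));

	if (db == NULL)
		return 1;
	db->num_entries = 2;
	db->pool = calloc((size_t)db->num_entries, sizeof(void *));
	if (db->pool == NULL) {
		free(db);
		return 1;
	}
	db->pool[0] = malloc(16);	/* element 0 has a pool */

	db_close(db, 1);
	return 0;
}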
 
 int
-tf_rm_allocate_validate(struct tf *tfp)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc;
-	int i;
+	int id;
+	uint32_t index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		rc = tf_rm_allocate_validate_hw(tfp, i);
-		if (rc)
-			return rc;
-		rc = tf_rm_allocate_validate_sram(tfp, i);
-		if (rc)
-			return rc;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	/* With both HW and SRAM allocated and validated we can
-	 * 'scrub' the reservation on the pools.
+	/*
+	 * priority  0: allocate from the top of the TCAM, i.e. highest
+	 * priority !0: allocate the index from the bottom, i.e. lowest
 	 */
-	tf_rm_reserve_hw(tfp);
-	tf_rm_reserve_sram(tfp);
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		rc = -ENOMEM;
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				&index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -EINVAL;
+	}
+
+	*parms->index = index;
 
 	return rc;
 }
 
 int
-tf_rm_close(struct tf *tfp)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
 	int rc;
-	int rc_close = 0;
-	int i;
-	struct tf_rm_entry *hw_entries;
-	struct tf_rm_entry *hw_flush_entries;
-	struct tf_rm_entry *sram_entries;
-	struct tf_rm_entry *sram_flush_entries;
-	struct tf_session *tfs __rte_unused =
-		(struct tf_session *)(tfp->session->core_data);
-
-	struct tf_rm_db flush_resc = tfs->resc;
-
-	/* On close it is assumed that the session has already cleaned
-	 * up all its resources, individually, while destroying its
-	 * flows. No checking is performed thus the behavior is as
-	 * follows.
-	 *
-	 * Session RM will signal FW to release session resources. FW
-	 * will perform invalidation of all the allocated entries
-	 * (assures any outstanding resources has been cleared, then
-	 * free the FW RM instance.
-	 *
-	 * Session will then be freed by tf_close_session() thus there
-	 * is no need to clean each resource pool as the whole session
-	 * is going away.
-	 */
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		if (i == TF_DIR_RX) {
-			hw_entries = tfs->resc.rx.hw_entry;
-			hw_flush_entries = flush_resc.rx.hw_entry;
-			sram_entries = tfs->resc.rx.sram_entry;
-			sram_flush_entries = flush_resc.rx.sram_entry;
-		} else {
-			hw_entries = tfs->resc.tx.hw_entry;
-			hw_flush_entries = flush_resc.tx.hw_entry;
-			sram_entries = tfs->resc.tx.sram_entry;
-			sram_flush_entries = flush_resc.tx.sram_entry;
-		}
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-		/* Check for any not previously freed HW resources and
-		 * flush if required.
-		 */
-		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering HW resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-			/* Log the entries to be flushed */
-			tf_rm_log_hw_flush(i, hw_flush_entries);
-		}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-		/* Check for any not previously freed SRAM resources
-		 * and flush if required.
-		 */
-		rc = tf_rm_sram_to_flush(tfs,
-					 i,
-					 sram_entries,
-					 sram_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
 
-			/* Log the entries to be flushed */
-			tf_rm_log_sram_flush(i, sram_flush_entries);
-		}
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	return rc_close;
-}
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
 
-#if (TF_SHADOW == 1)
-int
-tf_rm_shadow_db_init(struct tf_session *tfs)
-{
-	rc = 1;
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; direction matters and that is not available here */
+	if (rc)
+		return rc;
 
 	return rc;
 }
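
A minimal standalone sketch of the allocate/free pair above: a free 0 based
id is taken from a small pool, the element base is added on allocate and
removed again on free. The bitmap pool below is only a stand-in for the
bitalloc pool (ba_alloc()/ba_alloc_reverse()/ba_free()); the base value of
1024 is an assumption for illustration.

#include <stdint.h>
#include <stdio.h>

#define POOL_SZ 8

struct pool {
	uint8_t used[POOL_SZ];
	uint32_t base;		/* allocation start for this element */
};

static int pool_alloc(struct pool *p, uint32_t *index)
{
	int i;

	for (i = 0; i < POOL_SZ; i++) {
		if (!p->used[i]) {
			p->used[i] = 1;
			*index = (uint32_t)i + p->base;	/* add base */
			return 0;
		}
	}
	return -1;	/* pool exhausted */
}

static int pool_free(struct pool *p, uint32_t index)
{
	uint32_t id = index - p->base;	/* remove base */

	if (id >= POOL_SZ || !p->used[id])
		return -1;
	p->used[id] = 0;
	return 0;
}

int main(void)
{
	struct pool meters = { .base = 1024 };
	uint32_t idx;

	if (pool_alloc(&meters, &idx) == 0) {
		printf("allocated HW index %u\n", (unsigned)idx);
		pool_free(&meters, idx);
	}
	return 0;
}
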
-#endif /* TF_SHADOW */
 
 int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	int rc;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_L2_CTXT_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_PROF_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_WC_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
 		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
 		return rc;
 	}
 
-	return 0;
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_FULL_ACTION_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_TX)
-			break;
-		TF_RM_GET_POOLS_RX(tfs, pool,
-				   TF_SRAM_MCG_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_8B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_16B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		/* No pools for RX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_SP_SMAC_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_STATS_64B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_SPORT_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_S_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_D_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_INST_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_MIRROR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_UPAR:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_UPAR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH0_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH1_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METADATA:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METADATA_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_CT_STATE_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_ENTRY_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_LAG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_LAG_ENTRY_POOL_NAME,
-				rc);
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-		break;
-	/* No bitalloc pools for these types */
-	case TF_TBL_TYPE_EXT:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	}
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
 	return 0;
 }
 
 int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
-		break;
-	case TF_TBL_TYPE_UPAR:
-		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
-		break;
-	case TF_TBL_TYPE_METADATA:
-		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
-		break;
-	case TF_TBL_TYPE_LAG:
-		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		*hcapi_type = -1;
-		rc = -EOPNOTSUPP;
-	}
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	return rc;
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return 0;
 }
 
 int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index)
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 {
-	int rc;
-	struct tf_rm_resc *resc;
-	uint32_t hcapi_type;
-	uint32_t base_index;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (dir == TF_DIR_RX)
-		resc = &tfs->resc.rx;
-	else if (dir == TF_DIR_TX)
-		resc = &tfs->resc.tx;
-	else
-		return -EOPNOTSUPP;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
-	if (rc)
-		return -1;
-
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-	case TF_TBL_TYPE_MCAST_GROUPS:
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-	case TF_TBL_TYPE_ACT_STATS_64:
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		base_index = resc->sram_entry[hcapi_type].start;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-	case TF_TBL_TYPE_METER_PROF:
-	case TF_TBL_TYPE_METER_INST:
-	case TF_TBL_TYPE_UPAR:
-	case TF_TBL_TYPE_EPOCH0:
-	case TF_TBL_TYPE_EPOCH1:
-	case TF_TBL_TYPE_METADATA:
-	case TF_TBL_TYPE_CT_STATE:
-	case TF_TBL_TYPE_RANGE_PROF:
-	case TF_TBL_TYPE_RANGE_ENTRY:
-	case TF_TBL_TYPE_LAG:
-		base_index = resc->hw_entry[hcapi_type].start;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		return -EOPNOTSUPP;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	switch (c_type) {
-	case TF_RM_CONVERT_RM_BASE:
-		*convert_index = index - base_index;
-		break;
-	case TF_RM_CONVERT_ADD_BASE:
-		*convert_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail silently (no logging); if the pool is not valid, no
+	 * elements were allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
 	}
 
-	return 0;
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
+	return rc;
 }
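
The accessor rework above funnels every caller-visible index through
tf_rm_adjust_index() so the 0 based bit allocator never sees the HCAPI
offsets. A minimal standalone sketch of that adjustment, with an assumed
reservation starting at 1000 (the value is illustrative only and not
taken from this patch):

    /* Sketch of the TF_RM_ADJUST_RM_BASE / TF_RM_ADJUST_ADD_BASE math */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t to_pool_bit(uint32_t start, uint32_t index)
    {
            return index - start;   /* TF_RM_ADJUST_RM_BASE */
    }

    static uint32_t to_caller_index(uint32_t start, uint32_t bit)
    {
            return start + bit;     /* TF_RM_ADJUST_ADD_BASE */
    }

    int main(void)
    {
            uint32_t start = 1000;  /* assumed HCAPI reservation start */

            /* Caller frees index 1003 -> bit 3 is cleared in the pool */
            printf("pool bit: %u\n", to_pool_bit(start, 1003));
            /* Pool hands out bit 7 -> caller sees index 1007 */
            printf("caller index: %u\n", to_caller_index(start, 7));
            return 0;
    }
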
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 1a09f13a7..5cb68892a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -3,301 +3,444 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
-#include "tf_resources.h"
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
-struct tf_session;
 
-/* Internal macro to determine appropriate allocation pools based on
- * DIRECTION parm, also performs error checking for DIRECTION parm. The
- * SESSION_POOL and SESSION pointers are set appropriately upon
- * successful return (the GLOBAL_POOL is used to globally manage
- * resource allocation and the SESSION_POOL is used to track the
- * resources that have been allocated to the session)
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exist within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
  *
- * parameters:
- *   struct tfp        *tfp
- *   enum tf_dir        direction
- *   struct bitalloc  **session_pool
- *   string             base_pool_name - used to form pointers to the
- *					 appropriate bit allocation
- *					 pools, both directions of the
- *					 session pools must have same
- *					 base name, for example if
- *					 POOL_NAME is feat_pool: - the
- *					 ptr's to the session pools
- *					 are feat_pool_rx feat_pool_tx
+ * The RM DB works with its initially allocated sizes, so the
+ * capability of dynamically growing a particular resource is not
+ * possible. If this capability later becomes a requirement then the
+ * MAX pool size of the Chip needs to be added to the tf_rm_elem_info
+ * structure and several new APIs would need to be added to allow for
+ * growth of a single TF resource type.
  *
- *  int                  rc - return code
- *			      0 - Success
- *			     -1 - invalid DIRECTION parm
+ * The access functions do not check for NULL pointers as this is a
+ * support module, not called directly.
  */
-#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
-		(rc) = 0;						\
-		if ((direction) == TF_DIR_RX) {				\
-			*(session_pool) = (tfs)->pool_name ## _RX;	\
-		} else if ((direction) == TF_DIR_TX) {			\
-			*(session_pool) = (tfs)->pool_name ## _TX;	\
-		} else {						\
-			rc = -1;					\
-		}							\
-	} while (0)
 
-#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _RX)
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_new_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
 
-#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _TX)
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements the DB consists of are to be
+ * configured at time of DB creation. The TF may present types to the
+ * ULP layer that are not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled', uses a Pool for internal storage */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
+	TF_RM_TYPE_MAX
+};
 
 /**
- * Resource query single entry
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
  */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
 };
 
 /**
- * Resource single entry
+ * RM Element configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI RM
+ * of same type.
  */
-struct tf_rm_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
 };
 
 /**
- * Resource query array of HW entities
+ * Allocation information for a single element.
  */
-struct tf_rm_hw_query {
-	/** array of HW resource entries */
-	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_new_entry entry;
 };
 
 /**
- * Resource allocation array of HW entities
+ * Create RM DB parameters
  */
-struct tf_rm_hw_alloc {
-	/** array of HW resource entries */
-	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements.
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
+	 */
+	uint16_t *alloc_cnt;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void **rm_db;
 };
 
 /**
- * Resource query array of SRAM entities
+ * Free RM DB parameters
  */
-struct tf_rm_sram_query {
-	/** array of SRAM resource entries */
-	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
 };
 
 /**
- * Resource allocation array of SRAM entities
+ * Allocate RM parameters for a single element
  */
-struct tf_rm_sram_alloc {
-	/** array of SRAM resource entries */
-	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the allocated index in normalized
+	 * form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the priority of the entry
+	 * priority  0: allocate from top of the tcam (from index 0
+	 *              or lowest available index)
+	 * priority !0: allocate from bottom of the tcam (from highest
+	 *              available index)
+	 */
+	uint32_t priority;
 };
 
 /**
- * Resource Manager arrays for a single direction
+ * Free RM parameters for a single element
  */
-struct tf_rm_resc {
-	/** array of HW resource entries */
-	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
-	/** array of SRAM resource entries */
-	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t index;
 };
 
 /**
- * Resource Manager Database
+ * Is Allocated parameters for a single element
  */
-struct tf_rm_db {
-	struct tf_rm_resc rx;
-	struct tf_rm_resc tx;
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [in] Pointer to flag that indicates the state of the query
+	 */
+	int *allocated;
 };
 
 /**
- * Helper function used to convert HW HCAPI resource type to a string.
+ * Get Allocation information for a single element
  */
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
 
 /**
- * Helper function used to convert SRAM HCAPI resource type to a string.
+ * Get HCAPI type parameters for a single element
  */
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
 
 /**
- * Initializes the Resource Manager and the associated database
- * entries for HW and SRAM resources. Must be called before any other
- * Resource Manager functions.
+ * Get InUse count parameters for a single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
+/**
+ * @page rm Resource Manager
  *
- * [in] tfp
- *   Pointer to TF handle
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
-void tf_rm_init(struct tf *tfp);
 
 /**
- * Allocates and validates both HW and SRAM resources per the NVM
- * configuration. If any allocation fails all resources for the
- * session is deallocated.
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_rm_allocate_validate(struct tf *tfp);
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if DB already exist
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
 
 /**
- * Closes the Resource Manager and frees all allocated resources per
- * the associated database.
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
- *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
-int tf_rm_close(struct tf *tfp);
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
 
-#if (TF_SHADOW == 1)
 /**
- * Initializes Shadow DB of configuration elements
+ * Allocates a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to allocate parameters
  *
- * Returns:
- *  0  - Success
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
-int tf_rm_shadow_db_init(struct tf_session *tfs);
-#endif /* TF_SHADOW */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Tcam table type.
- *
- * Function will print error msg if tcam type is unsupported or lookup
- * failed.
+ * Frees a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to free parameters
  *
- * [in] type
- *   Type of the object
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool);
+int tf_rm_free(struct tf_rm_free_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Table type.
- *
- * Function will print error msg if table type is unsupported or
- * lookup failed.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] type
- *   Type of the object
+ * Performs an allocation verification check on a specified element.
  *
- * [in] dir
- *    Receive or transmit direction
+ * [in] parms
+ *   Pointer to is allocated parameters
  *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool);
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
 
 /**
- * Converts the TF Table Type to internal HCAPI_TYPE
- *
- * [in] type
- *   Type to be converted
+ * Retrieves an element's allocation information from the Resource
+ * Manager (RM) DB.
  *
- * [in/out] hcapi_type
- *   Converted type
+ * [in] parms
+ *   Pointer to get info parameters
  *
- * Returns:
- *  0           - Success will set the *hcapi_type
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type);
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
- * TF RM Convert of index methods.
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-enum tf_rm_convert_type {
-	/** Adds the base of the Session Pool to the index */
-	TF_RM_CONVERT_ADD_BASE,
-	/** Removes the Session Pool base from the index */
-	TF_RM_CONVERT_RM_BASE
-};
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
 /**
- * Provides conversion of the Table Type index in relation to the
- * Session Pool base.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in] type
- *   Type of the object
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type inuse count.
  *
- * [in] c_type
- *   Type of conversion to perform
+ * [in] parms
+ *   Pointer to get inuse parameters
  *
- * [in] index
- *   Index to be converted
- *
- * [in/out]  convert_index
- *   Pointer to the converted index
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index);
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
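
The new header above is driven by the device modules (identifier, TCAM,
table, EM) at bind/unbind time. A hedged sketch of the expected call
flow, assuming the caller already holds the TF handle tfp plus a
cfg/alloc_cnt pair; MY_NUM_ELEM, MY_DB_INDEX, my_cfg and my_alloc_cnt
are placeholders and not names from this patch:

    /* Sketch only: create the DB, allocate and free one element, close */
    void *rm_db = NULL;
    uint32_t idx = 0;
    struct tf_rm_create_db_parms cparms = { 0 };
    struct tf_rm_allocate_parms aparms = { 0 };
    struct tf_rm_free_parms fparms = { 0 };
    struct tf_rm_free_db_parms dparms = { 0 };

    cparms.type = TF_DEVICE_MODULE_TYPE_EM;  /* placeholder module type */
    cparms.dir = TF_DIR_RX;
    cparms.num_elements = MY_NUM_ELEM;
    cparms.cfg = my_cfg;                     /* struct tf_rm_element_cfg[] */
    cparms.alloc_cnt = my_alloc_cnt;         /* from tf_session_resources */
    cparms.rm_db = &rm_db;
    if (tf_rm_create_db(tfp, &cparms))
            return;

    aparms.rm_db = rm_db;
    aparms.db_index = MY_DB_INDEX;
    aparms.index = &idx;
    aparms.priority = 0;                     /* lowest available index */
    if (tf_rm_allocate(&aparms) == 0) {
            fparms.rm_db = rm_db;
            fparms.db_index = MY_DB_INDEX;
            fparms.index = idx;              /* narrowed to uint16_t */
            tf_rm_free(&fparms);
    }

    dparms.dir = TF_DIR_RX;
    dparms.rm_db = rm_db;
    tf_rm_free_db(tfp, &dparms);
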
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
deleted file mode 100644
index 2d9be654a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ /dev/null
@@ -1,907 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-
-#include <rte_common.h>
-
-#include <cfa_resource_types.h>
-
-#include "tf_rm_new.h"
-#include "tf_common.h"
-#include "tf_util.h"
-#include "tf_session.h"
-#include "tf_device.h"
-#include "tfp.h"
-#include "tf_msg.h"
-
-/**
- * Generic RM Element data type that an RM DB is build upon.
- */
-struct tf_rm_element {
-	/**
-	 * RM Element configuration type. If Private then the
-	 * hcapi_type can be ignored. If Null then the element is not
-	 * valid for the device.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/**
-	 * HCAPI RM Type for the element.
-	 */
-	uint16_t hcapi_type;
-
-	/**
-	 * HCAPI RM allocated range information for the element.
-	 */
-	struct tf_rm_alloc_info alloc;
-
-	/**
-	 * Bit allocator pool for the element. Pool size is controlled
-	 * by the struct tf_session_resources at time of session creation.
-	 * Null indicates that the element is not used for the device.
-	 */
-	struct bitalloc *pool;
-};
-
-/**
- * TF RM DB definition
- */
-struct tf_rm_new_db {
-	/**
-	 * Number of elements in the DB
-	 */
-	uint16_t num_entries;
-
-	/**
-	 * Direction this DB controls.
-	 */
-	enum tf_dir dir;
-
-	/**
-	 * Module type, used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-
-	/**
-	 * The DB consists of an array of elements
-	 */
-	struct tf_rm_element *db;
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] cfg
- *   Pointer to the DB configuration
- *
- * [in] reservations
- *   Pointer to the allocation values associated with the module
- *
- * [in] count
- *   Number of DB configuration elements
- *
- * [out] valid_count
- *   Number of HCAPI entries with a reservation value greater than 0
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static void
-tf_rm_count_hcapi_reservations(enum tf_dir dir,
-			       enum tf_device_module_type type,
-			       struct tf_rm_element_cfg *cfg,
-			       uint16_t *reservations,
-			       uint16_t count,
-			       uint16_t *valid_count)
-{
-	int i;
-	uint16_t cnt = 0;
-
-	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
-		    reservations[i] > 0)
-			cnt++;
-
-		/* Only log msg if a type is attempted reserved and
-		 * not supported. We ignore EM module as its using a
-		 * split configuration array thus it would fail for
-		 * this type of check.
-		 */
-		if (type != TF_DEVICE_MODULE_TYPE_EM &&
-		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
-		    reservations[i] > 0) {
-			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-		}
-	}
-
-	*valid_count = cnt;
-}
-
-/**
- * Resource Manager Adjust of base index definitions.
- */
-enum tf_rm_adjust_type {
-	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
-	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [in] action
- *   Adjust action
- *
- * [in] db_index
- *   DB index for the element type
- *
- * [in] index
- *   Index to convert
- *
- * [out] adj_index
- *   Adjusted index
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_adjust_index(struct tf_rm_element *db,
-		   enum tf_rm_adjust_type action,
-		   uint32_t db_index,
-		   uint32_t index,
-		   uint32_t *adj_index)
-{
-	int rc = 0;
-	uint32_t base_index;
-
-	base_index = db[db_index].alloc.entry.start;
-
-	switch (action) {
-	case TF_RM_ADJUST_RM_BASE:
-		*adj_index = index - base_index;
-		break;
-	case TF_RM_ADJUST_ADD_BASE:
-		*adj_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	return rc;
-}
-
-/**
- * Logs an array of found residual entries to the console.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] type
- *   Type of Device Module
- *
- * [in] count
- *   Number of entries in the residual array
- *
- * [in] residuals
- *   Pointer to an array of residual entries. Array is index same as
- *   the DB in which this function is used. Each entry holds residual
- *   value for that entry.
- */
-static void
-tf_rm_log_residuals(enum tf_dir dir,
-		    enum tf_device_module_type type,
-		    uint16_t count,
-		    uint16_t *residuals)
-{
-	int i;
-
-	/* Walk the residual array and log the types that wasn't
-	 * cleaned up to the console.
-	 */
-	for (i = 0; i < count; i++) {
-		if (residuals[i] != 0)
-			TFP_DRV_LOG(ERR,
-				"%s, %s was not cleaned up, %d outstanding\n",
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i),
-				residuals[i]);
-	}
-}
-
-/**
- * Performs a check of the passed in DB for any lingering elements. If
- * a resource type was found to not have been cleaned up by the caller
- * then its residual values are recorded, logged and passed back in an
- * allocate reservation array that the caller can pass to the FW for
- * cleanup.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [out] resv_size
- *   Pointer to the reservation size of the generated reservation
- *   array.
- *
- * [in/out] resv
- *   Pointer Pointer to a reservation array. The reservation array is
- *   allocated after the residual scan and holds any found residual
- *   entries. Thus it can be smaller than the DB that the check was
- *   performed on. Array must be freed by the caller.
- *
- * [out] residuals_present
- *   Pointer to a bool flag indicating if residual was present in the
- *   DB
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
-		      uint16_t *resv_size,
-		      struct tf_rm_resc_entry **resv,
-		      bool *residuals_present)
-{
-	int rc;
-	int i;
-	int f;
-	uint16_t count;
-	uint16_t found;
-	uint16_t *residuals = NULL;
-	uint16_t hcapi_type;
-	struct tf_rm_get_inuse_count_parms iparms;
-	struct tf_rm_get_alloc_info_parms aparms;
-	struct tf_rm_get_hcapi_parms hparms;
-	struct tf_rm_alloc_info info;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_entry *local_resv = NULL;
-
-	/* Create array to hold the entries that have residuals */
-	cparms.nitems = rm_db->num_entries;
-	cparms.size = sizeof(uint16_t);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	residuals = (uint16_t *)cparms.mem_va;
-
-	/* Traverse the DB and collect any residual elements */
-	iparms.rm_db = rm_db;
-	iparms.count = &count;
-	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
-		iparms.db_index = i;
-		rc = tf_rm_get_inuse_count(&iparms);
-		/* Not a device supported entry, just skip */
-		if (rc == -ENOTSUP)
-			continue;
-		if (rc)
-			goto cleanup_residuals;
-
-		if (count) {
-			found++;
-			residuals[i] = count;
-			*residuals_present = true;
-		}
-	}
-
-	if (*residuals_present) {
-		/* Populate a reduced resv array with only the entries
-		 * that have residuals.
-		 */
-		cparms.nitems = found;
-		cparms.size = sizeof(struct tf_rm_resc_entry);
-		cparms.alignment = 0;
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-
-		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-		aparms.rm_db = rm_db;
-		hparms.rm_db = rm_db;
-		hparms.hcapi_type = &hcapi_type;
-		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
-			if (residuals[i] == 0)
-				continue;
-			aparms.db_index = i;
-			aparms.info = &info;
-			rc = tf_rm_get_info(&aparms);
-			if (rc)
-				goto cleanup_all;
-
-			hparms.db_index = i;
-			rc = tf_rm_get_hcapi_type(&hparms);
-			if (rc)
-				goto cleanup_all;
-
-			local_resv[f].type = hcapi_type;
-			local_resv[f].start = info.entry.start;
-			local_resv[f].stride = info.entry.stride;
-			f++;
-		}
-		*resv_size = found;
-	}
-
-	tf_rm_log_residuals(rm_db->dir,
-			    rm_db->type,
-			    rm_db->num_entries,
-			    residuals);
-
-	tfp_free((void *)residuals);
-	*resv = local_resv;
-
-	return 0;
-
- cleanup_all:
-	tfp_free((void *)local_resv);
-	*resv = NULL;
- cleanup_residuals:
-	tfp_free((void *)residuals);
-
-	return rc;
-}
-
-int
-tf_rm_create_db(struct tf *tfp,
-		struct tf_rm_create_db_parms *parms)
-{
-	int rc;
-	int i;
-	int j;
-	struct tf_session *tfs;
-	struct tf_dev_info *dev;
-	uint16_t max_types;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_req_entry *query;
-	enum tf_rm_resc_resv_strategy resv_strategy;
-	struct tf_rm_resc_req_entry *req;
-	struct tf_rm_resc_entry *resv;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_element *db;
-	uint32_t pool_size;
-	uint16_t hcapi_items;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
-	if (rc)
-		return rc;
-
-	/* Retrieve device information */
-	rc = tf_session_get_device(tfs, &dev);
-	if (rc)
-		return rc;
-
-	/* Need device max number of elements for the RM QCAPS */
-	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
-	if (rc)
-		return rc;
-
-	cparms.nitems = max_types;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Get Firmware Capabilities */
-	rc = tf_msg_session_resc_qcaps(tfp,
-				       parms->dir,
-				       max_types,
-				       query,
-				       &resv_strategy);
-	if (rc)
-		return rc;
-
-	/* Process capabilities against DB requirements. However, as a
-	 * DB can hold elements that are not HCAPI we can reduce the
-	 * req msg content by removing those out of the request yet
-	 * the DB holds them all as to give a fast lookup. We can also
-	 * remove entries where there are no request for elements.
-	 */
-	tf_rm_count_hcapi_reservations(parms->dir,
-				       parms->type,
-				       parms->cfg,
-				       parms->alloc_cnt,
-				       parms->num_elements,
-				       &hcapi_items);
-
-	/* Alloc request, alignment already set */
-	cparms.nitems = (size_t)hcapi_items;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Alloc reservation, alignment and nitems already set */
-	cparms.size = sizeof(struct tf_rm_resc_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-	/* Build the request */
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			/* Only perform reservation for entries that
-			 * has been requested
-			 */
-			if (parms->alloc_cnt[i] == 0)
-				continue;
-
-			/* Verify that we can get the full amount
-			 * allocated per the qcaps availability.
-			 */
-			if (parms->alloc_cnt[i] <=
-			    query[parms->cfg[i].hcapi_type].max) {
-				req[j].type = parms->cfg[i].hcapi_type;
-				req[j].min = parms->alloc_cnt[i];
-				req[j].max = parms->alloc_cnt[i];
-				j++;
-			} else {
-				TFP_DRV_LOG(ERR,
-					    "%s: Resource failure, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    parms->cfg[i].hcapi_type);
-				TFP_DRV_LOG(ERR,
-					"req:%d, avail:%d\n",
-					parms->alloc_cnt[i],
-					query[parms->cfg[i].hcapi_type].max);
-				return -EINVAL;
-			}
-		}
-	}
-
-	rc = tf_msg_session_resc_alloc(tfp,
-				       parms->dir,
-				       hcapi_items,
-				       req,
-				       resv);
-	if (rc)
-		return rc;
-
-	/* Build the RM DB per the request */
-	cparms.nitems = 1;
-	cparms.size = sizeof(struct tf_rm_new_db);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db = (void *)cparms.mem_va;
-
-	/* Build the DB within RM DB */
-	cparms.nitems = parms->num_elements;
-	cparms.size = sizeof(struct tf_rm_element);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
-
-	db = rm_db->db;
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-
-		/* Skip any non HCAPI types as we didn't include them
-		 * in the reservation request.
-		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
-			continue;
-
-		/* If the element didn't request an allocation no need
-		 * to create a pool nor verify if we got a reservation.
-		 */
-		if (parms->alloc_cnt[i] == 0)
-			continue;
-
-		/* If the element had requested an allocation and that
-		 * allocation was a success (full amount) then
-		 * allocate the pool.
-		 */
-		if (parms->alloc_cnt[i] == resv[j].stride) {
-			db[i].alloc.entry.start = resv[j].start;
-			db[i].alloc.entry.stride = resv[j].stride;
-
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			j++;
-		} else {
-			/* Bail out as we want what we requested for
-			 * all elements, not any less.
-			 */
-			TFP_DRV_LOG(ERR,
-				    "%s: Alloc failed, type:%d\n",
-				    tf_dir_2_str(parms->dir),
-				    db[i].cfg_type);
-			TFP_DRV_LOG(ERR,
-				    "req:%d, alloc:%d\n",
-				    parms->alloc_cnt[i],
-				    resv[j].stride);
-			goto fail;
-		}
-	}
-
-	rm_db->num_entries = parms->num_elements;
-	rm_db->dir = parms->dir;
-	rm_db->type = parms->type;
-	*parms->rm_db = (void *)rm_db;
-
-	printf("%s: type:%d num_entries:%d\n",
-	       tf_dir_2_str(parms->dir),
-	       parms->type,
-	       i);
-
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-
-	return 0;
-
- fail:
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-	tfp_free((void *)db->pool);
-	tfp_free((void *)db);
-	tfp_free((void *)rm_db);
-	parms->rm_db = NULL;
-
-	return -EINVAL;
-}
-
-int
-tf_rm_free_db(struct tf *tfp,
-	      struct tf_rm_free_db_parms *parms)
-{
-	int rc;
-	int i;
-	uint16_t resv_size = 0;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_resc_entry *resv;
-	bool residuals_found = false;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	/* Device unbind happens when the TF Session is closed and the
-	 * session ref count is 0. Device unbind will cleanup each of
-	 * its support modules, i.e. Identifier, thus we're ending up
-	 * here to close the DB.
-	 *
-	 * On TF Session close it is assumed that the session has already
-	 * cleaned up all its resources, individually, while
-	 * destroying its flows.
-	 *
-	 * To assist in the 'cleanup checking' the DB is checked for any
-	 * remaining elements and logged if found to be the case.
-	 *
-	 * Any such elements will need to be 'cleared' ahead of
-	 * returning the resources to the HCAPI RM.
-	 *
-	 * RM will signal FW to flush the DB resources. FW will
-	 * perform the invalidation. TF Session close will return the
-	 * previous allocated elements to the RM and then close the
-	 * HCAPI RM registration. That then saves several 'free' msgs
-	 * from being required.
-	 */
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-
-	/* Check for residuals that the client didn't clean up */
-	rc = tf_rm_check_residuals(rm_db,
-				   &resv_size,
-				   &resv,
-				   &residuals_found);
-	if (rc)
-		return rc;
-
-	/* Invalidate any residuals followed by a DB traversal for
-	 * pool cleanup.
-	 */
-	if (residuals_found) {
-		rc = tf_msg_session_resc_flush(tfp,
-					       parms->dir,
-					       resv_size,
-					       resv);
-		tfp_free((void *)resv);
-		/* On failure we still have to cleanup so we can only
-		 * log that FW failed.
-		 */
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s: Internal Flush error, module:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_device_module_type_2_str(rm_db->type));
-	}
-
-	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db[i].pool);
-
-	tfp_free((void *)parms->rm_db);
-
-	return rc;
-}
-
-int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority)
-		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
-	else
-		id = ba_alloc(rm_db->db[parms->db_index].pool);
-	if (id == BA_FAIL) {
-		rc = -ENOMEM;
-		TFP_DRV_LOG(ERR,
-			    "%s: Allocation failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_ADD_BASE,
-				parms->db_index,
-				id,
-				&index);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Alloc adjust of base index failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return -EINVAL;
-	}
-
-	*parms->index = index;
-
-	return rc;
-}
-
-int
-tf_rm_free(struct tf_rm_free_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
-	/* No logging direction matters and that is not available here */
-	if (rc)
-		return rc;
-
-	return rc;
-}
-
-int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
-				     adj_index);
-
-	return rc;
-}
-
-int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	memcpy(parms->info,
-	       &rm_db->db[parms->db_index].alloc,
-	       sizeof(struct tf_rm_alloc_info));
-
-	return 0;
-}
-
-int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
-
-	return 0;
-}
-
-int
-tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
-{
-	int rc = 0;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail silently (no logging), if the pool is not valid there
-	 * was no elements allocated for it.
-	 */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		*parms->count = 0;
-		return 0;
-	}
-
-	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
-
-	return rc;
-
-}
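
The residual handling removed here now lives in tf_rm.c: on
tf_rm_free_db() any elements the caller did not return are counted per
DB entry, logged and flushed to firmware. A hedged sketch of how a
module could verify it left nothing behind before closing its DB,
reusing tf_rm_get_inuse_count(); MY_NUM_ELEM stands in for the
num_elements the module passed at create time:

    /* Sketch only: scan for leaked elements ahead of tf_rm_free_db() */
    uint16_t i, count = 0;
    struct tf_rm_get_inuse_count_parms iparms = { 0 };

    iparms.rm_db = rm_db;
    iparms.count = &count;
    for (i = 0; i < MY_NUM_ELEM; i++) {
            iparms.db_index = i;
            if (tf_rm_get_inuse_count(&iparms))
                    continue;        /* entry not controlled by RM */
            if (count)
                    TFP_DRV_LOG(ERR, "db_index %u: %u entries in use\n",
                                i, count);
    }
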
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
deleted file mode 100644
index 5cb68892a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ /dev/null
@@ -1,446 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_RM_NEW_H_
-#define TF_RM_NEW_H_
-
-#include "tf_core.h"
-#include "bitalloc.h"
-#include "tf_device.h"
-
-struct tf;
-
-/**
- * The Resource Manager (RM) module provides basic DB handling for
- * internal resources. These resources exists within the actual device
- * and are controlled by the HCAPI Resource Manager running on the
- * firmware.
- *
- * The RM DBs are all intended to be indexed using TF types there for
- * a lookup requires no additional conversion. The DB configuration
- * specifies the TF Type to HCAPI Type mapping and it becomes the
- * responsibility of the DB initialization to handle this static
- * mapping.
- *
- * Accessor functions are providing access to the DB, thus hiding the
- * implementation.
- *
- * The RM DB will work on its initial allocated sizes so the
- * capability of dynamically growing a particular resource is not
- * possible. If this capability later becomes a requirement then the
- * MAX pool size of the Chip œneeds to be added to the tf_rm_elem_info
- * structure and several new APIs would need to be added to allow for
- * growth of a single TF resource type.
- *
- * The access functions does not check for NULL pointers as it's a
- * support module, not called directly.
- */
-
-/**
- * Resource reservation single entry result. Used when accessing HCAPI
- * RM on the firmware.
- */
-struct tf_rm_new_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
-};
-
-/**
- * RM Element configuration enumeration. Used by the Device to
- * indicate how the RM elements the DB consists off, are to be
- * configured at time of DB creation. The TF may present types to the
- * ULP layer that is not controlled by HCAPI within the Firmware.
- */
-enum tf_rm_elem_cfg_type {
-	/** No configuration */
-	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
-	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
-	/**
-	 * Shared element thus it belongs to a shared FW Session and
-	 * is not controlled by the Host.
-	 */
-	TF_RM_ELEM_CFG_SHARED,
-	TF_RM_TYPE_MAX
-};
-
-/**
- * RM Reservation strategy enumeration. Type of strategy comes from
- * the HCAPI RM QCAPS handshake.
- */
-enum tf_rm_resc_resv_strategy {
-	TF_RM_RESC_RESV_STATIC_PARTITION,
-	TF_RM_RESC_RESV_STRATEGY_1,
-	TF_RM_RESC_RESV_STRATEGY_2,
-	TF_RM_RESC_RESV_STRATEGY_3,
-	TF_RM_RESC_RESV_MAX
-};
-
-/**
- * RM Element configuration structure, used by the Device to configure
- * how an individual TF type is configured in regard to the HCAPI RM
- * of same type.
- */
-struct tf_rm_element_cfg {
-	/**
-	 * RM Element config controls how the DB for that element is
-	 * processed.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/* If a HCAPI to TF type conversion is required then TF type
-	 * can be added here.
-	 */
-
-	/**
-	 * HCAPI RM Type for the element. Used for TF to HCAPI type
-	 * conversion.
-	 */
-	uint16_t hcapi_type;
-};
-
-/**
- * Allocation information for a single element.
- */
-struct tf_rm_alloc_info {
-	/**
-	 * HCAPI RM allocated range information.
-	 *
-	 * NOTE:
-	 * In case of dynamic allocation support this would have
-	 * to be changed to linked list of tf_rm_entry instead.
-	 */
-	struct tf_rm_new_entry entry;
-};
-
-/**
- * Create RM DB parameters
- */
-struct tf_rm_create_db_parms {
-	/**
-	 * [in] Device module type. Used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-	/**
-	 * [in] Receive or transmit direction.
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Number of elements.
-	 */
-	uint16_t num_elements;
-	/**
-	 * [in] Parameter structure array. Array size is num_elements.
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Resource allocation count array. This array content
-	 * originates from the tf_session_resources that is passed in
-	 * on session open.
-	 * Array size is num_elements.
-	 */
-	uint16_t *alloc_cnt;
-	/**
-	 * [out] RM DB Handle
-	 */
-	void **rm_db;
-};
-
-/**
- * Free RM DB parameters
- */
-struct tf_rm_free_db_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-};
-
-/**
- * Allocate RM parameters for a single element
- */
-struct tf_rm_allocate_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Pointer to the allocated index in normalized
-	 * form. Normalized means the index has been adjusted,
-	 * i.e. Full Action Record offsets.
-	 */
-	uint32_t *index;
-	/**
-	 * [in] Priority, indicates the prority of the entry
-	 * priority  0: allocate from top of the tcam (from index 0
-	 *              or lowest available index)
-	 * priority !0: allocate from bottom of the tcam (from highest
-	 *              available index)
-	 */
-	uint32_t priority;
-};
-
-/**
- * Free RM parameters for a single element
- */
-struct tf_rm_free_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint16_t index;
-};
-
-/**
- * Is Allocated parameters for a single element
- */
-struct tf_rm_is_allocated_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t index;
-	/**
-	 * [in] Pointer to flag that indicates the state of the query
-	 */
-	int *allocated;
-};
-
-/**
- * Get Allocation information for a single element
- */
-struct tf_rm_get_alloc_info_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the requested allocation information for
-	 * the specified db_index
-	 */
-	struct tf_rm_alloc_info *info;
-};
-
-/**
- * Get HCAPI type parameters for a single element
- */
-struct tf_rm_get_hcapi_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the hcapi type for the specified db_index
-	 */
-	uint16_t *hcapi_type;
-};
-
-/**
- * Get InUse count parameters for single element
- */
-struct tf_rm_get_inuse_count_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the inuse count for the specified db_index
-	 */
-	uint16_t *count;
-};
-
-/**
- * @page rm Resource Manager
- *
- * @ref tf_rm_create_db
- *
- * @ref tf_rm_free_db
- *
- * @ref tf_rm_allocate
- *
- * @ref tf_rm_free
- *
- * @ref tf_rm_is_allocated
- *
- * @ref tf_rm_get_info
- *
- * @ref tf_rm_get_hcapi_type
- *
- * @ref tf_rm_get_inuse_count
- */
-
-/**
- * Creates and fills a Resource Manager (RM) DB with requested
- * elements. The DB is indexed per the parms structure.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to create parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- * - Fail on parameter check
- * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
- * - Fail on DB creation if DB already exist
- *
- * - Allocs local DB
- * - Does hcapi qcaps
- * - Does hcapi reservation
- * - Populates the pool with allocated elements
- * - Returns handle to the created DB
- */
-int tf_rm_create_db(struct tf *tfp,
-		    struct tf_rm_create_db_parms *parms);
-
-/**
- * Closes the Resource Manager (RM) DB and frees all allocated
- * resources per the associated database.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free_db(struct tf *tfp,
-		  struct tf_rm_free_db_parms *parms);
-
-/**
- * Allocates a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to allocate parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- *   - (-ENOMEM) if pool is empty
- */
-int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
-
-/**
- * Free's a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free(struct tf_rm_free_parms *parms);
-
-/**
- * Performs an allocation verification check on a specified element.
- *
- * [in] parms
- *   Pointer to is allocated parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- *  - If pool is set to Chip MAX, then the query index must be checked
- *    against the allocated range and query index must be allocated as well.
- *  - If pool is allocated size only, then check if query index is allocated.
- */
-int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
-
-/**
- * Retrieves an elements allocation information from the Resource
- * Manager (RM) DB.
- *
- * [in] parms
- *   Pointer to get info parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI RM type.
- *
- * [in] parms
- *   Pointer to get hcapi parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI RM type inuse count.
- *
- * [in] parms
- *   Pointer to get inuse parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
-
-#endif /* TF_RM_NEW_H_ */
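
[Editor's note, not part of the patch: a minimal sketch of how the new RM DB API documented above is typically driven (create DB, allocate, free, free DB), mirroring the way the reworked Table module later in this patch uses it. The direction value, element configuration array, allocation counts and field types are illustrative assumptions, not values taken from the patch.]

/* Illustrative sketch only -- assumes the tf_rm.h API as modified here. */
#include "tf_rm.h"

struct tf;

static int rm_db_lifecycle_sketch(struct tf *tfp,
				  struct tf_rm_element_cfg *cfg,
				  uint16_t num_elements,
				  uint16_t *alloc_cnt)
{
	int rc;
	void *db;
	uint32_t idx;
	struct tf_rm_create_db_parms cparms = { 0 };
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_free_parms fparms = { 0 };
	struct tf_rm_free_db_parms dparms = { 0 };

	/* Create the per-direction DB (RX assumed here) */
	cparms.dir = TF_DIR_RX;
	cparms.type = TF_DEVICE_MODULE_TYPE_TABLE;
	cparms.num_elements = num_elements;
	cparms.cfg = cfg;
	cparms.alloc_cnt = alloc_cnt;
	cparms.rm_db = &db;
	rc = tf_rm_create_db(tfp, &cparms);
	if (rc)
		return rc;

	/* Allocate one element from DB index 0, then return it */
	aparms.rm_db = db;
	aparms.db_index = 0;
	aparms.index = &idx;
	rc = tf_rm_allocate(&aparms);
	if (rc == 0) {
		fparms.rm_db = db;
		fparms.db_index = 0;
		fparms.index = idx;
		rc = tf_rm_free(&fparms);
	}

	/* Close the DB and release its resources */
	dparms.dir = TF_DIR_RX;
	dparms.rm_db = db;
	return tf_rm_free_db(tfp, &dparms);
}
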
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 705bb0955..e4472ed7f 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -14,6 +14,7 @@
 #include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "tf_resources.h"
 #include "stack.h"
 
 /**
@@ -43,7 +44,8 @@
 #define TF_SESSION_EM_POOL_SIZE \
 	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
 
-/** Session
+/**
+ * Session
  *
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
@@ -99,216 +101,6 @@ struct tf_session {
 	/** Device handle */
 	struct tf_dev_info dev;
 
-	/** Session HW and SRAM resources */
-	struct tf_rm_db resc;
-
-	/* Session HW resource pools */
-
-	/** RX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** RX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	/** TX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-
-	/** RX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	/** TX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-
-	/** RX EM Profile ID Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	/** TX EM Key Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/** RX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	/** TX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records are not controlled by way of a pool */
-
-	/** RX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	/** TX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-
-	/** RX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	/** TX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-
-	/** RX Meter Instance Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	/** TX Meter Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-
-	/** RX Mirror Configuration Pool*/
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	/** RX Mirror Configuration Pool */
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-
-	/** RX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	/** TX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	/** RX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	/** TX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	/** RX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	/** TX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	/** RX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	/** TX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-
-	/** RX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	/** TX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-
-	/** RX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	/** TX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-
-	/** RX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	/** TX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-
-	/** RX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	/** TX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-
-	/** RX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	/** TX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-
-	/** RX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	/** TX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-
-	/** RX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	/** TX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Session SRAM pools */
-
-	/** RX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		      TF_RSVD_SRAM_FULL_ACTION_RX);
-	/** TX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		      TF_RSVD_SRAM_FULL_ACTION_TX);
-
-	/** RX Multicast Group Pool, only RX is supported */
-	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
-		      TF_RSVD_SRAM_MCG_RX);
-
-	/** RX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_8B_RX);
-	/** TX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_8B_TX);
-
-	/** RX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_16B_RX);
-	/** TX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_16B_TX);
-
-	/** TX Encap 64B Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_64B_TX);
-
-	/** RX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		      TF_RSVD_SRAM_SP_SMAC_RX);
-	/** TX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_TX);
-
-	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-
-	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-
-	/** RX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_COUNTER_64B_RX);
-	/** TX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_COUNTER_64B_TX);
-
-	/** RX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_SPORT_RX);
-	/** TX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_SPORT_TX);
-
-	/** RX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_DPORT_RX);
-	/** TX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_DPORT_TX);
-
-	/** RX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	/** TX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
-
-	/** RX NAT Destination IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	/** TX NAT IPv4 Destination Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/**
-	 * Pools not allocated from HCAPI RM
-	 */
-
-	/** RX L2 Ctx Remap ID  Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 Ctx Remap ID Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** CRC32 seed table */
-#define TF_LKUP_SEED_MEM_SIZE 512
-	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
-
-	/** Lookup3 init values */
-	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d7f5de4c4..05e866dc6 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -5,175 +5,413 @@
 
 /* Truflow Table APIs and supporting code */
 
-#include <stdio.h>
-#include <string.h>
-#include <stdbool.h>
-#include <math.h>
-#include <sys/param.h>
 #include <rte_common.h>
-#include <rte_errno.h>
-#include "hsi_struct_def_dpdk.h"
 
-#include "tf_core.h"
+#include "tf_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
 #include "tf_util.h"
-#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
-#include "hwrm_tf.h"
-#include "bnxt.h"
-#include "tf_resources.h"
-#include "tf_rm.h"
-#include "stack.h"
-#include "tf_common.h"
+
+
+struct tf;
+
+/**
+ * Table DBs.
+ */
+static void *tbl_db[TF_DIR_MAX];
+
+/**
+ * Table Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
 
 /**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
+ * Shadow init flag, set on bind and cleared on unbind
  */
-static int
-tf_bulk_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms)
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table DB already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
+	return 0;
+}
+
+int
+tf_tbl_unbind(struct tf *tfp)
 {
 	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms)
+{
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
 	if (rc)
 		return rc;
 
-	index = parms->starting_idx;
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
 
-	/*
-	 * Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->starting_idx,
-			    &index);
+			    parms->idx);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
+{
+	int rc;
+	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
 		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   parms->type,
-		   index);
+		   parms->idx);
 		return -EINVAL;
 	}
 
-	/* Get the entry */
-	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
+	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
 }
 
-#if (TF_SHADOW == 1)
-/**
- * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is incremente and
- * returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count incremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
-			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+int
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Alloc with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	int rc;
+	uint16_t hcapi_type;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	return -EOPNOTSUPP;
-}
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
-/**
- * Free Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is decremente and
- * new ref_count returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_free_tbl_entry_shadow(struct tf_session *tfs,
-			 struct tf_free_tbl_entry_parms *parms)
-{
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Free with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
-	return -EOPNOTSUPP;
-}
-#endif /* TF_SHADOW */
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
 
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Look up the HCAPI type for the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
 
- /* API defined in tf_core.h */
 int
-tf_bulk_get_tbl_entry(struct tf *tfp,
-		 struct tf_bulk_get_tbl_entry_parms *parms)
+tf_tbl_bulk_get(struct tf *tfp,
+		struct tf_tbl_get_bulk_parms *parms)
 {
-	int rc = 0;
+	int rc;
+	int i;
+	uint16_t hcapi_type;
+	uint32_t idx;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
+	if (!init) {
 		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
+			    "%s: No Table DBs created\n",
 			    tf_dir_2_str(parms->dir));
 
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
+		return -EINVAL;
+	}
+	/* Verify that the entries have been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.allocated = &allocated;
+	idx = parms->starting_idx;
+	for (i = 0; i < parms->num_entries; i++) {
+		aparms.index = idx;
+		rc = tf_rm_is_allocated(&aparms);
 		if (rc)
+			return rc;
+
+		if (!allocated) {
 			TFP_DRV_LOG(ERR,
-				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    strerror(-rc));
+				    idx);
+			return -EINVAL;
+		}
+		idx++;
+	}
+
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entries */
+	rc = tf_msg_bulk_get_tbl_entry(tfp,
+				       parms->dir,
+				       hcapi_type,
+				       parms->starting_idx,
+				       parms->num_entries,
+				       parms->entry_sz_in_bytes,
+				       parms->physical_mem_addr);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
 	}
 
 	return rc;
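
[Editor's note, not part of the patch: a rough caller-side sketch for the tf_tbl_bulk_get() implementation above. Per the tf_tbl_get_bulk_parms description in tf_tbl.h (next hunk), the firmware DMAs the entries into host memory, so the buffer is obtained through tfp_calloc() and its physical address is passed in. The helper name, table type, entry size and the mem_pa field name are assumptions for illustration.]

/* Illustrative sketch only -- not part of this patch. */
#include <stdint.h>
#include "tf_tbl.h"
#include "tfp.h"

static int tbl_bulk_get_sketch(struct tf *tfp, enum tf_dir dir,
			       uint32_t start_idx, uint16_t n)
{
	int rc;
	struct tfp_calloc_parms mem = { 0 };
	struct tf_tbl_get_bulk_parms bparms = { 0 };

	/* DMA-able buffer for the firmware to copy the entries into */
	mem.nitems = n;
	mem.size = 8;		/* assumed per-entry size in bytes */
	mem.alignment = 0;
	rc = tfp_calloc(&mem);
	if (rc)
		return rc;

	bparms.dir = dir;
	bparms.type = TF_TBL_TYPE_ACT_STATS_64;	/* assumed table type */
	bparms.starting_idx = start_idx;
	bparms.num_entries = n;
	bparms.entry_sz_in_bytes = mem.size;
	/* mem_pa is the physical address per the tfp_calloc_parms comment */
	bparms.physical_mem_addr = (uint64_t)(uintptr_t)mem.mem_pa;

	rc = tf_tbl_bulk_get(tfp, &bparms);

	/* Entries are now in mem.mem_va; release the buffer when done */
	return rc;
}
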
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index b17557345..eb560ffa7 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -3,17 +3,21 @@
  * All rights reserved.
  */
 
-#ifndef _TF_TBL_H_
-#define _TF_TBL_H_
-
-#include <stdint.h>
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
 
 #include "tf_core.h"
 #include "stack.h"
 
-struct tf_session;
+struct tf;
+
+/**
+ * The Table module provides processing of Internal TF table types.
+ */
 
-/** table scope control block content */
+/**
+ * Table scope control block content
+ */
 struct tf_em_caps {
 	uint32_t flags;
 	uint32_t supported;
@@ -35,66 +39,364 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
-	struct tf_em_caps          em_caps[TF_DIR_MAX];
-	struct stack               ext_act_pool[TF_DIR_MAX];
-	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps em_caps[TF_DIR_MAX];
+	struct stack ext_act_pool[TF_DIR_MAX];
+	uint32_t *ext_act_pool_mem[TF_DIR_MAX];
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+};
+
+/**
+ * Table allocation parameters
+ */
+struct tf_tbl_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t *idx;
+};
+
+/**
+ * Table free parameters
+ */
+struct tf_tbl_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
- */
-#define TF_EM_PAGE_SIZE_4K 12
-#define TF_EM_PAGE_SIZE_8K 13
-#define TF_EM_PAGE_SIZE_64K 16
-#define TF_EM_PAGE_SIZE_256K 18
-#define TF_EM_PAGE_SIZE_1M 20
-#define TF_EM_PAGE_SIZE_2M 21
-#define TF_EM_PAGE_SIZE_4M 22
-#define TF_EM_PAGE_SIZE_1G 30
-
-/* Set page size */
-#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
-
-#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
-#else
-#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
-#endif
-
-#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
-#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
-
-/**
- * Initialize table pool structure to indicate
- * no table scope has been associated with the
- * external pool of indexes.
- *
- * [in] session
- */
-void
-tf_init_tbl_pool(struct tf_session *session);
-
-#endif /* _TF_TBL_H_ */
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table set parameters
+ */
+struct tf_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get parameters
+ */
+struct tf_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get bulk parameters
+ */
+struct tf_tbl_get_bulk_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [out] Host physical address, where the data
+	 * will be copied to by the firmware.
+	 * Use tfp_calloc() API and mem_pa
+	 * variable of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * @page tbl Table
+ *
+ * @ref tf_tbl_bind
+ *
+ * @ref tf_tbl_unbind
+ *
+ * @ref tf_tbl_alloc
+ *
+ * @ref tf_tbl_free
+ *
+ * @ref tf_tbl_alloc_search
+ *
+ * @ref tf_tbl_set
+ *
+ * @ref tf_tbl_get
+ *
+ * @ref tf_tbl_bulk_get
+ */
+
+/**
+ * Initializes the Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table allocation parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the shadow
+ * DB is enabled it is searched first and, if found, the element refcount
+ * is decremented. If the refcount reaches 0 the element is returned to
+ * the table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
+
+/**
+ * Retrieves bulk block of elements by sending a firmware request to
+ * get the elements.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get bulk parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bulk_get(struct tf *tfp,
+		    struct tf_tbl_get_bulk_parms *parms);
+
+#endif /* TF_TBL_TYPE_H_ */
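
[Editor's note, not part of the patch: a minimal caller sketch for the Table module API declared above -- allocate an internal table entry, program it, and free it again on error. The helper name and the choice of entry data are illustrative assumptions.]

/* Illustrative sketch only -- not part of this patch. */
#include "tf_tbl.h"

static int tbl_entry_sketch(struct tf *tfp, enum tf_dir dir,
			    enum tf_tbl_type type,
			    uint8_t *data, uint16_t data_sz)
{
	int rc;
	uint32_t idx;
	struct tf_tbl_alloc_parms aparms = { 0 };
	struct tf_tbl_set_parms sparms = { 0 };
	struct tf_tbl_free_parms fparms = { 0 };

	/* Reserve an index of the requested type from the RM-backed DB */
	aparms.dir = dir;
	aparms.type = type;
	aparms.idx = &idx;
	rc = tf_tbl_alloc(tfp, &aparms);
	if (rc)
		return rc;

	/* Program the allocated entry via firmware */
	sparms.dir = dir;
	sparms.type = type;
	sparms.data = data;
	sparms.data_sz_in_bytes = data_sz;
	sparms.idx = idx;
	rc = tf_tbl_set(tfp, &sparms);
	if (rc) {
		/* Return the index on failure */
		fparms.dir = dir;
		fparms.type = type;
		fparms.idx = idx;
		(void)tf_tbl_free(tfp, &fparms);
	}

	return rc;
}
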
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
deleted file mode 100644
index 2f5af6060..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ /dev/null
@@ -1,342 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <rte_common.h>
-
-#include "tf_tbl_type.h"
-#include "tf_common.h"
-#include "tf_rm_new.h"
-#include "tf_util.h"
-#include "tf_msg.h"
-#include "tfp.h"
-
-struct tf;
-
-/**
- * Table DBs.
- */
-static void *tbl_db[TF_DIR_MAX];
-
-/**
- * Table Shadow DBs
- */
-/* static void *shadow_tbl_db[TF_DIR_MAX]; */
-
-/**
- * Init flag, set on bind and cleared on unbind
- */
-static uint8_t init;
-
-/**
- * Shadow init flag, set on bind and cleared on unbind
- */
-/* static uint8_t shadow_init; */
-
-int
-tf_tbl_bind(struct tf *tfp,
-	    struct tf_tbl_cfg_parms *parms)
-{
-	int rc;
-	int i;
-	struct tf_rm_create_db_parms db_cfg = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (init) {
-		TFP_DRV_LOG(ERR,
-			    "Table already initialized\n");
-		return -EINVAL;
-	}
-
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.cfg = parms->cfg;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		db_cfg.dir = i;
-		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
-		db_cfg.rm_db = &tbl_db[i];
-		rc = tf_rm_create_db(tfp, &db_cfg);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Table DB creation failed\n",
-				    tf_dir_2_str(i));
-
-			return rc;
-		}
-	}
-
-	init = 1;
-
-	printf("Table Type - initialized\n");
-
-	return 0;
-}
-
-int
-tf_tbl_unbind(struct tf *tfp __rte_unused)
-{
-	int rc;
-	int i;
-	struct tf_rm_free_db_parms fparms = { 0 };
-
-	TF_CHECK_PARMS1(tfp);
-
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No Table DBs created\n");
-		return -EINVAL;
-	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		fparms.dir = i;
-		fparms.rm_db = tbl_db[i];
-		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
-
-		tbl_db[i] = NULL;
-	}
-
-	init = 0;
-
-	return 0;
-}
-
-int
-tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms)
-{
-	int rc;
-	uint32_t idx;
-	struct tf_rm_allocate_parms aparms = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Allocate requested element */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = &idx;
-	rc = tf_rm_allocate(&aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed allocate, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return rc;
-	}
-
-	*parms->idx = idx;
-
-	return 0;
-}
-
-int
-tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms)
-{
-	int rc;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_free_parms fparms = { 0 };
-	int allocated = 0;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Check if element is in use */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Entry already free, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	/* Free requested element */
-	fparms.rm_db = tbl_db[parms->dir];
-	fparms.db_index = parms->type;
-	fparms.index = parms->idx;
-	rc = tf_rm_free(&fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Free failed, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_alloc_search(struct tf *tfp __rte_unused,
-		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
-{
-	return 0;
-}
-
-int
-tf_tbl_set(struct tf *tfp,
-	   struct tf_tbl_set_parms *parms)
-{
-	int rc;
-	int allocated = 0;
-	uint16_t hcapi_type;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_get(struct tf *tfp,
-	   struct tf_tbl_get_parms *parms)
-{
-	int rc;
-	uint16_t hcapi_type;
-	int allocated = 0;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
deleted file mode 100644
index 3474489a6..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ /dev/null
@@ -1,318 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_TBL_TYPE_H_
-#define TF_TBL_TYPE_H_
-
-#include "tf_core.h"
-
-struct tf;
-
-/**
- * The Table module provides processing of Internal TF table types.
- */
-
-/**
- * Table configuration parameters
- */
-struct tf_tbl_cfg_parms {
-	/**
-	 * Number of table types in each of the configuration arrays
-	 */
-	uint16_t num_elements;
-	/**
-	 * Table Type element configuration array
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Shadow table type configuration array
-	 */
-	struct tf_shadow_tbl_cfg *shadow_cfg;
-	/**
-	 * Boolean controlling the request shadow copy.
-	 */
-	bool shadow_copy;
-	/**
-	 * Session resource allocations
-	 */
-	struct tf_session_resources *resources;
-};
-
-/**
- * Table allocation parameters
- */
-struct tf_tbl_alloc_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t *idx;
-};
-
-/**
- * Table free parameters
- */
-struct tf_tbl_free_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation type
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t idx;
-	/**
-	 * [out] Reference count after free, only valid if session has been
-	 * created with shadow_copy.
-	 */
-	uint16_t ref_cnt;
-};
-
-/**
- * Table allocate search parameters
- */
-struct tf_tbl_alloc_search_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
-	 */
-	uint32_t tbl_scope_id;
-	/**
-	 * [in] Enable search for matching entry. If the table type is
-	 * internal the shadow copy will be searched before
-	 * alloc. Session must be configured with shadow copy enabled.
-	 */
-	uint8_t search_enable;
-	/**
-	 * [in] Result data to search for (if search_enable)
-	 */
-	uint8_t *result;
-	/**
-	 * [in] Result data size in bytes (if search_enable)
-	 */
-	uint16_t result_sz_in_bytes;
-	/**
-	 * [out] If search_enable, set if matching entry found
-	 */
-	uint8_t hit;
-	/**
-	 * [out] Current ref count after allocation (if search_enable)
-	 */
-	uint16_t ref_cnt;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table set parameters
- */
-struct tf_tbl_set_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to set
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [in] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to write to
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table get parameters
- */
-struct tf_tbl_get_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to get
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [out] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to read
-	 */
-	uint32_t idx;
-};
-
-/**
- * @page tbl Table
- *
- * @ref tf_tbl_bind
- *
- * @ref tf_tbl_unbind
- *
- * @ref tf_tbl_alloc
- *
- * @ref tf_tbl_free
- *
- * @ref tf_tbl_alloc_search
- *
- * @ref tf_tbl_set
- *
- * @ref tf_tbl_get
- */
-
-/**
- * Initializes the Table module with the requested DBs. Must be
- * invoked as the first thing before any of the access functions.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table configuration parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_bind(struct tf *tfp,
-		struct tf_tbl_cfg_parms *parms);
-
-/**
- * Cleans up the private DBs and releases all the data.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_unbind(struct tf *tfp);
-
-/**
- * Allocates the requested table type from the internal RM DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table allocation parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc(struct tf *tfp,
-		 struct tf_tbl_alloc_parms *parms);
-
-/**
- * Free's the requested table type and returns it to the DB. If shadow
- * DB is enabled its searched first and if found the element refcount
- * is decremented. If refcount goes to 0 then its returned to the
- * table type DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_free(struct tf *tfp,
-		struct tf_tbl_free_parms *parms);
-
-/**
- * Supported if Shadow DB is configured. Searches the Shadow DB for
- * any matching element. If found the refcount in the shadow DB is
- * updated accordingly. If not found a new element is allocated and
- * installed into the shadow DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc_search(struct tf *tfp,
-			struct tf_tbl_alloc_search_parms *parms);
-
-/**
- * Configures the requested element by sending a firmware request which
- * then installs it into the device internal structures.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_set(struct tf *tfp,
-	       struct tf_tbl_set_parms *parms);
-
-/**
- * Retrieves the requested element by sending a firmware request to get
- * the element.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table get parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_get(struct tf *tfp,
-	       struct tf_tbl_get_parms *parms);
-
-#endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index a1761ad56..fc047f8f8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -9,7 +9,7 @@
 #include "tf_tcam.h"
 #include "tf_common.h"
 #include "tf_util.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_device.h"
 #include "tfp.h"
 #include "tf_session.h"
@@ -49,7 +49,7 @@ tf_tcam_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "TCAM already initialized\n");
+			    "TCAM DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -86,11 +86,12 @@ tf_tcam_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init)
-		return -EINVAL;
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No TCAM DBs created\n");
+		return 0;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 24/51] net/bnxt: update RM to support HCAPI only
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (22 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 23/51] net/bnxt: update table get to use new design Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 25/51] net/bnxt: remove table scope from session Ajit Khaparde
                           ` (26 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- For the EM module, only the EM records need to be allocated through
  HCAPI RM; storage control is requested to stay outside of the RM DB.
- Add TF_RM_ELEM_CFG_HCAPI_BA.
- Return an error from tf_tcam_bind when the number of reserved WC TCAM
  entries is odd.
- Remove em_pool from the session.
- Use the RM-provided start offset and size.
- HCAPI returns an entry index instead of a row index for the WC TCAM.
- Move resource type conversion to the HWRM set/free TCAM functions.

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   2 +
 drivers/net/bnxt/tf_core/tf_device_p4.h   |  54 ++++-----
 drivers/net/bnxt/tf_core/tf_em_internal.c | 131 ++++++++++++++--------
 drivers/net/bnxt/tf_core/tf_msg.c         |   6 +-
 drivers/net/bnxt/tf_core/tf_rm.c          |  81 ++++++-------
 drivers/net/bnxt/tf_core/tf_rm.h          |  14 ++-
 drivers/net/bnxt/tf_core/tf_session.h     |   5 -
 drivers/net/bnxt/tf_core/tf_tcam.c        |  21 ++++
 8 files changed, 190 insertions(+), 124 deletions(-)
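
[Editor's note, not part of the patch: a small, self-contained sketch of the index arithmetic the reworked tf_create_em_pool() in the diff below uses once the RM-provided start offset and size come into play. The numeric values and the TF_SESSION_EM_ENTRY_SIZE define are assumptions for illustration only.]

/* Illustrative sketch only -- not part of this patch. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define TF_SESSION_EM_ENTRY_SIZE 4	/* assumed value for illustration */

/* With start = 1024 and num_entries = 8 this prints 1028 then 1024;
 * the last value pushed (1024) ends up on top of the stack, so the
 * lowest record index is handed out first by stack_pop().
 */
static void em_pool_fill_sketch(uint32_t start, uint32_t num_entries)
{
	uint32_t j = start + num_entries - TF_SESSION_EM_ENTRY_SIZE;
	uint32_t i;

	for (i = 0; i < num_entries / TF_SESSION_EM_ENTRY_SIZE; i++) {
		printf("push %" PRIu32 "\n", j); /* stack_push(pool, j) in the driver */
		j -= TF_SESSION_EM_ENTRY_SIZE;
	}
}

int main(void)
{
	em_pool_fill_sketch(1024, 8);
	return 0;
}
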

diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index e3526672f..1eaf18212 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -68,6 +68,8 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
 		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
 			return -ENOTSUP;
+
+		*num_slices_per_row = 1;
 	} else { /* for other type of tcam */
 		*num_slices_per_row = 1;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 473e4eae5..8fae18012 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,19 +12,19 @@
 #include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
 	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_TCAM },
 	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
@@ -32,26 +32,26 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
 	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
 	/* CFA_RESOURCE_TYPE_P4_UPAR */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_EPOC */
@@ -79,7 +79,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EM_REC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
 };
 
 struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 1c514747d..3129fbe31 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -23,20 +23,28 @@
  */
 static void *em_db[TF_DIR_MAX];
 
+#define TF_EM_DB_EM_REC 0
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
 static uint8_t init;
 
+
+/**
+ * EM Pool
+ */
+static struct stack em_pool[TF_DIR_MAX];
+
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  * [in] num_entries
  *   number of entries to write
+ * [in] start
+ *   starting offset
  *
  * Return:
  *  0       - Success, entry allocated - no search support
@@ -44,54 +52,66 @@ static uint8_t init;
  *          - Failure, entry not allocated, out of resources
  */
 static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
+tf_create_em_pool(enum tf_dir dir,
+		  uint32_t num_entries,
+		  uint32_t start)
 {
 	struct tfp_calloc_parms parms;
 	uint32_t i, j;
 	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 
-	parms.nitems = num_entries;
+	/* Assumes that num_entries has been checked before we get here */
+	parms.nitems = num_entries / TF_SESSION_EM_ENTRY_SIZE;
 	parms.size = sizeof(uint32_t);
 	parms.alignment = 0;
 
 	rc = tfp_calloc(&parms);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool allocation failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+	rc = stack_init(num_entries / TF_SESSION_EM_ENTRY_SIZE,
+			(uint32_t *)parms.mem_va,
+			pool);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack init failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
 
 	/* Fill pool with indexes
 	 */
-	j = num_entries - 1;
+	j = start + num_entries - TF_SESSION_EM_ENTRY_SIZE;
 
-	for (i = 0; i < num_entries; i++) {
+	for (i = 0; i < (num_entries / TF_SESSION_EM_ENTRY_SIZE); i++) {
 		rc = stack_push(pool, j);
 		if (rc) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, EM pool stack push failure %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup;
 		}
-		j--;
+
+		j -= TF_SESSION_EM_ENTRY_SIZE;
 	}
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
@@ -105,18 +125,15 @@ tf_create_em_pool(struct tf_session *session,
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  *
  * Return:
  */
 static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
+tf_free_em_pool(enum tf_dir dir)
 {
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 	uint32_t *ptr;
 
 	ptr = stack_items(pool);
@@ -140,22 +157,19 @@ tf_em_insert_int_entry(struct tf *tfp,
 	uint16_t rptr_index = 0;
 	uint8_t rptr_entry = 0;
 	uint8_t num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 	uint32_t index;
 
 	rc = stack_pop(pool, &index);
 
 	if (rc) {
-		PMD_DRV_LOG
-		  (ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
+		PMD_DRV_LOG(ERR,
+			    "%s, EM entry index allocation failed\n",
+			    tf_dir_2_str(parms->dir));
 		return rc;
 	}
 
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rptr_index = index;
 	rc = tf_msg_insert_em_internal_entry(tfp,
 					     parms,
 					     &rptr_index,
@@ -166,8 +180,9 @@ tf_em_insert_int_entry(struct tf *tfp,
 
 	PMD_DRV_LOG
 		  (ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   "%s, Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   index,
 		   rptr_index,
 		   rptr_entry,
 		   num_of_entries);
@@ -204,15 +219,13 @@ tf_em_delete_int_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	int rc = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 
 	rc = tf_msg_delete_em_entry(tfp, parms);
 
 	/* Return resource to pool */
 	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+		stack_push(pool, parms->index);
 
 	return rc;
 }
@@ -224,8 +237,9 @@ tf_em_int_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
-	struct tf_session *session;
 	uint8_t db_exists = 0;
+	struct tf_rm_get_alloc_info_parms iparms;
+	struct tf_rm_alloc_info info;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -235,14 +249,6 @@ tf_em_int_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		tf_create_em_pool(session,
-				  i,
-				  TF_SESSION_EM_POOL_SIZE);
-	}
-
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -257,6 +263,18 @@ tf_em_int_bind(struct tf *tfp,
 		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
 			continue;
 
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] %
+		    TF_SESSION_EM_ENTRY_SIZE != 0) {
+			rc = -ENOMEM;
+			TFP_DRV_LOG(ERR,
+				    "%s, EM Allocation must be in blocks of %d, failure %s\n",
+				    tf_dir_2_str(i),
+				    TF_SESSION_EM_ENTRY_SIZE,
+				    strerror(-rc));
+
+			return rc;
+		}
+
 		db_cfg.rm_db = &em_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
@@ -272,6 +290,28 @@ tf_em_int_bind(struct tf *tfp,
 	if (db_exists)
 		init = 1;
 
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		iparms.rm_db = em_db[i];
+		iparms.db_index = TF_EM_DB_EM_REC;
+		iparms.info = &info;
+
+		rc = tf_rm_get_info(&iparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB get info failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+
+		rc = tf_create_em_pool(i,
+				       iparms.info->entry.stride,
+				       iparms.info->entry.start);
+		/* Logging handled in tf_create_em_pool */
+		if (rc)
+			return rc;
+	}
+
+
 	return 0;
 }
 
@@ -281,7 +321,6 @@ tf_em_int_unbind(struct tf *tfp)
 	int rc;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
-	struct tf_session *session;
 
 	TF_CHECK_PARMS1(tfp);
 
@@ -292,10 +331,8 @@ tf_em_int_unbind(struct tf *tfp)
 		return 0;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	for (i = 0; i < TF_DIR_MAX; i++)
-		tf_free_em_pool(session, i);
+		tf_free_em_pool(i);
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 02d8a4971..7fffb6baf 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -857,12 +857,12 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < size)
+	if (tfp_le_to_cpu_32(resp.size) != size)
 		return -EINVAL;
 
 	tfp_memcpy(data,
 		   &resp.data,
-		   resp.size);
+		   size);
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
@@ -919,7 +919,7 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < data_size)
+	if (tfp_le_to_cpu_32(resp.size) != data_size)
 		return -EINVAL;
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0469b653..e7af9eb84 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -106,7 +106,8 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 	uint16_t cnt = 0;
 
 	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		if ((cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		     cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) &&
 		    reservations[i] > 0)
 			cnt++;
 
@@ -467,7 +468,8 @@ tf_rm_create_db(struct tf *tfp,
 	/* Build the request */
 	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		    parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 			/* Only perform reservation for entries that
 			 * has been requested
 			 */
@@ -529,7 +531,8 @@ tf_rm_create_db(struct tf *tfp,
 		/* Skip any non HCAPI types as we didn't include them
 		 * in the reservation request.
 		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+		    parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 			continue;
 
 		/* If the element didn't request an allocation no need
@@ -551,29 +554,32 @@ tf_rm_create_db(struct tf *tfp,
 			       resv[j].start,
 			       resv[j].stride);
 
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
+			/* Only allocate BA pool if so requested */
+			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
+				/* Create pool */
+				pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+					     sizeof(struct bitalloc));
+				/* Alloc request, alignment already set */
+				cparms.nitems = pool_size;
+				cparms.size = sizeof(struct bitalloc);
+				rc = tfp_calloc(&cparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool alloc failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
+				db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+				rc = ba_init(db[i].pool, resv[j].stride);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool init failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
 			}
 			j++;
 		} else {
@@ -682,6 +688,9 @@ tf_rm_free_db(struct tf *tfp,
 				    tf_device_module_type_2_str(rm_db->type));
 	}
 
+	/* No need to check the configuration type; even if there is
+	 * no BA pool we simply free a null ptr, which is harmless
+	 */
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -705,8 +714,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -770,8 +778,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -816,8 +823,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -857,9 +863,9 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	memcpy(parms->info,
@@ -880,9 +886,9 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
@@ -903,8 +909,7 @@ tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail silently (no logging), if the pool is not valid there
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5cb68892a..f44fcca70 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -56,12 +56,18 @@ struct tf_rm_new_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	/** No configuration */
+	/**
+	 * No configuration
+	 */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
+	/** HCAPI 'controlled', no RM storage, so the Device Module
+	 *  using the RM can choose to handle storage locally.
+	 */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
+	/** HCAPI 'controlled', uses a Bit Allocator Pool for internal
+	 *  storage in the RM.
+	 */
+	TF_RM_ELEM_CFG_HCAPI_BA,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
 	 * is not controlled by the Host.
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index e4472ed7f..ebee4db8c 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -103,11 +103,6 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
-
-	/**
-	 * EM Pools
-	 */
-	struct stack em_pool[TF_DIR_MAX];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index fc047f8f8..d5bb4eec1 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -43,6 +43,7 @@ tf_tcam_bind(struct tf *tfp,
 {
 	int rc;
 	int i;
+	struct tf_tcam_resources *tcam_cnt;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -53,6 +54,14 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	tcam_cnt = parms->resources->tcam_cnt;
+	if ((tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2) ||
+	    (tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2)) {
+		TFP_DRV_LOG(ERR,
+			    "Number of WC TCAM entries cannot be an odd number\n");
+		return -EINVAL;
+	}
+
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -168,6 +177,18 @@ tf_tcam_alloc(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM &&
+	    (parms->idx % 2) != 0) {
+		rc = tf_rm_allocate(&aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Failed tcam, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type);
+			return rc;
+		}
+	}
+
 	parms->idx *= num_slice_per_row;
 
 	return 0;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 25/51] net/bnxt: remove table scope from session
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (23 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
                           ` (25 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Randy Schacher, Venkat Duvvuru

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Remove table scope data from the session; it now lives in the EEM code
  (a simplified sketch of the new lookup follows below).
- Complete the move of the table scope base and range to RM.
- Fix some error message strings.
- Fix the TCAM logging messages.
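
A minimal, self-contained sketch of the new lookup pattern follows. The
names are simplified stand-ins and not the driver code; the real
tbl_scope_cb_find() additionally validates the id against the RM DB via
tf_rm_is_allocated() before scanning the array.

/*
 * Sketch only: table scope control blocks kept in a module-level array
 * instead of the session, looked up by table scope id alone.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_TBL_SCOPE      16
#define TBL_SCOPE_INVALID  0xffffffffu

struct tbl_scope_cb {
	uint32_t tbl_scope_id;	/* TBL_SCOPE_INVALID when the slot is unused */
};

/* Module-level array, replacing the field formerly embedded in the session */
static struct tbl_scope_cb tbl_scopes[NUM_TBL_SCOPE];

static struct tbl_scope_cb *
tbl_scope_cb_find(uint32_t tbl_scope_id)
{
	int i;

	for (i = 0; i < NUM_TBL_SCOPE; i++)
		if (tbl_scopes[i].tbl_scope_id == tbl_scope_id)
			return &tbl_scopes[i];

	return NULL;	/* unknown or already freed table scope */
}

int main(void)
{
	int i;

	for (i = 0; i < NUM_TBL_SCOPE; i++)
		tbl_scopes[i].tbl_scope_id = TBL_SCOPE_INVALID;

	tbl_scopes[3].tbl_scope_id = 3;	/* pretend scope 3 was allocated */
	printf("scope 3: %s\n", tbl_scope_cb_find(3) ? "found" : "missing");
	printf("scope 7: %s\n", tbl_scope_cb_find(7) ? "found" : "missing");
	return 0;
}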

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c      |  2 +-
 drivers/net/bnxt/tf_core/tf_em.h        |  1 -
 drivers/net/bnxt/tf_core/tf_em_common.c | 16 +++++++----
 drivers/net/bnxt/tf_core/tf_em_common.h |  5 +---
 drivers/net/bnxt/tf_core/tf_em_host.c   | 38 ++++++++++---------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 12 +++-----
 drivers/net/bnxt/tf_core/tf_session.h   |  3 --
 drivers/net/bnxt/tf_core/tf_tcam.c      |  6 ++--
 8 files changed, 35 insertions(+), 48 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8727900c4..6410843f6 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -573,7 +573,7 @@ tf_free_tcam_entry(struct tf *tfp,
 	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: TCAM allocation failed, rc:%s\n",
+			    "%s: TCAM free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 6bfcbd59e..617b07587 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,7 +9,6 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
-#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index d0d80daeb..e31a63b46 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,6 +29,8 @@
  */
 void *eem_db[TF_DIR_MAX];
 
+#define TF_EEM_DB_TBL_SCOPE 1
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -39,10 +41,12 @@ static uint8_t init;
  */
 static enum tf_mem_type mem_type;
 
+/** Table scope array */
+struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
+tbl_scope_cb_find(uint32_t tbl_scope_id)
 {
 	int i;
 	struct tf_rm_is_allocated_parms parms;
@@ -50,8 +54,8 @@ tbl_scope_cb_find(struct tf_session *session,
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
@@ -60,8 +64,8 @@ tbl_scope_cb_find(struct tf_session *session,
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
+		if (tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &tbl_scopes[i];
 	}
 
 	return NULL;
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index 45699a7c3..bf01df9b8 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -14,8 +14,6 @@
  * Function to search for table scope control block structure
  * with specified table scope ID.
  *
- * [in] session
- *   Session to use for the search of the table scope control block
  * [in] tbl_scope_id
  *   Table scope ID to search for
  *
@@ -23,8 +21,7 @@
  *  Pointer to the found table scope control block struct or NULL if
  *   table scope control block struct not found
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+struct tf_tbl_scope_cb *tbl_scope_cb_find(uint32_t tbl_scope_id);
 
 /**
  * Create and initialize a stack to use for action entries
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 8be39afdd..543edb54a 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,6 +48,9 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
+#define TF_EEM_DB_TBL_SCOPE 1
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
 /**
  * Function to free a page table
@@ -934,14 +937,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_entry(struct tf *tfp,
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb =
-	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
-			  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -957,14 +958,12 @@ tf_em_insert_ext_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_entry(struct tf *tfp,
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb =
-	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
-			  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -981,16 +980,13 @@ tf_em_ext_host_alloc(struct tf *tfp,
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct hcapi_cfa_em_table *em_tables;
-	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 	struct tf_rm_allocate_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -999,8 +995,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 		return rc;
 	}
 
-	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
-	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
 	tbl_scope_cb->index = parms->tbl_scope_id;
 	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
 
@@ -1092,8 +1087,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
 }
@@ -1105,13 +1100,9 @@ tf_em_ext_host_free(struct tf *tfp,
 	int rc = 0;
 	enum tf_dir  dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
 	struct tf_rm_free_parms aparms = { 0 };
 
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Table scope error\n");
@@ -1120,8 +1111,8 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1142,5 +1133,6 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index ee18a0c70..6dd115470 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -63,14 +63,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_sys_entry(struct tf *tfp,
+tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -87,14 +85,12 @@ tf_em_insert_ext_sys_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_sys_entry(struct tf *tfp,
+tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index ebee4db8c..a303fde51 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -100,9 +100,6 @@ struct tf_session {
 
 	/** Device handle */
 	struct tf_dev_info dev;
-
-	/** Table scope array */
-	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index d5bb4eec1..b67159a54 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -287,7 +287,8 @@ tf_tcam_free(struct tf *tfp,
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
@@ -382,7 +383,8 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d set failed, rc:%s",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 26/51] net/bnxt: add external action alloc and free
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (24 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 25/51] net/bnxt: remove table scope from session Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
                           ` (24 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Link external action alloc and free to the new HCAPI interface
  (see the dispatch sketch below)
- Add parameter range checking
- Fix issues with the index allocation check
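
A minimal, self-contained sketch of the dispatch pattern follows. The
names are simplified stand-ins for the tf_dev_alloc_tbl/tf_dev_alloc_ext_tbl
device ops and are not the actual implementation.

/*
 * Sketch only: external table types use a new, optional device op while
 * other types keep the existing op; a missing op is reported as
 * unsupported, mirroring the -EOPNOTSUPP checks in tf_alloc_tbl_entry().
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

enum tbl_type { TBL_TYPE_FULL_ACT, TBL_TYPE_EXT };

struct dev_ops {
	int (*alloc_tbl)(uint32_t *idx);
	int (*alloc_ext_tbl)(uint32_t *idx);	/* may be NULL for some devices */
};

static int alloc_internal(uint32_t *idx) { *idx = 1; return 0; }
static int alloc_external(uint32_t *idx) { *idx = 100; return 0; }

static int
alloc_tbl_entry(const struct dev_ops *ops, enum tbl_type type, uint32_t *idx)
{
	int (*fn)(uint32_t *) =
		(type == TBL_TYPE_EXT) ? ops->alloc_ext_tbl : ops->alloc_tbl;

	if (fn == NULL)
		return -EOPNOTSUPP;	/* device does not implement this path */

	return fn(idx);
}

int main(void)
{
	struct dev_ops dev = {
		.alloc_tbl = alloc_internal,
		.alloc_ext_tbl = alloc_external,
	};
	uint32_t idx;

	if (alloc_tbl_entry(&dev, TBL_TYPE_EXT, &idx) == 0)
		printf("external entry idx %u\n", (unsigned int)idx);
	return 0;
}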

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c       | 163 ++++++++++++++++-------
 drivers/net/bnxt/tf_core/tf_core.h       |   4 -
 drivers/net/bnxt/tf_core/tf_device.h     |  58 ++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   6 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   2 -
 drivers/net/bnxt/tf_core/tf_em.h         |  95 +++++++++++++
 drivers/net/bnxt/tf_core/tf_em_common.c  | 120 ++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  80 ++++++++++-
 drivers/net/bnxt/tf_core/tf_em_system.c  |   6 +
 drivers/net/bnxt/tf_core/tf_identifier.c |   4 +-
 drivers/net/bnxt/tf_core/tf_rm.h         |   5 +
 drivers/net/bnxt/tf_core/tf_tbl.c        |  10 +-
 drivers/net/bnxt/tf_core/tf_tbl.h        |  12 ++
 drivers/net/bnxt/tf_core/tf_tcam.c       |   8 +-
 drivers/net/bnxt/tf_core/tf_util.c       |   4 -
 15 files changed, 499 insertions(+), 78 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6410843f6..45accb0ab 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -617,25 +617,48 @@ tf_alloc_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_alloc_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	aparms.dir = parms->dir;
 	aparms.type = parms->type;
 	aparms.idx = &idx;
-	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table allocation failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	aparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_alloc_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_ext_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: External table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+
+	} else {
+		if (dev->ops->tf_dev_alloc_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	parms->idx = idx;
@@ -677,25 +700,47 @@ tf_free_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_free_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	fparms.dir = parms->dir;
 	fparms.type = parms->type;
 	fparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table free failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	fparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_free_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_ext_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_free_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return 0;
@@ -735,27 +780,49 @@ tf_set_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_set_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	sparms.dir = parms->dir;
 	sparms.type = parms->type;
 	sparms.data = parms->data;
 	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
 	sparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table set failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	sparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_set_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_ext_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_set_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index a7a7bd38a..e898f19a0 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -211,10 +211,6 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
 	/** Wh+/SR Action _Modify L4 Dest Port */
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
 	/** Meter Profiles */
 	TF_TBL_TYPE_METER_PROF,
 	/** Meter Instance */
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 93f3627d4..58b7a4ab2 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -216,6 +216,26 @@ struct tf_dev_ops {
 	int (*tf_dev_alloc_tbl)(struct tf *tfp,
 				struct tf_tbl_alloc_parms *parms);
 
+	/**
+	 * Allocation of an external table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ext_tbl)(struct tf *tfp,
+				    struct tf_tbl_alloc_parms *parms);
+
 	/**
 	 * Free of a table type element.
 	 *
@@ -235,6 +255,25 @@ struct tf_dev_ops {
 	int (*tf_dev_free_tbl)(struct tf *tfp,
 			       struct tf_tbl_free_parms *parms);
 
+	/**
+	 * Free of an external table type element.
+	 *
+	 * This API frees a previously allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ext_tbl)(struct tf *tfp,
+				   struct tf_tbl_free_parms *parms);
+
 	/**
 	 * Searches for the specified table type element in a shadow DB.
 	 *
@@ -276,6 +315,25 @@ struct tf_dev_ops {
 	int (*tf_dev_set_tbl)(struct tf *tfp,
 			      struct tf_tbl_set_parms *parms);
 
+	/**
+	 * Sets the specified external table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_ext_tbl)(struct tf *tfp,
+				  struct tf_tbl_set_parms *parms);
+
 	/**
 	 * Retrieves the specified table type element.
 	 *
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 1eaf18212..9a3230787 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -85,10 +85,13 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = NULL,
 	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_ext_tbl = NULL,
 	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_ext_tbl = NULL,
 	.tf_dev_free_tbl = NULL,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
+	.tf_dev_set_ext_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
 	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
@@ -113,9 +116,12 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_alloc_ext_tbl = tf_tbl_ext_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 8fae18012..298e100f3 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -47,8 +47,6 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 617b07587..39a216341 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -456,4 +456,99 @@ int tf_em_ext_common_free(struct tf *tfp,
  */
 int tf_em_ext_common_alloc(struct tf *tfp,
 			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_system_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
+
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index e31a63b46..39a8412b3 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,8 +29,6 @@
  */
 void *eem_db[TF_DIR_MAX];
 
-#define TF_EEM_DB_TBL_SCOPE 1
-
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -54,13 +52,13 @@ tbl_scope_cb_find(uint32_t tbl_scope_id)
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
 
-	if (i < 0 || !allocated)
+	if (i < 0 || allocated != TF_RM_ALLOCATED_ENTRY_IN_USE)
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
@@ -158,6 +156,111 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type);
+		return rc;
+	}
+
+	*parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms)
+{
+	int rc = 0;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
 uint32_t
 tf_em_get_key_mask(int num_entries)
 {
@@ -273,6 +376,15 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_tbl_ext_host_set(tfp, parms);
+	else
+		return tf_tbl_ext_system_set(tfp, parms);
+}
+
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 543edb54a..d7c147a15 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,7 +48,6 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
-#define TF_EEM_DB_TBL_SCOPE 1
 
 extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
@@ -986,7 +985,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -1087,7 +1086,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
@@ -1111,7 +1110,7 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
@@ -1133,6 +1132,77 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
-	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
+	return rc;
+}
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 6dd115470..10768df03 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -112,3 +112,9 @@ tf_em_ext_system_free(struct tf *tfp __rte_unused,
 {
 	return 0;
 }
+
+int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
+			  struct tf_tbl_set_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 211371081..219839272 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -159,13 +159,13 @@ tf_ident_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->id);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index f44fcca70..fd044801f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -12,6 +12,11 @@
 
 struct tf;
 
+/** RM return codes */
+#define TF_RM_ALLOCATED_ENTRY_FREE        0
+#define TF_RM_ALLOCATED_ENTRY_IN_USE      1
+#define TF_RM_ALLOCATED_NO_ENTRY_FOUND   -1
+
 /**
  * The Resource Manager (RM) module provides basic DB handling for
  * internal resources. These resources exists within the actual device
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 05e866dc6..3a3277329 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -172,13 +172,13 @@ tf_tbl_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -233,7 +233,7 @@ tf_tbl_set(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -301,7 +301,7 @@ tf_tbl_get(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -374,7 +374,7 @@ tf_tbl_bulk_get(struct tf *tfp,
 		if (rc)
 			return rc;
 
-		if (!allocated) {
+		if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 			TFP_DRV_LOG(ERR,
 				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index eb560ffa7..2a10b47ce 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -83,6 +83,10 @@ struct tf_tbl_alloc_parms {
 	 * [in] Type of the allocation
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
@@ -101,6 +105,10 @@ struct tf_tbl_free_parms {
 	 * [in] Type of the allocation type
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [in] Index to free
 	 */
@@ -168,6 +176,10 @@ struct tf_tbl_set_parms {
 	 * [in] Type of object to set
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [in] Entry data
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b67159a54..b1092cd9d 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -252,13 +252,13 @@ tf_tcam_free(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -362,13 +362,13 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry is not allocated, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Convert TF type to HCAPI RM type */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 5472a9aac..85f6e25f4 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -92,10 +92,6 @@ tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 		return "NAT IPv4 Source";
 	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
 		return "NAT IPv4 Destination";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-		return "NAT IPv6 Source";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-		return "NAT IPv6 Destination";
 	case TF_TBL_TYPE_METER_PROF:
 		return "Meter Profile";
 	case TF_TBL_TYPE_METER_INST:
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 27/51] net/bnxt: align CFA resources with RM
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (25 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
                           ` (23 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher, Venkat Duvvuru

From: Randy Schacher <stuart.schacher@broadcom.com>

- Align HCAPI resource types with the Resource Manager
- Clean up unnecessary debug messages

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 250 +++++++++---------
 drivers/net/bnxt/tf_core/tf_identifier.c      |   3 +-
 drivers/net/bnxt/tf_core/tf_msg.c             |  37 ++-
 drivers/net/bnxt/tf_core/tf_rm.c              |  21 +-
 drivers/net/bnxt/tf_core/tf_tbl.c             |   3 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |  28 +-
 6 files changed, 197 insertions(+), 145 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 6e79facec..6d6651fde 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -48,232 +48,246 @@
 #define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
 #define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
-/* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
+/* EPOCH 0 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH0          0xfUL
+/* EPOCH 1 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH1          0x10UL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x11UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x12UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x13UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x14UL
 /* Link Aggrigation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x15UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x16UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P58_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6         0x6UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT           0x7UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT           0x8UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4          0x9UL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
-/* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
-/* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4          0xaUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER               0xbUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE          0xcUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION         0xdUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION     0xeUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P58_EXT_FORMAT_0_ACTION 0xfUL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_1_ACTION     0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION     0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION     0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION     0x13UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_5_ACTION     0x14UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_6_ACTION     0x15UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM        0x16UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP       0x17UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC           0x18UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM           0x19UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID     0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P58_EM_REC              0x1cUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM             0x1dUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF          0x1eUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR              0x1fUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM             0x20UL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB              0x21UL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB              0x22UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
-#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM            0x23UL
+#define CFA_RESOURCE_TYPE_P58_LAST               CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P45_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P45_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P45_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM             0x21UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM            0x22UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE           0x23UL
+#define CFA_RESOURCE_TYPE_P45_LAST               CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P4_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P4_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P4_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM             0x21UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE           0x22UL
+#define CFA_RESOURCE_TYPE_P4_LAST               CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 219839272..2cc43b40f 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -59,7 +59,8 @@ tf_ident_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Identifier - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Identifier - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 7fffb6baf..659065de3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,6 +18,9 @@
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
+/* Logging defines */
+#define TF_RM_MSG_DEBUG  0
+
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -215,7 +218,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -225,31 +228,39 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nQCAPS\n");
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("type: %d(0x%x) %d %d\n",
 		       query[i].type,
 		       query[i].type,
 		       query[i].min,
 		       query[i].max);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	}
 
 	*resv_strategy = resp.flags &
 	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
 
+cleanup:
 	tf_msg_free_dma_buf(&qcaps_buf);
 
 	return rc;
@@ -293,8 +304,10 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	dma_size = size * sizeof(struct tf_rm_resc_entry);
 	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
-	if (rc)
+	if (rc) {
+		tf_msg_free_dma_buf(&req_buf);
 		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
@@ -320,7 +333,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -330,11 +343,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nRESV\n");
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
@@ -343,14 +359,17 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
 		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
 		       resv[i].type,
 		       resv[i].type,
 		       resv[i].start,
 		       resv[i].stride);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	}
 
+cleanup:
 	tf_msg_free_dma_buf(&req_buf);
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -412,8 +431,6 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	parms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp, &parms);
-	if (rc)
-		return rc;
 
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -434,7 +451,7 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
-	uint32_t flags;
+	uint16_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
@@ -480,7 +497,7 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
+	uint16_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
@@ -726,8 +743,6 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
-	if (rc)
-		goto cleanup;
 
 cleanup:
 	tf_msg_free_dma_buf(&buf);
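
The QCAPS/alloc error paths above now funnel through a single cleanup
label so the DMA buffers are always released. A minimal sketch of that
pattern, with stand-ins for tf_msg_alloc_dma_buf(), tf_msg_free_dma_buf()
and tfp_send_msg_direct():

#include <stdlib.h>

struct dma_buf {
	void *va_addr;
};

/* Stand-ins for the driver's DMA helpers (illustrative only). */
static int alloc_buf(struct dma_buf *b, size_t sz)
{
	b->va_addr = malloc(sz);
	return b->va_addr ? 0 : -1;
}

static void free_buf(struct dma_buf *b)
{
	free(b->va_addr);
	b->va_addr = NULL;
}

static int send_msg(struct dma_buf *b)
{
	(void)b;
	return 0;	/* pretend the firmware call succeeded */
}

static int
query_with_dma_buf(size_t sz)
{
	struct dma_buf buf = { 0 };
	int rc;

	rc = alloc_buf(&buf, sz);
	if (rc)
		return rc;	/* nothing to release yet */

	rc = send_msg(&buf);
	if (rc)
		goto cleanup;	/* was a bare return before this patch */

	/* ... post-process the response ... */

cleanup:
	free_buf(&buf);		/* single release point on every path */
	return rc;
}

int
main(void)
{
	return query_with_dma_buf(64);
}
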
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e7af9eb84..30313e2ea 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -17,6 +17,9 @@
 #include "tfp.h"
 #include "tf_msg.h"
 
+/* Logging defines */
+#define TF_RM_DEBUG  0
+
 /**
  * Generic RM Element data type that an RM DB is build upon.
  */
@@ -120,16 +123,11 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
 		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
+				"%s, %s, %s allocation of %d not supported\n",
 				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-
+				tf_device_module_type_subtype_2_str(type, i),
+				reservations[i]);
 		}
 	}
 
@@ -549,11 +547,6 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
 			/* Only allocate BA pool if so requested */
 			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 				/* Create pool */
@@ -603,10 +596,12 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+#if (TF_RM_DEBUG == 1)
 	printf("%s: type:%d num_entries:%d\n",
 	       tf_dir_2_str(parms->dir),
 	       parms->type,
 	       i);
+#endif /* (TF_RM_DEBUG == 1) */
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 3a3277329..7d4daaf2d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -74,7 +74,8 @@ tf_tbl_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Table Type - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Table Type - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b1092cd9d..1c48b5363 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -81,7 +81,8 @@ tf_tcam_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("TCAM - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "TCAM - initialized\n");
 
 	return 0;
 }
@@ -275,6 +276,31 @@ tf_tcam_free(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		int i;
+
+		for (i = -1; i < 3; i += 3) {
+			aparms.index += i;
+			rc = tf_rm_is_allocated(&aparms);
+			if (rc)
+				return rc;
+
+			if (allocated == TF_RM_ALLOCATED_ENTRY_IN_USE) {
+				/* Free requested element */
+				fparms.index = aparms.index;
+				rc = tf_rm_free(&fparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+						    "%s: Free failed, type:%d, index:%d\n",
+						    tf_dir_2_str(parms->dir),
+						    parms->type,
+						    fparms.index);
+					return rc;
+				}
+			}
+		}
+	}
+
 	/* Convert TF type to HCAPI RM type */
 	hparms.rm_db = tcam_db[parms->dir];
 	hparms.db_index = parms->type;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 28/51] net/bnxt: implement IF tables set and get
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (26 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
                           ` (22 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Implement set/get for PROF_SPIF_CTXT, LKUP_PF_DFLT_ARP and
  PROF_PF_ERR_ARP with tunneled HWRM messages (a usage sketch follows
  below)
- Add an IF table for PROF_PARIF_DFLT_ARP
- Fix the page size offset in the HCAPI code
- Fix the entry offset calculation
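
A minimal usage sketch for the tf_set_if_tbl_entry() API added to
tf_core.h in this patch, assuming an already opened TruFlow session
handle tfp; the PARIF index and action record pointer value are
placeholders:

#include <stdint.h>

#include "tf_core.h"

/* Program a default action record pointer for one PARIF (sketch only). */
static int
set_parif_dflt_act_rec(struct tf *tfp)
{
	struct tf_set_if_tbl_entry_parms parms = { 0 };
	uint32_t act_rec_ptr = 0;	/* placeholder action record pointer */

	parms.dir = TF_DIR_RX;
	parms.type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR;
	parms.idx = 0;			/* placeholder PARIF index */
	parms.data = &act_rec_ptr;
	parms.data_sz_in_bytes = sizeof(act_rec_ptr);

	return tf_set_if_tbl_entry(tfp, &parms);
}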

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h     |  53 +++++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h  |  12 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c    |   8 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h    |  18 +-
 drivers/net/bnxt/meson.build             |   2 +-
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  63 +++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 116 +++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       | 104 ++++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  21 ++
 drivers/net/bnxt/tf_core/tf_device.h     |  39 ++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   5 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |  10 +
 drivers/net/bnxt/tf_core/tf_em_common.c  |   5 +-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  12 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |   3 +-
 drivers/net/bnxt/tf_core/tf_if_tbl.c     | 178 +++++++++++++++++
 drivers/net/bnxt/tf_core/tf_if_tbl.h     | 236 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 186 +++++++++++++++---
 drivers/net/bnxt/tf_core/tf_msg.h        |  30 +++
 drivers/net/bnxt/tf_core/tf_session.c    |  14 +-
 21 files changed, 1060 insertions(+), 57 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
index c30e4f49c..3243b3f2b 100644
--- a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -127,6 +127,11 @@ const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
 	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS},
 };
 
 const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
@@ -247,4 +252,52 @@ const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
 	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
 
 };
+
+const struct hcapi_cfa_field cfa_p40_mirror_tbl_layout[] = {
+	{CFA_P40_MIRROR_TBL_SP_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS,
+	 CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_COPY_BITPOS,
+	 CFA_P40_MIRROR_TBL_COPY_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_EN_BITPOS,
+	 CFA_P40_MIRROR_TBL_EN_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_AR_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS},
+};
+
+/* P45 Defines */
+
+const struct hcapi_cfa_field cfa_p45_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
 #endif /* _CFA_P40_TBL_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
index ea8d99d01..53a284887 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -35,10 +35,6 @@
 
 #define CFA_GLOBAL_CFG_DATA_SZ (100)
 
-#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
-#define SUPPORT_CFA_HW_ALL (1)
-#endif
-
 #include "hcapi_cfa_p4.h"
 #define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
 #define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
@@ -121,6 +117,8 @@ struct hcapi_cfa_layout {
 	const struct hcapi_cfa_field *field_array;
 	/** [out] number of HW field entries in the HW layout field array */
 	uint32_t array_sz;
+	/** [out] layout_id - layout id associated with the layout */
+	uint16_t layout_id;
 };
 
 /**
@@ -247,6 +245,8 @@ struct hcapi_cfa_key_tbl {
 	 *  applicable for newer chip
 	 */
 	uint8_t *base1;
+	/** [in] Page size for EEM tables */
+	uint32_t page_size;
 };
 
 /**
@@ -267,7 +267,7 @@ struct hcapi_cfa_key_obj {
 struct hcapi_cfa_key_data {
 	/** [in] For on-chip key table, it is the offset in unit of smallest
 	 *  key. For off-chip key table, it is the byte offset relative
-	 *  to the key record memory base.
+	 *  to the key record memory base and adjusted for page and entry size.
 	 */
 	uint32_t offset;
 	/** [in] HW key data buffer pointer */
@@ -668,5 +668,5 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 			struct hcapi_cfa_key_loc *key_loc);
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset);
+			      uint32_t page);
 #endif /* HCAPI_CFA_DEFS_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index 42b37da0f..a01bbdbbb 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -13,7 +13,6 @@
 #include "hcapi_cfa_defs.h"
 
 #define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
-#define TF_EM_PAGE_SIZE (1 << 21)
 uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
 uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
 bool hcapi_cfa_lkup_init;
@@ -199,10 +198,9 @@ static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
 
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset)
+			      uint32_t page)
 {
 	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
 	uint64_t addr;
 
 	if (mem == NULL)
@@ -362,7 +360,9 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 	op->hw.base_addr =
 		hcapi_get_table_page((struct hcapi_cfa_em_table *)
 				     key_tbl->base0,
-				     key_obj->offset);
+				     key_obj->offset / key_tbl->page_size);
+	/* Offset is adjusted to be the offset into the page */
+	key_obj->offset = key_obj->offset % key_tbl->page_size;
 
 	if (op->hw.base_addr == 0)
 		return -1;
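
Before this change the callers took the byte offset modulo the page size
and hcapi_get_table_page() divided by a hard-coded TF_EM_PAGE_SIZE; now
the caller passes the page size, and the page number and in-page offset
are derived from the byte offset as below. A minimal worked example,
assuming the 2 MiB page size removed above and a 64-byte key record
(the TF_EM_KEY_RECORD_SIZE value is an assumption here):

#include <stdint.h>
#include <stdio.h>

#define EM_PAGE_SIZE		(1u << 21)	/* mirrors TF_EM_PAGE_SIZE */
#define EM_KEY_RECORD_SIZE	64u		/* assumed record size */

int
main(void)
{
	uint32_t index = 40000;				/* EM entry index */
	uint32_t offset = index * EM_KEY_RECORD_SIZE;	/* byte offset */
	uint32_t page = offset / EM_PAGE_SIZE;		/* backing page */
	uint32_t in_page = offset % EM_PAGE_SIZE;	/* offset within page */

	/* 40000 * 64 = 2560000 bytes -> page 1, offset 462848 (0x71000) */
	printf("index %u -> page %u, offset 0x%x\n", index, page, in_page);
	return 0;
}
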
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
index 0661d6363..c6113707f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -21,6 +21,10 @@ enum cfa_p4_tbl_id {
 	CFA_P4_TBL_WC_TCAM_REMAP,
 	CFA_P4_TBL_VEB_TCAM,
 	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT,
+	CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR,
+	CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR,
+	CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR,
 	CFA_P4_TBL_MAX
 };
 
@@ -333,17 +337,29 @@ enum cfa_p4_action_sram_entry_type {
 	 */
 
 	/** SRAM Action Record */
-	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FULL_ACTION,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_0_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_1_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_2_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_3_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_4_ACTION,
+
 	/** SRAM Action Encap 8 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
 	/** SRAM Action Encap 16 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
 	/** SRAM Action Encap 64 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_SRC,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_DEST,
+
 	/** SRAM Action Modify IPv4 Source */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
 	/** SRAM Action Modify IPv4 Destination */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+
 	/** SRAM Action Source Properties SMAC */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
 	/** SRAM Action Source Properties SMAC IPv4 */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 7f3ec6204..f25a9448d 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,7 +43,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_shadow_tcam.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm.c',
+	'tf_core/tf_if_tbl.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 9ba60e1c2..1924bef02 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -25,6 +25,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
@@ -33,3 +34,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 26836e488..32f152314 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -16,7 +16,9 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
+	HWRM_TFT_IF_TBL_SET = 827,
+	HWRM_TFT_IF_TBL_GET = 828,
+	TF_SUBTYPE_LAST = HWRM_TFT_IF_TBL_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -46,7 +48,17 @@ typedef enum tf_subtype {
 /* WC DMA Address Type */
 #define TF_DEV_DATA_TYPE_TF_WC_DMA_ADDR			0x30d0UL
 /* WC Entry */
-#define TF_DEV_DATA_TYPE_TF_WC_ENTRY			0x30d1UL
+#define TF_DEV_DATA_TYPE_TF_WC_ENTRY				0x30d1UL
+/* SPIF DFLT L2 CTXT Entry */
+#define TF_DEV_DATA_TYPE_SPIF_DFLT_L2_CTXT		  0x3131UL
+/* PARIF DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_DFLT_ACT_REC		0x3132UL
+/* PARIF ERR DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_ERR_DFLT_ACT_REC	 0x3133UL
+/* ILT Entry */
+#define TF_DEV_DATA_TYPE_ILT				0x3134UL
+/* VNIC SVIF entry */
+#define TF_DEV_DATA_TYPE_VNIC_SVIF			0x3135UL
 /* Action Data */
 #define TF_DEV_DATA_TYPE_TF_ACTION_DATA			0x3170UL
 #define TF_DEV_DATA_TYPE_LAST   TF_DEV_DATA_TYPE_TF_ACTION_DATA
@@ -56,6 +68,9 @@ typedef enum tf_subtype {
 
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
+struct tf_if_tbl_set_input;
+struct tf_if_tbl_get_input;
+struct tf_if_tbl_get_output;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
@@ -85,4 +100,48 @@ typedef struct tf_tbl_type_bulk_get_output {
 	uint16_t			 size;
 } tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
+/* Input params for if tbl set */
+typedef struct tf_if_tbl_set_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* index of table entry */
+	uint16_t			 idx;
+	/* size of the data write to table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* data to write into table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_set_input_t, *ptf_if_tbl_set_input_t;
+
+/* Input params for if tbl get */
+typedef struct tf_if_tbl_get_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* size of the data get from table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* index of table entry */
+	uint16_t			 idx;
+} tf_if_tbl_get_input_t, *ptf_if_tbl_get_input_t;
+
+/* output params for if tbl get */
+typedef struct tf_if_tbl_get_output {
+	/* Value read from table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_get_output_t, *ptf_if_tbl_get_output_t;
+
 #endif /* _HWRM_TF_H_ */
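
A minimal sketch of how the tunneled set request defined above might be
populated before being sent with the HWRM_TFT_IF_TBL_SET subtype; the
values are placeholders, and the byte-order conversions the driver
performs with tfp_cpu_to_le_*() are omitted:

#include <stdint.h>
#include <string.h>

#include "hwrm_tf.h"

/* Fill in a tf_if_tbl_set_input request (sketch; values are placeholders). */
static void
build_if_tbl_set_req(tf_if_tbl_set_input_t *req,
		     uint32_t fw_session_id,
		     uint16_t hcapi_type,
		     uint16_t idx,
		     const uint32_t *data,
		     uint32_t data_sz_in_bytes)
{
	memset(req, 0, sizeof(*req));
	req->fw_session_id = fw_session_id;
	req->flags = TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX;
	req->tf_if_tbl_type = hcapi_type;
	req->idx = idx;
	req->data_sz_in_bytes = data_sz_in_bytes;	/* <= sizeof(req->data) */
	memcpy(req->data, data, data_sz_in_bytes);
}
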
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 45accb0ab..a980a2056 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -1039,3 +1039,119 @@ tf_free_tbl_scope(struct tf *tfp,
 
 	return rc;
 }
+
+int
+tf_set_if_tbl_entry(struct tf *tfp,
+		    struct tf_set_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_set_parms sparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.idx = parms->idx;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_set_if_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_get_if_tbl_entry(struct tf *tfp,
+		    struct tf_get_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_get_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.idx = parms->idx;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_get_if_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e898f19a0..e3d46bd45 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1556,4 +1556,108 @@ int tf_delete_em_entry(struct tf *tfp,
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
+/**
+ * @page if_tbl Interface Table Access
+ *
+ * @ref tf_set_if_tbl_entry
+ *
+ * @ref tf_get_if_tbl_entry
+ *
+ * @ref tf_restore_if_tbl_entry
+ */
+/**
+ * Enumeration of TruFlow interface table types.
+ */
+enum tf_if_tbl_type {
+	/** Default Profile L2 Context Entry */
+	TF_IF_TBL_TYPE_PROF_SPIF_DFLT_L2_CTXT,
+	/** Default Profile TCAM/Lookup Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	/** Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	/** Default Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_LKUP_PARIF_DFLT_ACT_REC_PTR,
+	/** SR2 Ingress lookup table */
+	TF_IF_TBL_TYPE_ILT,
+	/** SR2 VNIC/SVIF Table */
+	TF_IF_TBL_TYPE_VNIC_SVIF,
+	TF_IF_TBL_TYPE_MAX
+};
+
+/**
+ * tf_set_if_tbl_entry parameter definition
+ */
+struct tf_set_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Interface to write
+	 */
+	uint32_t idx;
+};
+
+/**
+ * set interface table entry
+ *
+ * Used to set an interface table. This API is used for managing tables indexed
+ * by SVIF/SPIF/PARIF interfaces. In current implementation only the value is
+ * set.
+ * Returns success or failure code.
+ */
+int tf_set_if_tbl_entry(struct tf *tfp,
+			struct tf_set_if_tbl_entry_parms *parms);
+
+/**
+ * tf_get_if_tbl_entry parameter definition
+ */
+struct tf_get_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of table to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * get interface table entry
+ *
+ * Used to retrieve an interface table entry.
+ *
+ * Reads the interface table entry value
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_if_tbl_entry(struct tf *tfp,
+			struct tf_get_if_tbl_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 20b0c5948..a3073c826 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -44,6 +44,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
+	struct tf_if_tbl_cfg_parms if_tbl_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -114,6 +115,19 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * IF_TBL
+	 */
+	if_tbl_cfg.num_elements = TF_IF_TBL_TYPE_MAX;
+	if_tbl_cfg.cfg = tf_if_tbl_p4;
+	if_tbl_cfg.shadow_copy = shadow_copy;
+	rc = tf_if_tbl_bind(tfp, &if_tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "IF Table initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -186,6 +200,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_if_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, IF Table Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 58b7a4ab2..5a0943ad7 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl.h"
 #include "tf_tcam.h"
+#include "tf_if_tbl.h"
 
 struct tf;
 struct tf_session;
@@ -567,6 +568,44 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
 				     struct tf_free_tbl_scope_parms *parms);
+
+	/**
+	 * Sets the specified interface table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to interface table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_set_parms *parms);
+
+	/**
+	 * Retrieves the specified interface table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_get_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9a3230787..2dc34b853 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
+#include "tf_if_tbl.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -105,6 +106,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_delete_ext_em_entry = NULL,
 	.tf_dev_alloc_tbl_scope = NULL,
 	.tf_dev_free_tbl_scope = NULL,
+	.tf_dev_set_if_tbl = NULL,
+	.tf_dev_get_if_tbl = NULL,
 };
 
 /**
@@ -135,4 +138,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
 	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
+	.tf_dev_set_if_tbl = tf_if_tbl_set,
+	.tf_dev_get_if_tbl = tf_if_tbl_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 298e100f3..3b03a7c4e 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -10,6 +10,7 @@
 
 #include "tf_core.h"
 #include "tf_rm.h"
+#include "tf_if_tbl.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -86,4 +87,13 @@ struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 };
 
+struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 39a8412b3..23a7fc9c2 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -337,11 +337,10 @@ tf_em_ext_common_bind(struct tf *tfp,
 		db_exists = 1;
 	}
 
-	if (db_exists) {
-		mem_type = parms->mem_type;
+	if (db_exists)
 		init = 1;
-	}
 
+	mem_type = parms->mem_type;
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index d7c147a15..2626a59fe 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -831,7 +831,8 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	op.opcode = HCAPI_CFA_HWOPS_ADD;
 	key_tbl.base0 = (uint8_t *)
 		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = (uint8_t *)&key_entry;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -847,8 +848,7 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 		key_tbl.base0 = (uint8_t *)
 		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 
 		rc = hcapi_cfa_key_hw_op(&op,
 					 &key_tbl,
@@ -914,7 +914,8 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
 							  TF_KEY0_TABLE :
 							  TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = NULL;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -1195,7 +1196,8 @@ int tf_tbl_ext_host_set(struct tf *tfp,
 	op.opcode = HCAPI_CFA_HWOPS_PUT;
 	key_tbl.base0 =
 		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
 	key_obj.data = parms->data;
 	key_obj.size = parms->data_sz_in_bytes;
 
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 2cc43b40f..90aeaa468 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -68,7 +68,7 @@ tf_ident_bind(struct tf *tfp,
 int
 tf_ident_unbind(struct tf *tfp)
 {
-	int rc;
+	int rc = 0;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
 
@@ -89,7 +89,6 @@ tf_ident_unbind(struct tf *tfp)
 			TFP_DRV_LOG(ERR,
 				    "rm free failed on unbind\n");
 		}
-
 		ident_db[i] = NULL;
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.c b/drivers/net/bnxt/tf_core/tf_if_tbl.c
new file mode 100644
index 000000000..dc73ba2d0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.c
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_if_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+
+/**
+ * IF Table DBs.
+ */
+static void *if_tbl_db[TF_DIR_MAX];
+
+/**
+ * IF Table Shadow DBs
+ */
+/* static void *shadow_if_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+/**
+ * Convert if_tbl_type to hwrm type.
+ *
+ * [in] if_tbl_type
+ *   Interface table type
+ *
+ * [out] hwrm_type
+ *   HWRM device data type
+ *
+ * Returns:
+ *    0          - Success
+ *   -EOPNOTSUPP - Type not supported
+ */
+static int
+tf_if_tbl_get_hcapi_type(struct tf_if_tbl_get_hcapi_parms *parms)
+{
+	struct tf_if_tbl_cfg *tbl_cfg;
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	tbl_cfg = (struct tf_if_tbl_cfg *)parms->tbl_db;
+	cfg_type = tbl_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_IF_TBL_CFG)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = tbl_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_if_tbl_bind(struct tf *tfp __rte_unused,
+	       struct tf_if_tbl_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "IF TBL DB already initialized\n");
+		return -EINVAL;
+	}
+
+	if_tbl_db[TF_DIR_RX] = parms->cfg;
+	if_tbl_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "Table Type - initialized\n");
+
+	return 0;
+}
+
+int
+tf_if_tbl_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	if_tbl_db[TF_DIR_RX] = NULL;
+	if_tbl_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_if_tbl_set(struct tf *tfp,
+	      struct tf_if_tbl_set_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	rc = tf_msg_set_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
+}
+
+int
+tf_if_tbl_get(struct tf *tfp,
+	      struct tf_if_tbl_get_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	/* Get the entry */
+	rc = tf_msg_get_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
new file mode 100644
index 000000000..54d4c37f5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_IF_TBL_TYPE_H_
+#define TF_IF_TBL_TYPE_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/*
+ * This is the constant used to define invalid CFA
+ * types across all devices.
+ */
+#define CFA_IF_TBL_TYPE_INVALID 65535
+
+struct tf;
+
+/**
+ * The IF Table module provides processing of Internal TF interface table types.
+ */
+
+/**
+ * IF table configuration enumeration.
+ */
+enum tf_if_tbl_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_IF_TBL_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_IF_TBL_CFG,
+};
+
+/**
+ * IF table configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI type.
+ */
+struct tf_if_tbl_cfg {
+	/**
+	 * IF table config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_if_tbl_get_hcapi_parms {
+	/**
+	 * [in] IF Tbl DB Handle
+	 */
+	void *tbl_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_if_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_if_tbl_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_if_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+};
+
+/**
+ * IF Table set parameters
+ */
+struct tf_if_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * IF Table get parameters
+ */
+struct tf_if_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page if tbl Table
+ *
+ * @ref tf_if_tbl_bind
+ *
+ * @ref tf_if_tbl_unbind
+ *
+ * @ref tf_tbl_set
+ *
+ * @ref tf_tbl_get
+ *
+ * @ref tf_tbl_restore
+ */
+/**
+ * Initializes the Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_bind(struct tf *tfp,
+		   struct tf_if_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_unbind(struct tf *tfp);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Interface Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_set(struct tf *tfp,
+		  struct tf_if_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_get(struct tf *tfp,
+		  struct tf_if_tbl_get_parms *parms);
+
+#endif /* TF_IF_TBL_TYPE_H_ */
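
A minimal caller-side sketch of the set/get APIs declared above may help when
reviewing the parameter structures. This is illustrative only and not part of
the patch: the table type value, the index and the 8-byte entry are
placeholders, and hcapi_type is assumed to be resolved internally by the IF
table module from its configuration DB.

static int
tf_if_tbl_example(struct tf *tfp)
{
	uint32_t data[2] = { 0 };		/* placeholder 8-byte entry */
	struct tf_if_tbl_set_parms sparms = { 0 };
	struct tf_if_tbl_get_parms gparms = { 0 };
	int rc;

	sparms.dir = TF_DIR_RX;
	sparms.type = 0;	/* placeholder enum tf_if_tbl_type value */
	sparms.data = data;
	sparms.data_sz_in_bytes = sizeof(data);
	sparms.idx = 0;

	/* Assumes tf_if_tbl_bind() was already invoked for this device */
	rc = tf_if_tbl_set(tfp, &sparms);
	if (rc)
		return rc;

	gparms.dir = TF_DIR_RX;
	gparms.type = sparms.type;
	gparms.data = data;
	gparms.data_sz_in_bytes = sizeof(data);
	gparms.idx = 0;

	return tf_if_tbl_get(tfp, &gparms);
}
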
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 659065de3..6600a14c8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -125,12 +125,19 @@ tf_msg_session_close(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_close_input req = { 0 };
 	struct hwrm_tf_session_close_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_CLOSE;
 	parms.req_data = (uint32_t *)&req;
@@ -150,12 +157,19 @@ tf_msg_session_qcfg(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_QCFG,
 	parms.req_data = (uint32_t *)&req;
@@ -448,13 +462,22 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_insert_input req = { 0 };
 	struct hwrm_tf_em_insert_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
 	uint16_t flags;
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	tfp_memcpy(req.em_key,
 		   em_parms->key,
 		   ((em_parms->key_sz_in_bits + 7) / 8));
@@ -498,11 +521,19 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
 	uint16_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
@@ -789,21 +820,19 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_set_input req = { 0 };
 	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
@@ -840,21 +869,19 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_get_input req = { 0 };
 	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
@@ -897,22 +924,20 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.start_index = tfp_cpu_to_le_32(starting_idx);
@@ -939,3 +964,102 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+int
+tf_msg_get_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_get_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_get_input_t req = { 0 };
+	tf_if_tbl_get_output_t resp;
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_16(params->data_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_IF_TBL_GET,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	if (parms.tf_resp_code != 0)
+		return tfp_le_to_cpu_32(parms.tf_resp_code);
+
+	tfp_memcpy(&params->data[0], resp.data, req.data_sz_in_bytes);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_set_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_set_input_t req = { 0 };
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_32(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_32(params->data_sz_in_bytes);
+	tfp_memcpy(&req.data[0], params->data, params->data_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_IF_TBL_SET,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 7432873d7..37f291016 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -428,4 +428,34 @@ int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			      uint16_t entry_sz_in_bytes,
 			      uint64_t physical_mem_addr);
 
+/**
+ * Sends a set message of an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table set parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_set_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_set_parms *params);
+
+/**
+ * Sends a get message of an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table get parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_get_parms *params);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index b08d06306..529ea5083 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -70,14 +70,24 @@ tf_session_open_session(struct tf *tfp,
 		goto cleanup;
 	}
 	tfp->session->core_data = cparms.mem_va;
+	session_id = &parms->open_cfg->session_id;
+
+	/* Update Session Info, which is what is visible to the caller */
+	tfp->session->ver.major = 0;
+	tfp->session->ver.minor = 0;
+	tfp->session->ver.update = 0;
 
-	/* Initialize Session and Device */
+	tfp->session->session_id.internal.domain = session_id->internal.domain;
+	tfp->session->session_id.internal.bus = session_id->internal.bus;
+	tfp->session->session_id.internal.device = session_id->internal.device;
+	tfp->session->session_id.internal.fw_session_id = fw_session_id;
+
+	/* Initialize Session and Device, which is private */
 	session = (struct tf_session *)tfp->session->core_data;
 	session->ver.major = 0;
 	session->ver.minor = 0;
 	session->ver.update = 0;
 
-	session_id = &parms->open_cfg->session_id;
 	session->session_id.internal.domain = session_id->internal.domain;
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 29/51] net/bnxt: add TF register and unregister
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (27 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
                           ` (21 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TF register/unregister support. Sessions now maintain session
  clients to keep track of the ctrl-channels/functions.
- Add support code to tfp layer

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_core/Makefile     |   1 +
 drivers/net/bnxt/tf_core/ll.c         |  52 +++
 drivers/net/bnxt/tf_core/ll.h         |  46 +++
 drivers/net/bnxt/tf_core/tf_core.c    |  26 +-
 drivers/net/bnxt/tf_core/tf_core.h    | 105 +++--
 drivers/net/bnxt/tf_core/tf_msg.c     |  84 +++-
 drivers/net/bnxt/tf_core/tf_msg.h     |  42 +-
 drivers/net/bnxt/tf_core/tf_rm.c      |   2 +-
 drivers/net/bnxt/tf_core/tf_session.c | 569 ++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.h | 201 ++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.c     |   2 +
 drivers/net/bnxt/tf_core/tf_tcam.c    |   8 +-
 drivers/net/bnxt/tf_core/tfp.c        |  17 +
 drivers/net/bnxt/tf_core/tfp.h        |  15 +
 15 files changed, 1075 insertions(+), 96 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index f25a9448d..54564e02e 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -44,6 +44,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
+	'tf_core/ll.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 1924bef02..6210bc70e 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -8,6 +8,7 @@
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/ll.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/ll.c b/drivers/net/bnxt/tf_core/ll.c
new file mode 100644
index 000000000..6f58662f5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.c
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Functions */
+
+#include <stdio.h>
+#include "ll.h"
+
+/* init linked list */
+void ll_init(struct ll *ll)
+{
+	ll->head = NULL;
+	ll->tail = NULL;
+}
+
+/* insert entry in linked list */
+void ll_insert(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == NULL) {
+		ll->head = entry;
+		ll->tail = entry;
+		entry->next = NULL;
+		entry->prev = NULL;
+	} else {
+		entry->next = ll->head;
+		entry->prev = NULL;
+		entry->next->prev = entry;
+		ll->head = entry->next->prev;
+	}
+}
+
+/* delete entry from linked list */
+void ll_delete(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == entry && ll->tail == entry) {
+		ll->head = NULL;
+		ll->tail = NULL;
+	} else if (ll->head == entry) {
+		ll->head = entry->next;
+		ll->head->prev = NULL;
+	} else if (ll->tail == entry) {
+		ll->tail = entry->prev;
+		ll->tail->next = NULL;
+	} else {
+		entry->prev->next = entry->next;
+		entry->next->prev = entry->prev;
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/ll.h b/drivers/net/bnxt/tf_core/ll.h
new file mode 100644
index 000000000..d70917850
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Header File */
+
+#ifndef _LL_H_
+#define _LL_H_
+
+/* linked list entry */
+struct ll_entry {
+	struct ll_entry *prev;
+	struct ll_entry *next;
+};
+
+/* linked list */
+struct ll {
+	struct ll_entry *head;
+	struct ll_entry *tail;
+};
+
+/**
+ * Linked list initialization.
+ *
+ * [in] ll, linked list to be initialized
+ */
+void ll_init(struct ll *ll);
+
+/**
+ * Linked list insert
+ *
+ * [in] ll, linked list where element is inserted
+ * [in] entry, entry to be added
+ */
+void ll_insert(struct ll *ll, struct ll_entry *entry);
+
+/**
+ * Linked list delete
+ *
+ * [in] ll, linked list where element is removed
+ * [in] entry, entry to be deleted
+ */
+void ll_delete(struct ll *ll, struct ll_entry *entry);
+
+#endif /* _LL_H_ */
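
The list above is meant to be embedded as the first member of a containing
structure so that a list entry can be cast straight back to its container;
this is how the session client list later in this patch uses it. A short,
hypothetical sketch of that pattern (struct item and its fields are
illustrative, not part of the patch):

#include <stdint.h>
#include "ll.h"

/* struct ll_entry must be the first member so a list entry can be cast
 * back to the containing structure.
 */
struct item {
	struct ll_entry ll_entry;	/* must be first */
	uint16_t id;
};

static struct item *
item_find(struct ll *list, uint16_t id)
{
	struct ll_entry *e;

	for (e = list->head; e != NULL; e = e->next) {
		struct item *it = (struct item *)e;

		if (it->id == id)
			return it;
	}

	return NULL;
}

/* Typical life cycle:
 *   ll_init(&list);
 *   ll_insert(&list, &it->ll_entry);   inserts at the head
 *   ll_delete(&list, &it->ll_entry);
 */
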
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index a980a2056..489c461d1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -58,21 +58,20 @@ tf_open_session(struct tf *tfp,
 	parms->session_id.internal.device = device;
 	oparms.open_cfg = parms;
 
+	/* Session vs session client is decided in
+	 * tf_session_open_session()
+	 */
+	printf("TF_OPEN, %s\n", parms->ctrl_chan_name);
 	rc = tf_session_open_session(tfp, &oparms);
 	/* Logging handled by tf_session_open_session */
 	if (rc)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
+		    parms->session_id.internal.device);
 
 	return 0;
 }
@@ -152,6 +151,9 @@ tf_close_session(struct tf *tfp)
 
 	cparms.ref_count = &ref_count;
 	cparms.session_id = &session_id;
+	/* Session vs session client is decided in
+	 * tf_session_close_session()
+	 */
 	rc = tf_session_close_session(tfp,
 				      &cparms);
 	/* Logging handled by tf_session_close_session */
@@ -159,16 +161,10 @@ tf_close_session(struct tf *tfp)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Closed session, session_id:%d, ref_count:%d\n",
-		    cparms.session_id->id,
-		    *cparms.ref_count);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    cparms.session_id->internal.domain,
 		    cparms.session_id->internal.bus,
-		    cparms.session_id->internal.device,
-		    cparms.session_id->internal.fw_session_id);
+		    cparms.session_id->internal.device);
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e3d46bd45..fea222bee 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -72,7 +72,6 @@ enum tf_mem {
  * @ref tf_close_session
  */
 
-
 /**
  * Session Version defines
  *
@@ -113,6 +112,21 @@ union tf_session_id {
 	} internal;
 };
 
+/**
+ * Session Client Identifier
+ *
+ * Unique identifier for a client within a session. Session Client ID
+ * is constructed from the passed in session and a firmware allocated
+ * fw_session_client_id. Done by TruFlow on tf_open_session().
+ */
+union tf_session_client_id {
+	uint16_t id;
+	struct {
+		uint8_t fw_session_id;
+		uint8_t fw_session_client_id;
+	} internal;
+};
+
 /**
  * Session Version
  *
@@ -368,8 +382,8 @@ struct tf_session_info {
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
- * session info at that time. Additional 'opens' can be done using
- * same session_info by using tf_attach_session().
+ * session info at that time. A TruFlow Session can be used by more
+ * than one PF/VF by using the tf_open_session().
  *
  * It is expected that ULP allocates this memory as shared memory.
  *
@@ -506,36 +520,62 @@ struct tf_open_session_parms {
 	 * The session_id allows a session to be shared between devices.
 	 */
 	union tf_session_id session_id;
+	/**
+	 * [in/out] session_client_id
+	 *
+	 * Session_client_id is unique per client.
+	 *
+	 * Session_client_id is composed of the fw_session_id and the
+	 * fw_session_client_id. The construction is done by parsing
+	 * the ctrl_chan_name together with allocation of a
+	 * fw_session_client_id during tf_open_session().
+	 *
+	 * A reference count will be incremented in the session on
+	 * which a client is created.
+	 *
+	 * A session can only be closed once a single Session
+	 * Client is left. Session Clients should be closed using
+	 * tf_close_session().
+	 */
+	union tf_session_client_id session_client_id;
 	/**
 	 * [in] device type
 	 *
-	 * Device type is passed, one of Wh+, SR, Thor, SR2
+	 * Device type for the session.
 	 */
 	enum tf_device_type device_type;
-	/** [in] resources
+	/**
+	 * [in] resources
 	 *
-	 * Resource allocation
+	 * Resource allocation for the session.
 	 */
 	struct tf_session_resources resources;
 };
 
 /**
- * Opens a new TruFlow management session.
+ * Opens a new TruFlow Session or session client.
+ *
+ * What gets created depends on the passed in tfp content. If the tfp
+ * does not have prior session data, a new session and an associated
+ * session client are created. If the tfp already has a session, an
+ * additional session client is created. In both cases the session
+ * client is created using the provided ctrl_chan_name.
  *
- * TruFlow will allocate session specific memory, shared memory, to
- * hold its session data. This data is private to TruFlow.
+ * In case of session creation TruFlow will allocate session specific
+ * memory, shared memory, to hold its session data. This data is
+ * private to TruFlow.
  *
- * Multiple PFs can share the same session. An association, refcount,
- * between session and PFs is maintained within TruFlow. Thus, a PF
- * can attach to an existing session, see tf_attach_session().
+ * No other TruFlow APIs will succeed unless this API is first called
+ * and succeeds.
  *
- * No other TruFlow APIs will succeed unless this API is first called and
- * succeeds.
+ * tf_open_session() returns a session id and a session client id
+ * that are used on all other TF APIs.
  *
- * tf_open_session() returns a session id that can be used on attach.
+ * A Session or session client can be closed using tf_close_session().
  *
  * [in] tfp
  *   Pointer to TF handle
+ *
  * [in] parms
  *   Pointer to open parameters
  *
@@ -546,6 +586,11 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+/**
+ * Experimental
+ *
+ * tf_attach_session parameters definition.
+ */
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -595,15 +640,18 @@ struct tf_attach_session_parms {
 };
 
 /**
- * Attaches to an existing session. Used when more than one PF wants
- * to share a single session. In that case all TruFlow management
- * traffic will be sent to the TruFlow firmware using the 'PF' that
- * did the attach not the session ctrl channel.
+ * Experimental
+ *
+ * Allows a 2nd application instance to attach to an existing
+ * session. Used when a session is to be shared between two processes.
  *
  * Attach will increment a ref count as to manage the shared session data.
  *
- * [in] tfp, pointer to TF handle
- * [in] parms, pointer to attach parameters
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to attach parameters
  *
  * Returns
  *   - (0) if successful.
@@ -613,9 +661,15 @@ int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
 
 /**
- * Closes an existing session. Cleans up all hardware and firmware
- * state associated with the TruFlow application session when the last
- * PF associated with the session results in refcount to be zero.
+ * Closes an existing session client or the session itself. The
+ * session client is closed by default, and if the session reference
+ * count then reaches 0 the session is closed as well.
+ *
+ * On session close all hardware and firmware state associated with
+ * the TruFlow application is cleaned up.
+ *
+ * The session client is extracted from the tfp. Thus tf_close_session()
+ * cannot close a session client on behalf of another function.
  *
  * Returns success or failure code.
  */
@@ -1056,9 +1110,10 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_set_tbl_entry
  *
  * @ref tf_get_tbl_entry
+ *
+ * @ref tf_bulk_get_tbl_entry
  */
 
-
 /**
  * tf_alloc_tbl_entry parameter definition
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 6600a14c8..8c2dff8ad 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -84,7 +84,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
-		    uint8_t *fw_session_id)
+		    uint8_t *fw_session_id,
+		    uint8_t *fw_session_client_id)
 {
 	int rc;
 	struct hwrm_tf_session_open_input req = { 0 };
@@ -106,7 +107,8 @@ tf_msg_session_open(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	*fw_session_id = resp.fw_session_id;
+	*fw_session_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
+	*fw_session_client_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
 
 	return rc;
 }
@@ -119,6 +121,84 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 	return -1;
 }
 
+int
+tf_msg_session_client_register(struct tf *tfp,
+			       char *ctrl_channel_name,
+			       uint8_t *fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_register_input req = { 0 };
+	struct hwrm_tf_session_register_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	tfp_memcpy(&req.session_client_name,
+		   ctrl_channel_name,
+		   TF_SESSION_NAME_MAX);
+
+	parms.tf_type = HWRM_TF_SESSION_REGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*fw_session_client_id =
+		(uint8_t)tfp_le_to_cpu_32(resp.fw_session_client_id);
+
+	return rc;
+}
+
+int
+tf_msg_session_client_unregister(struct tf *tfp,
+				 uint8_t fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_unregister_input req = { 0 };
+	struct hwrm_tf_session_unregister_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.fw_session_client_id = tfp_cpu_to_le_32(fw_session_client_id);
+
+	parms.tf_type = HWRM_TF_SESSION_UNREGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+
+	return rc;
+}
+
 int
 tf_msg_session_close(struct tf *tfp)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 37f291016..c02a5203c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -34,7 +34,8 @@ struct tf;
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
-			uint8_t *fw_session_id);
+			uint8_t *fw_session_id,
+			uint8_t *fw_session_client_id);
 
 /**
  * Sends session close request to Firmware
@@ -42,6 +43,9 @@ int tf_msg_session_open(struct tf *tfp,
  * [in] session
  *   Pointer to session handle
  *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
  * [in] fw_session_id
  *   Pointer to the fw_session_id that is assigned to the session at
  *   time of session open
@@ -53,6 +57,42 @@ int tf_msg_session_attach(struct tf *tfp,
 			  char *ctrl_channel_name,
 			  uint8_t tf_fw_session_id);
 
+/**
+ * Sends session client register request to Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
+ * [in/out] fw_session_client_id
+ *   Pointer to the fw_session_client_id that is allocated on firmware
+ *   side
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_register(struct tf *tfp,
+				   char *ctrl_channel_name,
+				   uint8_t *fw_session_client_id);
+
+/**
+ * Sends session client unregister request to Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [in/out] fw_session_client_id
+ *   Pointer to the fw_session_client_id that is allocated on firmware
+ *   side
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_unregister(struct tf *tfp,
+				     uint8_t fw_session_client_id);
+
 /**
  * Sends session close request to Firmware
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 30313e2ea..fdb87ecb8 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -389,7 +389,7 @@ tf_rm_create_db(struct tf *tfp,
 	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 529ea5083..3b355f64e 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -12,14 +12,49 @@
 #include "tf_msg.h"
 #include "tfp.h"
 
-int
-tf_session_open_session(struct tf *tfp,
-			struct tf_session_open_session_parms *parms)
+struct tf_session_client_create_parms {
+	/**
+	 * [in] Pointer to the control channel name string
+	 */
+	char *ctrl_chan_name;
+
+	/**
+	 * [out] Firmware Session Client ID
+	 */
+	union tf_session_client_id *session_client_id;
+};
+
+struct tf_session_client_destroy_parms {
+	/**
+	 * FW Session Client Identifier
+	 */
+	union tf_session_client_id session_client_id;
+};
+
+/**
+ * Creates a Session and the associated client.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if max session clients has been reached.
+ */
+static int
+tf_session_create(struct tf *tfp,
+		  struct tf_session_open_session_parms *parms)
 {
 	int rc;
 	struct tf_session *session = NULL;
+	struct tf_session_client *client;
 	struct tfp_calloc_parms cparms;
 	uint8_t fw_session_id;
+	uint8_t fw_session_client_id;
 	union tf_session_id *session_id;
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -27,7 +62,8 @@ tf_session_open_session(struct tf *tfp,
 	/* Open FW session and get a new session_id */
 	rc = tf_msg_session_open(tfp,
 				 parms->open_cfg->ctrl_chan_name,
-				 &fw_session_id);
+				 &fw_session_id,
+				 &fw_session_client_id);
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
@@ -92,15 +128,46 @@ tf_session_open_session(struct tf *tfp,
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
 	session->session_id.internal.fw_session_id = fw_session_id;
-	/* Return the allocated fw session id */
-	session_id->internal.fw_session_id = fw_session_id;
+	/* Return the allocated session id */
+	session_id->id = session->session_id.id;
 
 	session->shadow_copy = parms->open_cfg->shadow_copy;
 
-	tfp_memcpy(session->ctrl_chan_name,
+	/* Init session client list */
+	ll_init(&session->client_ll);
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	client->session_client_id.internal.fw_session_id = fw_session_id;
+	client->session_client_id.internal.fw_session_client_id =
+		fw_session_client_id;
+
+	tfp_memcpy(client->ctrl_chan_name,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
+	ll_insert(&session->client_ll, &client->ll_entry);
+	session->ref_count++;
+
 	rc = tf_dev_bind(tfp,
 			 parms->open_cfg->device_type,
 			 session->shadow_copy,
@@ -110,7 +177,7 @@ tf_session_open_session(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	session->ref_count++;
+	session->dev_init = true;
 
 	return 0;
 
@@ -121,6 +188,235 @@ tf_session_open_session(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Creates a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session client create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if max session clients has been reached.
+ */
+static int
+tf_session_client_create(struct tf *tfp,
+			 struct tf_session_client_create_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tf_session_client *client;
+	struct tfp_calloc_parms cparms;
+	union tf_session_client_id session_client_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &session);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	client = tf_session_find_session_client_by_name(session,
+							parms->ctrl_chan_name);
+	if (client) {
+		TFP_DRV_LOG(ERR,
+			    "Client %s, already registered with this session\n",
+			    parms->ctrl_chan_name);
+		return -EOPNOTSUPP;
+	}
+
+	rc = tf_msg_session_client_register
+		    (tfp,
+		    parms->ctrl_chan_name,
+		    &session_client_id.internal.fw_session_client_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to create client on session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	/* Build the Session Client ID by adding the fw_session_id */
+	rc = tf_session_get_fw_session_id
+			(tfp,
+			&session_client_id.internal.fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session Firmware id lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tfp_memcpy(client->ctrl_chan_name,
+		   parms->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	client->session_client_id.id = session_client_id.id;
+
+	ll_insert(&session->client_ll, &client->ll_entry);
+
+	session->ref_count++;
+
+	/* Build the return value */
+	parms->session_client_id->id = session_client_id.id;
+
+ cleanup:
+	/* TBD - Add code to unregister newly create client from fw */
+
+	return rc;
+}
+
+
+/**
+ * Destroys a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session client destroy parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOTFOUND) error, client not owned by the session.
+ *   - (-ENOTSUPP) error, unable to destroy the client as it is the
+ *                 last client. Please use tf_session_close() instead.
+ */
+static int
+tf_session_client_destroy(struct tf *tfp,
+			  struct tf_session_client_destroy_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_session_client *client;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Check session owns this client and that we're not the last client */
+	client = tf_session_get_session_client(tfs,
+					       parms->session_client_id);
+	if (client == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "Client %d, not found within this session\n",
+			    parms->session_client_id.id);
+		return -EINVAL;
+	}
+
+	/* If last client the request is rejected and cleanup should
+	 * be done by session close.
+	 */
+	if (tfs->ref_count == 1)
+		return -EOPNOTSUPP;
+
+	rc = tf_msg_session_client_unregister
+			(tfp,
+			parms->session_client_id.internal.fw_session_client_id);
+
+	/* Log error, but continue. If FW fails we do not really have
+	 * a way to fix this but the client would no longer be valid
+	 * thus we remove from the session.
+	 */
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Client destroy on FW Failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+
+	/* Decrement the session ref_count */
+	tfs->ref_count--;
+
+	tfp_free(client);
+
+	return rc;
+}
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session_client_create_parms scparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Decide if we're creating a new session or session client */
+	if (tfp->session == NULL) {
+		rc = tf_session_create(tfp, parms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to create session, ctrl_chan_name:%s, rc:%s\n",
+				    parms->open_cfg->ctrl_chan_name,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+		       "Session created, session_client_id:%d, session_id:%d\n",
+		       parms->open_cfg->session_client_id.id,
+		       parms->open_cfg->session_id.id);
+	} else {
+		scparms.ctrl_chan_name = parms->open_cfg->ctrl_chan_name;
+		scparms.session_client_id = &parms->open_cfg->session_client_id;
+
+		/* Create the new client and get it associated with
+		 * the session.
+		 */
+		rc = tf_session_client_create(tfp, &scparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+			      "Failed to create client on session %d, rc:%s\n",
+			      parms->open_cfg->session_id.id,
+			      strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Session Client:%d created on session:%d\n",
+			    parms->open_cfg->session_client_id.id,
+			    parms->open_cfg->session_id.id);
+	}
+
+	return 0;
+}
+
 int
 tf_session_attach_session(struct tf *tfp __rte_unused,
 			  struct tf_session_attach_session_parms *parms __rte_unused)
@@ -141,7 +437,10 @@ tf_session_close_session(struct tf *tfp,
 {
 	int rc;
 	struct tf_session *tfs = NULL;
+	struct tf_session_client *client;
 	struct tf_dev_info *tfd;
+	struct tf_session_client_destroy_parms scdparms;
+	uint16_t fid;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -161,7 +460,49 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	tfs->ref_count--;
+	/* Get the client, we need it independently of the closure
+	 * type (client or session closure).
+	 *
+	 * We find the client by way of the fid. Thus one cannot close
+	 * a client on behalf of someone else.
+	 */
+	rc = tfp_get_fid(tfp, &fid);
+	if (rc)
+		return rc;
+
+	client = tf_session_find_session_client_by_fid(tfs,
+						       fid);
+	/* In case multiple clients we chose to close those first */
+	if (tfs->ref_count > 1) {
+		/* Linaro gcc can't static init this structure */
+		memset(&scdparms,
+		       0,
+		       sizeof(struct tf_session_client_destroy_parms));
+
+		scdparms.session_client_id = client->session_client_id;
+		/* Destroy requested client so it is no longer
+		 * registered with this session.
+		 */
+		rc = tf_session_client_destroy(tfp, &scdparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to unregister Client %d, rc:%s\n",
+				    client->session_client_id.id,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Closed session client, session_client_id:%d\n",
+			    client->session_client_id.id);
+
+		TFP_DRV_LOG(INFO,
+			    "session_id:%d, ref_count:%d\n",
+			    tfs->session_id.id,
+			    tfs->ref_count);
+
+		return 0;
+	}
 
 	/* Record the session we're closing so the caller knows the
 	 * details.
@@ -176,23 +517,6 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	if (tfs->ref_count > 0) {
-		/* In case we're attached only the session client gets
-		 * closed.
-		 */
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "FW Session close failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		return 0;
-	}
-
-	/* Final cleanup as we're last user of the session */
-
 	/* Unbind the device */
 	rc = tf_dev_unbind(tfp, tfd);
 	if (rc) {
@@ -202,7 +526,6 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
 		/* Log error */
@@ -211,6 +534,21 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
+	/* Final cleanup as we're last user of the session thus we
+	 * also delete the last client.
+	 */
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+	tfp_free(client);
+
+	tfs->ref_count--;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    tfs->session_id.id,
+		    tfs->ref_count);
+
+	tfs->dev_init = false;
+
 	tfp_free(tfp->session->core_data);
 	tfp_free(tfp->session);
 	tfp->session = NULL;
@@ -218,12 +556,31 @@ tf_session_close_session(struct tf *tfp,
 	return 0;
 }
 
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return true;
+	}
+
+	return false;
+}
+
 int
-tf_session_get_session(struct tf *tfp,
-		       struct tf_session **tfs)
+tf_session_get_session_internal(struct tf *tfp,
+				struct tf_session **tfs)
 {
-	int rc;
+	int rc = 0;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -234,7 +591,113 @@ tf_session_get_session(struct tf *tfp,
 
 	*tfs = (struct tf_session *)(tfp->session->core_data);
 
-	return 0;
+	return rc;
+}
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session **tfs)
+{
+	int rc;
+	uint16_t fw_fid;
+	bool supported = false;
+
+	rc = tf_session_get_session_internal(tfp,
+					     tfs);
+	/* Logging done by tf_session_get_session_internal */
+	if (rc)
+		return rc;
+
+	/* As session sharing among functions aka 'individual clients'
+	 * is supported we have to assure that the client is indeed
+	 * registered before we get deep in the TruFlow api stack.
+	 */
+	rc = tfp_get_fid(tfp, &fw_fid);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Internal FID lookup\n, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	supported = tf_session_is_fid_supported(*tfs, fw_fid);
+	if (!supported) {
+		TFP_DRV_LOG
+			(ERR,
+			"Ctrl channel not registered with session\n, rc:%s\n",
+			strerror(-rc));
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->session_client_id.id == session_client_id.id)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL || ctrl_chan_name == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (strncmp(client->ctrl_chan_name,
+			    ctrl_chan_name,
+			    TF_SESSION_NAME_MAX) == 0)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return client;
+	}
+
+	return NULL;
 }
 
 int
@@ -253,6 +716,7 @@ tf_session_get_fw_session_id(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs = NULL;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -261,7 +725,15 @@ tf_session_get_fw_session_id(struct tf *tfp,
 		return rc;
 	}
 
-	rc = tf_session_get_session(tfp, &tfs);
+	if (fw_session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -269,3 +741,36 @@ tf_session_get_fw_session_id(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_session_get_session_id(struct tf *tfp,
+			  union tf_session_id *session_id)
+{
+	int rc;
+	struct tf_session *tfs;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*session_id = tfs->session_id;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index a303fde51..aa7a27877 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -16,6 +16,7 @@
 #include "tf_tbl.h"
 #include "tf_resources.h"
 #include "stack.h"
+#include "ll.h"
 
 /**
  * The Session module provides session control support. A session is
@@ -29,7 +30,6 @@
 
 /** Session defines
  */
-#define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
 /**
@@ -50,7 +50,7 @@
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
  * allocations and (if so configured) any shadow copy of flow
- * information.
+ * information. It also holds info about Session Clients.
  *
  * Memory is assigned to the Truflow instance by way of
  * tf_open_session. Memory is allocated and owned by i.e. ULP.
@@ -65,17 +65,10 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Session ID, allocated by FW on tf_open_session() */
-	union tf_session_id session_id;
-
 	/**
-	 * String containing name of control channel interface to be
-	 * used for this session to communicate with firmware.
-	 *
-	 * ctrl_chan_name will be used as part of a name for any
-	 * shared memory allocation.
+	 * Session ID, allocated by FW on tf_open_session()
 	 */
-	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+	union tf_session_id session_id;
 
 	/**
 	 * Boolean controlling the use and availability of shadow
@@ -92,14 +85,67 @@ struct tf_session {
 
 	/**
 	 * Session Reference Count. To keep track of functions per
-	 * session the ref_count is incremented. There is also a
+	 * session the ref_count is updated. There is also a
 	 * parallel TruFlow Firmware ref_count in case the TruFlow
 	 * Core goes away without informing the Firmware.
 	 */
 	uint8_t ref_count;
 
-	/** Device handle */
+	/**
+	 * Session Reference Count for attached sessions. To keep
+	 * track of application sharing of a session the
+	 * ref_count_attach is updated.
+	 */
+	uint8_t ref_count_attach;
+
+	/**
+	 * Device handle
+	 */
 	struct tf_dev_info dev;
+	/**
+	 * Device init flag. False if Device is not fully initialized,
+	 * else true.
+	 */
+	bool dev_init;
+
+	/**
+	 * Linked list of clients registered for this session
+	 */
+	struct ll client_ll;
+};
+
+/**
+ * Session Client
+ *
+ * Shared memory for each of the Session Clients. A session can have
+ * one or more clients.
+ */
+struct tf_session_client {
+	/**
+	 * Linked list of clients
+	 */
+	struct ll_entry ll_entry; /* For inserting in link list, must be
+				   * first field of struct.
+				   */
+
+	/**
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/**
+	 * Firmware FID, learned at time of Session Client create.
+	 */
+	uint16_t fw_fid;
+
+	/**
+	 * Session Client ID, allocated by FW on tf_register_session()
+	 */
+	union tf_session_client_id session_client_id;
 };
 
 /**
@@ -126,7 +172,13 @@ struct tf_session_attach_session_parms {
  * Session close parameter definition
  */
 struct tf_session_close_session_parms {
+	/**
+	 * []
+	 * [out] Session reference count after the close
 	uint8_t *ref_count;
+	/**
+	 * []
+	 * [out] Session id of the session being closed
 	union tf_session_id *session_id;
 };
 
@@ -139,11 +191,23 @@ struct tf_session_close_session_parms {
  *
  * @ref tf_session_close_session
  *
+ * @ref tf_session_is_fid_supported
+ *
+ * @ref tf_session_get_session_internal
+ *
  * @ref tf_session_get_session
  *
+ * @ref tf_session_get_session_client
+ *
+ * @ref tf_session_find_session_client_by_name
+ *
+ * @ref tf_session_find_session_client_by_fid
+ *
  * @ref tf_session_get_device
  *
  * @ref tf_session_get_fw_session_id
+ *
+ * @ref tf_session_get_session_id
  */
 
 /**
@@ -179,7 +243,8 @@ int tf_session_attach_session(struct tf *tfp,
 			      struct tf_session_attach_session_parms *parms);
 
 /**
- * Closes a previous created session.
+ * Closes a previously created session. Only possible if previously
+ * registered Clients have been unregistered first.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -189,13 +254,53 @@ int tf_session_attach_session(struct tf *tfp,
  *
  * Returns
  *   - (0) if successful.
+ *   - (-EUSERS) if clients are still registered with the session.
  *   - (-EINVAL) on failure.
  */
 int tf_session_close_session(struct tf *tfp,
 			     struct tf_session_close_session_parms *parms);
 
 /**
- * Looks up the private session information from the TF session info.
+ * Verifies that the fid is supported by the session. Used to assure
+ * that a function i.e. client/control channel is registered with the
+ * session.
+ *
+ * [in] tfs
+ *   Pointer to TF Session handle
+ *
+ * [in] fid
+ *   FID value to check
+ *
+ * Returns
+ *   - (true) if the fid is registered with the session,
+ *   - (false) otherwise.
+ */
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Does not perform a fid check against the registered
+ * clients. Should be used if tf_session_get_session() was used
+ * previously i.e. at the TF API boundary.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer pointer to the session
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_internal(struct tf *tfp,
+				    struct tf_session **tfs);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Performs a fid check against the clients on the session.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -210,6 +315,53 @@ int tf_session_close_session(struct tf *tfp,
 int tf_session_get_session(struct tf *tfp,
 			   struct tf_session **tfs);
 
+/**
+ * Looks up client within the session.
+ *
+ * [in] tfs
+ *   Pointer pointer to the session
+ *
+ * [in] session_client_id
+ *   Client id to look for within the session
+ *
+ * Returns
+ *   client if successful.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id);
+
+/**
+ * Looks up client using name within the session.
+ *
+ * [in] tfs, pointer to the session
+ *
+ * [in] ctrl_chan_name, name of the client to look up in the session
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name);
+
+/**
+ * Looks up client using the fid.
+ *
+ * [in] tfs, pointer to the session
+ *
+ * [in] fid, fid of the client to find
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid);
+
 /**
  * Looks up the device information from the TF Session.
  *
@@ -227,8 +379,7 @@ int tf_session_get_device(struct tf_session *tfs,
 			  struct tf_dev_info **tfd);
 
 /**
- * Looks up the FW session id of the firmware connection for the
- * requested TF handle.
+ * Looks up the FW Session id of the requested TF handle.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -243,4 +394,20 @@ int tf_session_get_device(struct tf_session *tfs,
 int tf_session_get_fw_session_id(struct tf *tfp,
 				 uint8_t *fw_session_id);
 
+/**
+ * Looks up the Session id of the requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] session_id
+ *   Pointer to the session_id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_id(struct tf *tfp,
+			      union tf_session_id *session_id);
+
 #endif /* _TF_SESSION_H_ */
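
The fid check described in the tf_session.h comments above is the gate every
module-level API is expected to pass through. A short illustrative sketch of
the intended split between the checked and the internal accessor (the example
function is hypothetical):

static int
tf_example_module_api(struct tf *tfp)
{
	struct tf_session *tfs;
	int rc;

	/* Fid-checked lookup: rejects callers whose function is not a
	 * registered session client.
	 */
	rc = tf_session_get_session(tfp, &tfs);
	if (rc)
		return rc;

	/* Paths that may run before a client exists, e.g. session create
	 * itself, use tf_session_get_session_internal() instead.
	 */
	return 0;
}
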
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 7d4daaf2d..2b4a7c561 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -269,6 +269,7 @@ tf_tbl_set(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -338,6 +339,7 @@ tf_tbl_get(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 1c48b5363..cbfaa94ee 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -138,7 +138,7 @@ tf_tcam_alloc(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -218,7 +218,7 @@ tf_tcam_free(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -319,6 +319,7 @@ tf_tcam_free(struct tf *tfp,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -353,7 +354,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -415,6 +416,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 69d1c9a1f..426a182a9 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -161,3 +161,20 @@ tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
 {
 	rte_spinlock_unlock(&parms->slock);
 }
+
+int
+tfp_get_fid(struct tf *tfp, uint16_t *fw_fid)
+{
+	struct bnxt *bp = NULL;
+
+	if (tfp == NULL || fw_fid == NULL)
+		return -EINVAL;
+
+	bp = container_of(tfp, struct bnxt, tfp);
+	if (bp == NULL)
+		return -EINVAL;
+
+	*fw_fid = bp->fw_fid;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index fe49b6304..8789eba1f 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -238,4 +238,19 @@ int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
 #define tfp_bswap_32(val) rte_bswap32(val)
 #define tfp_bswap_64(val) rte_bswap64(val)
 
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
 #endif /* _TFP_H_ */
-- 
2.21.1 (Apple Git-122.3)
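
Not part of the patch: a minimal sketch of how the new helpers above
could be combined to check that the function driving a TF handle is a
registered client of the session. example_caller_is_client() is a
hypothetical caller and error handling is trimmed.

#include <errno.h>
#include "tfp.h"        /* tfp_get_fid() */
#include "tf_session.h" /* tf_session_* helpers */

static int
example_caller_is_client(struct tf *tfp)
{
	struct tf_session *tfs;
	struct tf_session_client *client;
	uint16_t fw_fid;
	int rc;

	/* Resolve the fid of the function backing this TF handle */
	rc = tfp_get_fid(tfp, &fw_fid);
	if (rc)
		return rc;

	/* Session lookup variant that skips the per-client fid check */
	rc = tf_session_get_session_internal(tfp, &tfs);
	if (rc)
		return rc;

	/* The caller must appear in the session's client list */
	client = tf_session_find_session_client_by_fid(tfs, fw_fid);
	if (client == NULL)
		return -EINVAL;

	return 0;
}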


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 30/51] net/bnxt: add global config set and get APIs
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (28 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
                           ` (20 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Add support to update global configuration for ACT_TECT
  and ACT_ABCR (Tunnel Encap and Action Block).
- Remove the register read and write operations, which the new
  global configuration set/get messages replace (see the usage
  sketch below).
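
Not part of the patch: a minimal read-modify-write sketch against the
two new APIs, using the tf_global_cfg_parms layout added to tf_core.h
in this patch. The offset and the bit being set are placeholders.

#include <stdint.h>
#include "tf_core.h"

static int
example_update_tunnel_encap(struct tf *tfp)
{
	struct tf_global_cfg_parms parms = { 0 };
	uint32_t val = 0;
	int rc;

	parms.dir = TF_DIR_RX;
	parms.type = TF_TUNNEL_ENCAP;
	parms.offset = 0;                    /* placeholder offset */
	parms.config = (uint8_t *)&val;
	parms.config_sz_in_bytes = sizeof(val);

	/* Read the current value */
	rc = tf_get_global_cfg(tfp, &parms);
	if (rc)
		return rc;

	/* Modify it and write it back */
	val |= 0x1;                          /* placeholder bit */
	rc = tf_set_global_cfg(tfp, &parms);

	return rc;
}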

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/hcapi_cfa.h       |   3 +
 drivers/net/bnxt/meson.build             |   1 +
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  54 +++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 137 ++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       |  77 +++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  20 +++
 drivers/net/bnxt/tf_core/tf_device.h     |  33 ++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   4 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   5 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c | 199 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_global_cfg.h | 170 +++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 109 ++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |  31 ++++
 14 files changed, 840 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h

diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index 7a67493bd..3d895f088 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -245,6 +245,9 @@ int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
 			     struct hcapi_cfa_data *mirror);
+int hcapi_cfa_p4_global_cfg_hwop(struct hcapi_cfa_hwop *op,
+				 uint32_t type,
+				 struct hcapi_cfa_data *config);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 54564e02e..ace7353be 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -45,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
+	'tf_core/tf_global_cfg.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 6210bc70e..202db4150 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -27,6 +27,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_global_cfg.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
@@ -36,3 +37,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_global_cfg.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 32f152314..7ade9927a 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -13,8 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_REG_GET = 821,
-	HWRM_TFT_REG_SET = 822,
+	HWRM_TFT_GET_GLOBAL_CFG = 821,
+	HWRM_TFT_SET_GLOBAL_CFG = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	HWRM_TFT_IF_TBL_SET = 827,
 	HWRM_TFT_IF_TBL_GET = 828,
@@ -66,18 +66,66 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
+struct tf_set_global_cfg_input;
+struct tf_get_global_cfg_input;
+struct tf_get_global_cfg_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
 struct tf_if_tbl_set_input;
 struct tf_if_tbl_get_input;
 struct tf_if_tbl_get_output;
+/* Input params for global config set */
+typedef struct tf_set_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the set applies to RX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the set applies to TX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config type */
+	uint32_t			 type;
+	/* Offset of the type */
+	uint32_t			 offset;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+	/* Data to set */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_set_global_cfg_input_t, *ptf_set_global_cfg_input_t;
+
+/* Input params for global config get */
+typedef struct tf_get_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the get applies to RX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the get applies to TX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config to retrieve */
+	uint32_t			 type;
+	/* Offset to retrieve */
+	uint32_t			 offset;
+	/* Size of the data to get in bytes */
+	uint16_t			 size;
+} tf_get_global_cfg_input_t, *ptf_get_global_cfg_input_t;
+
+/* Output params for global config */
+typedef struct tf_get_global_cfg_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+	/* Data to get */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_get_global_cfg_output_t, *ptf_get_global_cfg_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
-	uint16_t			 flags;
+	uint32_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
 #define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 489c461d1..0f119b45f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_em.h"
 #include "tf_rm.h"
+#include "tf_global_cfg.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "bitalloc.h"
@@ -277,6 +278,142 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
+/** Get global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_get_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_get_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+/** Set global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_set_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_set_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_identifier(struct tf *tfp,
 		    struct tf_alloc_identifier_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index fea222bee..3f54ab16b 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1611,6 +1611,83 @@ int tf_delete_em_entry(struct tf *tfp,
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
+/**
+ * @page global Global Configuration
+ *
+ * @ref tf_set_global_cfg
+ *
+ * @ref tf_get_global_cfg
+ */
+/**
+ * Tunnel Encapsulation Offsets
+ */
+enum tf_tunnel_encap_offsets {
+	TF_TUNNEL_ENCAP_L2,
+	TF_TUNNEL_ENCAP_NAT,
+	TF_TUNNEL_ENCAP_MPLS,
+	TF_TUNNEL_ENCAP_VXLAN,
+	TF_TUNNEL_ENCAP_GENEVE,
+	TF_TUNNEL_ENCAP_NVGRE,
+	TF_TUNNEL_ENCAP_GRE,
+	TF_TUNNEL_ENCAP_FULL_GENERIC
+};
+/**
+ * Global Configuration Table Types
+ */
+enum tf_global_config_type {
+	TF_TUNNEL_ENCAP,  /**< Tunnel Encap Config(TECT) */
+	TF_ACTION_BLOCK,  /**< Action Block Config(ABCR) */
+	TF_GLOBAL_CFG_TYPE_MAX
+};
+
+/**
+ * tf_global_cfg parameter definition
+ */
+struct tf_global_cfg_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Offset within the type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Value of the configuration
+	 * set - Read, Modify and Write
+	 * get - Read the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] Size of the configuration in bytes
+	 */
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * Get global configuration
+ *
+ * Retrieve the configuration
+ *
+ * Returns success or failure code.
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
+/**
+ * Update the global configuration table
+ *
+ * Read, modify and write the value.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
 /**
  * @page if_tbl Interface Table Access
  *
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index a3073c826..ead958418 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -45,6 +45,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
 	struct tf_if_tbl_cfg_parms if_tbl_cfg;
+	struct tf_global_cfg_cfg_parms global_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -128,6 +129,18 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * GLOBAL_CFG
+	 */
+	global_cfg.num_elements = TF_GLOBAL_CFG_TYPE_MAX;
+	global_cfg.cfg = tf_global_cfg_p4;
+	rc = tf_global_cfg_bind(tfp, &global_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -207,6 +220,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_global_cfg_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Global Cfg Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 5a0943ad7..1740a271f 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 struct tf_session;
@@ -606,6 +607,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_if_tbl)(struct tf *tfp,
 				 struct tf_if_tbl_get_parms *parms);
+
+	/**
+	 * Update global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_set_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
+
+	/**
+	 * Get global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_get_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 2dc34b853..652608264 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -108,6 +108,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_free_tbl_scope = NULL,
 	.tf_dev_set_if_tbl = NULL,
 	.tf_dev_get_if_tbl = NULL,
+	.tf_dev_set_global_cfg = NULL,
+	.tf_dev_get_global_cfg = NULL,
 };
 
 /**
@@ -140,4 +142,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 	.tf_dev_set_if_tbl = tf_if_tbl_set,
 	.tf_dev_get_if_tbl = tf_if_tbl_get,
+	.tf_dev_set_global_cfg = tf_global_cfg_set,
+	.tf_dev_get_global_cfg = tf_global_cfg_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 3b03a7c4e..7fabb4ba8 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -11,6 +11,7 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -96,4 +97,8 @@ struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
 	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
 };
 
+struct tf_global_cfg_cfg tf_global_cfg_p4[TF_GLOBAL_CFG_TYPE_MAX] = {
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_TUNNEL_ENCAP },
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_ACTION_BLOCK },
+};
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.c b/drivers/net/bnxt/tf_core/tf_global_cfg.c
new file mode 100644
index 000000000..4ed4039db
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.c
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_global_cfg.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+/**
+ * Global Cfg DBs.
+ */
+static void *global_cfg_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_global_cfg_get_hcapi_parms {
+	/**
+	 * [in] Global Cfg DB Handle
+	 */
+	void *global_cfg_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Check global_cfg_type and return hwrm type.
+ *
+ * [in] global_cfg_type
+ *   Global Cfg type
+ *
+ * [out] hwrm_type
+ *   HWRM device data type
+ *
+ * Returns:
+ *    0          - Success
+ *   -ENOTSUP    - Type not supported
+ */
+static int
+tf_global_cfg_get_hcapi_type(struct tf_global_cfg_get_hcapi_parms *parms)
+{
+	struct tf_global_cfg_cfg *global_cfg;
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	global_cfg = (struct tf_global_cfg_cfg *)parms->global_cfg_db;
+	cfg_type = global_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_GLOBAL_CFG_CFG_HCAPI)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = global_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_global_cfg_bind(struct tf *tfp __rte_unused,
+		   struct tf_global_cfg_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg DB already initialized\n");
+		return -EINVAL;
+	}
+
+	global_cfg_db[TF_DIR_RX] = parms->cfg;
+	global_cfg_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "Global Cfg - initialized\n");
+
+	return 0;
+}
+
+int
+tf_global_cfg_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Global Cfg DBs created\n");
+		return 0;
+	}
+
+	global_cfg_db[TF_DIR_RX] = NULL;
+	global_cfg_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_global_cfg_set(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_msg_set_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
+int
+tf_global_cfg_get(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.h b/drivers/net/bnxt/tf_core/tf_global_cfg.h
new file mode 100644
index 000000000..5c73bb115
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_GLOBAL_CFG_H_
+#define TF_GLOBAL_CFG_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/**
+ * The global cfg module provides processing of global cfg types.
+ */
+
+struct tf;
+
+/**
+ * Global cfg configuration enumeration.
+ */
+enum tf_global_cfg_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_GLOBAL_CFG_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_GLOBAL_CFG_CFG_HCAPI,
+};
+
+/**
+ * Global cfg configuration structure, used by the Device to configure
+ * how an individual global cfg type is configured in regard to the HCAPI type.
+ */
+struct tf_global_cfg_cfg {
+	/**
+	 * Global cfg config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Global Cfg configuration parameters
+ */
+struct tf_global_cfg_cfg_parms {
+	/**
+	 * Number of table types in the configuration array
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_global_cfg_cfg *cfg;
+};
+
+/**
+ * global cfg parameters
+ */
+struct tf_dev_global_cfg_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Offset within the type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Value of the configuration
+	 * set - Read, Modify and Write
+	 * get - Read the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] Size of the configuration in bytes
+	 */
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * @page global_cfg Global Configuration
+ *
+ * @ref tf_global_cfg_bind
+ *
+ * @ref tf_global_cfg_unbind
+ *
+ * @ref tf_global_cfg_set
+ *
+ * @ref tf_global_cfg_get
+ *
+ */
+/**
+ * Initializes the Global Cfg module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to Global Cfg configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_bind(struct tf *tfp,
+		   struct tf_global_cfg_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to Global Cfg configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_unbind(struct tf *tfp);
+
+/**
+ * Updates the global configuration table
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_set(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+/**
+ * Get global configuration
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_get(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+#endif /* TF_GLOBAL_CFG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 8c2dff8ad..035c0948d 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -991,6 +991,111 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 /* HWRM Tunneled messages */
 
+int
+tf_msg_get_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_get_global_cfg_input_t req = { 0 };
+	tf_get_global_cfg_output_t resp = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+	uint16_t resp_size = 0;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_GET_GLOBAL_CFG,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	resp_size = tfp_le_to_cpu_16(resp.size);
+	if (resp_size < params->config_sz_in_bytes)
+		return -EINVAL;
+
+	if (params->config)
+		tfp_memcpy(params->config,
+			   resp.data,
+			   params->config_sz_in_bytes);
+	else
+		return -EFAULT;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_set_global_cfg_input_t req = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	tfp_memcpy(req.data, params->config,
+		   params->config_sz_in_bytes);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SET_GLOBAL_CFG,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  enum tf_dir dir,
@@ -1066,8 +1171,8 @@ tf_msg_get_if_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
-		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX);
 
 	/* Populate the request */
 	req.fw_session_id =
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index c02a5203c..195710eb8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_tcam.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 
@@ -448,6 +449,36 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 
 /* HWRM Tunneled messages */
 
+/**
+ * Sends global cfg read request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to read parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_get_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
+/**
+ * Sends global cfg update request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to write parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_set_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
 /**
  * Sends bulk get message of a Table Type element to the firmware.
  *
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 31/51] net/bnxt: add support for EEM System memory
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (29 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
                           ` (19 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Randy Schacher, Venkat Duvvuru

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Select EEM Host or System memory via a build-time config parameter.
- Add EEM system memory support backed by kernel memory.
- Depends on DPDK changes that add support for the HWRM_OEM_CMD.
  (A worked example of the EEM page-table sizing moved into
  tf_em_common.c follows below.)
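
Not part of the patch: the EEM page-table sizing that moves into
tf_em_common.c in this patch can be checked with a small standalone
calculation. It assumes 4 KiB pages (the TF_EM_PAGE_SIZE used for
system memory), 64-bit pointers and 64-byte key records; the entry
count is a placeholder.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define EX_PAGE_SIZE     4096u                           /* 4 KiB page */
#define EX_PTRS_PER_PAGE (EX_PAGE_SIZE / sizeof(void *)) /* 512 ptrs   */

int main(void)
{
	uint64_t entries    = 1024 * 1024;          /* 1M EM entries     */
	uint64_t entry_size = 64;                   /* key record size   */
	uint64_t data       = entries * entry_size; /* 64 MiB of records */

	/* Leaf (data) pages, then pointer pages referencing the level below */
	uint64_t lvl2 = (data + EX_PAGE_SIZE - 1) / EX_PAGE_SIZE;         /* 16384 */
	uint64_t lvl1 = (lvl2 + EX_PTRS_PER_PAGE - 1) / EX_PTRS_PER_PAGE; /*    32 */
	uint64_t lvl0 = (lvl1 + EX_PTRS_PER_PAGE - 1) / EX_PTRS_PER_PAGE; /*     1 */

	printf("levels=3 l0=%" PRIu64 " l1=%" PRIu64 " l2=%" PRIu64 "\n",
	       lvl0, lvl1, lvl2);
	return 0;
}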

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 config/common_base                      |   1 +
 drivers/net/bnxt/Makefile               |   3 +
 drivers/net/bnxt/bnxt.h                 |   8 +
 drivers/net/bnxt/bnxt_hwrm.c            |  27 +
 drivers/net/bnxt/bnxt_hwrm.h            |   1 +
 drivers/net/bnxt/meson.build            |   2 +-
 drivers/net/bnxt/tf_core/Makefile       |   5 +-
 drivers/net/bnxt/tf_core/tf_core.c      |  13 +-
 drivers/net/bnxt/tf_core/tf_core.h      |   4 +-
 drivers/net/bnxt/tf_core/tf_device.c    |   5 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |   2 +-
 drivers/net/bnxt/tf_core/tf_em.h        | 113 +---
 drivers/net/bnxt/tf_core/tf_em_common.c | 683 ++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_common.h |  30 ++
 drivers/net/bnxt/tf_core/tf_em_host.c   | 689 +-----------------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 541 ++++++++++++++++---
 drivers/net/bnxt/tf_core/tf_if_tbl.h    |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c       |  24 +
 drivers/net/bnxt/tf_core/tf_tbl.h       |   7 +
 drivers/net/bnxt/tf_core/tfp.c          |  12 +
 drivers/net/bnxt/tf_core/tfp.h          |  15 +
 21 files changed, 1319 insertions(+), 870 deletions(-)

diff --git a/config/common_base b/config/common_base
index fe30c515e..370a48f02 100644
--- a/config/common_base
+++ b/config/common_base
@@ -220,6 +220,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
 # Compile burst-oriented Broadcom BNXT PMD driver
 #
 CONFIG_RTE_LIBRTE_BNXT_PMD=y
+CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM=n
 
 #
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 349b09c36..6b9544b5d 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -50,6 +50,9 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
 include $(SRCDIR)/hcapi/Makefile
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), y)
+CFLAGS += -DTF_USE_SYSTEM_MEM
+endif
 endif
 
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 65862abdc..43e5e7162 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -563,6 +563,13 @@ struct bnxt_rep_info {
 				     DEV_RX_OFFLOAD_SCATTER | \
 				     DEV_RX_OFFLOAD_RSS_HASH)
 
+#define  MAX_TABLE_SUPPORT 4
+#define  MAX_DIR_SUPPORT   2
+struct bnxt_dmabuf_info {
+	uint32_t entry_num;
+	int      fd[MAX_DIR_SUPPORT][MAX_TABLE_SUPPORT];
+};
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -780,6 +787,7 @@ struct bnxt {
 	uint16_t		port_svif;
 
 	struct tf		tfp;
+	struct bnxt_dmabuf_info dmabuf;
 	struct bnxt_ulp_context	*ulp_ctx;
 	struct bnxt_flow_stat_info *flow_stat;
 	uint8_t			flow_xstat;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index e6a28d07c..2605ef039 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -5506,3 +5506,30 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 
 	return 0;
 }
+
+#ifdef RTE_LIBRTE_BNXT_PMD_SYSTEM
+int
+bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num)
+{
+	struct hwrm_oem_cmd_input req = {0};
+	struct hwrm_oem_cmd_output *resp = bp->hwrm_cmd_resp_addr;
+	struct bnxt_dmabuf_info oem_data;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_OEM_CMD, BNXT_USE_CHIMP_MB);
+	req.IANA = 0x14e4;
+
+	memset(&oem_data, 0, sizeof(struct bnxt_dmabuf_info));
+	oem_data.entry_num = (entry_num);
+	memcpy(&req.oem_data[0], &oem_data, sizeof(struct bnxt_dmabuf_info));
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+	HWRM_CHECK_RESULT();
+
+	bp->dmabuf.entry_num = entry_num;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+#endif /* RTE_LIBRTE_BNXT_PMD_SYSTEM */
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 87cd40779..9e0b79904 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -276,4 +276,5 @@ int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
+int bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num);
 #endif
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index ace7353be..8f6ed419e 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -31,7 +31,6 @@ sources = files('bnxt_cpr.c',
         'tf_core/tf_em_common.c',
         'tf_core/tf_em_host.c',
         'tf_core/tf_em_internal.c',
-        'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
@@ -46,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
+	'tf_core/tf_em_host.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 202db4150..750c25c5e 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -16,8 +16,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), n)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
+else
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM) += tf_core/tf_em_system.c
+endif
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 0f119b45f..00b2775ed 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -540,10 +540,12 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_alloc_parms aparms = { 0 };
+	struct tf_tcam_alloc_parms aparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&aparms, 0, sizeof(struct tf_tcam_alloc_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -598,10 +600,13 @@ tf_set_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_set_parms sparms = { 0 };
+	struct tf_tcam_set_parms sparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&sparms, 0, sizeof(struct tf_tcam_set_parms));
+
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -667,10 +672,12 @@ tf_free_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_free_parms fparms = { 0 };
+	struct tf_tcam_free_parms fparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&fparms, 0, sizeof(struct tf_tcam_free_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 3f54ab16b..9e8042606 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1731,7 +1731,7 @@ struct tf_set_if_tbl_entry_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -1768,7 +1768,7 @@ struct tf_get_if_tbl_entry_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index ead958418..f08f7eba7 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -92,8 +92,11 @@ tf_dev_bind_p4(struct tf *tfp,
 	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
 	em_cfg.cfg = tf_em_ext_p4;
 	em_cfg.resources = resources;
+#ifdef TF_USE_SYSTEM_MEM
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_SYSTEM;
+#else
 	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
-
+#endif
 	rc = tf_em_ext_common_bind(tfp, &em_cfg);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 652608264..dfe626c8a 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -126,7 +126,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
-	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_common_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 39a216341..089026178 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -16,6 +16,9 @@
 
 #include "hcapi/hcapi_cfa_defs.h"
 
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -69,8 +72,16 @@
 #error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
+/*
+ * System memory always uses 4K pages
+ */
+#ifdef TF_USE_SYSTEM_MEM
+#define TF_EM_PAGE_SIZE (1 << TF_EM_PAGE_SIZE_4K)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SIZE_4K)
+#else
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
 #define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+#endif
 
 /*
  * Used to build GFID:
@@ -168,39 +179,6 @@ struct tf_em_cfg_parms {
  * @ref tf_em_ext_common_alloc
  */
 
-/**
- * Allocates EEM Table scope
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- *   -ENOMEM - Out of memory
- */
-int tf_alloc_eem_tbl_scope(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free's EEM Table scope control block
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			     struct tf_free_tbl_scope_parms *parms);
-
 /**
  * Insert record in to internal EM table
  *
@@ -374,8 +352,8 @@ int tf_em_ext_common_unbind(struct tf *tfp);
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+int tf_em_ext_alloc(struct tf *tfp,
+		    struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using host memory
@@ -390,40 +368,8 @@ int tf_em_ext_host_alloc(struct tf *tfp,
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
-
-/**
- * Alloc for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_alloc(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_free(struct tf *tfp,
-			  struct tf_free_tbl_scope_parms *parms);
+int tf_em_ext_free(struct tf *tfp,
+		   struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
@@ -510,8 +456,8 @@ tf_tbl_ext_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
 
 /**
  * Sets the specified external table type element.
@@ -529,26 +475,11 @@ int tf_tbl_ext_set(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
 
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data by invoking the
- * firmware.
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_system_set(struct tf *tfp,
-			  struct tf_tbl_set_parms *parms);
+int
+tf_em_ext_system_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms);
 
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 23a7fc9c2..8b02b8ba3 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -23,6 +23,8 @@
 
 #include "bnxt.h"
 
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
 /**
  * EM DBs.
@@ -281,19 +283,602 @@ tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
 		       struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
+	key_entry->hdr.pointer = result->pointer;
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
 
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
 
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		 uint32_t page_size)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(page_size,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    page_size);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, page_size,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 \
+		    " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * page_size,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Rx requested: "
+				    "%u\n",
+				    cnt);
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					(TF_KILOBYTE * TF_KILOBYTE)) /
+			(key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->rx_num_flows_in_k != 0 &&
+	    parms->rx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if (parms->tx_num_flows_in_k != 0 &&
+	    parms->tx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	return 0;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
 
 #ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
 #endif
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 =
+			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * delete callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables
+			[(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
 }
 
+
 int
 tf_em_ext_common_bind(struct tf *tfp,
 		      struct tf_em_cfg_parms *parms)
@@ -341,6 +926,7 @@ tf_em_ext_common_bind(struct tf *tfp,
 		init = 1;
 
 	mem_type = parms->mem_type;
+
 	return 0;
 }
 
@@ -375,31 +961,88 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms)
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_tbl_ext_host_set(tfp, parms);
-	else
-		return tf_tbl_ext_system_set(tfp, parms);
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	return rc;
 }
 
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_alloc(tfp, parms);
-	else
-		return tf_em_ext_system_alloc(tfp, parms);
+	return tf_em_ext_alloc(tfp, parms);
 }
 
 int
 tf_em_ext_common_free(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_free(tfp, parms);
-	else
-		return tf_em_ext_system_free(tfp, parms);
+	return tf_em_ext_free(tfp, parms);
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index bf01df9b8..fa313c458 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -101,4 +101,34 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   uint32_t offset,
 			   enum hcapi_cfa_em_table_type table_type);
 
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+int tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		     uint32_t page_size);
+
 #endif /* _TF_EM_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 2626a59fe..8cc92c438 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -22,7 +22,6 @@
 
 #include "bnxt.h"
 
-
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
 #define PTU_PTE_NEXT_TO_LAST   0x4UL
@@ -30,20 +29,6 @@
 /* Number of pointers per page_size */
 #define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
 /**
  * EM DBs.
  */
@@ -294,203 +279,6 @@ tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
 }
 
-/**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
 /**
  * Unregisters EM Ctx in Firmware
  *
@@ -552,7 +340,7 @@ tf_em_ctx_reg(struct tf *tfp,
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
+			rc = tf_em_size_table(tbl, TF_EM_PAGE_SIZE);
 
 			if (rc)
 				goto cleanup;
@@ -578,403 +366,8 @@ tf_em_ctx_reg(struct tf *tfp,
 	return rc;
 }
 
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->rx_num_flows_in_k != 0 &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if (parms->tx_num_flows_in_k != 0 &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
-
-	return 0;
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t mask;
-	uint32_t key0_hash;
-	uint32_t key1_hash;
-	uint32_t key0_index;
-	uint32_t key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
-							  TF_KEY0_TABLE :
-							  TF_KEY1_TABLE)];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_insert_eem_entry(tbl_scope_cb, parms);
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
 int
-tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
-}
-
-int
-tf_em_ext_host_alloc(struct tf *tfp,
-		     struct tf_alloc_tbl_scope_parms *parms)
+tf_em_ext_alloc(struct tf *tfp, struct tf_alloc_tbl_scope_parms *parms)
 {
 	int rc;
 	enum tf_dir dir;
@@ -1081,7 +474,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 cleanup_full:
 	free_parms.tbl_scope_id = parms->tbl_scope_id;
-	tf_em_ext_host_free(tfp, &free_parms);
+	tf_em_ext_free(tfp, &free_parms);
 	return -EINVAL;
 
 cleanup:
@@ -1094,8 +487,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 }
 
 int
-tf_em_ext_host_free(struct tf *tfp,
-		    struct tf_free_tbl_scope_parms *parms)
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
 {
 	int rc = 0;
 	enum tf_dir  dir;
@@ -1136,75 +529,3 @@ tf_em_ext_host_free(struct tf *tfp,
 	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
 	return rc;
 }
-
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	uint32_t tbl_scope_id;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	tbl_scope_id = parms->tbl_scope_id;
-
-	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Get the table scope control block associated with the
-	 * external pool
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	op.opcode = HCAPI_CFA_HWOPS_PUT;
-	key_tbl.base0 =
-		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = parms->idx;
-	key_obj.data = parms->data;
-	key_obj.size = parms->data_sz_in_bytes;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	return rc;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 10768df03..c47c8b93f 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -4,11 +4,24 @@
  */
 
 #include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <stdbool.h>
+#include <math.h>
+#include <sys/param.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+#include <unistd.h>
+#include <string.h>
+
 #include <rte_common.h>
 #include <rte_errno.h>
 #include <rte_log.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
 #include "tf_em.h"
 #include "tf_em_common.h"
 #include "tf_msg.h"
@@ -18,103 +31,503 @@
 
 #include "bnxt.h"
 
+enum tf_em_req_type {
+	TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ = 5,
+};
 
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
+struct tf_em_bnxt_lfc_req_hdr {
+	uint32_t ver;
+	uint32_t bus;
+	uint32_t devfn;
+	enum tf_em_req_type req_type;
+};
+
+struct tf_em_bnxt_lfc_cfa_eem_std_hdr {
+	uint16_t version;
+	uint16_t size;
+	uint32_t flags;
+	#define TF_EM_BNXT_LFC_EEM_CFG_PRIMARY_FUNC     (1 << 0)
+};
+
+struct tf_em_bnxt_lfc_dmabuf_fd {
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+};
+
+#ifndef __user
+#define __user
+#endif
+
+struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req {
+	struct tf_em_bnxt_lfc_cfa_eem_std_hdr std;
+	uint8_t dir;
+	uint32_t flags;
+	void __user *dma_fd;
+};
+
+struct tf_em_bnxt_lfc_req {
+	struct tf_em_bnxt_lfc_req_hdr hdr;
+	union {
+		struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req
+		       eem_dmabuf_export_req;
+		uint64_t hreq;
+	} req;
+};
+
+#define TF_EEM_BNXT_LFC_IOCTL_MAGIC     0x98
+#define BNXT_LFC_REQ    \
+	_IOW(TF_EEM_BNXT_LFC_IOCTL_MAGIC, 1, struct tf_em_bnxt_lfc_req)
+
+/**
+ * EM DBs.
  */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_insert_em_entry_parms *parms __rte_unused)
+extern void *eem_db[TF_DIR_MAX];
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+static void
+tf_em_dmabuf_mem_unmap(struct hcapi_cfa_em_table *tbl)
 {
-	return 0;
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no, pg_count;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			if (tp->pg_va_tbl != NULL &&
+			    tp->pg_va_tbl[page_no] != NULL &&
+			    tp->pg_size != 0) {
+				(void)munmap(tp->pg_va_tbl[page_no],
+					     tp->pg_size);
+			}
+		}
+
+		tfp_free((void *)tp->pg_va_tbl);
+		tfp_free((void *)tp->pg_pa_tbl);
+	}
 }
 
-/** delete EEM hash entry API
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
  *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
  *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
+ * [in] dir
+ *   Receive or transmit direction
  */
+static void
+tf_em_ctx_unreg(struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+		tf_em_dmabuf_mem_unmap(tbl);
+	}
+}
+
+static int tf_export_tbl_scope(int lfc_fd,
+			       int *fd,
+			       int bus,
+			       int devfn)
+{
+	struct tf_em_bnxt_lfc_req tf_lfc_req;
+	struct tf_em_bnxt_lfc_dmabuf_fd *dma_fd;
+	struct tfp_calloc_parms  mparms;
+	int rc;
+
+	memset(&tf_lfc_req, 0, sizeof(struct tf_em_bnxt_lfc_req));
+	tf_lfc_req.hdr.ver = 1;
+	tf_lfc_req.hdr.bus = bus;
+	tf_lfc_req.hdr.devfn = devfn;
+	tf_lfc_req.hdr.req_type = TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ;
+	tf_lfc_req.req.eem_dmabuf_export_req.flags = O_ACCMODE;
+	tf_lfc_req.req.eem_dmabuf_export_req.std.version = 1;
+
+	mparms.nitems = 1;
+	mparms.size = sizeof(struct tf_em_bnxt_lfc_dmabuf_fd);
+	mparms.alignment = 0;
+	tfp_calloc(&mparms);
+	dma_fd = (struct tf_em_bnxt_lfc_dmabuf_fd *)mparms.mem_va;
+	tf_lfc_req.req.eem_dmabuf_export_req.dma_fd = dma_fd;
+
+	rc = ioctl(lfc_fd, BNXT_LFC_REQ, &tf_lfc_req);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EXT EEM export channel_fd %d, rc=%d\n",
+			    lfc_fd,
+			    rc);
+		tfp_free(dma_fd);
+		return rc;
+	}
+
+	memcpy(fd, dma_fd->fd, sizeof(dma_fd->fd));
+	tfp_free(dma_fd);
+
+	return rc;
+}
+
 static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_delete_em_entry_parms *parms __rte_unused)
+tf_em_dmabuf_mem_map(struct hcapi_cfa_em_table *tbl,
+		     int dmabuf_fd)
 {
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no;
+	uint32_t pg_count;
+	uint32_t offset;
+	struct tfp_calloc_parms parms;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		offset = 0;
+
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0)
+			return -ENOMEM;
+
+		tp->pg_va_tbl = parms.mem_va;
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0) {
+			tfp_free((void *)tp->pg_va_tbl);
+			return -ENOMEM;
+		}
+
+		tp->pg_pa_tbl = parms.mem_va;
+		tp->pg_count = 0;
+		tp->pg_size =  TF_EM_PAGE_SIZE;
+
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			tp->pg_va_tbl[page_no] = mmap(NULL,
+						      TF_EM_PAGE_SIZE,
+						      PROT_READ | PROT_WRITE,
+						      MAP_SHARED,
+						      dmabuf_fd,
+						      offset);
+			if (tp->pg_va_tbl[page_no] == (void *)-1) {
+				TFP_DRV_LOG(ERR,
+		"MMap memory error. level:%d page:%d pg_count:%d - %s\n",
+					    level,
+					    page_no,
+					    pg_count,
+					    strerror(errno));
+				return -ENOMEM;
+			}
+			offset += tp->pg_size;
+			tp->pg_count++;
+		}
+	}
+
 	return 0;
 }
 
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_insert_em_entry_parms *parms)
+static int tf_mmap_tbl_scope(struct tf_tbl_scope_cb *tbl_scope_cb,
+			     enum tf_dir dir,
+			     int tbl_type,
+			     int dmabuf_fd)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *tbl;
+
+	if (tbl_type == TF_EFC_TABLE)
+		return 0;
+
+	tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[tbl_type];
+	return tf_em_dmabuf_mem_map(tbl, dmabuf_fd);
+}
+
+#define TF_LFC_DEVICE "/dev/bnxt_lfc"
+
+static int
+tf_prepare_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	int lfc_fd;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	lfc_fd = open(TF_LFC_DEVICE, O_RDWR);
+	if (lfc_fd < 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: open %s device error\n",
+			    TF_LFC_DEVICE);
+		return -ENOENT;
 	}
 
-	return tf_insert_eem_entry
-		(tbl_scope_cb, parms);
+	tbl_scope_cb->lfc_fd = lfc_fd;
+
+	return 0;
 }
 
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_delete_em_entry_parms *parms)
+static int
+offload_system_mmap(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int rc;
+	int dmabuf_fd;
+	enum tf_dir dir;
+	enum hcapi_cfa_em_table_type tbl_type;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	rc = tf_prepare_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EEM: Prepare bnxt_lfc channel failed\n");
+		return rc;
 	}
 
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
+	rc = tf_export_tbl_scope(tbl_scope_cb->lfc_fd,
+				 (int *)tbl_scope_cb->fd,
+				 tbl_scope_cb->bus,
+				 tbl_scope_cb->devfn);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "export dmabuf fd failed\n");
+		return rc;
+	}
+
+	tbl_scope_cb->valid = true;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (tbl_type = TF_KEY0_TABLE; tbl_type <
+			     TF_MAX_TABLE; tbl_type++) {
+			if (tbl_type == TF_EFC_TABLE)
+				continue;
+
+			dmabuf_fd = tbl_scope_cb->fd[(dir ? 0 : 1)][tbl_type];
+			rc = tf_mmap_tbl_scope(tbl_scope_cb,
+					       dir,
+					       tbl_type,
+					       dmabuf_fd);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "dir:%d tbl:%d mmap failed rc %d\n",
+					    dir,
+					    tbl_type,
+					    rc);
+				break;
+			}
+		}
+	}
+	return 0;
 }
 
-int
-tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
-		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+static int
+tf_destroy_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	close(tbl_scope_cb->lfc_fd);
+
 	return 0;
 }
 
-int
-tf_em_ext_system_free(struct tf *tfp __rte_unused,
-		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+static int
+tf_dmabuf_alloc(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp,
+		tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries);
+	if (rc)
+		TFP_DRV_LOG(ERR, "EEM: Failed to prepare system memory rc:%d\n",
+			    rc);
+
 	return 0;
 }
 
-int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
-			  struct tf_tbl_set_parms *parms __rte_unused)
+static int
+tf_dmabuf_free(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp, 0);
+	if (rc)
+		TFP_DRV_LOG(ERR, "EEM: Failed to cleanup system memory\n");
+
+	tf_destroy_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+
 	return 0;
 }
+
+int
+tf_em_ext_alloc(struct tf *tfp,
+		struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_free_parms fparms = { 0 };
+	int dir;
+	int i;
+	struct hcapi_cfa_em_table *em_tables;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+	tbl_scope_cb->bus = tfs->session_id.internal.bus;
+	tbl_scope_cb->devfn = tfs->session_id.internal.device;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	rc = tf_dmabuf_alloc(tfp, tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System DMA buff alloc failed\n");
+		return -EIO;
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+			if (i == TF_EFC_TABLE)
+				continue;
+
+			em_tables =
+				&tbl_scope_cb->em_ctx_info[dir].em_tables[i];
+
+			rc = tf_em_size_table(em_tables, TF_EM_PAGE_SIZE);
+			if (rc) {
+				TFP_DRV_LOG(ERR, "Size table failed\n");
+				goto cleanup;
+			}
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_create_tbl_pool_external(dir,
+					tbl_scope_cb,
+					em_tables[TF_RECORD_TABLE].num_entries,
+					em_tables[TF_RECORD_TABLE].entry_size);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	rc = offload_system_mmap(tbl_scope_cb);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System alloc mmap failed\n");
+		goto cleanup_full;
+	}
+
+	return rc;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int dir;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+
+	/* Free Table control block */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+
+		/* Unmap memory */
+		tf_em_ctx_unreg(tbl_scope_cb, dir);
+
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+	}
+
+	tf_dmabuf_free(tfp, tbl_scope_cb);
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
index 54d4c37f5..7eb72bd42 100644
--- a/drivers/net/bnxt/tf_core/tf_if_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -113,7 +113,7 @@ struct tf_if_tbl_set_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -143,7 +143,7 @@ struct tf_if_tbl_get_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [out] Entry size
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 035c0948d..ed506defa 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -813,7 +813,19 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = parms->hcapi_type;
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
@@ -869,7 +881,19 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct hwrm_tf_tcam_free_input req =  { 0 };
 	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(in_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = in_parms->hcapi_type;
 	req.count = 1;
 	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 2a10b47ce..f20e8d729 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -38,6 +38,13 @@ struct tf_em_caps {
  */
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
+#ifdef TF_USE_SYSTEM_MEM
+	int lfc_fd;
+	uint32_t bus;
+	uint32_t devfn;
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+	bool valid;
+#endif
 	int index;
 	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps em_caps[TF_DIR_MAX];
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 426a182a9..3eade3127 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -87,6 +87,18 @@ tfp_send_msg_tunneled(struct tf *tfp,
 	return rc;
 }
 
+#ifdef TF_USE_SYSTEM_MEM
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows)
+{
+	return bnxt_hwrm_oem_cmd(container_of(tfp,
+					      struct bnxt,
+					      tfp),
+				 max_flows);
+}
+#endif /* TF_USE_SYSTEM_MEM */
+
 /**
  * Allocates zero'ed memory from the heap.
  *
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8789eba1f..421a7d9f7 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -170,6 +170,21 @@ int
 tfp_msg_hwrm_oem_cmd(struct tf *tfp,
 		     uint32_t max_flows);
 
+/**
+ * Sends OEM command message to Chimp
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
 /**
  * Allocates zero'ed memory from the heap.
  *
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 32/51] net/bnxt: integrate with the latest tf core changes
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (30 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
                           ` (18 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Somnath Kotur, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

ULP changes to integrate with the latest session open
interface in tf_core
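
For reference, a minimal caller-side sketch of the new open flow, using only
names visible in the diff below (bp is the driver's bnxt device struct as in
ulp_ctx_session_open; the resource counts here are purely illustrative, not
the values the driver actually reserves):

	struct tf_open_session_parms params = { 0 };
	struct tf_session_resources *res = &params.resources;
	int rc;

	params.shadow_copy = false;
	params.device_type = TF_DEVICE_TYPE_WH;
	/* reserve per-direction resources before opening the session */
	res->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
	res->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;

	rc = tf_open_session(&bp->tfp, &params);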

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c | 46 ++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index c7281ab9a..a9ed5d92a 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -68,6 +68,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 	struct rte_eth_dev		*ethdev = bp->eth_dev;
 	int32_t				rc = 0;
 	struct tf_open_session_parms	params;
+	struct tf_session_resources	*resources;
 
 	memset(&params, 0, sizeof(params));
 
@@ -79,6 +80,51 @@ ulp_ctx_session_open(struct bnxt *bp,
 		return rc;
 	}
 
+	params.shadow_copy = false;
+	params.device_type = TF_DEVICE_TYPE_WH;
+	resources = &params.resources;
+	/** RX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 720;
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 720;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 16;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 416;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;
+
+	/** TX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_L2_CTXT] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 8;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 8;
+
+	/* EEM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+
 	rc = tf_open_session(&bp->tfp, &params);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
-- 
2.21.1 (Apple Git-122.3)




* [dpdk-dev] [PATCH v4 33/51] net/bnxt: add support for internal encap records
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (31 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 34/51] net/bnxt: add support for if table processing Ajit Khaparde
                           ` (17 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Somnath Kotur, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

Modifications to allow internal encap records to be supported:
- Modified the mapper index table processing to handle encap without an
  action record
- Modified the session open code to reserve some 64-byte internal encap
  records on Tx
- Modified the blob encap swap to support encap without action record
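
In outline (simplified from ulp_mapper_index_tbl_process in the diff below,
with field processing and error handling elided), the result loop now swaps
once after all fields rather than inside the loop:

	for (i = 0; i < (num_flds + encap_flds); i++) {
		/* mark where the encap fields begin in the blob */
		if (parms->device_params->encap_byte_swap && encap_flds &&
		    i == num_flds)
			ulp_blob_encap_swap_idx_set(&data);
		/* ... process result field i into the blob ... */
	}
	/* single byte swap after the loop, even when num_flds == 0 */
	if (parms->device_params->encap_byte_swap && encap_flds)
		ulp_blob_perform_encap_swap(&data);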

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c   |  3 +++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c | 29 +++++++++++++---------------
 drivers/net/bnxt/tf_ulp/ulp_utils.c  |  2 +-
 3 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index a9ed5d92a..4c1a1c44c 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -113,6 +113,9 @@ ulp_ctx_session_open(struct bnxt *bp,
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
 
+	/* ENCAP */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 734db7c6c..a9a625f9f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1473,7 +1473,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
 
-	if (!flds || !num_flds) {
+	if (!flds || (!num_flds && !encap_flds)) {
 		BNXT_TF_DBG(ERR, "template undefined for the index table\n");
 		return -EINVAL;
 	}
@@ -1482,7 +1482,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	for (i = 0; i < (num_flds + encap_flds); i++) {
 		/* set the swap index if encap swap bit is enabled */
 		if (parms->device_params->encap_byte_swap && encap_flds &&
-		    ((i + 1) == num_flds))
+		    (i == num_flds))
 			ulp_blob_encap_swap_idx_set(&data);
 
 		/* Process the result fields */
@@ -1495,18 +1495,15 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			BNXT_TF_DBG(ERR, "data field failed\n");
 			return rc;
 		}
+	}
 
-		/* if encap bit swap is enabled perform the bit swap */
-		if (parms->device_params->encap_byte_swap && encap_flds) {
-			if ((i + 1) == (num_flds + encap_flds))
-				ulp_blob_perform_encap_swap(&data);
+	/* if encap bit swap is enabled perform the bit swap */
+	if (parms->device_params->encap_byte_swap && encap_flds) {
+		ulp_blob_perform_encap_swap(&data);
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-			if ((i + 1) == (num_flds + encap_flds)) {
-				BNXT_TF_DBG(INFO, "Dump fter encap swap\n");
-				ulp_mapper_blob_dump(&data);
-			}
+		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
+		ulp_mapper_blob_dump(&data);
 #endif
-		}
 	}
 
 	/* Perform the tf table allocation by filling the alloc params */
@@ -1817,6 +1814,11 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
+			if (rc) {
+				BNXT_TF_DBG(ERR, "Resource type %d failed\n",
+					    tbl->resource_func);
+				return rc;
+			}
 			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected action resource %d\n",
@@ -1824,11 +1826,6 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 			return -EINVAL;
 		}
 	}
-	if (rc) {
-		BNXT_TF_DBG(ERR, "Resource type %d failed\n",
-			    tbl->resource_func);
-		return rc;
-	}
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
index 3a4157f22..3afaac647 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_utils.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -478,7 +478,7 @@ ulp_blob_perform_encap_swap(struct ulp_blob *blob)
 		BNXT_TF_DBG(ERR, "invalid argument\n");
 		return; /* failure */
 	}
-	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx + 1);
+	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx);
 	end_idx = ULP_BITS_2_BYTE(blob->write_idx);
 
 	while (idx <= end_idx) {
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 34/51] net/bnxt: add support for if table processing
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (32 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
                           ` (16 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for if table processing in the ulp mapper
layer. This enables support for the default partition action
record pointer interface table.
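
As a sketch of the interface change (names taken from the diff below; the
local variables are illustrative), callers of the class table list getter now
also receive the flow database table index:

	uint32_t num_tbls, fdb_tbl_idx;
	struct bnxt_ulp_mapper_tbl_info *tbls;

	tbls = ulp_mapper_class_tbl_list_get(dev_id, tid,
					     &num_tbls, &fdb_tbl_idx);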

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |   1 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   2 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 141 +++++++++++++++---
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   1 +
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 117 ++++++++-------
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   8 +-
 6 files changed, 187 insertions(+), 83 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4c1a1c44c..4835b951e 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -115,6 +115,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 
 	/* ENCAP */
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_16B] = 16;
 
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 22996e50e..384dc5b2c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -933,7 +933,7 @@ ulp_default_flow_db_cfa_action_get(struct bnxt_ulp_context *ulp_ctx,
 				   uint32_t flow_id,
 				   uint32_t *cfa_action)
 {
-	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX;
+	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION;
 	uint64_t hndl;
 	int32_t rc;
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index a9a625f9f..42bb98557 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -184,7 +184,8 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
 	return &ulp_act_tbl_list[idx];
 }
 
-/** Get a list of classifier tables that implement the flow
+/*
+ * Get a list of classifier tables that implement the flow
  * Gets a device dependent list of tables that implement the class template id
  *
  * dev_id [in] The device id of the forwarding element
@@ -193,13 +194,16 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
  *
  * num_tbls [out] The number of classifier tables in the returned array
  *
+ * fdb_tbl_idx [out] The flow database index Regular or default
+ *
  * returns An array of classifier tables to implement the flow, or NULL on
  * error
  */
 static struct bnxt_ulp_mapper_tbl_info *
 ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 			      uint32_t tid,
-			      uint32_t *num_tbls)
+			      uint32_t *num_tbls,
+			      uint32_t *fdb_tbl_idx)
 {
 	uint32_t idx;
 	uint32_t tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
@@ -212,7 +216,7 @@ ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 	 */
 	idx		= ulp_class_tmpl_list[tidx].start_tbl_idx;
 	*num_tbls	= ulp_class_tmpl_list[tidx].num_tbls;
-
+	*fdb_tbl_idx = ulp_class_tmpl_list[tidx].flow_db_table_type;
 	return &ulp_class_tbl_list[idx];
 }
 
@@ -256,7 +260,8 @@ ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
  */
 static struct bnxt_ulp_mapper_result_field_info *
 ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
-			     uint32_t *num_flds)
+			     uint32_t *num_flds,
+			     uint32_t *num_encap_flds)
 {
 	uint32_t idx;
 
@@ -265,6 +270,7 @@ ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
 
 	idx		= tbl->result_start_idx;
 	*num_flds	= tbl->result_num_fields;
+	*num_encap_flds = tbl->encap_num_fields;
 
 	/* NOTE: Need template to provide range checking define */
 	return &ulp_class_result_field_list[idx];
@@ -1146,6 +1152,7 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		struct bnxt_ulp_mapper_result_field_info *dflds;
 		struct bnxt_ulp_mapper_ident_info *idents;
 		uint32_t num_dflds, num_idents;
+		uint32_t encap_flds = 0;
 
 		/*
 		 * Since the cache entry is responsible for allocating
@@ -1166,8 +1173,9 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 
 		/* Create the result data blob */
-		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-		if (!dflds || !num_dflds) {
+		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds,
+						     &encap_flds);
+		if (!dflds || !num_dflds || encap_flds) {
 			BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 			rc = -EINVAL;
 			goto error;
@@ -1293,6 +1301,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	int32_t	trc;
 	enum bnxt_ulp_flow_mem_type mtype = parms->device_params->flow_mem_type;
 	int32_t rc = 0;
+	uint32_t encap_flds = 0;
 
 	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
 	if (!kflds || !num_kflds) {
@@ -1327,8 +1336,8 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	 */
 
 	/* Create the result data blob */
-	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-	if (!dflds || !num_dflds) {
+	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds, &encap_flds);
+	if (!dflds || !num_dflds || encap_flds) {
 		BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 		return -EINVAL;
 	}
@@ -1468,7 +1477,8 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Get the result fields list */
 	if (is_class_tbl)
-		flds = ulp_mapper_result_fields_get(tbl, &num_flds);
+		flds = ulp_mapper_result_fields_get(tbl, &num_flds,
+						    &encap_flds);
 	else
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
@@ -1761,6 +1771,76 @@ ulp_mapper_cache_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	return rc;
 }
 
+static int32_t
+ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			  struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_result_field_info *flds;
+	struct ulp_blob	data;
+	uint64_t idx;
+	uint16_t tmplen;
+	uint32_t i, num_flds;
+	int32_t rc = 0;
+	struct tf_set_if_tbl_entry_parms iftbl_params = { 0 };
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	uint32_t encap_flds;
+
+	/* Initialize the blob data */
+	if (!ulp_blob_init(&data, tbl->result_bit_size,
+			   parms->device_params->byte_order)) {
+		BNXT_TF_DBG(ERR, "Failed initial index table blob\n");
+		return -EINVAL;
+	}
+
+	/* Get the result fields list */
+	flds = ulp_mapper_result_fields_get(tbl, &num_flds, &encap_flds);
+
+	if (!flds || !num_flds || encap_flds) {
+		BNXT_TF_DBG(ERR, "template undefined for the IF table\n");
+		return -EINVAL;
+	}
+
+	/* process the result fields, loop through them */
+	for (i = 0; i < num_flds; i++) {
+		/* Process the result fields */
+		rc = ulp_mapper_result_field_process(parms,
+						     tbl->direction,
+						     &flds[i],
+						     &data,
+						     "IFtable Result");
+		if (rc) {
+			BNXT_TF_DBG(ERR, "data field failed\n");
+			return rc;
+		}
+	}
+
+	/* Get the index details from computed field */
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+
+	/* Perform the tf table set by filling the set params */
+	iftbl_params.dir = tbl->direction;
+	iftbl_params.type = tbl->resource_type;
+	iftbl_params.data = ulp_blob_data_get(&data, &tmplen);
+	iftbl_params.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+	iftbl_params.idx = idx;
+
+	rc = tf_set_if_tbl_entry(tfp, &iftbl_params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Set table[%d][%s][%d] failed rc=%d\n",
+			    iftbl_params.type,
+			    (iftbl_params.dir == TF_DIR_RX) ? "RX" : "TX",
+			    iftbl_params.idx,
+			    rc);
+		return rc;
+	}
+
+	/*
+	 * TBD: Need to look at the need to store idx in flow db to restore
+	 * the table to its original state on deletion of this entry.
+	 */
+	return rc;
+}
+
 static int32_t
 ulp_mapper_glb_resource_info_init(struct tf *tfp,
 				  struct bnxt_ulp_mapper_data *mapper_data)
@@ -1862,6 +1942,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE:
 			rc = ulp_mapper_cache_tbl_process(parms, tbl);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_IF_TABLE:
+			rc = ulp_mapper_if_tbl_process(parms, tbl);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected class resource %d\n",
 				    tbl->resource_func);
@@ -2064,20 +2147,29 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Get the action table entry from device id and act context id */
 	parms.act_tid = cparms->act_tid;
-	parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
-						     parms.act_tid,
-						     &parms.num_atbls);
-	if (!parms.atbls || !parms.num_atbls) {
-		BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		return -EINVAL;
+
+	/*
+	 * Perform the action table get only if act template is not zero
+	 * for act template zero like for default rules ignore the action
+	 * table processing.
+	 */
+	if (parms.act_tid) {
+		parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
+							     parms.act_tid,
+							     &parms.num_atbls);
+		if (!parms.atbls || !parms.num_atbls) {
+			BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			return -EINVAL;
+		}
 	}
 
 	/* Get the class table entry from device id and act context id */
 	parms.class_tid = cparms->class_tid;
 	parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
 						    parms.class_tid,
-						    &parms.num_ctbls);
+						    &parms.num_ctbls,
+						    &parms.tbl_idx);
 	if (!parms.ctbls || !parms.num_ctbls) {
 		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
 			    parms.dev_id, parms.class_tid);
@@ -2111,7 +2203,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	 * free each of them.
 	 */
 	rc = ulp_flow_db_fid_alloc(ulp_ctx,
-				   BNXT_ULP_REGULAR_FLOW_TABLE,
+				   parms.tbl_idx,
 				   cparms->func_id,
 				   &parms.fid);
 	if (rc) {
@@ -2120,11 +2212,14 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	}
 
 	/* Process the action template list from the selected action table*/
-	rc = ulp_mapper_action_tbls_process(&parms);
-	if (rc) {
-		BNXT_TF_DBG(ERR, "action tables failed creation for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		goto flow_error;
+	if (parms.act_tid) {
+		rc = ulp_mapper_action_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "action tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			goto flow_error;
+		}
 	}
 
 	/* All good. Now process the class template */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 89c08ab25..517422338 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -256,6 +256,7 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 			BNXT_TF_DBG(ERR, "Mark index greater than allocated\n");
 			return -EINVAL;
 		}
+		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
 	}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index ac84f88e9..66343b918 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -88,35 +88,36 @@ enum bnxt_ulp_byte_order {
 };
 
 enum bnxt_ulp_cf_idx {
-	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 0,
-	BNXT_ULP_CF_IDX_O_VTAG_NUM = 1,
-	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 2,
-	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 3,
-	BNXT_ULP_CF_IDX_I_VTAG_NUM = 4,
-	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 5,
-	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 6,
-	BNXT_ULP_CF_IDX_INCOMING_IF = 7,
-	BNXT_ULP_CF_IDX_DIRECTION = 8,
-	BNXT_ULP_CF_IDX_SVIF_FLAG = 9,
-	BNXT_ULP_CF_IDX_O_L3 = 10,
-	BNXT_ULP_CF_IDX_I_L3 = 11,
-	BNXT_ULP_CF_IDX_O_L4 = 12,
-	BNXT_ULP_CF_IDX_I_L4 = 13,
-	BNXT_ULP_CF_IDX_DEV_PORT_ID = 14,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 15,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 16,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 17,
-	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 18,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 19,
-	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 20,
-	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 21,
-	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 22,
-	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 23,
-	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 24,
-	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 25,
-	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 26,
-	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 27,
-	BNXT_ULP_CF_IDX_LAST = 28
+	BNXT_ULP_CF_IDX_NOT_USED = 0,
+	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 1,
+	BNXT_ULP_CF_IDX_O_VTAG_NUM = 2,
+	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 3,
+	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 4,
+	BNXT_ULP_CF_IDX_I_VTAG_NUM = 5,
+	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 6,
+	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 7,
+	BNXT_ULP_CF_IDX_INCOMING_IF = 8,
+	BNXT_ULP_CF_IDX_DIRECTION = 9,
+	BNXT_ULP_CF_IDX_SVIF_FLAG = 10,
+	BNXT_ULP_CF_IDX_O_L3 = 11,
+	BNXT_ULP_CF_IDX_I_L3 = 12,
+	BNXT_ULP_CF_IDX_O_L4 = 13,
+	BNXT_ULP_CF_IDX_I_L4 = 14,
+	BNXT_ULP_CF_IDX_DEV_PORT_ID = 15,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 16,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 17,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 18,
+	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 19,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 20,
+	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 21,
+	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 22,
+	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 23,
+	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 24,
+	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 25,
+	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
+	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
+	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
+	BNXT_ULP_CF_IDX_LAST = 29
 };
 
 enum bnxt_ulp_critical_resource {
@@ -133,11 +134,6 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
-enum bnxt_ulp_df_param_type {
-	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
-	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
-};
-
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
@@ -154,7 +150,8 @@ enum bnxt_ulp_flow_mem_type {
 enum bnxt_ulp_glb_regfile_index {
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 2
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
 };
 
 enum bnxt_ulp_hdr_type {
@@ -204,22 +201,22 @@ enum bnxt_ulp_priority {
 };
 
 enum bnxt_ulp_regfile_index {
-	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 0,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 1,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 2,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 3,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 4,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 5,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 6,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 7,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 8,
-	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 9,
-	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 10,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 11,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 12,
-	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 13,
-	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 14,
-	BNXT_ULP_REGFILE_INDEX_NOT_USED = 15,
+	BNXT_ULP_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 1,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 2,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 3,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 4,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 5,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 6,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 7,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 8,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 9,
+	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 10,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 11,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 12,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
+	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
+	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
 	BNXT_ULP_REGFILE_INDEX_LAST = 16
 };
 
@@ -265,10 +262,10 @@ enum bnxt_ulp_resource_func {
 enum bnxt_ulp_resource_sub_type {
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT_INDEX = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT_INDEX = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX = 1,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
 	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
 };
 
@@ -282,7 +279,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
 	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_BIG_ENDIAN = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
@@ -398,7 +394,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
 	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
 	BNXT_ULP_SYM_NO = 0,
@@ -489,6 +484,11 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_YES = 1
 };
 
+enum bnxt_ulp_wh_plus {
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+};
+
 enum bnxt_ulp_act_prop_sz {
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ = 4,
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ = 4,
@@ -588,4 +588,9 @@ enum bnxt_ulp_act_hid {
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
 	BNXT_ULP_ACT_HID_0040 = 0x0040
 };
+
+enum bnxt_ulp_df_tpl {
+	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5c4335847..1188223aa 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -150,9 +150,10 @@ struct bnxt_ulp_device_params {
 
 /* Flow Mapper */
 struct bnxt_ulp_mapper_tbl_list_info {
-	uint32_t	device_name;
-	uint32_t	start_tbl_idx;
-	uint32_t	num_tbls;
+	uint32_t		device_name;
+	uint32_t		start_tbl_idx;
+	uint32_t		num_tbls;
+	enum bnxt_ulp_fdb_type	flow_db_table_type;
 };
 
 struct bnxt_ulp_mapper_tbl_info {
@@ -183,6 +184,7 @@ struct bnxt_ulp_mapper_tbl_info {
 
 	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
+	uint32_t			comp_field_idx;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 35/51] net/bnxt: disable Tx vector mode if truflow is enabled
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (33 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 34/51] net/bnxt: add support for if table processing Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
                           ` (15 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Somnath Kotur, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Vector mode in the Tx handler is disabled when truflow is
enabled, since truflow now requires BD action record support.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 697cd6651..355025741 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1116,12 +1116,15 @@ bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev)
 {
 #ifdef RTE_ARCH_X86
 #ifndef RTE_LIBRTE_IEEE1588
+	struct bnxt *bp = eth_dev->data->dev_private;
+
 	/*
 	 * Vector mode transmit can be enabled only if not using scatter rx
 	 * or tx offloads.
 	 */
 	if (!eth_dev->data->scattered_rx &&
-	    !eth_dev->data->dev_conf.txmode.offloads) {
+	    !eth_dev->data->dev_conf.txmode.offloads &&
+	    !BNXT_TRUFLOW_EN(bp)) {
 		PMD_DRV_LOG(INFO, "Using vector mode transmit for port %d\n",
 			    eth_dev->data->port_id);
 		return bnxt_xmit_pkts_vec;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 36/51] net/bnxt: add index opcode and operand to mapper table
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (34 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
                           ` (14 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Extended the regfile and computed field operations into a common
index opcode operation; global resource operations are also handled
through the index opcode operation.
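
Roughly, the mapper now resolves a table index in one of three ways,
depending on the opcode. The helper below is an illustrative sketch, not
part of the patch; it only uses accessors that the patch itself uses
(ulp_mapper_glb_resource_read() and ULP_COMP_FLD_IDX_RD()).

	/* Illustrative sketch of the index resolution added by this patch. */
	static int32_t
	ulp_mapper_tbl_index_resolve_sketch(struct bnxt_ulp_mapper_parms *parms,
					    struct bnxt_ulp_mapper_tbl_info *tbl,
					    uint64_t *idx)
	{
		switch (tbl->index_opcode) {
		case BNXT_ULP_INDEX_OPCODE_GLOBAL:
			/* read a pre-allocated index from the global regfile;
			 * the stored value is big-endian and converted before use
			 */
			return ulp_mapper_glb_resource_read(parms->mapper_data,
							    tbl->direction,
							    tbl->index_operand,
							    idx);
		case BNXT_ULP_INDEX_OPCODE_COMP_FIELD:
			/* index comes from a computed field (used by IF tables) */
			*idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
			return 0;
		case BNXT_ULP_INDEX_OPCODE_ALLOCATE:
			/* index is allocated from the table itself and published
			 * to the regfile slot named by index_operand (see below)
			 */
			return 0;
		default:
			return -EINVAL;
		}
	}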

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 56 ++++++++++++++++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  9 ++-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 45 +++++----------
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  8 +++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  4 +-
 5 files changed, 80 insertions(+), 42 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 42bb98557..7b3b3d698 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1443,7 +1443,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	struct bnxt_ulp_mapper_result_field_info *flds;
 	struct ulp_flow_db_res_params	fid_parms;
 	struct ulp_blob	data;
-	uint64_t idx;
+	uint64_t idx = 0;
 	uint16_t tmplen;
 	uint32_t i, num_flds;
 	int32_t rc = 0, trc = 0;
@@ -1516,6 +1516,42 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 	}
 
+	/*
+	 * Check for index opcode, if it is Global then
+	 * no need to allocate the table, just set the table
+	 * and exit since it is not maintained in the flow db.
+	 */
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_GLOBAL) {
+		/* get the index from index operand */
+		if (tbl->index_operand < BNXT_ULP_GLB_REGFILE_INDEX_LAST &&
+		    ulp_mapper_glb_resource_read(parms->mapper_data,
+						 tbl->direction,
+						 tbl->index_operand,
+						 &idx)) {
+			BNXT_TF_DBG(ERR, "Glbl regfile[%d] read failed.\n",
+				    tbl->index_operand);
+			return -EINVAL;
+		}
+		/* set the Tf index table */
+		sparms.dir		= tbl->direction;
+		sparms.type		= tbl->resource_type;
+		sparms.data		= ulp_blob_data_get(&data, &tmplen);
+		sparms.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+		sparms.idx		= tfp_be_to_cpu_64(idx);
+		sparms.tbl_scope_id	= tbl_scope_id;
+
+		rc = tf_set_tbl_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "Glbl Set table[%d][%s][%d] failed rc=%d\n",
+				    sparms.type,
+				    (sparms.dir == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx,
+				    rc);
+			return rc;
+		}
+		return 0; /* success */
+	}
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1546,11 +1582,13 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Always storing values in Regfile in BE */
 	idx = tfp_cpu_to_be_64(idx);
-	rc = ulp_regfile_write(parms->regfile, tbl->regfile_idx, idx);
-	if (!rc) {
-		BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
-			    tbl->regfile_idx);
-		goto error;
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_ALLOCATE) {
+		rc = ulp_regfile_write(parms->regfile, tbl->index_operand, idx);
+		if (!rc) {
+			BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
+				    tbl->index_operand);
+			goto error;
+		}
 	}
 
 	/* Perform the tf table set by filling the set params */
@@ -1815,7 +1853,11 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	}
 
 	/* Get the index details from computed field */
-	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+	if (tbl->index_opcode != BNXT_ULP_INDEX_OPCODE_COMP_FIELD) {
+		BNXT_TF_DBG(ERR, "Invalid tbl index opcode\n");
+		return -EINVAL;
+	}
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
 
 	/* Perform the tf table set by filling the set params */
 	iftbl_params.dir = tbl->direction;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 8af23eff1..9b14fa0bd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -76,7 +76,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -90,7 +91,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -104,7 +106,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	}
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index e773afd60..d4c7bfa4d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -113,8 +113,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 0,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -135,8 +134,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -157,8 +155,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -179,8 +176,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -201,8 +197,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -223,8 +218,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -245,8 +239,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -267,8 +260,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -289,8 +281,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -311,8 +302,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -333,8 +323,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -355,8 +344,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -377,8 +365,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -399,8 +386,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -421,8 +407,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 66343b918..0215a5dde 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -161,6 +161,14 @@ enum bnxt_ulp_hdr_type {
 	BNXT_ULP_HDR_TYPE_LAST = 3
 };
 
+enum bnxt_ulp_index_opcode {
+	BNXT_ULP_INDEX_OPCODE_NOT_USED = 0,
+	BNXT_ULP_INDEX_OPCODE_ALLOCATE = 1,
+	BNXT_ULP_INDEX_OPCODE_GLOBAL = 2,
+	BNXT_ULP_INDEX_OPCODE_COMP_FIELD = 3,
+	BNXT_ULP_INDEX_OPCODE_LAST = 4
+};
+
 enum bnxt_ulp_mapper_opc {
 	BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 1188223aa..a3ddd33fd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -182,9 +182,9 @@ struct bnxt_ulp_mapper_tbl_info {
 	uint32_t	ident_start_idx;
 	uint16_t	ident_nums;
 
-	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
-	uint32_t			comp_field_idx;
+	enum bnxt_ulp_index_opcode	index_opcode;
+	uint32_t			index_operand;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 37/51] net/bnxt: add support for global resource templates
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (35 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
                           ` (13 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for the global resource templates, which are processed
once at startup so that their resources can be reused by the regular
templates.
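
The global template table ships empty in this patch; a later template
database update is expected to populate it. A hypothetical example of how
it would be used follows (the template id shown is made up, not a real
value from this series):

	/* Hypothetical entry; BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ must match the
	 * number of entries.  Each id names a class template that
	 * ulp_mapper_glb_template_table_init() runs once through
	 * ulp_mapper_class_tbls_process() at startup, so the resources it
	 * allocates can be shared by regular flow templates.
	 */
	uint32_t ulp_glb_template_tbl[] = {
		3,	/* made-up class template id, e.g. a loopback default rule */
	};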

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 178 +++++++++++++++++-
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |   1 +
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   3 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   6 +
 4 files changed, 181 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 7b3b3d698..6fd55b2a2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -80,15 +80,20 @@ ulp_mapper_glb_resource_write(struct bnxt_ulp_mapper_data *data,
  * returns 0 on success
  */
 static int32_t
-ulp_mapper_resource_ident_allocate(struct tf *tfp,
+ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 				   struct bnxt_ulp_mapper_data *mapper_data,
 				   struct bnxt_ulp_glb_resource_info *glb_res)
 {
 	struct tf_alloc_identifier_parms iparms = { 0 };
 	struct tf_free_identifier_parms fparms;
 	uint64_t regval;
+	struct tf *tfp;
 	int32_t rc = 0;
 
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
 	iparms.ident_type = glb_res->resource_type;
 	iparms.dir = glb_res->direction;
 
@@ -115,13 +120,76 @@ ulp_mapper_resource_ident_allocate(struct tf *tfp,
 		return rc;
 	}
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res[%s][%d][%d] = 0x%04x\n",
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
 		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
 		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
 #endif
 	return rc;
 }
 
+/*
+ * Internal function to allocate index tbl resource and store it in mapper data.
+ *
+ * returns 0 on success
+ */
+static int32_t
+ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
+				    struct bnxt_ulp_mapper_data *mapper_data,
+				    struct bnxt_ulp_glb_resource_info *glb_res)
+{
+	struct tf_alloc_tbl_entry_parms	aparms = { 0 };
+	struct tf_free_tbl_entry_parms	free_parms = { 0 };
+	uint64_t regval;
+	struct tf *tfp;
+	uint32_t tbl_scope_id;
+	int32_t rc = 0;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
+	/* Get the scope id */
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope rc=%d\n", rc);
+		return rc;
+	}
+
+	aparms.type = glb_res->resource_type;
+	aparms.dir = glb_res->direction;
+	aparms.search_enable = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO;
+	aparms.tbl_scope_id = tbl_scope_id;
+
+	/* Allocate the index tbl using tf api */
+	rc = tf_alloc_tbl_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to alloc identifier [%s][%d]\n",
+			    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+			    aparms.type);
+		return rc;
+	}
+
+	/* entries are stored as big-endian format */
+	regval = tfp_cpu_to_be_64((uint64_t)aparms.idx);
+	/* write to the mapper global resource */
+	rc = ulp_mapper_glb_resource_write(mapper_data, glb_res, regval);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to write to global resource id\n");
+		/* Free the allocated table entry when the update fails */
+		free_parms.dir = aparms.dir;
+		free_parms.type = aparms.type;
+		free_parms.idx = aparms.idx;
+		tf_free_tbl_entry(tfp, &free_parms);
+		return rc;
+	}
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
+		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
+#endif
+	return rc;
+}
+
 /* Retrieve the cache initialization parameters for the tbl_idx */
 static struct bnxt_ulp_cache_tbl_params *
 ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
@@ -132,6 +200,16 @@ ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
 	return &ulp_cache_tbl_params[tbl_idx];
 }
 
+/* Retrieve the global template table */
+static uint32_t *
+ulp_mapper_glb_template_table_get(uint32_t *num_entries)
+{
+	if (!num_entries)
+		return NULL;
+	*num_entries = BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ;
+	return ulp_glb_template_tbl;
+}
+
 /*
  * Get the size of the action property for a given index.
  *
@@ -659,7 +737,10 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 			return -EINVAL;
 		}
 		act_bit = tfp_be_to_cpu_64(act_bit);
-		act_val = ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit);
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit))
+			act_val = 1;
+		else
+			act_val = 0;
 		if (fld->field_bit_size > ULP_BYTE_2_BITS(sizeof(act_val))) {
 			BNXT_TF_DBG(ERR, "%s field size is incorrect\n", name);
 			return -EINVAL;
@@ -1552,6 +1633,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 		return 0; /* success */
 	}
+
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1616,6 +1698,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	fid_parms.direction	= tbl->direction;
 	fid_parms.resource_func	= tbl->resource_func;
 	fid_parms.resource_type	= tbl->resource_type;
+	fid_parms.resource_sub_type = tbl->resource_sub_type;
 	fid_parms.resource_hndl	= aparms.idx;
 	fid_parms.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO;
 
@@ -1884,7 +1967,7 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 }
 
 static int32_t
-ulp_mapper_glb_resource_info_init(struct tf *tfp,
+ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 				  struct bnxt_ulp_mapper_data *mapper_data)
 {
 	struct bnxt_ulp_glb_resource_info *glb_res;
@@ -1901,15 +1984,23 @@ ulp_mapper_glb_resource_info_init(struct tf *tfp,
 	for (idx = 0; idx < num_glb_res_ids; idx++) {
 		switch (glb_res[idx].resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_IDENTIFIER:
-			rc = ulp_mapper_resource_ident_allocate(tfp,
+			rc = ulp_mapper_resource_ident_allocate(ulp_ctx,
 								mapper_data,
 								&glb_res[idx]);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+			rc = ulp_mapper_resource_index_tbl_alloc(ulp_ctx,
+								 mapper_data,
+								 &glb_res[idx]);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Global resource %x not supported\n",
 				    glb_res[idx].resource_func);
+			rc = -EINVAL;
 			break;
 		}
+		if (rc)
+			return rc;
 	}
 	return rc;
 }
@@ -2125,7 +2216,9 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 			res.resource_func = ent->resource_func;
 			res.direction = dir;
 			res.resource_type = ent->resource_type;
-			res.resource_hndl = ent->resource_hndl;
+			/* convert it from BE to CPU */
+			res.resource_hndl =
+				tfp_be_to_cpu_64(ent->resource_hndl);
 			ulp_mapper_resource_free(ulp_ctx, &res);
 		}
 	}
@@ -2144,6 +2237,71 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
 					 BNXT_ULP_REGULAR_FLOW_TABLE);
 }
 
+/* Function to handle the default global templates that are allocated during
+ * the startup and reused later.
+ */
+static int32_t
+ulp_mapper_glb_template_table_init(struct bnxt_ulp_context *ulp_ctx)
+{
+	uint32_t *glbl_tmpl_list;
+	uint32_t num_glb_tmpls, idx, dev_id;
+	struct bnxt_ulp_mapper_parms parms;
+	struct bnxt_ulp_mapper_data *mapper_data;
+	int32_t rc = 0;
+
+	glbl_tmpl_list = ulp_mapper_glb_template_table_get(&num_glb_tmpls);
+	if (!glbl_tmpl_list || !num_glb_tmpls)
+		return rc; /* No global templates to process */
+
+	/* Get the device id from the ulp context */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	mapper_data = bnxt_ulp_cntxt_ptr2_mapper_data_get(ulp_ctx);
+	if (!mapper_data) {
+		BNXT_TF_DBG(ERR, "Failed to get the ulp mapper data\n");
+		return -EINVAL;
+	}
+
+	/* Iterate the global resources and process each one */
+	for (idx = 0; idx < num_glb_tmpls; idx++) {
+		/* Initialize the parms structure */
+		memset(&parms, 0, sizeof(parms));
+		parms.tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+		parms.ulp_ctx = ulp_ctx;
+		parms.dev_id = dev_id;
+		parms.mapper_data = mapper_data;
+
+		/* Get the class table entry from dev id and class id */
+		parms.class_tid = glbl_tmpl_list[idx];
+		parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
+							    parms.class_tid,
+							    &parms.num_ctbls,
+							    &parms.tbl_idx);
+		if (!parms.ctbls || !parms.num_ctbls) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		parms.device_params = bnxt_ulp_device_params_get(parms.dev_id);
+		if (!parms.device_params) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		rc = ulp_mapper_class_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "class tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return rc;
+		}
+	}
+	return rc;
+}
+
 /* Function to handle the mapping of the Flow to be compatible
  * with the underlying hardware.
  */
@@ -2316,7 +2474,7 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 	}
 
 	/* Allocate the global resource ids */
-	rc = ulp_mapper_glb_resource_info_init(tfp, data);
+	rc = ulp_mapper_glb_resource_info_init(ulp_ctx, data);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to initialize global resource ids\n");
 		goto error;
@@ -2344,6 +2502,12 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 		}
 	}
 
+	rc = ulp_mapper_glb_template_table_init(ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize global templates\n");
+		goto error;
+	}
+
 	return 0;
 error:
 	/* Ignore the return code in favor of returning the original error. */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 0215a5dde..7c0dc5ee4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -26,6 +26,7 @@
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
 #define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 2efd11447..beca3baa7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -546,3 +546,6 @@ uint32_t bnxt_ulp_encap_vtag_map[] = {
 	[1] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
 	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
+
+uint32_t ulp_glb_template_tbl[] = {
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index a3ddd33fd..4bcd02ba2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -299,4 +299,10 @@ extern struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[];
  */
 extern struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[];
 
+/*
+ * The ulp_global template table is used to initialize default entries
+ * that could be reused by other templates.
+ */
+extern uint32_t ulp_glb_template_tbl[];
+
 #endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 38/51] net/bnxt: add support for internal exact match entries
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (36 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 39/51] net/bnxt: add support for conditional execution of mapper tables Ajit Khaparde
                           ` (12 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for the internal exact match entries.
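
The core distinction the patch introduces is that an exact match resource
now records whether it lives in external (EEM, host memory) or internal
(on-chip) memory, and the tf_core memory type follows from that. A minimal
sketch of the selection, using only names from the patch below:

	/* Sketch of the memory-type selection applied when freeing an EM entry. */
	static enum tf_mem
	ulp_em_mem_type_sketch(const struct ulp_flow_db_res_params *res)
	{
		if (res->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE)
			return TF_MEM_EXTERNAL;	/* host-memory EEM entry */

		return TF_MEM_INTERNAL;		/* on-chip exact match entry */
	}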

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            | 38 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 13 +++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 21 ++++++----
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |  4 ++
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   |  6 +--
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 13 ++++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |  7 +++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  5 +++
 8 files changed, 85 insertions(+), 22 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4835b951e..1b52861d4 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -213,8 +213,27 @@ static int32_t
 ulp_eem_tbl_scope_init(struct bnxt *bp)
 {
 	struct tf_alloc_tbl_scope_parms params = {0};
+	uint32_t dev_id;
+	struct bnxt_ulp_device_params *dparms;
 	int rc;
 
+	/* Get the dev specific number of flows that needed to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope alloc is not required\n");
+		return 0;
+	}
+
 	bnxt_init_tbl_scope_parms(bp, &params);
 
 	rc = tf_alloc_tbl_scope(&bp->tfp, &params);
@@ -240,6 +259,8 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 	struct tf_free_tbl_scope_parms	params = {0};
 	struct tf			*tfp;
 	int32_t				rc = 0;
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id;
 
 	if (!ulp_ctx || !ulp_ctx->cfg_data)
 		return -EINVAL;
@@ -254,6 +275,23 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 		return -EINVAL;
 	}
 
+	/* Get the dev specific number of flows that needed to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope free is not required\n");
+		return 0;
+	}
+
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 384dc5b2c..7696de2a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -114,7 +114,8 @@ ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info *resource_info,
 	}
 
 	/* Store the handle as 64bit only for EM table entries */
-	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE &&
+	    params->resource_func != BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		resource_info->resource_hndl = (uint32_t)params->resource_hndl;
 		resource_info->resource_type = params->resource_type;
 		resource_info->resource_sub_type = params->resource_sub_type;
@@ -145,7 +146,8 @@ ulp_flow_db_res_info_to_params(struct ulp_fdb_resource_info *resource_info,
 	/* use the helper function to get the resource func */
 	params->resource_func = ulp_flow_db_resource_func_get(resource_info);
 
-	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    params->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		params->resource_hndl = resource_info->resource_em_handle;
 	} else if (params->resource_func & ULP_FLOW_DB_RES_FUNC_NEED_LOWER) {
 		params->resource_hndl = resource_info->resource_hndl;
@@ -908,7 +910,9 @@ ulp_flow_db_resource_hndl_get(struct bnxt_ulp_context *ulp_ctx,
 				}
 
 			} else if (resource_func ==
-				   BNXT_ULP_RESOURCE_FUNC_EM_TABLE){
+				   BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+				   resource_func ==
+				   BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 				*res_hndl = fid_res->resource_em_handle;
 				return 0;
 			}
@@ -966,7 +970,8 @@ static void ulp_flow_db_res_dump(struct ulp_fdb_resource_info	*r,
 
 	BNXT_TF_DBG(DEBUG, "Resource func = %x, nxt_resource_idx = %x\n",
 		    res_func, (ULP_FLOW_DB_RES_NXT_MASK & r->nxt_resource_idx));
-	if (res_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE)
+	if (res_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    res_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		BNXT_TF_DBG(DEBUG, "EM Handle = 0x%016" PRIX64 "\n",
 			    r->resource_em_handle);
 	else
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 6fd55b2a2..e2b771c9f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -556,15 +556,18 @@ ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp,
 }
 
 static inline int32_t
-ulp_mapper_eem_entry_free(struct bnxt_ulp_context *ulp,
-			  struct tf *tfp,
-			  struct ulp_flow_db_res_params *res)
+ulp_mapper_em_entry_free(struct bnxt_ulp_context *ulp,
+			 struct tf *tfp,
+			 struct ulp_flow_db_res_params *res)
 {
 	struct tf_delete_em_entry_parms fparms = { 0 };
 	int32_t rc;
 
 	fparms.dir		= res->direction;
-	fparms.mem		= TF_MEM_EXTERNAL;
+	if (res->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE)
+		fparms.mem = TF_MEM_EXTERNAL;
+	else
+		fparms.mem = TF_MEM_INTERNAL;
 	fparms.flow_handle	= res->resource_hndl;
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
@@ -1443,7 +1446,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 
 	/* do the transpose for the internal EM keys */
-	if (tbl->resource_type == TF_MEM_INTERNAL)
+	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		ulp_blob_perform_byte_reverse(&key);
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
@@ -2066,7 +2069,8 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
 			break;
-		case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
 			rc = ulp_mapper_em_tbl_process(parms, tbl);
 			break;
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
@@ -2119,8 +2123,9 @@ ulp_mapper_resource_free(struct bnxt_ulp_context *ulp,
 	case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 		rc = ulp_mapper_tcam_entry_free(ulp, tfp, res);
 		break;
-	case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
-		rc = ulp_mapper_eem_entry_free(ulp, tfp, res);
+	case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+	case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
+		rc = ulp_mapper_em_entry_free(ulp, tfp, res);
 		break;
 	case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 		rc = ulp_mapper_index_entry_free(ulp, tfp, res);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 517422338..b3527eccb 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -87,6 +87,9 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	/* Need to allocate 2 * Num flows to account for hash type bit */
 	mark_tbl->gfid_num_entries = dparms->mark_db_gfid_entries;
+	if (!mark_tbl->gfid_num_entries)
+		goto gfid_not_required;
+
 	mark_tbl->gfid_tbl = rte_zmalloc("ulp_rx_eem_flow_mark_table",
 					 mark_tbl->gfid_num_entries *
 					 sizeof(struct bnxt_gfid_mark_info),
@@ -109,6 +112,7 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 		    mark_tbl->gfid_num_entries - 1,
 		    mark_tbl->gfid_mask);
 
+gfid_not_required:
 	/* Add the mark tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
 	return 0;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index d4c7bfa4d..8eb559050 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -179,7 +179,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -284,7 +284,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -389,7 +389,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 7c0dc5ee4..3168d29a9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -149,10 +149,11 @@ enum bnxt_ulp_flow_mem_type {
 };
 
 enum bnxt_ulp_glb_regfile_index {
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
+	BNXT_ULP_GLB_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 1,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR = 3,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 4
 };
 
 enum bnxt_ulp_hdr_type {
@@ -257,8 +258,8 @@ enum bnxt_ulp_match_type_bitmask {
 
 enum bnxt_ulp_resource_func {
 	BNXT_ULP_RESOURCE_FUNC_INVALID = 0x00,
-	BNXT_ULP_RESOURCE_FUNC_EM_TABLE = 0x20,
-	BNXT_ULP_RESOURCE_FUNC_RSVD1 = 0x40,
+	BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE = 0x20,
+	BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE = 0x40,
 	BNXT_ULP_RESOURCE_FUNC_RSVD2 = 0x60,
 	BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE = 0x80,
 	BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE = 0x81,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index beca3baa7..7c440e3a4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -321,7 +321,12 @@ struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	.mark_db_gfid_entries   = 65536,
 	.flow_count_db_entries  = 16384,
 	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2
+	.num_phy_ports          = 2,
+	.ext_cntr_table_type    = 0,
+	.byte_count_mask        = 0x00000003ffffffff,
+	.packet_count_mask      = 0xfffffffc00000000,
+	.byte_count_shift       = 0,
+	.packet_count_shift     = 36
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 4bcd02ba2..5a7a7b910 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -146,6 +146,11 @@ struct bnxt_ulp_device_params {
 	uint64_t			flow_db_num_entries;
 	uint32_t			flow_count_db_entries;
 	uint32_t			num_resources_per_flow;
+	uint32_t			ext_cntr_table_type;
+	uint64_t			byte_count_mask;
+	uint64_t			packet_count_mask;
+	uint32_t			byte_count_shift;
+	uint32_t			packet_count_shift;
 };
 
 /* Flow Mapper */
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 39/51] net/bnxt: add support for conditional execution of mapper tables
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (37 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 40/51] net/bnxt: enable port MAC qcfg command for trusted VF Ajit Khaparde
                           ` (11 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for conditional execution of the mapper tables so that a
table tied to an action such as count is processed only when that
action is actually configured in the flow.
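
A hedged illustration, not part of the patch: a template table entry
could gate itself on the count action by using the new fields of
struct bnxt_ulp_mapper_tbl_info. The action-bit name
BNXT_ULP_ACTION_BIT_COUNT is assumed here; unrelated fields are left
at their defaults.

    static struct bnxt_ulp_mapper_tbl_info example_count_tbl = {
        .resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
        /* Process this table only when the flow has a count action. */
        .cond_opcode   = BNXT_ULP_COND_OPCODE_ACTION_BIT,
        .cond_operand  = BNXT_ULP_ACTION_BIT_COUNT,
    };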

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 45 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  1 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |  8 ++++
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 12 ++++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  2 +
 5 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index e2b771c9f..d0931d411 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2008,6 +2008,44 @@ ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 	return rc;
 }
 
+/*
+ * Function to process the conditional opcode of the mapper table.
+ * returns 1 to skip the table.
+ * return 0 to continue processing the table.
+ */
+static int32_t
+ulp_mapper_tbl_cond_opcode_process(struct bnxt_ulp_mapper_parms *parms,
+				   struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	int32_t rc = 1;
+
+	switch (tbl->cond_opcode) {
+	case BNXT_ULP_COND_OPCODE_NOP:
+		rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_COMP_FIELD:
+		if (tbl->cond_operand < BNXT_ULP_CF_IDX_LAST &&
+		    ULP_COMP_FLD_IDX_RD(parms, tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_ACTION_BIT:
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_HDR_BIT:
+		if (ULP_BITMAP_ISSET(parms->hdr_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	default:
+		BNXT_TF_DBG(ERR,
+			    "Invalid arg in mapper tbl for cond opcode\n");
+		break;
+	}
+	return rc;
+}
+
 /*
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
@@ -2027,6 +2065,9 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	for (i = 0; i < parms->num_atbls; i++) {
 		tbl = &parms->atbls[i];
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
@@ -2065,6 +2106,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 	for (i = 0; i < parms->num_ctbls; i++) {
 		struct bnxt_ulp_mapper_tbl_info *tbl = &parms->ctbls[i];
 
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
@@ -2326,6 +2370,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	memset(&parms, 0, sizeof(parms));
 	parms.act_prop = cparms->act_prop;
 	parms.act_bitmap = cparms->act;
+	parms.hdr_bitmap = cparms->hdr_bitmap;
 	parms.regfile = &regfile;
 	parms.hdr_field = cparms->hdr_field;
 	parms.comp_fld = cparms->comp_fld;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index b159081b1..19134830a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -62,6 +62,7 @@ struct bnxt_ulp_mapper_parms {
 	uint32_t				num_ctbls;
 	struct ulp_rte_act_prop			*act_prop;
 	struct ulp_rte_act_bitmap		*act_bitmap;
+	struct ulp_rte_hdr_bitmap		*hdr_bitmap;
 	struct ulp_rte_hdr_field		*hdr_field;
 	uint32_t				*comp_fld;
 	struct ulp_regfile			*regfile;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 41ac77c6f..8fffaecce 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1128,6 +1128,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* update the computed field to notify it is ipv4 header */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
@@ -1148,6 +1152,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* update the computed field to notify it is ipv6 header */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 3168d29a9..27628a510 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -118,7 +118,17 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
 	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
 	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
-	BNXT_ULP_CF_IDX_LAST = 29
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG = 29,
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 30,
+	BNXT_ULP_CF_IDX_LAST = 31
+};
+
+enum bnxt_ulp_cond_opcode {
+	BNXT_ULP_COND_OPCODE_NOP = 0,
+	BNXT_ULP_COND_OPCODE_COMP_FIELD = 1,
+	BNXT_ULP_COND_OPCODE_ACTION_BIT = 2,
+	BNXT_ULP_COND_OPCODE_HDR_BIT = 3,
+	BNXT_ULP_COND_OPCODE_LAST = 4
 };
 
 enum bnxt_ulp_critical_resource {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5a7a7b910..df999b18c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -165,6 +165,8 @@ struct bnxt_ulp_mapper_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	uint32_t			resource_type; /* TF_ enum type */
 	enum bnxt_ulp_resource_sub_type	resource_sub_type;
+	enum bnxt_ulp_cond_opcode	cond_opcode;
+	uint32_t			cond_operand;
 	uint8_t		direction;
 	uint32_t	priority;
 	uint8_t		srch_b4_alloc;
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 40/51] net/bnxt: enable port MAC qcfg command for trusted VF
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (38 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 39/51] net/bnxt: add support for conditional execution of mapper tables Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 41/51] net/bnxt: enhancements for port db Ajit Khaparde
                           ` (10 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue the HWRM_PORT_MAC_QCFG command on a trusted VF to fetch the port count.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 2605ef039..6ade32d1b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3194,14 +3194,14 @@ int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 
 	bp->port_svif = BNXT_SVIF_INVALID;
 
-	if (!BNXT_PF(bp))
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
 		return 0;
 
 	HWRM_PREP(&req, HWRM_PORT_MAC_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
-	HWRM_CHECK_RESULT();
+	HWRM_CHECK_RESULT_SILENT();
 
 	port_svif_info = rte_le_to_cpu_16(resp->port_svif_info);
 	if (port_svif_info &
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 41/51] net/bnxt: enhancements for port db
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (39 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 40/51] net/bnxt: enable port MAC qcfg command for trusted VF Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
                           ` (9 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

1. Add "enum bnxt_ulp_intf_type" as the second parameter to the
   port & func helper functions (see the usage sketch after this list)
2. Return VF-rep related port & func information from the helper
   functions
3. Allocate phy_port_list dynamically based on the port count
4. Introduce the ulp_func_id_tbl array for bookkeeping of func-related
   information, indexed by func_id
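
A hedged usage sketch, not part of the patch: with the new interface
type argument the same helper can return either the parent/driver
function id or the VF representor's function id. The wrapper name and
the port_id parameter below are illustrative only.

    static void
    bnxt_get_port_fids(uint16_t port_id, uint16_t *drv_fid,
                       uint16_t *vf_fid)
    {
        /* Driver (parent) function behind the port. */
        *drv_fid = bnxt_get_fw_func_id(port_id,
                                       BNXT_ULP_INTF_TYPE_INVALID);
        /*
         * VF function when the port is a VF representor; for a
         * non-representor port both calls return the port's own fid.
         */
        *vf_fid = bnxt_get_fw_func_id(port_id,
                                      BNXT_ULP_INTF_TYPE_VF_REP);
    }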

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c           |  64 ++++++++--
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h |   6 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c       |   2 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c  |   9 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.c    | 143 +++++++++++++++++------
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    |  56 +++++++--
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  22 +++-
 8 files changed, 250 insertions(+), 62 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 43e5e7162..32acced60 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -23,6 +23,7 @@
 
 #include "tf_core.h"
 #include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
 
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
@@ -879,10 +880,11 @@ extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops;
 int32_t bnxt_ulp_init(struct bnxt *bp);
 void bnxt_ulp_deinit(struct bnxt *bp);
 
-uint16_t bnxt_get_vnic_id(uint16_t port);
-uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
-uint16_t bnxt_get_fw_func_id(uint16_t port);
-uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif,
+		       enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_parif(uint16_t port, enum bnxt_ulp_intf_type type);
 uint16_t bnxt_get_phy_port_id(uint16_t port);
 uint16_t bnxt_get_vport(uint16_t port);
 enum bnxt_ulp_intf_type
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 355025741..332644d77 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5067,25 +5067,48 @@ static void bnxt_config_vf_req_fwd(struct bnxt *bp)
 }
 
 uint16_t
-bnxt_get_svif(uint16_t port_id, bool func_svif)
+bnxt_get_svif(uint16_t port_id, bool func_svif,
+	      enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->svif;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return func_svif ? bp->func_svif : bp->port_svif;
 }
 
 uint16_t
-bnxt_get_vnic_id(uint16_t port)
+bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt_vnic_info *vnic;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->dflt_vnic_id;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
@@ -5094,12 +5117,23 @@ bnxt_get_vnic_id(uint16_t port)
 }
 
 uint16_t
-bnxt_get_fw_func_id(uint16_t port)
+bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return bp->fw_fid;
@@ -5116,8 +5150,14 @@ bnxt_get_interface_type(uint16_t port)
 		return BNXT_ULP_INTF_TYPE_VF_REP;
 
 	bp = eth_dev->data->dev_private;
-	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
-			   : BNXT_ULP_INTF_TYPE_VF;
+	if (BNXT_PF(bp))
+		return BNXT_ULP_INTF_TYPE_PF;
+	else if (BNXT_VF_IS_TRUSTED(bp))
+		return BNXT_ULP_INTF_TYPE_TRUSTED_VF;
+	else if (BNXT_VF(bp))
+		return BNXT_ULP_INTF_TYPE_VF;
+
+	return BNXT_ULP_INTF_TYPE_INVALID;
 }
 
 uint16_t
@@ -5130,6 +5170,9 @@ bnxt_get_phy_port_id(uint16_t port_id)
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
 		vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
 		eth_dev = vfr->parent_dev;
 	}
 
@@ -5139,15 +5182,20 @@ bnxt_get_phy_port_id(uint16_t port_id)
 }
 
 uint16_t
-bnxt_get_parif(uint16_t port_id)
+bnxt_get_parif(uint16_t port_id, enum bnxt_ulp_intf_type type)
 {
-	struct bnxt_vf_representor *vfr;
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
-		vfr = eth_dev->data->dev_private;
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid - 1;
+
 		eth_dev = vfr->parent_dev;
 	}
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f772d4919..ebb71405b 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -6,6 +6,11 @@
 #ifndef _BNXT_TF_COMMON_H_
 #define _BNXT_TF_COMMON_H_
 
+#include <inttypes.h>
+
+#include "bnxt_ulp.h"
+#include "ulp_template_db_enum.h"
+
 #define BNXT_TF_DBG(lvl, fmt, args...)	PMD_DRV_LOG(lvl, fmt, ## args)
 
 #define BNXT_ULP_EM_FLOWS			8192
@@ -48,6 +53,7 @@ enum ulp_direction_type {
 enum bnxt_ulp_intf_type {
 	BNXT_ULP_INTF_TYPE_INVALID = 0,
 	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_TRUSTED_VF,
 	BNXT_ULP_INTF_TYPE_VF,
 	BNXT_ULP_INTF_TYPE_PF_REP,
 	BNXT_ULP_INTF_TYPE_VF_REP,
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 1b52861d4..e5e7e5f43 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -658,7 +658,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	rc = ulp_dparms_init(bp, bp->ulp_ctx);
 
 	/* create the port database */
-	rc = ulp_port_db_init(bp->ulp_ctx);
+	rc = ulp_port_db_init(bp->ulp_ctx, bp->port_cnt);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to create the port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 6eb2d6146..138b0b73d 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -128,7 +128,8 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	mapper_cparms.act_prop = &params.act_prop;
 	mapper_cparms.class_tid = class_id;
 	mapper_cparms.act_tid = act_tmpl;
-	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id,
+						    BNXT_ULP_INTF_TYPE_INVALID);
 	mapper_cparms.dir = params.dir;
 
 	/* Call the ulp mapper to create the flow in the hardware. */
@@ -226,7 +227,8 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	}
 
 	flow_id = (uint32_t)(uintptr_t)flow;
-	func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	func_id = bnxt_get_fw_func_id(dev->data->port_id,
+				      BNXT_ULP_INTF_TYPE_INVALID);
 
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
@@ -270,7 +272,8 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	if (ulp_ctx_deinit_allowed(bp)) {
 		ret = ulp_flow_db_session_flow_flush(ulp_ctx);
 	} else if (bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx)) {
-		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id);
+		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id,
+					      BNXT_ULP_INTF_TYPE_INVALID);
 		ret = ulp_flow_db_function_flow_flush(ulp_ctx, func_id);
 	}
 	if (ret)
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index ea27ef41f..659cefa07 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -33,7 +33,7 @@ ulp_port_db_allocate_ifindex(struct bnxt_ulp_port_db *port_db)
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt)
 {
 	struct bnxt_ulp_port_db *port_db;
 
@@ -60,6 +60,18 @@ int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
 			    "Failed to allocate mem for port interface list\n");
 		goto error_free;
 	}
+
+	/* Allocate the phy port list */
+	port_db->phy_port_list = rte_zmalloc("bnxt_ulp_phy_port_list",
+					     port_cnt *
+					     sizeof(struct ulp_phy_port_info),
+					     0);
+	if (!port_db->phy_port_list) {
+		BNXT_TF_DBG(ERR,
+			    "Failed to allocate mem for phy port list\n");
+		goto error_free;
+	}
+
 	return 0;
 
 error_free:
@@ -89,6 +101,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 	bnxt_ulp_cntxt_ptr2_port_db_set(ulp_ctxt, NULL);
 
 	/* Free up all the memory. */
+	rte_free(port_db->phy_port_list);
 	rte_free(port_db->ulp_intf_list);
 	rte_free(port_db);
 	return 0;
@@ -110,6 +123,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	struct ulp_phy_port_info *port_data;
 	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	struct ulp_func_if_info *func;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -134,20 +148,48 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	intf = &port_db->ulp_intf_list[ifindex];
 
 	intf->type = bnxt_get_interface_type(port_id);
+	intf->drv_func_id = bnxt_get_fw_func_id(port_id,
+						BNXT_ULP_INTF_TYPE_INVALID);
+
+	func = &port_db->ulp_func_id_tbl[intf->drv_func_id];
+	if (!func->func_valid) {
+		func->func_svif = bnxt_get_svif(port_id, true,
+						BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_spif = bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+		func->func_valid = true;
+	}
 
-	intf->func_id = bnxt_get_fw_func_id(port_id);
-	intf->func_svif = bnxt_get_svif(port_id, 1);
-	intf->func_spif = bnxt_get_phy_port_id(port_id);
-	intf->func_parif = bnxt_get_parif(port_id);
-	intf->default_vnic = bnxt_get_vnic_id(port_id);
-	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+	if (intf->type == BNXT_ULP_INTF_TYPE_VF_REP) {
+		intf->vf_func_id =
+			bnxt_get_fw_func_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+
+		func = &port_db->ulp_func_id_tbl[intf->vf_func_id];
+		func->func_svif =
+			bnxt_get_svif(port_id, true, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->func_spif =
+			bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+	}
 
-	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
-		port_data = &port_db->phy_port_list[intf->phy_port_id];
-		port_data->port_svif = bnxt_get_svif(port_id, 0);
+	port_data = &port_db->phy_port_list[func->phy_port_id];
+	if (!port_data->port_valid) {
+		port_data->port_svif =
+			bnxt_get_svif(port_id, false,
+				      BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_spif = bnxt_get_phy_port_id(port_id);
-		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_vport = bnxt_get_vport(port_id);
+		port_data->port_valid = true;
 	}
 
 	return 0;
@@ -194,6 +236,7 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 			    uint32_t ifindex,
+			    uint32_t fid_type,
 			    uint16_t *func_id)
 {
 	struct bnxt_ulp_port_db *port_db;
@@ -203,7 +246,12 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*func_id =  port_db->ulp_intf_list[ifindex].func_id;
+
+	if (fid_type == BNXT_ULP_DRV_FUNC_FID)
+		*func_id =  port_db->ulp_intf_list[ifindex].drv_func_id;
+	else
+		*func_id =  port_db->ulp_intf_list[ifindex].vf_func_id;
+
 	return 0;
 }
 
@@ -212,7 +260,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * svif_type [in] the svif type of the given ifindex.
  * svif [out] the svif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -220,21 +268,27 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t svif_type,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*svif = port_db->ulp_intf_list[ifindex].func_svif;
+
+	if (svif_type == BNXT_ULP_DRV_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
+	} else if (svif_type == BNXT_ULP_VF_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*svif = port_db->phy_port_list[phy_port_id].port_svif;
 	}
 
@@ -246,7 +300,7 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * spif_type [in] the spif type of the given ifindex.
  * spif [out] the spif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -254,21 +308,27 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t spif_type,
 		     uint16_t *spif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+
+	if (spif_type == BNXT_ULP_DRV_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
+	} else if (spif_type == BNXT_ULP_VF_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*spif = port_db->phy_port_list[phy_port_id].port_spif;
 	}
 
@@ -280,7 +340,7 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * parif_type [in] the parif type of the given ifindex.
  * parif [out] the parif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -288,21 +348,26 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t parif_type,
 		     uint16_t *parif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	if (parif_type == BNXT_ULP_DRV_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
+	} else if (parif_type == BNXT_ULP_VF_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*parif = port_db->phy_port_list[phy_port_id].port_parif;
 	}
 
@@ -321,16 +386,26 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 			     uint32_t ifindex,
+			     uint32_t vnic_type,
 			     uint16_t *vnic)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	} else {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	}
+
 	return 0;
 }
 
@@ -348,14 +423,16 @@ ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 		      uint32_t ifindex, uint16_t *vport)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+
+	func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+	phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 	*vport = port_db->phy_port_list[phy_port_id].port_vport;
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 87de3bcbc..b1419a34c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -9,19 +9,54 @@
 #include "bnxt_ulp.h"
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
+#define BNXT_PORT_DB_MAX_FUNC			2048
 
-/* Structure for the Port database resource information. */
-struct ulp_interface_info {
-	enum bnxt_ulp_intf_type	type;
-	uint16_t		func_id;
+enum bnxt_ulp_svif_type {
+	BNXT_ULP_DRV_FUNC_SVIF = 0,
+	BNXT_ULP_VF_FUNC_SVIF,
+	BNXT_ULP_PHY_PORT_SVIF
+};
+
+enum bnxt_ulp_spif_type {
+	BNXT_ULP_DRV_FUNC_SPIF = 0,
+	BNXT_ULP_VF_FUNC_SPIF,
+	BNXT_ULP_PHY_PORT_SPIF
+};
+
+enum bnxt_ulp_parif_type {
+	BNXT_ULP_DRV_FUNC_PARIF = 0,
+	BNXT_ULP_VF_FUNC_PARIF,
+	BNXT_ULP_PHY_PORT_PARIF
+};
+
+enum bnxt_ulp_vnic_type {
+	BNXT_ULP_DRV_FUNC_VNIC = 0,
+	BNXT_ULP_VF_FUNC_VNIC
+};
+
+enum bnxt_ulp_fid_type {
+	BNXT_ULP_DRV_FUNC_FID,
+	BNXT_ULP_VF_FUNC_FID
+};
+
+struct ulp_func_if_info {
+	uint16_t		func_valid;
 	uint16_t		func_svif;
 	uint16_t		func_spif;
 	uint16_t		func_parif;
-	uint16_t		default_vnic;
+	uint16_t		func_vnic;
 	uint16_t		phy_port_id;
 };
 
+/* Structure for the Port database resource information. */
+struct ulp_interface_info {
+	enum bnxt_ulp_intf_type	type;
+	uint16_t		drv_func_id;
+	uint16_t		vf_func_id;
+};
+
 struct ulp_phy_port_info {
+	uint16_t	port_valid;
 	uint16_t	port_svif;
 	uint16_t	port_spif;
 	uint16_t	port_parif;
@@ -35,7 +70,8 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
-	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	*phy_port_list;
+	struct ulp_func_if_info		ulp_func_id_tbl[BNXT_PORT_DB_MAX_FUNC];
 };
 
 /*
@@ -46,7 +82,7 @@ struct bnxt_ulp_port_db {
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt);
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt);
 
 /*
  * Deinitialize the port database. Memory is deallocated in
@@ -94,7 +130,8 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex, uint16_t *func_id);
+			    uint32_t ifindex, uint32_t fid_type,
+			    uint16_t *func_id);
 
 /*
  * Api to get the svif for a given ulp ifindex.
@@ -150,7 +187,8 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex, uint16_t *vnic);
+			     uint32_t ifindex, uint32_t vnic_type,
+			     uint16_t *vnic);
 
 /*
  * Api to get the vport id for a given ulp ifindex.
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 8fffaecce..073b3537f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -166,6 +166,8 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 	uint16_t port_id = svif;
 	uint32_t dir = 0;
 	struct ulp_rte_hdr_field *hdr_field;
+	enum bnxt_ulp_svif_type svif_type;
+	enum bnxt_ulp_intf_type if_type;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -187,7 +189,18 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 				    "Invalid port id\n");
 			return BNXT_TF_RC_ERROR;
 		}
-		ulp_port_db_svif_get(params->ulp_ctx, ifindex, dir, &svif);
+
+		if (dir == ULP_DIR_INGRESS) {
+			svif_type = BNXT_ULP_PHY_PORT_SVIF;
+		} else {
+			if_type = bnxt_get_interface_type(port_id);
+			if (if_type == BNXT_ULP_INTF_TYPE_VF_REP)
+				svif_type = BNXT_ULP_VF_FUNC_SVIF;
+			else
+				svif_type = BNXT_ULP_DRV_FUNC_SVIF;
+		}
+		ulp_port_db_svif_get(params->ulp_ctx, ifindex, svif_type,
+				     &svif);
 		svif = rte_cpu_to_be_16(svif);
 	}
 	hdr_field = &params->hdr_field[BNXT_ULP_PROTO_HDR_FIELD_SVIF_IDX];
@@ -1256,7 +1269,7 @@ ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
 
 	/* copy the PF of the current device into VNIC Property */
 	svif = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
-	svif = bnxt_get_vnic_id(svif);
+	svif = bnxt_get_vnic_id(svif, BNXT_ULP_INTF_TYPE_INVALID);
 	svif = rte_cpu_to_be_32(svif);
 	memcpy(&params->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 	       &svif, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1280,7 +1293,8 @@ ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using VF conversion */
-		pid = bnxt_get_vnic_id(vf_action->id);
+		pid = bnxt_get_vnic_id(vf_action->id,
+				       BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1307,7 +1321,7 @@ ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using port conversion */
-		pid = bnxt_get_vnic_id(port_id->id);
+		pid = bnxt_get_vnic_id(port_id->id, BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 42/51] net/bnxt: manage VF to VFR conduit
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (40 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 41/51] net/bnxt: enhancements for port db Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
                           ` (8 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

When VF-VFR conduits are created, a mark is added to the mark database.
mark_flag indicates whether the mark is valid and carries VFR
information (the VFR_ID bit in mark_flag). The Rx path checks for this
VFR_ID bit; however, the bit was not being set while adding the mark to
the database. Set the VFR_ID bit in the mark database entry when the
mark is added with BNXT_ULP_MARK_VFR_ID in mark_flag.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index b3527eccb..b2c8c349c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -18,6 +18,8 @@
 						BNXT_ULP_MARK_VALID)
 #define ULP_MARK_DB_ENTRY_IS_INVALID(mark_info) (!((mark_info)->flags &\
 						   BNXT_ULP_MARK_VALID))
+#define ULP_MARK_DB_ENTRY_SET_VFR_ID(mark_info) ((mark_info)->flags |=\
+						 BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_VFR_ID(mark_info) ((mark_info)->flags &\
 						BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_GLOBAL_HW_FID(mark_info) ((mark_info)->flags &\
@@ -263,6 +265,9 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
+
+		if (mark_flag & BNXT_ULP_MARK_VFR_ID)
+			ULP_MARK_DB_ENTRY_SET_VFR_ID(&mtbl->lfid_tbl[fid]);
 	}
 
 	return 0;
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 43/51] net/bnxt: parse reps along with other dev-args
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (41 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
                           ` (7 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Representor dev-args need to be parsed during PCI probe, as they
determine the subsequent probe of the VF representor ports as well.
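
An illustrative devargs string (the PCI address is a placeholder) that
relies on this parsing; it asks the PMD to probe VF representor ports
0-3 alongside the parent port:

    0000:06:02.0,representor=[0-3]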

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 332644d77..0b38c84e3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -98,8 +98,10 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+#define BNXT_DEVARG_REPRESENTOR	"representor"
 
 static const char *const bnxt_dev_args[] = {
+	BNXT_DEVARG_REPRESENTOR,
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
 	BNXT_DEVARG_MAX_NUM_KFLOWS,
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 44/51] net/bnxt: fill mapper parameters with default rules
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (42 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
                           ` (6 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Default rules are needed for packets to be punted between the
following entities in the non-offloaded path:
1. Device PORT to DPDK App
2. DPDK App to Device PORT
3. VF Representor to VF
4. VF to VF Representor

This patch fills in all the relevant information in the computed
fields and the act_prop fields so that the flow mapper can create the
necessary tables in the hardware to enable the default rules.
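
A hedged sketch, not part of the patch, of how a caller might request
one of these default rules. The wrapper name is made up, class_tid is
assumed to come from the generated default-rule templates, and the
value[] encoding follows ulp_df_dev_port_handler(), which ORs value[0]
and value[1] (so a small DPDK port id is assumed).

    static int
    bnxt_create_df_rule(struct rte_eth_dev *eth_dev, uint16_t port_id,
                        uint32_t class_tid, uint32_t *flow_id)
    {
        struct ulp_tlv_param param_list[] = {
            {
                .type   = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
                .length = 2,
                .value  = { 0, (uint8_t)(port_id & 0xff) },
            },
            {
                /* The list must be terminated with a LAST entry. */
                .type = BNXT_ULP_DF_PARAM_TYPE_LAST,
            },
        };

        /* On success, *flow_id can later be handed back to
         * ulp_default_flow_destroy(eth_dev, *flow_id).
         */
        return ulp_default_flow_create(eth_dev, param_list, class_tid,
                                       flow_id);
    }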

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c                |   6 +-
 drivers/net/bnxt/meson.build                  |   1 +
 drivers/net/bnxt/tf_ulp/Makefile              |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |  24 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |  30 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c       | 385 ++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  10 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |   3 +-
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |   5 +
 9 files changed, 444 insertions(+), 21 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 0b38c84e3..de8e11a6e 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1275,9 +1275,6 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
-	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_deinit(bp);
-
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
@@ -1333,6 +1330,9 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
+	if (BNXT_TRUFLOW_EN(bp))
+		bnxt_ulp_deinit(bp);
+
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 8f6ed419e..2939857ca 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -61,6 +61,7 @@ sources = files('bnxt_cpr.c',
 	'tf_ulp/ulp_rte_parser.c',
 	'tf_ulp/bnxt_ulp_flow.c',
 	'tf_ulp/ulp_port_db.c',
+	'tf_ulp/ulp_def_rules.c',
 
 	'rte_pmd_bnxt.c')
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 57341f876..3f1b43bae 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -16,3 +16,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index eecc09cea..3563f63fa 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -12,6 +12,8 @@
 
 #include "rte_ethdev.h"
 
+#include "ulp_template_db_enum.h"
+
 struct bnxt_ulp_data {
 	uint32_t			tbl_scope_id;
 	struct bnxt_ulp_mark_tbl	*mark_tbl;
@@ -49,6 +51,12 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+struct ulp_tlv_param {
+	enum bnxt_ulp_df_param_type type;
+	uint32_t length;
+	uint8_t value[16];
+};
+
 /*
  * Allow the deletion of context only for the bnxt device that
  * created the session
@@ -127,4 +135,20 @@ bnxt_ulp_cntxt_ptr2_port_db_set(struct bnxt_ulp_context	*ulp_ctx,
 struct bnxt_ulp_port_db *
 bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx);
 
+/* Function to create default flows. */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id);
+
+/* Function to destroy default flows. */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev,
+			 uint32_t flow_id);
+
+int
+bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		      struct rte_flow_error *error);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 138b0b73d..7ef306e58 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -207,7 +207,7 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev,
 }
 
 /* Function to destroy the rte flow. */
-static int
+int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 		      struct rte_flow *flow,
 		      struct rte_flow_error *error)
@@ -220,9 +220,10 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
@@ -233,17 +234,22 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
 		BNXT_TF_DBG(ERR, "Incorrect device params\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
-	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id);
-	if (ret)
-		rte_flow_error_set(error, -ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret) {
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+		if (error)
+			rte_flow_error_set(error, -ret,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
+	}
 
 	return ret;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
new file mode 100644
index 000000000..46b558f31
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt_tf_common.h"
+#include "ulp_template_struct.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_db_field.h"
+#include "ulp_utils.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+
+struct bnxt_ulp_def_param_handler {
+	int32_t (*vfr_func)(struct bnxt_ulp_context *ulp_ctx,
+			    struct ulp_tlv_param *param,
+			    struct bnxt_ulp_mapper_create_parms *mapper_params);
+};
+
+static int32_t
+ulp_set_svif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t svif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t svif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_svif_get(ulp_ctx, ifindex, svif_type, &svif);
+	if (rc)
+		return rc;
+
+	if (svif_type == BNXT_ULP_PHY_PORT_SVIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SVIF;
+	else if (svif_type == BNXT_ULP_DRV_FUNC_SVIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SVIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SVIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, svif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_spif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t spif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t spif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_spif_get(ulp_ctx, ifindex, spif_type, &spif);
+	if (rc)
+		return rc;
+
+	if (spif_type == BNXT_ULP_PHY_PORT_SPIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SPIF;
+	else if (spif_type == BNXT_ULP_DRV_FUNC_SPIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SPIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SPIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, spif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_parif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t  ifindex, uint8_t parif_type,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t parif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_parif_get(ulp_ctx, ifindex, parif_type, &parif);
+	if (rc)
+		return rc;
+
+	if (parif_type == BNXT_ULP_PHY_PORT_PARIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_PARIF;
+	else if (parif_type == BNXT_ULP_DRV_FUNC_PARIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_PARIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_PARIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, parif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vport_in_comp_fld(struct bnxt_ulp_context *ulp_ctx, uint32_t ifindex,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vport;
+	int rc;
+
+	rc = ulp_port_db_vport_get(ulp_ctx, ifindex, &vport);
+	if (rc)
+		return rc;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_PHY_PORT_VPORT,
+			    vport);
+	return 0;
+}
+
+static int32_t
+ulp_set_vnic_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t vnic_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vnic;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_default_vnic_get(ulp_ctx, ifindex, vnic_type, &vnic);
+	if (rc)
+		return rc;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_VNIC;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_VNIC;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, vnic);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vlan_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	struct ulp_rte_act_prop *act_prop = mapper_params->act_prop;
+
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_SET_VLAN_VID)) {
+		BNXT_TF_DBG(ERR,
+			    "VLAN already set, multiple VLANs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	port_id = rte_cpu_to_be_16(port_id);
+
+	ULP_BITMAP_SET(mapper_params->act->bits,
+		       BNXT_ULP_ACTION_BIT_SET_VLAN_VID);
+
+	memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG],
+	       &port_id, sizeof(port_id));
+
+	return 0;
+}
+
+static int32_t
+ulp_set_mark_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_MARK)) {
+		BNXT_TF_DBG(ERR,
+			    "MARK already set, multiple MARKs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_DEV_PORT_ID,
+			    port_id);
+
+	return 0;
+}
+
+static int32_t
+ulp_df_dev_port_handler(struct bnxt_ulp_context *ulp_ctx,
+			struct ulp_tlv_param *param,
+			struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t port_id;
+	uint32_t ifindex;
+	int rc;
+
+	port_id = param->value[0] | param->value[1];
+
+	rc = ulp_port_db_dev_port_to_ulp_index(ulp_ctx, port_id, &ifindex);
+	if (rc) {
+		BNXT_TF_DBG(ERR,
+				"Invalid port id\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* Set port SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_PHY_PORT_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_DRV_FUNC_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_PARIF,
+				       mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set uplink VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, true, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, false, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VPORT */
+	rc = ulp_set_vport_in_comp_fld(ulp_ctx, ifindex, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VLAN */
+	rc = ulp_set_vlan_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set MARK */
+	rc = ulp_set_mark_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+struct bnxt_ulp_def_param_handler ulp_def_handler_tbl[] = {
+	[BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID] = {
+			.vfr_func = ulp_df_dev_port_handler }
+};
+
+/*
+ * Function to create default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * param_list [in] Ptr to a list of parameters (Currently, only DPDK port_id).
+ * ulp_class_tid [in] Class template ID number.
+ * flow_id [out] Ptr to flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id)
+{
+	struct ulp_rte_hdr_field	hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	uint32_t			comp_fld[BNXT_ULP_CF_IDX_LAST];
+	struct bnxt_ulp_mapper_create_parms mapper_params = { 0 };
+	struct ulp_rte_act_prop		act_prop;
+	struct ulp_rte_act_bitmap	act = { 0 };
+	struct bnxt_ulp_context		*ulp_ctx;
+	uint32_t type;
+	int rc;
+
+	memset(&mapper_params, 0, sizeof(mapper_params));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(comp_fld, 0, sizeof(comp_fld));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	mapper_params.hdr_field = hdr_field;
+	mapper_params.act = &act;
+	mapper_params.act_prop = &act_prop;
+	mapper_params.comp_fld = comp_fld;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized. "
+				 "Failed to create default flow.\n");
+		return -EINVAL;
+	}
+
+	type = param_list->type;
+	while (type != BNXT_ULP_DF_PARAM_TYPE_LAST) {
+		if (ulp_def_handler_tbl[type].vfr_func) {
+			rc = ulp_def_handler_tbl[type].vfr_func(ulp_ctx,
+								param_list,
+								&mapper_params);
+			if (rc) {
+				BNXT_TF_DBG(ERR,
+					    "Failed to create default flow.\n");
+				return rc;
+			}
+		}
+
+		param_list++;
+		type = param_list->type;
+	}
+
+	mapper_params.class_tid = ulp_class_tid;
+
+	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params, flow_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create default flow.\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/*
+ * Function to destroy default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * flow_id [in] Flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev, uint32_t flow_id)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int rc;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		return -EINVAL;
+	}
+
+	rc = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				     BNXT_ULP_DEFAULT_FLOW_TABLE);
+	if (rc)
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+
+	return rc;
+}
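+
+/*
+ * Usage sketch, illustrative only and not part of this patch: create and
+ * then tear down one default flow from the driver. The class template id is
+ * taken as an argument, and the fields of ulp_tlv_param beyond .type are
+ * omitted here; real callers also fill in the DPDK port id value for the
+ * DEV_PORT_ID entry.
+ */
+static int32_t
+ulp_default_flow_example(struct rte_eth_dev *eth_dev, uint32_t class_tid)
+{
+	struct ulp_tlv_param params[] = {
+		{ .type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID },
+		{ .type = BNXT_ULP_DF_PARAM_TYPE_LAST }
+	};
+	uint32_t flow_id = 0;
+	int32_t rc;
+
+	rc = ulp_default_flow_create(eth_dev, params, class_tid, &flow_id);
+	if (rc)
+		return rc;
+
+	/* Default flows are released from the default flow table. */
+	return ulp_default_flow_destroy(eth_dev, flow_id);
+}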
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index d0931d411..e39398a1b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2274,16 +2274,15 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 }
 
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type)
 {
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
 		return -EINVAL;
 	}
 
-	return ulp_mapper_resources_free(ulp_ctx,
-					 fid,
-					 BNXT_ULP_REGULAR_FLOW_TABLE);
+	return ulp_mapper_resources_free(ulp_ctx, fid, flow_tbl_type);
 }
 
 /* Function to handle the default global templates that are allocated during
@@ -2486,7 +2485,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 flow_error:
 	/* Free all resources that were allocated during flow creation */
-	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
+	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
 	if (trc)
 		BNXT_TF_DBG(ERR, "Failed to free all resources rc=%d\n", trc);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 19134830a..b35065449 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -109,7 +109,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context	*ulp_ctx,
 
 /* Function that frees all resources associated with the flow. */
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type);
 
 /*
  * Function that frees all resources and can be called on default or regular
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 27628a510..2346797db 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -145,6 +145,11 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
+enum bnxt_ulp_df_param_type {
+	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
+	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
+};
+
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v4 45/51] net/bnxt: add VF-rep and stat templates
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (43 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
                           ` (5 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Add support for VF representors and flow counters to the ULP
templates.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |   21 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    2 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  424 +-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5198 +++++++++++++----
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  409 +-
 .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   87 +-
 7 files changed, 4948 insertions(+), 1656 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index e39398a1b..3f175fb51 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -22,7 +22,7 @@ ulp_mapper_glb_resource_info_list_get(uint32_t *num_entries)
 {
 	if (!num_entries)
 		return NULL;
-	*num_entries = BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+	*num_entries = BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 	return ulp_glb_resource_tbl;
 }
 
@@ -119,11 +119,6 @@ ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_identifier(tfp, &fparms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
-		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
-#endif
 	return rc;
 }
 
@@ -182,11 +177,6 @@ ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_tbl_entry(tfp, &free_parms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
-		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
-#endif
 	return rc;
 }
 
@@ -1441,9 +1431,6 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			return rc;
 		}
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	ulp_mapper_result_dump("EEM Result", tbl, &data);
-#endif
 
 	/* do the transpose for the internal EM keys */
 	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
@@ -1594,10 +1581,6 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	/* if encap bit swap is enabled perform the bit swap */
 	if (parms->device_params->encap_byte_swap && encap_flds) {
 		ulp_blob_perform_encap_swap(&data);
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
-		ulp_mapper_blob_dump(&data);
-#endif
 	}
 
 	/*
@@ -2255,7 +2238,7 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Iterate the global resources and process each one */
 	for (dir = TF_DIR_RX; dir < TF_DIR_MAX; dir++) {
-		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 		      idx++) {
 			ent = &mapper_data->glb_res_tbl[dir][idx];
 			if (ent->resource_func ==
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index b35065449..f6d55449b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -46,7 +46,7 @@ struct bnxt_ulp_mapper_glb_resource_entry {
 
 struct bnxt_ulp_mapper_data {
 	struct bnxt_ulp_mapper_glb_resource_entry
-		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ];
+		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ];
 	struct bnxt_ulp_mapper_cache_entry
 		*cache_tbl[BNXT_ULP_CACHE_TBL_MAX_SZ];
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 9b14fa0bd..3d6507399 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -9,62 +9,293 @@
 #include "ulp_rte_parser.h"
 
 uint16_t ulp_act_sig_tbl[BNXT_ULP_ACT_SIG_TBL_MAX_SZ] = {
-	[BNXT_ULP_ACT_HID_00a1] = 1,
-	[BNXT_ULP_ACT_HID_0029] = 2,
-	[BNXT_ULP_ACT_HID_0040] = 3
+	[BNXT_ULP_ACT_HID_0002] = 1,
+	[BNXT_ULP_ACT_HID_0022] = 2,
+	[BNXT_ULP_ACT_HID_0026] = 3,
+	[BNXT_ULP_ACT_HID_0006] = 4,
+	[BNXT_ULP_ACT_HID_0009] = 5,
+	[BNXT_ULP_ACT_HID_0029] = 6,
+	[BNXT_ULP_ACT_HID_002d] = 7,
+	[BNXT_ULP_ACT_HID_004b] = 8,
+	[BNXT_ULP_ACT_HID_004a] = 9,
+	[BNXT_ULP_ACT_HID_004f] = 10,
+	[BNXT_ULP_ACT_HID_004e] = 11,
+	[BNXT_ULP_ACT_HID_006c] = 12,
+	[BNXT_ULP_ACT_HID_0070] = 13,
+	[BNXT_ULP_ACT_HID_0021] = 14,
+	[BNXT_ULP_ACT_HID_0025] = 15,
+	[BNXT_ULP_ACT_HID_0043] = 16,
+	[BNXT_ULP_ACT_HID_0042] = 17,
+	[BNXT_ULP_ACT_HID_0047] = 18,
+	[BNXT_ULP_ACT_HID_0046] = 19,
+	[BNXT_ULP_ACT_HID_0064] = 20,
+	[BNXT_ULP_ACT_HID_0068] = 21,
+	[BNXT_ULP_ACT_HID_00a1] = 22,
+	[BNXT_ULP_ACT_HID_00df] = 23
 };
 
 struct bnxt_ulp_act_match_info ulp_act_match_list[] = {
 	[1] = {
-	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_hid = BNXT_ULP_ACT_HID_0002,
 	.act_sig = { .bits =
-		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
-		BNXT_ULP_ACTION_BIT_MARK |
-		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DROP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
-	.act_tid = 0
+	.act_tid = 1
 	},
 	[2] = {
+	.act_hid = BNXT_ULP_ACT_HID_0022,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[3] = {
+	.act_hid = BNXT_ULP_ACT_HID_0026,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[4] = {
+	.act_hid = BNXT_ULP_ACT_HID_0006,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[5] = {
+	.act_hid = BNXT_ULP_ACT_HID_0009,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[6] = {
 	.act_hid = BNXT_ULP_ACT_HID_0029,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
 		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[7] = {
+	.act_hid = BNXT_ULP_ACT_HID_002d,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
 		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.act_tid = 1
 	},
-	[3] = {
-	.act_hid = BNXT_ULP_ACT_HID_0040,
+	[8] = {
+	.act_hid = BNXT_ULP_ACT_HID_004b,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[9] = {
+	.act_hid = BNXT_ULP_ACT_HID_004a,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[10] = {
+	.act_hid = BNXT_ULP_ACT_HID_004f,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[11] = {
+	.act_hid = BNXT_ULP_ACT_HID_004e,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[12] = {
+	.act_hid = BNXT_ULP_ACT_HID_006c,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[13] = {
+	.act_hid = BNXT_ULP_ACT_HID_0070,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[14] = {
+	.act_hid = BNXT_ULP_ACT_HID_0021,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[15] = {
+	.act_hid = BNXT_ULP_ACT_HID_0025,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[16] = {
+	.act_hid = BNXT_ULP_ACT_HID_0043,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[17] = {
+	.act_hid = BNXT_ULP_ACT_HID_0042,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[18] = {
+	.act_hid = BNXT_ULP_ACT_HID_0047,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[19] = {
+	.act_hid = BNXT_ULP_ACT_HID_0046,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[20] = {
+	.act_hid = BNXT_ULP_ACT_HID_0064,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[21] = {
+	.act_hid = BNXT_ULP_ACT_HID_0068,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[22] = {
+	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 2
+	},
+	[23] = {
+	.act_hid = BNXT_ULP_ACT_HID_00df,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_VXLAN_ENCAP |
 		BNXT_ULP_ACTION_BIT_VPORT |
 		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
-	.act_tid = 2
+	.act_tid = 3
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 0
+	.num_tbls = 2,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 1,
-	.start_tbl_idx = 1
+	.start_tbl_idx = 2,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 2
+	.num_tbls = 3,
+	.start_tbl_idx = 3,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_STATS_64,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_ACTION_BIT,
+	.cond_operand = BNXT_ULP_ACTION_BIT_COUNT,
+	.direction = TF_DIR_RX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 0,
+	.result_bit_size = 64,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
@@ -72,7 +303,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 0,
+	.result_start_idx = 1,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -87,7 +318,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 26,
+	.result_start_idx = 27,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -97,12 +328,46 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 53,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 56,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_TX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 52,
+	.result_start_idx = 59,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
@@ -114,10 +379,19 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
-	.field_bit_size = 14,
+	.field_bit_size = 64,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
 	.field_bit_size = 1,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
@@ -131,7 +405,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_COUNT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -187,7 +471,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -195,11 +489,7 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.result_operand = {
-		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
@@ -212,7 +502,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -224,7 +524,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DROP & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 14,
@@ -308,7 +618,11 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 12,
@@ -336,6 +650,50 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 128,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 14,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index 8eb559050..feac30af2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -10,8 +10,8 @@
 
 uint16_t ulp_class_sig_tbl[BNXT_ULP_CLASS_SIG_TBL_MAX_SZ] = {
 	[BNXT_ULP_CLASS_HID_0080] = 1,
-	[BNXT_ULP_CLASS_HID_0000] = 2,
-	[BNXT_ULP_CLASS_HID_0087] = 3
+	[BNXT_ULP_CLASS_HID_0087] = 2,
+	[BNXT_ULP_CLASS_HID_0000] = 3
 };
 
 struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
@@ -23,1871 +23,4722 @@ struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
 		BNXT_ULP_HDR_BIT_O_UDP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 0,
+	.class_tid = 8,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[2] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0000,
+	.class_hid = BNXT_ULP_CLASS_HID_0087,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
+		BNXT_ULP_HDR_BIT_T_VXLAN |
+		BNXT_ULP_HDR_BIT_I_ETH |
+		BNXT_ULP_HDR_BIT_I_IPV4 |
+		BNXT_ULP_HDR_BIT_I_UDP |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT |
+		BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 1,
+	.class_tid = 9,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[3] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0087,
+	.class_hid = BNXT_ULP_CLASS_HID_0000,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_HDR_BIT_T_VXLAN |
-		BNXT_ULP_HDR_BIT_I_ETH |
-		BNXT_ULP_HDR_BIT_I_IPV4 |
-		BNXT_ULP_HDR_BIT_I_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
 	.field_sig = { .bits =
-		BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT |
-		BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 2,
+	.class_tid = 10,
 	.act_vnic = 0,
 	.wc_pri = 0
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 4,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 2,
+	.start_tbl_idx = 4,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 6,
+	.start_tbl_idx = 6,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((4 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 0
+	.start_tbl_idx = 12,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((5 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 17,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((6 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 20,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((7 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 1,
+	.start_tbl_idx = 23,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((8 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 5
+	.start_tbl_idx = 24,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((9 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 5,
+	.start_tbl_idx = 29,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
+	},
+	[((10 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 10
+	.start_tbl_idx = 34,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 0,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
 	.result_start_idx = 0,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 0,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 2,
+	.key_start_idx = 0,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 1,
+	.result_start_idx = 26,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 15,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 14,
-	.result_bit_size = 10,
+	.result_start_idx = 39,
+	.result_bit_size = 32,
 	.result_num_fields = 1,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
-	.ident_nums = 1,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 40,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 41,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 18,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 15,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 13,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 67,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 60,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 23,
-	.result_bit_size = 64,
-	.result_num_fields = 9,
-	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 80,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 71,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 32,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 92,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 73,
+	.key_start_idx = 26,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 33,
+	.result_start_idx = 118,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 131,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 86,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 46,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.key_start_idx = 39,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 157,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
-	.ident_nums = 1,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_TX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 89,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 47,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 52,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 170,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 131,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 55,
+	.key_start_idx = 65,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 183,
 	.result_bit_size = 64,
-	.result_num_fields = 9,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 196,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 197,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 142,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 64,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 198,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
-	.ident_nums = 1,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_VFR_FLAG,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 144,
+	.key_start_idx = 78,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 65,
+	.result_start_idx = 224,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 157,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 78,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 237,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 249,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 160,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 79,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 91,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 275,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 202,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 87,
-	.result_bit_size = 64,
+	.result_start_idx = 288,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 104,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 314,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 117,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 327,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 340,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_GLOBAL,
+	.index_operand = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 130,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 366,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 132,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 367,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 145,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 380,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 148,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 381,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 190,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 389,
+	.result_bit_size = 64,
 	.result_num_fields = 9,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
-	}
-};
-
-struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 201,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 398,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 203,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 399,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 216,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 412,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 219,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 413,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 261,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 421,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 272,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 430,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 274,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 431,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 287,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 444,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 290,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 445,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 332,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 453,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	}
+};
+
+struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
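+/*
+ * Result-field table for the class mapper templates.  Each entry below
+ * describes one result field: .field_bit_size is its width in bits and
+ * .result_opcode selects how the field is built.  SET_TO_ZERO entries
+ * carry no operand, SET_TO_CONSTANT entries take the value directly from
+ * .result_operand, and the SET_TO_REGFILE/SET_TO_GLB_REGFILE/
+ * SET_TO_COMP_FIELD entries place the 16-bit index enum in the first two
+ * operand bytes, high byte first ((IDX >> 8) & 0xff, IDX & 0xff).
+ */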
+struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_PARIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_PARIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_DST_PORT & 0xff,
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT & 0xff,
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x02}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_DST_PORT & 0xff,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_DST_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
-	}
-};
-
-struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
 	.field_bit_size = 10,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
@@ -2309,7 +5160,12 @@ struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 16,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 2346797db..695546437 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -6,7 +6,7 @@
 #ifndef ULP_TEMPLATE_DB_H_
 #define ULP_TEMPLATE_DB_H_
 
-#define BNXT_ULP_REGFILE_MAX_SZ 16
+#define BNXT_ULP_REGFILE_MAX_SZ 17
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_CACHE_TBL_MAX_SZ 4
@@ -18,15 +18,15 @@
 #define BNXT_ULP_CLASS_HID_SHFTL 23
 #define BNXT_ULP_CLASS_HID_MASK 255
 #define BNXT_ULP_ACT_SIG_TBL_MAX_SZ 256
-#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 4
+#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 24
 #define BNXT_ULP_ACT_HID_LOW_PRIME 7919
 #define BNXT_ULP_ACT_HID_HIGH_PRIME 7919
-#define BNXT_ULP_ACT_HID_SHFTR 0
+#define BNXT_ULP_ACT_HID_SHFTR 23
 #define BNXT_ULP_ACT_HID_SHFTL 23
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
-#define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
-#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
+#define BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ 5
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -242,7 +242,8 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
 	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
 	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
-	BNXT_ULP_REGFILE_INDEX_LAST = 16
+	BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR = 16,
+	BNXT_ULP_REGFILE_INDEX_LAST = 17
 };
 
 enum bnxt_ulp_search_before_alloc {
@@ -252,18 +253,18 @@ enum bnxt_ulp_search_before_alloc {
 };
 
 enum bnxt_ulp_fdb_resource_flags {
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01,
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00,
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01
 };
 
 enum bnxt_ulp_fdb_type {
-	BNXT_ULP_FDB_TYPE_DEFAULT = 1,
-	BNXT_ULP_FDB_TYPE_REGULAR = 0
+	BNXT_ULP_FDB_TYPE_REGULAR = 0,
+	BNXT_ULP_FDB_TYPE_DEFAULT = 1
 };
 
 enum bnxt_ulp_flow_dir_bitmask {
-	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000,
-	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000
+	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000,
+	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000
 };
 
 enum bnxt_ulp_match_type_bitmask {
@@ -285,190 +286,66 @@ enum bnxt_ulp_resource_func {
 };
 
 enum bnxt_ulp_resource_sub_type {
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1
 };
 
 enum bnxt_ulp_sym {
-	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
-	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
-	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
-	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
-	BNXT_ULP_SYM_ECV_VALID_NO = 0,
-	BNXT_ULP_SYM_ECV_VALID_YES = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
-	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
-	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
-	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
-	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
-	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
-	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
-	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
-	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
-	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
-	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
-	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
-	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
-	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
-	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
-	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
-	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
-	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
-	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
-	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
-	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_PKT_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_PKT_TYPE_L2 = 0,
-	BNXT_ULP_SYM_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_POP_VLAN_YES = 1,
 	BNXT_ULP_SYM_RECYCLE_CNT_IGNORE = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
 	BNXT_ULP_SYM_RECYCLE_CNT_ONE = 1,
-	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
 	BNXT_ULP_SYM_RECYCLE_CNT_TWO = 2,
-	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
+	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
+	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
 	BNXT_ULP_SYM_RESERVED_IGNORE = 0,
-	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
-	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
+	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
 	BNXT_ULP_SYM_TL2_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_NO = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV4 = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_NO = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_YES = 1,
@@ -478,40 +355,164 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_TL4_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_TCP = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GENEVE = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GRE = 3,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV4 = 4,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_PPPOE = 6,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR1 = 8,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR2 = 9,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
-	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
+	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
+	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
+	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
+	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
+	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
+	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
+	BNXT_ULP_SYM_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
+	BNXT_ULP_SYM_ECV_VALID_NO = 0,
+	BNXT_ULP_SYM_ECV_VALID_YES = 1,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
+	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
+	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
 	BNXT_ULP_SYM_WH_PLUS_INT_ACT_REC = 1,
-	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
-	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
 	BNXT_ULP_SYM_WH_PLUS_UC_ACT_REC = 0,
+	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
+	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
+	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
+	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
+	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
+	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
+	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
+	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
+	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_YES = 1
 };
 
 enum bnxt_ulp_wh_plus {
-	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448
 };
 
 enum bnxt_ulp_act_prop_sz {
@@ -604,18 +605,44 @@ enum bnxt_ulp_act_prop_idx {
 
 enum bnxt_ulp_class_hid {
 	BNXT_ULP_CLASS_HID_0080 = 0x0080,
-	BNXT_ULP_CLASS_HID_0000 = 0x0000,
-	BNXT_ULP_CLASS_HID_0087 = 0x0087
+	BNXT_ULP_CLASS_HID_0087 = 0x0087,
+	BNXT_ULP_CLASS_HID_0000 = 0x0000
 };
 
 enum bnxt_ulp_act_hid {
-	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_0002 = 0x0002,
+	BNXT_ULP_ACT_HID_0022 = 0x0022,
+	BNXT_ULP_ACT_HID_0026 = 0x0026,
+	BNXT_ULP_ACT_HID_0006 = 0x0006,
+	BNXT_ULP_ACT_HID_0009 = 0x0009,
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
-	BNXT_ULP_ACT_HID_0040 = 0x0040
+	BNXT_ULP_ACT_HID_002d = 0x002d,
+	BNXT_ULP_ACT_HID_004b = 0x004b,
+	BNXT_ULP_ACT_HID_004a = 0x004a,
+	BNXT_ULP_ACT_HID_004f = 0x004f,
+	BNXT_ULP_ACT_HID_004e = 0x004e,
+	BNXT_ULP_ACT_HID_006c = 0x006c,
+	BNXT_ULP_ACT_HID_0070 = 0x0070,
+	BNXT_ULP_ACT_HID_0021 = 0x0021,
+	BNXT_ULP_ACT_HID_0025 = 0x0025,
+	BNXT_ULP_ACT_HID_0043 = 0x0043,
+	BNXT_ULP_ACT_HID_0042 = 0x0042,
+	BNXT_ULP_ACT_HID_0047 = 0x0047,
+	BNXT_ULP_ACT_HID_0046 = 0x0046,
+	BNXT_ULP_ACT_HID_0064 = 0x0064,
+	BNXT_ULP_ACT_HID_0068 = 0x0068,
+	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_00df = 0x00df
 };
 
 enum bnxt_ulp_df_tpl {
 	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
-	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2,
+	BNXT_ULP_DF_TPL_VFREP_TO_VF = 3,
+	BNXT_ULP_DF_TPL_VF_TO_VFREP = 4,
+	BNXT_ULP_DF_TPL_DRV_FUNC_SVIF_PUSH_VLAN = 5,
+	BNXT_ULP_DF_TPL_PORT_SVIF_VID_VNIC_POP_VLAN = 6,
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC = 7
 };
+
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
index 84b952304..769542042 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
@@ -6,220 +6,275 @@
 #ifndef ULP_HDR_FIELD_ENUMS_H_
 #define ULP_HDR_FIELD_ENUMS_H_
 
-enum bnxt_ulp_hf0 {
-	BNXT_ULP_HF0_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF0_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF0_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF0_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF0_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF0_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF0_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF0_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF0_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF0_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF0_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF0_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF0_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF0_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF0_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF0_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF0_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF0_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF0_IDX_O_UDP_CSUM              = 23
-};
-
 enum bnxt_ulp_hf1 {
-	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF1_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF1_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF1_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF1_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF1_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF1_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF1_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF1_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF1_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF1_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF1_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF1_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF1_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF1_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF1_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF1_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF1_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF1_IDX_O_UDP_CSUM              = 23
+	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0
 };
 
 enum bnxt_ulp_hf2 {
-	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF2_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF2_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF2_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF2_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF2_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF2_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF2_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF2_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF2_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF2_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF2_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF2_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF2_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF2_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF2_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF2_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF2_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF2_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF2_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF2_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF2_IDX_O_UDP_CSUM              = 23,
-	BNXT_ULP_HF2_IDX_T_VXLAN_FLAGS           = 24,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD0           = 25,
-	BNXT_ULP_HF2_IDX_T_VXLAN_VNI             = 26,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD1           = 27,
-	BNXT_ULP_HF2_IDX_I_ETH_DMAC              = 28,
-	BNXT_ULP_HF2_IDX_I_ETH_SMAC              = 29,
-	BNXT_ULP_HF2_IDX_I_ETH_TYPE              = 30,
-	BNXT_ULP_HF2_IDX_IO_VLAN_CFI_PRI         = 31,
-	BNXT_ULP_HF2_IDX_IO_VLAN_VID             = 32,
-	BNXT_ULP_HF2_IDX_IO_VLAN_TYPE            = 33,
-	BNXT_ULP_HF2_IDX_II_VLAN_CFI_PRI         = 34,
-	BNXT_ULP_HF2_IDX_II_VLAN_VID             = 35,
-	BNXT_ULP_HF2_IDX_II_VLAN_TYPE            = 36,
-	BNXT_ULP_HF2_IDX_I_IPV4_VER              = 37,
-	BNXT_ULP_HF2_IDX_I_IPV4_TOS              = 38,
-	BNXT_ULP_HF2_IDX_I_IPV4_LEN              = 39,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_ID          = 40,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_OFF         = 41,
-	BNXT_ULP_HF2_IDX_I_IPV4_TTL              = 42,
-	BNXT_ULP_HF2_IDX_I_IPV4_NEXT_PID         = 43,
-	BNXT_ULP_HF2_IDX_I_IPV4_CSUM             = 44,
-	BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR         = 45,
-	BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR         = 46,
-	BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT          = 47,
-	BNXT_ULP_HF2_IDX_I_UDP_DST_PORT          = 48,
-	BNXT_ULP_HF2_IDX_I_UDP_LENGTH            = 49,
-	BNXT_ULP_HF2_IDX_I_UDP_CSUM              = 50
-};
-
-enum bnxt_ulp_hf_bitmask0 {
-	BNXT_ULP_HF0_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf3 {
+	BNXT_ULP_HF3_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf4 {
+	BNXT_ULP_HF4_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf5 {
+	BNXT_ULP_HF5_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf6 {
+	BNXT_ULP_HF6_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf7 {
+	BNXT_ULP_HF7_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf8 {
+	BNXT_ULP_HF8_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF8_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF8_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF8_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF8_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF8_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF8_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF8_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF8_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF8_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF8_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF8_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF8_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF8_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF8_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF8_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF8_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF8_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF8_IDX_O_UDP_CSUM              = 23
+};
+
+enum bnxt_ulp_hf9 {
+	BNXT_ULP_HF9_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF9_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF9_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF9_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF9_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF9_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF9_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF9_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF9_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF9_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF9_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF9_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF9_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF9_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF9_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF9_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF9_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF9_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF9_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF9_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF9_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF9_IDX_O_UDP_CSUM              = 23,
+	BNXT_ULP_HF9_IDX_T_VXLAN_FLAGS           = 24,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD0           = 25,
+	BNXT_ULP_HF9_IDX_T_VXLAN_VNI             = 26,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD1           = 27,
+	BNXT_ULP_HF9_IDX_I_ETH_DMAC              = 28,
+	BNXT_ULP_HF9_IDX_I_ETH_SMAC              = 29,
+	BNXT_ULP_HF9_IDX_I_ETH_TYPE              = 30,
+	BNXT_ULP_HF9_IDX_IO_VLAN_CFI_PRI         = 31,
+	BNXT_ULP_HF9_IDX_IO_VLAN_VID             = 32,
+	BNXT_ULP_HF9_IDX_IO_VLAN_TYPE            = 33,
+	BNXT_ULP_HF9_IDX_II_VLAN_CFI_PRI         = 34,
+	BNXT_ULP_HF9_IDX_II_VLAN_VID             = 35,
+	BNXT_ULP_HF9_IDX_II_VLAN_TYPE            = 36,
+	BNXT_ULP_HF9_IDX_I_IPV4_VER              = 37,
+	BNXT_ULP_HF9_IDX_I_IPV4_TOS              = 38,
+	BNXT_ULP_HF9_IDX_I_IPV4_LEN              = 39,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_ID          = 40,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_OFF         = 41,
+	BNXT_ULP_HF9_IDX_I_IPV4_TTL              = 42,
+	BNXT_ULP_HF9_IDX_I_IPV4_PROTO_ID         = 43,
+	BNXT_ULP_HF9_IDX_I_IPV4_CSUM             = 44,
+	BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR         = 45,
+	BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR         = 46,
+	BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT          = 47,
+	BNXT_ULP_HF9_IDX_I_UDP_DST_PORT          = 48,
+	BNXT_ULP_HF9_IDX_I_UDP_LENGTH            = 49,
+	BNXT_ULP_HF9_IDX_I_UDP_CSUM              = 50
+};
+
+enum bnxt_ulp_hf10 {
+	BNXT_ULP_HF10_IDX_SVIF_INDEX             = 0,
+	BNXT_ULP_HF10_IDX_O_ETH_DMAC             = 1,
+	BNXT_ULP_HF10_IDX_O_ETH_SMAC             = 2,
+	BNXT_ULP_HF10_IDX_O_ETH_TYPE             = 3,
+	BNXT_ULP_HF10_IDX_OO_VLAN_CFI_PRI        = 4,
+	BNXT_ULP_HF10_IDX_OO_VLAN_VID            = 5,
+	BNXT_ULP_HF10_IDX_OO_VLAN_TYPE           = 6,
+	BNXT_ULP_HF10_IDX_OI_VLAN_CFI_PRI        = 7,
+	BNXT_ULP_HF10_IDX_OI_VLAN_VID            = 8,
+	BNXT_ULP_HF10_IDX_OI_VLAN_TYPE           = 9,
+	BNXT_ULP_HF10_IDX_O_IPV4_VER             = 10,
+	BNXT_ULP_HF10_IDX_O_IPV4_TOS             = 11,
+	BNXT_ULP_HF10_IDX_O_IPV4_LEN             = 12,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_ID         = 13,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_OFF        = 14,
+	BNXT_ULP_HF10_IDX_O_IPV4_TTL             = 15,
+	BNXT_ULP_HF10_IDX_O_IPV4_PROTO_ID        = 16,
+	BNXT_ULP_HF10_IDX_O_IPV4_CSUM            = 17,
+	BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR        = 18,
+	BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR        = 19,
+	BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT         = 20,
+	BNXT_ULP_HF10_IDX_O_UDP_DST_PORT         = 21,
+	BNXT_ULP_HF10_IDX_O_UDP_LENGTH           = 22,
+	BNXT_ULP_HF10_IDX_O_UDP_CSUM             = 23
 };
 
 enum bnxt_ulp_hf_bitmask1 {
-	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000
 };
 
 enum bnxt_ulp_hf_bitmask2 {
-	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_VID         = 0x0000000010000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_VER          = 0x0000000004000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_NEXT_PID     = 0x0000000000100000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_CSUM          = 0x0000000000002000
+	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask3 {
+	BNXT_ULP_HF3_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask4 {
+	BNXT_ULP_HF4_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask5 {
+	BNXT_ULP_HF5_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask6 {
+	BNXT_ULP_HF6_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask7 {
+	BNXT_ULP_HF7_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask8 {
+	BNXT_ULP_HF8_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+};
+
+enum bnxt_ulp_hf_bitmask9 {
+	BNXT_ULP_HF9_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_VID         = 0x0000000010000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_VER          = 0x0000000004000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_PROTO_ID     = 0x0000000000100000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_CSUM          = 0x0000000000002000
 };
 
+enum bnxt_ulp_hf_bitmask10 {
+	BNXT_ULP_HF10_BITMASK_SVIF_INDEX         = 0x8000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_DMAC         = 0x4000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_SMAC         = 0x2000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_TYPE         = 0x1000000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_CFI_PRI    = 0x0800000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_VID        = 0x0400000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_TYPE       = 0x0200000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_CFI_PRI    = 0x0100000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_VID        = 0x0080000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_TYPE       = 0x0040000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_VER         = 0x0020000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TOS         = 0x0010000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_LEN         = 0x0008000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_ID     = 0x0004000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_OFF    = 0x0002000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TTL         = 0x0001000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_PROTO_ID    = 0x0000800000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_CSUM        = 0x0000400000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR    = 0x0000200000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR    = 0x0000100000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT     = 0x0000080000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT     = 0x0000040000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_LENGTH       = 0x0000020000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_CSUM         = 0x0000010000000000
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 7c440e3a4..f0a57cf65 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -294,60 +294,72 @@ struct bnxt_ulp_rte_act_info ulp_act_info[] = {
 
 struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[] = {
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	}
 };
 
 struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
-	.flow_mem_type          = BNXT_ULP_FLOW_MEM_TYPE_EXT,
-	.byte_order             = BNXT_ULP_BYTE_ORDER_LE,
-	.encap_byte_swap        = 1,
-	.flow_db_num_entries    = 32768,
-	.mark_db_lfid_entries   = 65536,
-	.mark_db_gfid_entries   = 65536,
-	.flow_count_db_entries  = 16384,
-	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2,
-	.ext_cntr_table_type    = 0,
-	.byte_count_mask        = 0x00000003ffffffff,
-	.packet_count_mask      = 0xfffffffc00000000,
-	.byte_count_shift       = 0,
-	.packet_count_shift     = 36
+		.flow_mem_type           = BNXT_ULP_FLOW_MEM_TYPE_EXT,
+		.byte_order              = BNXT_ULP_BYTE_ORDER_LE,
+		.encap_byte_swap         = 1,
+		.flow_db_num_entries     = 32768,
+		.mark_db_lfid_entries    = 65536,
+		.mark_db_gfid_entries    = 65536,
+		.flow_count_db_entries   = 16384,
+		.num_resources_per_flow  = 8,
+		.num_phy_ports           = 2,
+		.ext_cntr_table_type     = 0,
+		.byte_count_mask         = 0x0000000fffffffff,
+		.packet_count_mask       = 0xffffffff00000000,
+		.byte_count_shift        = 0,
+		.packet_count_shift      = 36
 	}
 };
 
 struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[] = {
 	[0] = {
-	.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index       = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction               = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_RX
 	},
 	[1] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction          = TF_DIR_TX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_TX
 	},
 	[2] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_L2_CTXT,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
-	.direction          = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_RX
+	},
+	[3] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_TX
+	},
+	[4] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+		.resource_type           = TF_TBL_TYPE_FULL_ACT_RECORD,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR,
+		.direction               = TF_DIR_TX
 	}
 };
 
@@ -547,10 +559,11 @@ struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
 };
 
 uint32_t bnxt_ulp_encap_vtag_map[] = {
-	[0] = BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
-	[1] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
-	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
 
 uint32_t ulp_glb_template_tbl[] = {
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC
 };
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 46/51] net/bnxt: create default flow rules for the VF-rep
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (44 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
                           ` (4 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Invoke three new APIs for default flow create/destroy and to get
the action pointer for a default flow.
Change ulp_intf_update() to accept an rte_eth_dev as input and invoke
it from the VF representor start function.
The ULP Mark Manager indicates whether the cfa_code returned in the
Rx completion descriptor was for one of the default flow rules
created for the VF representor conduit. In that case, the mark_id
returned is the VF rep's DPDK port id, which can be used to get the
corresponding rte_eth_dev struct in bnxt_vfr_recv().
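
Below is a minimal, illustrative sketch (not part of this patch) of the
Rx hand-off described above: when the ULP Mark Manager flags the
cfa_code as belonging to a VF-rep default rule, the mark_id is treated
as the VF rep's DPDK port id and the mbuf is diverted to that
representor. bnxt_vfr_recv() and rte_eth_devices[] are the real
driver/DPDK symbols; the wrapper example_handover_to_vfr() is a
hypothetical name used only for illustration.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Real driver symbol (see bnxt_reps.h below); returns 0 when the mbuf
 * was consumed by the representor's Rx ring.
 */
uint16_t bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id,
                       struct rte_mbuf *mbuf);

/* Hypothetical helper: divert the packet to the VF-rep when flagged. */
static inline int
example_handover_to_vfr(uint32_t vfr_flag, uint32_t mark_id,
                        uint16_t queue_id, struct rte_mbuf *mbuf)
{
        struct rte_eth_dev *vfr_dev;

        if (!vfr_flag)
                return 0;       /* normal Rx path keeps the packet */

        /* mark_id is the VF rep's DPDK port id. */
        vfr_dev = &rte_eth_devices[mark_id];
        if (vfr_dev->data == NULL)
                return 0;

        /* Non-zero => the packet was queued on the representor's ring. */
        return bnxt_vfr_recv((uint16_t)mark_id, queue_id, mbuf) == 0;
}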

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |   4 +-
 drivers/net/bnxt/bnxt_reps.c | 134 ++++++++++++++++++++++++-----------
 drivers/net/bnxt/bnxt_reps.h |   3 +-
 drivers/net/bnxt/bnxt_rxr.c  |  25 +++----
 drivers/net/bnxt/bnxt_txq.h  |   1 +
 5 files changed, 111 insertions(+), 56 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 32acced60..f16bf3319 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -806,8 +806,10 @@ struct bnxt_vf_representor {
 	uint16_t		fw_fid;
 	uint16_t		dflt_vnic_id;
 	uint16_t		svif;
-	uint16_t		tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	uint16_t		rx_cfa_code;
+	uint32_t		rep2vf_flow_id;
+	uint32_t		vf2rep_flow_id;
 	/* Private data store of associated PF/Trusted VF */
 	struct rte_eth_dev	*parent_dev;
 	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index ea6f0010f..a37a06184 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -12,6 +12,9 @@
 #include "bnxt_txr.h"
 #include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
+#include "bnxt_tf_common.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
@@ -29,30 +32,20 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 };
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf)
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf)
 {
 	struct bnxt_sw_rx_bd *prod_rx_buf;
 	struct bnxt_rx_ring_info *rep_rxr;
 	struct bnxt_rx_queue *rep_rxq;
 	struct rte_eth_dev *vfr_eth_dev;
 	struct bnxt_vf_representor *vfr_bp;
-	uint16_t vf_id;
 	uint16_t mask;
 	uint8_t que;
 
-	vf_id = bp->cfa_code_map[cfa_code];
-	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
-	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
-		return 1;
-	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	vfr_eth_dev = &rte_eth_devices[port_id];
 	if (!vfr_eth_dev)
 		return 1;
 	vfr_bp = vfr_eth_dev->data->dev_private;
-	if (vfr_bp->rx_cfa_code != cfa_code) {
-		/* cfa_code not meant for this VF rep!!?? */
-		return 1;
-	}
 	/* If rxq_id happens to be > max rep_queue, use rxq0 */
 	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
 	rep_rxq = vfr_bp->rx_queues[que];
@@ -127,7 +120,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	pthread_mutex_lock(&parent->rep_info->vfr_lock);
 	ptxq = parent->tx_queues[qid];
 
-	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+	ptxq->vfr_tx_cfa_action = vf_rep_bp->vfr_tx_cfa_action;
 
 	for (i = 0; i < nb_pkts; i++) {
 		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
@@ -135,7 +128,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	}
 
 	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
-	ptxq->tx_cfa_action = 0;
+	ptxq->vfr_tx_cfa_action = 0;
 	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
 
 	return rc;
@@ -252,10 +245,67 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
-static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+static int bnxt_tf_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
+{
+	int rc;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
+	struct rte_eth_dev *parent_dev = vfr->parent_dev;
+	struct bnxt *parent_bp = parent_dev->data->dev_private;
+	uint16_t vfr_port_id = vfr_ethdev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(vfr_port_id >> 8) & 0xff, vfr_port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	ulp_port_db_dev_port_intf_update(parent_bp->ulp_ctx, vfr_ethdev);
+
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VFREP_TO_VF,
+				     &vfr->rep2vf_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VFR->VF failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VFR->VF! ***\n");
+	BNXT_TF_DBG(DEBUG, "rep2vf_flow_id = %d\n", vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_db_cfa_action_get(parent_bp->ulp_ctx,
+						vfr->rep2vf_flow_id,
+						&vfr->vfr_tx_cfa_action);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Failed to get action_ptr for VFR->VF dflt rule\n");
+		return -EIO;
+	}
+	BNXT_TF_DBG(DEBUG, "tx_cfa_action = %d\n", vfr->vfr_tx_cfa_action);
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VF_TO_VFREP,
+				     &vfr->vf2rep_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VF->VFR failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VF->VFR! ***\n");
+	BNXT_TF_DBG(DEBUG, "vfr2rep_flow_id = %d\n", vfr->vf2rep_flow_id);
+
+	return 0;
+}
+
+static int bnxt_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
 {
 	int rc = 0;
-	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
 
 	if (!vfr || !vfr->parent_dev) {
 		PMD_DRV_LOG(ERR,
@@ -263,10 +313,8 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 		return -ENOMEM;
 	}
 
-	parent_bp = vfr->parent_dev->data->dev_private;
-
 	/* Check if representor has been already allocated in FW */
-	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+	if (vfr->vfr_tx_cfa_action && vfr->rx_cfa_code)
 		return 0;
 
 	/*
@@ -274,24 +322,14 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 	 * Otherwise the FW will create the VF-rep rules with
 	 * default drop action.
 	 */
-
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
-				     vfr->vf_id,
-				     &vfr->tx_cfa_action,
-				     &vfr->rx_cfa_code);
-	 */
-	if (!rc) {
-		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+	rc = bnxt_tf_vfr_alloc(vfr_ethdev);
+	if (!rc)
 		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
 			    vfr->vf_id);
-	} else {
+	else
 		PMD_DRV_LOG(ERR,
 			    "Failed to alloc representor %d in FW\n",
 			    vfr->vf_id);
-	}
 
 	return rc;
 }
@@ -312,7 +350,7 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
 	int rc;
 
-	rc = bnxt_vfr_alloc(rep_bp);
+	rc = bnxt_vfr_alloc(eth_dev);
 
 	if (!rc) {
 		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
@@ -327,6 +365,25 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	return rc;
 }
 
+static int bnxt_tf_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->rep2vf_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed rep2vf flowid: %d\n",
+			    vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->vf2rep_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed vf2rep flowid: %d\n",
+			    vfr->vf2rep_flow_id);
+	return 0;
+}
+
 static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 {
 	int rc = 0;
@@ -341,15 +398,10 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp = vfr->parent_dev->data->dev_private;
 
 	/* Check if representor has been already freed in FW */
-	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+	if (!vfr->vfr_tx_cfa_action && !vfr->rx_cfa_code)
 		return 0;
 
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
-				    vfr->vf_id);
-	 */
+	rc = bnxt_tf_vfr_free(vfr);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
 			    "Failed to free representor %d in FW\n",
@@ -360,7 +412,7 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
 	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
 		    vfr->vf_id);
-	vfr->tx_cfa_action = 0;
+	vfr->vfr_tx_cfa_action = 0;
 	vfr->rx_cfa_code = 0;
 
 	return rc;
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index 5c2e0a0b9..418b95afc 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -13,8 +13,7 @@
 #define BNXT_VF_IDX_INVALID             0xffff
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf);
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 37b534fc2..64058879e 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -403,9 +403,9 @@ bnxt_get_rx_ts_thor(struct bnxt *bp, uint32_t rx_ts_cmpl)
 }
 #endif
 
-static void
+static uint32_t
 bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
-			  struct rte_mbuf *mbuf)
+			  struct rte_mbuf *mbuf, uint32_t *vfr_flag)
 {
 	uint32_t cfa_code;
 	uint32_t meta_fmt;
@@ -415,8 +415,6 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	uint32_t flags2;
 	uint32_t gfid_support = 0;
 	int rc;
-	uint32_t vfr_flag;
-
 
 	if (BNXT_GFID_ENABLED(bp))
 		gfid_support = 1;
@@ -485,19 +483,21 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	}
 
 	rc = ulp_mark_db_mark_get(bp->ulp_ctx, gfid,
-				  cfa_code, &vfr_flag, &mark_id);
+				  cfa_code, vfr_flag, &mark_id);
 	if (!rc) {
 		/* Got the mark, write it to the mbuf and return */
 		mbuf->hash.fdir.hi = mark_id;
 		mbuf->udata64 = (cfa_code & 0xffffffffull) << 32;
 		mbuf->hash.fdir.id = rxcmp1->cfa_code;
 		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-		return;
+		return mark_id;
 	}
 
 skip_mark:
 	mbuf->hash.fdir.hi = 0;
 	mbuf->hash.fdir.id = 0;
+
+	return 0;
 }
 
 void bnxt_set_mark_in_mbuf(struct bnxt *bp,
@@ -553,7 +553,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	int rc = 0;
 	uint8_t agg_buf = 0;
 	uint16_t cmp_type;
-	uint32_t flags2_f = 0;
+	uint32_t flags2_f = 0, vfr_flag = 0, mark_id = 0;
 	uint16_t flags_type;
 	struct bnxt *bp = rxq->bp;
 
@@ -632,7 +632,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	}
 
 	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+		mark_id = bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf,
+						    &vfr_flag);
 	else
 		bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
 
@@ -736,10 +737,10 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
-	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
-	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
-		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
-				   mbuf)) {
+	if (BNXT_TRUFLOW_EN(bp) &&
+	    (BNXT_VF_IS_TRUSTED(bp) || BNXT_PF(bp)) &&
+	    vfr_flag) {
+		if (!bnxt_vfr_recv(mark_id, rxq->queue_id, mbuf)) {
 			/* Now return an error so that nb_rx_pkts is not
 			 * incremented.
 			 * This packet was meant to be given to the representor.
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 69ff89aab..a1ab3f39a 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -30,6 +30,7 @@ struct bnxt_tx_queue {
 	int			index;
 	int			tx_wake_thresh;
 	uint32_t                tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 47/51] net/bnxt: add port default rules for ingress and egress
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (45 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
                           ` (3 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Ingress and egress port default rules are needed to steer packets
from the port to DPDK and from DPDK to the port, respectively.
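
As an illustration (not part of this patch) of the parameter encoding
used by these default rules, the DPDK port id is carried in a 2-byte,
big-endian BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID TLV value; the helper
names below are hypothetical and merely restate the
{(port_id >> 8) & 0xff, port_id & 0xff} idiom from the patch.

#include <stdint.h>

static inline void
example_df_param_pack_port_id(uint8_t value[2], uint16_t port_id)
{
        value[0] = (port_id >> 8) & 0xff;       /* high byte first */
        value[1] = port_id & 0xff;
}

static inline uint16_t
example_df_param_unpack_port_id(const uint8_t value[2])
{
        return (uint16_t)((value[0] << 8) | value[1]);
}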

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c     | 76 +++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h |  3 ++
 2 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de8e11a6e..2a19c5040 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -29,6 +29,7 @@
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
 #include "bnxt_tf_common.h"
+#include "ulp_flow_db.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -1162,6 +1163,73 @@ static int bnxt_handle_if_change_status(struct bnxt *bp)
 	return rc;
 }
 
+static int32_t
+bnxt_create_port_app_df_rule(struct bnxt *bp, uint8_t flow_type,
+			     uint32_t *flow_id)
+{
+	uint16_t port_id = bp->eth_dev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(port_id >> 8) & 0xff, port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	return ulp_default_flow_create(bp->eth_dev, param_list, flow_type,
+				       flow_id);
+}
+
+static int32_t
+bnxt_create_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+	int rc;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_PORT_TO_VS,
+					  &cfg_data->port_to_app_flow_id);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to create port to app default rule\n");
+		return rc;
+	}
+
+	BNXT_TF_DBG(DEBUG, "***** created port to app default rule ******\n");
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_VS_TO_PORT,
+					  &cfg_data->app_to_port_flow_id);
+	if (!rc) {
+		rc = ulp_default_flow_db_cfa_action_get(bp->ulp_ctx,
+							cfg_data->app_to_port_flow_id,
+							&cfg_data->tx_cfa_action);
+		if (rc)
+			goto err;
+
+		BNXT_TF_DBG(DEBUG,
+			    "***** created app to port default rule *****\n");
+		return 0;
+	}
+
+err:
+	BNXT_TF_DBG(DEBUG, "Failed to create app to port default rule\n");
+	return rc;
+}
+
+static void
+bnxt_destroy_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->port_to_app_flow_id);
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->app_to_port_flow_id);
+}
+
 static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = eth_dev->data->dev_private;
@@ -1330,8 +1398,11 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
-	if (BNXT_TRUFLOW_EN(bp))
+	if (BNXT_TRUFLOW_EN(bp)) {
+		if (bp->rep_info != NULL)
+			bnxt_destroy_df_rules(bp);
 		bnxt_ulp_deinit(bp);
+	}
 
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
@@ -1581,6 +1652,9 @@ static int bnxt_promiscuous_disable_op(struct rte_eth_dev *eth_dev)
 	if (rc != 0)
 		vnic->flags = old_flags;
 
+	if (BNXT_TRUFLOW_EN(bp) && bp->rep_info != NULL)
+		bnxt_create_df_rules(bp);
+
 	return rc;
 }
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 3563f63fa..4843da562 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,9 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	uint32_t			port_to_app_flow_id;
+	uint32_t			app_to_port_flow_id;
+	uint32_t			tx_cfa_action;
 };
 
 struct bnxt_ulp_context {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 48/51] net/bnxt: fill cfa action in the Tx descriptor
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (46 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
                           ` (2 subsequent siblings)
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Currently, only VF-rep transmit requires cfa_action to be filled
in the Tx buffer descriptor. With truflow, however, DPDK (non-VF-rep)
to port traffic also requires cfa_action to be filled in the Tx
buffer descriptor.

This patch uses the correct cfa_action pointer while transmitting
the packet: depending on whether the packet is transmitted on a
non-VF-rep or a VF-rep port, tx_cfa_action or vfr_tx_cfa_action
from the txq is filled in the Tx buffer descriptor.
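
The choice can be restated with the small sketch below (illustrative
only; the actual logic is in bnxt_start_xmit() in this patch);
example_select_tx_cfa_action() is a hypothetical helper that only
captures the priority: the per-queue VF-rep action wins over the port
default-rule action, and no action is set when truflow is disabled.

#include <stdint.h>

static inline uint32_t
example_select_tx_cfa_action(int truflow_en, uint32_t vfr_tx_cfa_action,
                             uint32_t port_tx_cfa_action)
{
        if (!truflow_en)
                return 0;       /* legacy path: no cfa_action needed */
        return vfr_tx_cfa_action ? vfr_tx_cfa_action : port_tx_cfa_action;
}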

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index d7e193d38..f5884268e 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
+				PKT_TX_QINQ_PKT) ||
+	     txq->bp->ulp_ctx->cfg_data->tx_cfa_action ||
+	     txq->vfr_tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +186,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = txq->tx_cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp)) {
+			if (txq->vfr_tx_cfa_action)
+				cfa_action = txq->vfr_tx_cfa_action;
+			else
+				cfa_action =
+				      txq->bp->ulp_ctx->cfg_data->tx_cfa_action;
+		}
+
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
@@ -212,7 +222,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 					&txr->tx_desc_ring[txr->tx_prod];
 		txbd1->lflags = 0;
 		txbd1->cfa_meta = vlan_tag_flags;
-		txbd1->cfa_action = cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp))
+			txbd1->cfa_action = cfa_action;
 
 		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
 			uint16_t hdr_size;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 49/51] net/bnxt: add ULP Flow counter Manager
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (47 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 51/51] doc: update release notes Ajit Khaparde
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

The Flow Counter Manager allocates memory to hold the software view
of the counters, where the on-chip counter data is accumulated, along
with another memory block that shadows the on-chip counter data,
i.e. where the raw counter data is DMAed into from the chip.
It also keeps track of the first HW counter ID, as that is needed to
retrieve the counter data in bulk using a TF API. It issues this
bulk-get command from an rte_alarm callback that runs every second.
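
The polling loop can be sketched roughly as below (illustrative only;
the real implementation is in ulp_fc_mgr.c in this patch).
rte_eal_alarm_set() is the real DPDK API; example_fc_bulk_get() is a
hypothetical placeholder for the TF bulk-get call that DMAs the raw
counters into the shadow memory.

#include <rte_alarm.h>

#define EXAMPLE_FC_TIMER_US 1000000UL   /* poll once per second */

/* Hypothetical placeholder: the real code issues the TF bulk-get here
 * and folds the DMAed raw counters into the SW counter view.
 */
static void
example_fc_bulk_get(void *ulp_ctx)
{
        (void)ulp_ctx;
}

static void
example_fc_alarm_cb(void *arg)
{
        example_fc_bulk_get(arg);
        /* Re-arm so the poll keeps running every second. */
        rte_eal_alarm_set(EXAMPLE_FC_TIMER_US, example_fc_alarm_cb, arg);
}

void
example_fc_alarm_start(void *ulp_ctx)
{
        rte_eal_alarm_set(EXAMPLE_FC_TIMER_US, example_fc_alarm_cb, ulp_ctx);
}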

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_ulp/Makefile      |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    |  35 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h    |   8 +
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c  | 465 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h  | 148 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c |  27 ++
 7 files changed, 685 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 2939857ca..5fb0ed380 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -46,6 +46,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
 	'tf_core/tf_em_host.c',
+	'tf_ulp/ulp_fc_mgr.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 3f1b43bae..abb68150d 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -17,3 +17,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_fc_mgr.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index e5e7e5f43..c05861150 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -18,6 +18,7 @@
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "ulp_mark_mgr.h"
+#include "ulp_fc_mgr.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 #include "ulp_port_db.h"
@@ -705,6 +706,12 @@ bnxt_ulp_init(struct bnxt *bp)
 		goto jump_to_error;
 	}
 
+	rc = ulp_fc_mgr_init(bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize ulp flow counter mgr\n");
+		goto jump_to_error;
+	}
+
 	return rc;
 
 jump_to_error:
@@ -752,6 +759,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	/* cleanup the ulp mapper */
 	ulp_mapper_deinit(bp->ulp_ctx);
 
+	/* Delete the Flow Counter Manager */
+	ulp_fc_mgr_deinit(bp->ulp_ctx);
+
 	/* Delete the Port database */
 	ulp_port_db_deinit(bp->ulp_ctx);
 
@@ -963,3 +973,28 @@ bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx)
 
 	return ulp_ctx->cfg_data->port_db;
 }
+
+/* Function to set the flow counter info into the context */
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->fc_info = ulp_fc_info;
+
+	return 0;
+}
+
+/* Function to retrieve the flow counter info from the context. */
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->fc_info;
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 4843da562..a13328426 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,7 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	struct bnxt_ulp_fc_info		*fc_info;
 	uint32_t			port_to_app_flow_id;
 	uint32_t			app_to_port_flow_id;
 	uint32_t			tx_cfa_action;
@@ -154,4 +155,11 @@ int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *error);
 
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info);
+
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
new file mode 100644
index 000000000..f70d4a295
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -0,0 +1,465 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_alarm.h>
+#include "bnxt.h"
+#include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
+#include "ulp_fc_mgr.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_struct.h"
+#include "tf_tbl.h"
+
+static int
+ulp_fc_mgr_shadow_mem_alloc(struct hw_fc_mem_info *parms, int size)
+{
+	/* Allocate memory */
+	if (parms == NULL)
+		return -EINVAL;
+
+	parms->mem_va = rte_zmalloc("ulp_fc_info",
+				    RTE_CACHE_LINE_ROUNDUP(size),
+				    4096);
+	if (parms->mem_va == NULL) {
+		BNXT_TF_DBG(ERR, "Failed to allocate mem_va\n");
+		return -ENOMEM;
+	}
+
+	rte_mem_lock_page(parms->mem_va);
+
+	parms->mem_pa = (void *)(uintptr_t)rte_mem_virt2phy(parms->mem_va);
+	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
+		BNXT_TF_DBG(ERR, "Failed to allocate mem_pa\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void
+ulp_fc_mgr_shadow_mem_free(struct hw_fc_mem_info *parms)
+{
+	rte_free(parms->mem_va);
+}
+
+/*
+ * Allocate and Initialize all Flow Counter Manager resources for this ulp
+ * context.
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager.
+ *
+ */
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id, sw_acc_cntr_tbl_sz, hw_fc_mem_info_sz;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i, rc;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(DEBUG, "Invalid ULP CTXT\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return -EINVAL;
+	}
+
+	ulp_fc_info = rte_zmalloc("ulp_fc_info", sizeof(*ulp_fc_info), 0);
+	if (!ulp_fc_info)
+		goto error;
+
+	rc = pthread_mutex_init(&ulp_fc_info->fc_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Failed to initialize fc mutex\n");
+		goto error;
+	}
+
+	/* Add the FC info tbl to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, ulp_fc_info);
+
+	sw_acc_cntr_tbl_sz = sizeof(struct sw_acc_counter) *
+				dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		ulp_fc_info->sw_acc_tbl[i] = rte_zmalloc("ulp_sw_acc_cntr_tbl",
+							 sw_acc_cntr_tbl_sz, 0);
+		if (!ulp_fc_info->sw_acc_tbl[i])
+			goto error;
+	}
+
+	hw_fc_mem_info_sz = sizeof(uint64_t) * dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_fc_mgr_shadow_mem_alloc(&ulp_fc_info->shadow_hw_tbl[i],
+						 hw_fc_mem_info_sz);
+		if (rc)
+			goto error;
+	}
+
+	return 0;
+
+error:
+	ulp_fc_mgr_deinit(ctxt);
+	BNXT_TF_DBG(DEBUG,
+		    "Failed to allocate memory for fc mgr\n");
+
+	return -ENOMEM;
+}
+
+/*
+ * Release all resources in the Flow Counter Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EINVAL;
+
+	ulp_fc_mgr_thread_cancel(ctxt);
+
+	pthread_mutex_destroy(&ulp_fc_info->fc_lock);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		rte_free(ulp_fc_info->sw_acc_tbl[i]);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		ulp_fc_mgr_shadow_mem_free(&ulp_fc_info->shadow_hw_tbl[i]);
+
+
+	rte_free(ulp_fc_info);
+
+	/* Safe to ignore on deinit */
+	(void)bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, NULL);
+
+	return 0;
+}
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	return !!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD);
+}
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD)) {
+		rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+				  ulp_fc_mgr_alarm_cb,
+				  (void *)ctxt);
+		ulp_fc_info->flags |= ULP_FLAG_FC_THREAD;
+	}
+
+	return 0;
+}
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	ulp_fc_info->flags &= ~ULP_FLAG_FC_THREAD;
+	rte_eal_alarm_cancel(ulp_fc_mgr_alarm_cb, (void *)ctxt);
+}
+
+/*
+ * DMA-in the raw counter data from the HW and accumulate in the
+ * local accumulator table using the TF-Core API
+ *
+ * tfp [in] The TF-Core context
+ *
+ * fc_info [in] The ULP Flow counter info ptr
+ *
+ * dir [in] The direction of the flow
+ *
+ * num_counters [in] The number of counters
+ *
+ */
+static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+				       struct bnxt_ulp_fc_info *fc_info,
+				       enum tf_dir dir, uint32_t num_counters)
+{
+	int rc = 0;
+	struct tf_tbl_get_bulk_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD: Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t *stats = NULL;
+	uint16_t i = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.starting_idx = fc_info->shadow_hw_tbl[dir].start_idx;
+	parms.num_entries = num_counters;
+	/*
+	 * TODO:
+	 * Size of an entry needs to be obtained from the template
+	 */
+	parms.entry_sz_in_bytes = sizeof(uint64_t);
+	stats = (uint64_t *)fc_info->shadow_hw_tbl[dir].mem_va;
+	parms.physical_mem_addr = (uintptr_t)fc_info->shadow_hw_tbl[dir].mem_pa;
+
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Memory not initialized id:0x%x dir:%d\n",
+			    parms.starting_idx, dir);
+		return -EINVAL;
+	}
+
+	rc = tf_tbl_bulk_get(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Get failed for id:0x%x rc:%d\n",
+			    parms.starting_idx, rc);
+		return rc;
+	}
+
+	for (i = 0; i < num_counters; i++) {
+		/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+		sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][i];
+		if (!sw_acc_tbl_entry->valid)
+			continue;
+		sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats[i]);
+		sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats[i]);
+	}
+
+	return rc;
+}
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg)
+{
+	int rc = 0, i;
+	struct bnxt_ulp_context *ctxt = arg;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct bnxt_ulp_device_params *dparms;
+	struct tf *tfp;
+	uint32_t dev_id;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return;
+	}
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ctxt);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return;
+	}
+
+	/*
+	 * Take the fc_lock to ensure no flow is destroyed
+	 * during the bulk get
+	 */
+	if (pthread_mutex_trylock(&ulp_fc_info->fc_lock))
+		goto out;
+
+	if (!ulp_fc_info->num_entries) {
+		pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
+					     dparms->flow_count_db_entries);
+		if (rc)
+			break;
+	}
+
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	/*
+	 * If cmd fails once, no need of
+	 * invoking again every second
+	 */
+
+	if (rc) {
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+out:
+	rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+			  ulp_fc_mgr_alarm_cb,
+			  (void *)ctxt);
+}
+
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager for the given direction
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * Returns true if the starting counter ID is set
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	/* Assuming start_idx of 0 is invalid */
+	return (ulp_fc_info->shadow_hw_tbl[dir].start_idx != 0);
+}
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+				 uint32_t start_idx)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EIO;
+
+	/* Assuming that 0 is an invalid counter ID ? */
+	if (ulp_fc_info->shadow_hw_tbl[dir].start_idx == 0)
+		ulp_fc_info->shadow_hw_tbl[dir].start_idx = start_idx;
+
+	return 0;
+}
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			    uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->num_entries++;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
+
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			      uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
+	ulp_fc_info->num_entries--;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
new file mode 100644
index 000000000..faa77dd75
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_FC_MGR_H_
+#define _ULP_FC_MGR_H_
+
+#include "bnxt_ulp.h"
+#include "tf_core.h"
+
+#define ULP_FLAG_FC_THREAD			BIT(0)
+#define ULP_FC_TIMER	1 /* Flow counter timer frequency in seconds */
+
+/* Macros to extract packet/byte counters from a 64-bit flow counter. */
+#define FLOW_CNTR_BYTE_WIDTH 36
+#define FLOW_CNTR_BYTE_MASK  (((uint64_t)1 << FLOW_CNTR_BYTE_WIDTH) - 1)
+
+#define FLOW_CNTR_PKTS(v) ((v) >> FLOW_CNTR_BYTE_WIDTH)
+#define FLOW_CNTR_BYTES(v) ((v) & FLOW_CNTR_BYTE_MASK)
+
+struct sw_acc_counter {
+	uint64_t pkt_count;
+	uint64_t byte_count;
+	bool	valid;
+};
+
+struct hw_fc_mem_info {
+	/*
+	 * [out] mem_va, pointer to the allocated memory.
+	 */
+	void *mem_va;
+	/*
+	 * [out] mem_pa, physical address of the allocated memory.
+	 */
+	void *mem_pa;
+	uint32_t start_idx;
+};
+
+struct bnxt_ulp_fc_info {
+	struct sw_acc_counter	*sw_acc_tbl[TF_DIR_MAX];
+	struct hw_fc_mem_info	shadow_hw_tbl[TF_DIR_MAX];
+	uint32_t		flags;
+	uint32_t		num_entries;
+	pthread_mutex_t		fc_lock;
+};
+
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Release all resources in the flow counter manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg);
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			     uint32_t start_idx);
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			uint32_t hw_cntr_id);
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			  uint32_t hw_cntr_id);
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
+
+#endif /* _ULP_FC_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 7696de2a5..a3cfe54bf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -10,6 +10,7 @@
 #include "ulp_utils.h"
 #include "ulp_template_struct.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 
 #define ULP_FLOW_DB_RES_DIR_BIT		31
 #define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
@@ -484,6 +485,21 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 		ulp_flow_db_res_params_to_info(fid_resource, params);
 	}
 
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		/* Store the first HW counter ID for this table */
+		if (!ulp_fc_mgr_start_idx_isset(ulp_ctxt, params->direction))
+			ulp_fc_mgr_start_idx_set(ulp_ctxt, params->direction,
+						 params->resource_hndl);
+
+		ulp_fc_mgr_cntr_set(ulp_ctxt, params->direction,
+				    params->resource_hndl);
+
+		if (!ulp_fc_mgr_thread_isstarted(ulp_ctxt))
+			ulp_fc_mgr_thread_start(ulp_ctxt);
+	}
+
 	/* all good, return success */
 	return 0;
 }
@@ -574,6 +590,17 @@ int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
 					nxt_idx);
 	}
 
+	/* Now that the HW Flow counter resource is deleted, reset its
+	 * corresponding slot in the SW accumulation table in the Flow Counter
+	 * manager
+	 */
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		ulp_fc_mgr_cntr_reset(ulp_ctxt, params->direction,
+				      params->resource_hndl);
+	}
+
 	/* all good, return success */
 	return 0;
 }
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 50/51] net/bnxt: add support for count action in flow query
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (48 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 51/51] doc: update release notes Ajit Khaparde
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Use the flow counter manager to fetch the accumulated stats for
a flow.
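
As a usage illustration, a minimal sketch of the application side is
shown below, assuming 'flow' was previously created on 'port_id' with a
COUNT action; it is not part of the patch itself.

/*
 * Hedged sketch: query the accumulated counters of an existing flow
 * that carries a COUNT action.
 */
#include <stdio.h>
#include <inttypes.h>
#include <rte_flow.h>

static int
print_flow_stats(uint16_t port_id, struct rte_flow *flow)
{
	struct rte_flow_query_count count = { .reset = 0 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
	};
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_query(port_id, flow, &action, &count, &error);
	if (ret)
		return ret;

	if (count.hits_set)
		printf("hits:  %" PRIu64 "\n", count.hits);
	if (count.bytes_set)
		printf("bytes: %" PRIu64 "\n", count.bytes);

	return 0;
}

Because the driver answers the query from the SW accumulator table kept
up to date by the flow counter manager, no extra firmware round trip is
needed at query time.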

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c |  45 +++++++-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c    | 141 +++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h    |  17 ++-
 3 files changed, 196 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 7ef306e58..36a014184 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -9,6 +9,7 @@
 #include "ulp_matcher.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 #include <rte_malloc.h>
 
 static int32_t
@@ -289,11 +290,53 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	return ret;
 }
 
+/* Function to query the rte flows. */
+static int32_t
+bnxt_ulp_flow_query(struct rte_eth_dev *eth_dev,
+		    struct rte_flow *flow,
+		    const struct rte_flow_action *action,
+		    void *data,
+		    struct rte_flow_error *error)
+{
+	int rc = 0;
+	struct bnxt_ulp_context *ulp_ctx;
+	struct rte_flow_query_count *count;
+	uint32_t flow_id;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to query flow.");
+		return -EINVAL;
+	}
+
+	flow_id = (uint32_t)(uintptr_t)flow;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		count = data;
+		rc = ulp_fc_mgr_query_count_get(ulp_ctx, flow_id, count);
+		if (rc) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to query flow.");
+		}
+		break;
+	default:
+		rte_flow_error_set(error, -rc, RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "Unsupported action item");
+	}
+
+	return rc;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
 	.flush = bnxt_ulp_flow_flush,
-	.query = NULL,
+	.query = bnxt_ulp_flow_query,
 	.isolate = NULL
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
index f70d4a295..9944e9e5c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -11,6 +11,7 @@
 #include "bnxt_ulp.h"
 #include "bnxt_tf_common.h"
 #include "ulp_fc_mgr.h"
+#include "ulp_flow_db.h"
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "tf_tbl.h"
@@ -226,9 +227,10 @@ void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
  * num_counters [in] The number of counters
  *
  */
-static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+__rte_unused static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 				       struct bnxt_ulp_fc_info *fc_info,
 				       enum tf_dir dir, uint32_t num_counters)
+/* MARK AS UNUSED FOR NOW TO AVOID COMPILATION ERRORS TILL API is RESOLVED */
 {
 	int rc = 0;
 	struct tf_tbl_get_bulk_parms parms = { 0 };
@@ -275,6 +277,45 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 
 	return rc;
 }
+
+static int ulp_get_single_flow_stat(struct tf *tfp,
+				    struct bnxt_ulp_fc_info *fc_info,
+				    enum tf_dir dir,
+				    uint32_t hw_cntr_id)
+{
+	int rc = 0;
+	struct tf_get_tbl_entry_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD:Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t stats = 0;
+	uint32_t sw_cntr_indx = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.idx = hw_cntr_id;
+	/*
+	 * TODO:
+	 * Size of an entry needs to be obtained from the template
+	 */
+	parms.data_sz_in_bytes = sizeof(uint64_t);
+	parms.data = (uint8_t *)&stats;
+	rc = tf_get_tbl_entry(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Get failed for id:0x%x rc:%d\n",
+			    parms.idx, rc);
+		return rc;
+	}
+
+	/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+	sw_cntr_indx = hw_cntr_id - fc_info->shadow_hw_tbl[dir].start_idx;
+	sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][sw_cntr_indx];
+	sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats);
+	sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats);
+
+	return rc;
+}
+
 /*
  * Alarm handler that will issue the TF-Core API to fetch
  * data from the chip's internal flow counters
@@ -282,15 +323,18 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
+
 void
 ulp_fc_mgr_alarm_cb(void *arg)
 {
-	int rc = 0, i;
+	int rc = 0;
+	unsigned int j;
+	enum tf_dir i;
 	struct bnxt_ulp_context *ctxt = arg;
 	struct bnxt_ulp_fc_info *ulp_fc_info;
 	struct bnxt_ulp_device_params *dparms;
 	struct tf *tfp;
-	uint32_t dev_id;
+	uint32_t dev_id, hw_cntr_id = 0;
 
 	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
 	if (!ulp_fc_info)
@@ -325,13 +369,27 @@ ulp_fc_mgr_alarm_cb(void *arg)
 		ulp_fc_mgr_thread_cancel(ctxt);
 		return;
 	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
+	/*
+	 * Commented out until GET_BULK is resolved; fetch per-flow stats
+	 * individually for now.
+	 for (i = 0; i < TF_DIR_MAX; i++) {
 		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
 					     dparms->flow_count_db_entries);
 		if (rc)
 			break;
 	}
+	*/
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		for (j = 0; j < ulp_fc_info->num_entries; j++) {
+			if (!ulp_fc_info->sw_acc_tbl[i][j].valid)
+				continue;
+			hw_cntr_id = ulp_fc_info->sw_acc_tbl[i][j].hw_cntr_id;
+			rc = ulp_get_single_flow_stat(tfp, ulp_fc_info, i,
+						      hw_cntr_id);
+			if (rc)
+				break;
+		}
+	}
 
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -425,6 +483,7 @@ int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = hw_cntr_id;
 	ulp_fc_info->num_entries++;
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -456,6 +515,7 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
 	ulp_fc_info->num_entries--;
@@ -463,3 +523,74 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 
 	return 0;
 }
+
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ctxt,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count)
+{
+	int rc = 0;
+	uint32_t nxt_resource_index = 0;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct ulp_flow_db_res_params params;
+	enum tf_dir dir;
+	uint32_t hw_cntr_id = 0, sw_cntr_idx = 0;
+	struct sw_acc_counter sw_acc_tbl_entry;
+	bool found_cntr_resource = false;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -ENODEV;
+
+	do {
+		rc = ulp_flow_db_resource_get(ctxt,
+					      BNXT_ULP_REGULAR_FLOW_TABLE,
+					      flow_id,
+					      &nxt_resource_index,
+					      &params);
+		if (params.resource_func ==
+		     BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE &&
+		     (params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT ||
+		      params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT)) {
+			found_cntr_resource = true;
+			break;
+		}
+
+	} while (!rc);
+
+	if (rc)
+		return rc;
+
+	if (found_cntr_resource) {
+		dir = params.direction;
+		hw_cntr_id = params.resource_hndl;
+		sw_cntr_idx = hw_cntr_id -
+				ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+		sw_acc_tbl_entry = ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx];
+		if (params.resource_sub_type ==
+			BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+			count->hits_set = 1;
+			count->bytes_set = 1;
+			count->hits = sw_acc_tbl_entry.pkt_count;
+			count->bytes = sw_acc_tbl_entry.byte_count;
+		} else {
+			/* TBD: Handle External counters */
+			rc = -EINVAL;
+		}
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
index faa77dd75..207267049 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -23,6 +23,7 @@ struct sw_acc_counter {
 	uint64_t pkt_count;
 	uint64_t byte_count;
 	bool	valid;
+	uint32_t hw_cntr_id;
 };
 
 struct hw_fc_mem_info {
@@ -142,7 +143,21 @@ bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
-
 bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ulp_ctx,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count);
 #endif /* _ULP_FC_MGR_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v4 51/51] doc: update release notes
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
                           ` (49 preceding siblings ...)
  2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
@ 2020-07-02 23:28         ` Ajit Khaparde
  50 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-02 23:28 UTC (permalink / raw)
  To: dev

Update the release notes with the enhancements in the Broadcom PMD.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 doc/guides/rel_notes/release_20_08.rst | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 5cbc4ce14..9bcea29ba 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -91,6 +91,16 @@ New Features
 
   * Added support for DCF datapath configuration.
 
+* **Updated Broadcom bnxt driver.**
+
+  Updated the Broadcom bnxt driver with new features and improvements, including:
+
+  * Added support for VF representors.
+  * Added support for multiple devices.
+  * Added support for new resource manager API.
+  * Added support for VXLAN encap/decap.
+  * Added support for rte_flow_query for COUNT action.
+
 * **Added support for BPF_ABS/BPF_IND load instructions.**
 
   Added support for two BPF non-generic instructions:
@@ -107,7 +117,6 @@ New Features
   * Dump ``rte_flow`` memory consumption.
   * Measure packet per second forwarding.
 
-
 Removed Items
 -------------
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management
  2020-07-01 21:31     ` Ferruh Yigit
  2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
  2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
@ 2020-07-03 21:01       ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
                           ` (52 more replies)
  2 siblings, 53 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev

v1->v2:
 - update commit message
 - rebase patches against latest changes in the tree
 - fix signed-off-by tags
 - update release notes

v2->v3:
 - fix compilation issues

v3->v4:
 - rebase against latest dpdk-next-net

v4->v5:
 - fix uninitialized variable in patch [29/51]
 - rebase against latest dpdk-next-net

Ajit Khaparde (1):
  doc: update release notes

Jay Ding (5):
  net/bnxt: implement support for TCAM access
  net/bnxt: support two level priority for TCAMs
  net/bnxt: add external action alloc and free
  net/bnxt: implement IF tables set and get
  net/bnxt: add global config set and get APIs

Kishore Padmanabha (8):
  net/bnxt: integrate with the latest tf core changes
  net/bnxt: add support for if table processing
  net/bnxt: disable Tx vector mode if truflow is enabled
  net/bnxt: add index opcode and operand to mapper table
  net/bnxt: add support for global resource templates
  net/bnxt: add support for internal exact match entries
  net/bnxt: add support for conditional execution of mapper tables
  net/bnxt: add VF-rep and stat templates

Lance Richardson (1):
  net/bnxt: initialize parent PF information

Michael Wildt (7):
  net/bnxt: add multi device support
  net/bnxt: update multi device design support
  net/bnxt: multiple device implementation
  net/bnxt: update identifier with remap support
  net/bnxt: update RM with residual checker
  net/bnxt: update table get to use new design
  net/bnxt: add TF register and unregister

Mike Baucom (1):
  net/bnxt: add support for internal encap records

Peter Spreadborough (7):
  net/bnxt: add support for exact match
  net/bnxt: modify EM insert and delete to use HWRM direct
  net/bnxt: add HCAPI interface support
  net/bnxt: support EM and TCAM lookup with table scope
  net/bnxt: update RM to support HCAPI only
  net/bnxt: remove table scope from session
  net/bnxt: add support for EEM System memory

Randy Schacher (2):
  net/bnxt: add core changes for EM and EEM lookups
  net/bnxt: align CFA resources with RM

Shahaji Bhosle (2):
  net/bnxt: support bulk table get and mirror
  net/bnxt: support two-level priority for TCAMs

Somnath Kotur (7):
  net/bnxt: add basic infrastructure for VF reps
  net/bnxt: add support for VF-reps data path
  net/bnxt: get IDs for VF-Rep endpoint
  net/bnxt: parse reps along with other dev-args
  net/bnxt: create default flow rules for the VF-rep
  net/bnxt: add ULP Flow counter Manager
  net/bnxt: add support for count action in flow query

Venkat Duvvuru (10):
  net/bnxt: modify port db dev interface
  net/bnxt: get port and function info
  net/bnxt: add support for hwrm port phy qcaps
  net/bnxt: modify port db to handle more info
  net/bnxt: enable port MAC qcfg command for trusted VF
  net/bnxt: enhancements for port db
  net/bnxt: manage VF to VFR conduit
  net/bnxt: fill mapper parameters with default rules
  net/bnxt: add port default rules for ingress and egress
  net/bnxt: fill cfa action in the Tx descriptor

 config/common_base                            |    1 +
 doc/guides/rel_notes/release_20_08.rst        |   11 +-
 drivers/net/bnxt/Makefile                     |    8 +-
 drivers/net/bnxt/bnxt.h                       |  121 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  519 +-
 drivers/net/bnxt/bnxt_hwrm.c                  |  122 +-
 drivers/net/bnxt/bnxt_hwrm.h                  |    7 +
 drivers/net/bnxt/bnxt_reps.c                  |  773 +++
 drivers/net/bnxt/bnxt_reps.h                  |   45 +
 drivers/net/bnxt/bnxt_rxr.c                   |   39 +-
 drivers/net/bnxt/bnxt_rxr.h                   |    1 +
 drivers/net/bnxt/bnxt_txq.h                   |    2 +
 drivers/net/bnxt/bnxt_txr.c                   |   18 +-
 drivers/net/bnxt/hcapi/Makefile               |   10 +
 drivers/net/bnxt/hcapi/cfa_p40_hw.h           |  781 +++
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h          |  303 +
 drivers/net/bnxt/hcapi/hcapi_cfa.h            |  276 +
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h       |  672 +++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c         |  399 ++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h         |  467 ++
 drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++--
 drivers/net/bnxt/meson.build                  |   21 +-
 drivers/net/bnxt/tf_core/Makefile             |   29 +-
 drivers/net/bnxt/tf_core/bitalloc.c           |  107 +
 drivers/net/bnxt/tf_core/bitalloc.h           |    5 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h |  293 +
 drivers/net/bnxt/tf_core/hwrm_tf.h            |  995 +---
 drivers/net/bnxt/tf_core/ll.c                 |   52 +
 drivers/net/bnxt/tf_core/ll.h                 |   46 +
 drivers/net/bnxt/tf_core/lookup3.h            |    1 -
 drivers/net/bnxt/tf_core/stack.c              |    8 +
 drivers/net/bnxt/tf_core/stack.h              |   10 +
 drivers/net/bnxt/tf_core/tf_common.h          |   43 +
 drivers/net/bnxt/tf_core/tf_core.c            | 1495 +++--
 drivers/net/bnxt/tf_core/tf_core.h            |  874 ++-
 drivers/net/bnxt/tf_core/tf_device.c          |  271 +
 drivers/net/bnxt/tf_core/tf_device.h          |  650 ++
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  147 +
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  104 +
 drivers/net/bnxt/tf_core/tf_em.c              |  515 --
 drivers/net/bnxt/tf_core/tf_em.h              |  492 +-
 drivers/net/bnxt/tf_core/tf_em_common.c       | 1048 ++++
 drivers/net/bnxt/tf_core/tf_em_common.h       |  134 +
 drivers/net/bnxt/tf_core/tf_em_host.c         |  531 ++
 drivers/net/bnxt/tf_core/tf_em_internal.c     |  352 ++
 drivers/net/bnxt/tf_core/tf_em_system.c       |  533 ++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c      |  199 +
 drivers/net/bnxt/tf_core/tf_global_cfg.h      |  170 +
 drivers/net/bnxt/tf_core/tf_identifier.c      |  186 +
 drivers/net/bnxt/tf_core/tf_identifier.h      |  147 +
 drivers/net/bnxt/tf_core/tf_if_tbl.c          |  178 +
 drivers/net/bnxt/tf_core/tf_if_tbl.h          |  236 +
 drivers/net/bnxt/tf_core/tf_msg.c             | 1681 +++---
 drivers/net/bnxt/tf_core/tf_msg.h             |  409 +-
 drivers/net/bnxt/tf_core/tf_resources.h       |  531 --
 drivers/net/bnxt/tf_core/tf_rm.c              | 3840 +++---------
 drivers/net/bnxt/tf_core/tf_rm.h              |  554 +-
 drivers/net/bnxt/tf_core/tf_session.c         |  776 +++
 drivers/net/bnxt/tf_core/tf_session.h         |  565 +-
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      |  240 +
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |   63 +
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h     |  239 +
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1930 +-----
 drivers/net/bnxt/tf_core/tf_tbl.h             |  469 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |  430 ++
 drivers/net/bnxt/tf_core/tf_tcam.h            |  360 ++
 drivers/net/bnxt/tf_core/tf_util.c            |  176 +
 drivers/net/bnxt/tf_core/tf_util.h            |   98 +
 drivers/net/bnxt/tf_core/tfp.c                |   33 +-
 drivers/net/bnxt/tf_core/tfp.h                |  153 +-
 drivers/net/bnxt/tf_ulp/Makefile              |    2 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   16 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |  129 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |   35 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |   84 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c       |  385 ++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c          |  596 ++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h          |  163 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   42 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  481 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    6 +-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   10 +
 drivers/net/bnxt/tf_ulp/ulp_port_db.c         |  235 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.h         |  122 +-
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |   30 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  433 +-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5217 +++++++++++++----
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  537 +-
 .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   85 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   23 +-
 drivers/net/bnxt/tf_ulp/ulp_utils.c           |    2 +-
 94 files changed, 28009 insertions(+), 11247 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-06 10:07           ` Ferruh Yigit
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
                           ` (51 subsequent siblings)
  52 siblings, 1 reply; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru, Kalesh AP

From: Somnath Kotur <somnath.kotur@broadcom.com>

Define the data structures and code needed to init/uninit
VF representors during pci_probe and pci_remove respectively.
Most of the dev_ops for the VF representor are just stubs for
now and will be filled out in the next patch.

To create a representor using testpmd:
testpmd -c 0xff -wB:D.F,representor=1 -- -i
testpmd -c 0xff -w05:02.0,representor=[1] -- -i

To create a representor using ovs-dpdk:
1. First add the trusted VF port to a bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0
2. Add the representor port to the bridge
ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
options:dpdk-devargs=0000:06:02.0,representor=1
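
A small, hedged sketch follows showing how an application might
enumerate the resulting ports and read their switch info to tell a
backing PF/trusted VF apart from its representors, which share a switch
domain. Whether switch_info is populated depends on the PMD; the helper
below is illustrative only and not part of the patch.

/*
 * Hedged sketch: dump switch domain/port info for every probed port.
 */
#include <stdio.h>
#include <rte_ethdev.h>

static void
dump_switch_info(void)
{
	uint16_t port_id;

	RTE_ETH_FOREACH_DEV(port_id) {
		struct rte_eth_dev_info info;

		if (rte_eth_dev_info_get(port_id, &info) != 0)
			continue;

		printf("port %u: driver=%s switch_domain=%u switch_port=%u\n",
		       port_id, info.driver_name,
		       info.switch_info.domain_id,
		       info.switch_info.port_id);
	}
}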

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile      |   2 +
 drivers/net/bnxt/bnxt.h        |  64 +++++++-
 drivers/net/bnxt/bnxt_ethdev.c | 225 ++++++++++++++++++++------
 drivers/net/bnxt/bnxt_reps.c   | 287 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_reps.h   |  35 ++++
 drivers/net/bnxt/meson.build   |   1 +
 6 files changed, 566 insertions(+), 48 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_reps.c
 create mode 100644 drivers/net/bnxt/bnxt_reps.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index a375299c3..365627499 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -14,6 +14,7 @@ LIB = librte_pmd_bnxt.a
 EXPORT_MAP := rte_pmd_bnxt_version.map
 
 CFLAGS += -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += $(WERROR_FLAGS)
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
@@ -38,6 +39,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_reps.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += rte_pmd_bnxt.c
 ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index d455f8d84..9b7b87cee 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -220,6 +220,7 @@ struct bnxt_child_vf_info {
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
+#define BNXT_MAX_VF_REPS	64
 #define BNXT_TOTAL_VFS(bp)	((bp)->pf->total_vfs)
 #define BNXT_FIRST_VF_FID	128
 #define BNXT_PF_RINGS_USED(bp)	bnxt_get_num_queues(bp)
@@ -492,6 +493,10 @@ struct bnxt_mark_info {
 	bool		valid;
 };
 
+struct bnxt_rep_info {
+	struct rte_eth_dev	*vfr_eth_dev;
+};
+
 /* address space location of register */
 #define BNXT_FW_STATUS_REG_TYPE_MASK	3
 /* register is located in PCIe config space */
@@ -515,6 +520,40 @@ struct bnxt_mark_info {
 #define BNXT_FW_STATUS_HEALTHY		0x8000
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
+#define BNXT_ETH_RSS_SUPPORT (	\
+	ETH_RSS_IPV4 |		\
+	ETH_RSS_NONFRAG_IPV4_TCP |	\
+	ETH_RSS_NONFRAG_IPV4_UDP |	\
+	ETH_RSS_IPV6 |		\
+	ETH_RSS_NONFRAG_IPV6_TCP |	\
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
+				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_CKSUM | \
+				     DEV_TX_OFFLOAD_UDP_CKSUM | \
+				     DEV_TX_OFFLOAD_TCP_TSO | \
+				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     DEV_TX_OFFLOAD_QINQ_INSERT | \
+				     DEV_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
+				     DEV_RX_OFFLOAD_VLAN_STRIP | \
+				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_UDP_CKSUM | \
+				     DEV_RX_OFFLOAD_TCP_CKSUM | \
+				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
+				     DEV_RX_OFFLOAD_KEEP_CRC | \
+				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
+				     DEV_RX_OFFLOAD_TCP_LRO | \
+				     DEV_RX_OFFLOAD_SCATTER | \
+				     DEV_RX_OFFLOAD_RSS_HASH)
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -682,6 +721,9 @@ struct bnxt {
 #define BNXT_MAX_RINGS(bp) \
 	(RTE_MIN((((bp)->max_cp_rings - BNXT_NUM_ASYNC_CPR(bp)) / 2U), \
 		 BNXT_MAX_TX_RINGS(bp)))
+
+#define BNXT_MAX_VF_REP_RINGS	8
+
 	uint16_t		max_nq_rings;
 	uint16_t		max_l2_ctx;
 	uint16_t		max_rx_em_flows;
@@ -711,7 +753,9 @@ struct bnxt {
 
 	uint16_t		fw_reset_min_msecs;
 	uint16_t		fw_reset_max_msecs;
-
+	uint16_t		switch_domain_id;
+	uint16_t		num_reps;
+	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -732,6 +776,18 @@ struct bnxt {
 
 #define BNXT_FC_TIMER	1 /* Timer freq in Sec Flow Counters */
 
+/**
+ * Structure to store private data for each VF representor instance
+ */
+struct bnxt_vf_representor {
+	uint16_t switch_domain_id;
+	uint16_t vf_id;
+	/* Private data store of associated PF/Trusted VF */
+	struct bnxt	*parent_priv;
+	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
 int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 		     bool exp_link_status);
@@ -744,7 +800,13 @@ void bnxt_schedule_fw_health_check(struct bnxt *bp);
 
 bool is_bnxt_supported(struct rte_eth_dev *dev);
 bool bnxt_stratus_device(struct bnxt *bp);
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete);
+
 extern const struct rte_flow_ops bnxt_flow_ops;
+
 #define bnxt_acquire_flow_lock(bp) \
 	pthread_mutex_lock(&(bp)->flow_lock)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4a0c45e74..1455e60d7 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -18,6 +18,7 @@
 #include "bnxt_filter.h"
 #include "bnxt_hwrm.h"
 #include "bnxt_irq.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxq.h"
 #include "bnxt_rxr.h"
@@ -92,40 +93,6 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-#define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
-				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_VLAN_STRIP | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
-
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
@@ -162,7 +129,6 @@ static int bnxt_devarg_max_num_kflow_invalid(uint16_t max_num_kflows)
 }
 
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev);
 static int bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev);
@@ -197,7 +163,7 @@ static uint16_t bnxt_rss_ctxts(const struct bnxt *bp)
 				    BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
-static uint16_t  bnxt_rss_hash_tbl_size(const struct bnxt *bp)
+uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 {
 	if (!BNXT_CHIP_THOR(bp))
 		return HW_HASH_INDEX_SIZE;
@@ -1046,7 +1012,7 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	return -ENOSPC;
 }
 
-static void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
+void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 {
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
@@ -1272,6 +1238,12 @@ static int bnxt_dev_set_link_down_op(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static void bnxt_free_switch_domain(struct bnxt *bp)
+{
+	if (bp->switch_domain_id)
+		rte_eth_switch_domain_free(bp->switch_domain_id);
+}
+
 /* Unload the driver, release resources */
 static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
@@ -1340,6 +1312,8 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
+	bnxt_free_switch_domain(bp);
+
 	bnxt_uninit_resources(bp, false);
 
 	bnxt_free_leds_info(bp);
@@ -1521,8 +1495,8 @@ int bnxt_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete,
 	return rc;
 }
 
-static int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
-			       int wait_to_complete)
+int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
+			int wait_to_complete)
 {
 	return bnxt_link_update(eth_dev, wait_to_complete, ETH_LINK_UP);
 }
@@ -5476,8 +5450,26 @@ bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)
 	rte_kvargs_free(kvlist);
 }
 
+static int bnxt_alloc_switch_domain(struct bnxt *bp)
+{
+	int rc = 0;
+
+	if (BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) {
+		rc = rte_eth_switch_domain_alloc(&bp->switch_domain_id);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "Failed to alloc switch domain: %d\n", rc);
+		else
+			PMD_DRV_LOG(INFO,
+				    "Switch domain allocated %d\n",
+				    bp->switch_domain_id);
+	}
+
+	return rc;
+}
+
 static int
-bnxt_dev_init(struct rte_eth_dev *eth_dev)
+bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	static int version_printed;
@@ -5556,6 +5548,8 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 	if (rc)
 		goto error_free;
 
+	bnxt_alloc_switch_domain(bp);
+
 	/* Pass the information to the rte_eth_dev_close() that it should also
 	 * release the private port resources.
 	 */
@@ -5688,25 +5682,162 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *bp = eth_dev->data->dev_private;
+	struct rte_eth_dev *vf_rep_eth_dev;
+	int ret = 0, i;
+
+	if (!bp)
+		return -EINVAL;
+
+	for (i = 0; i < bp->num_reps; i++) {
+		vf_rep_eth_dev = bp->rep_info[i].vfr_eth_dev;
+		if (!vf_rep_eth_dev)
+			continue;
+		rte_eth_dev_destroy(vf_rep_eth_dev, bnxt_vf_representor_uninit);
+	}
+	ret = rte_eth_dev_destroy(eth_dev, bnxt_dev_uninit);
+
+	return ret;
+}
+
 static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	struct rte_pci_device *pci_dev)
 {
-	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct bnxt),
-		bnxt_dev_init);
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
+	uint16_t num_rep;
+	int i, ret = 0;
+	struct bnxt *backing_bp;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after first level of probe is already invoked
+	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
+	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+	}
+
+	if (num_rep > BNXT_MAX_VF_REPS) {
+		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
+			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
+		ret = -EINVAL;
+		return ret;
+	}
+
+	/* probe representor ports now */
+	if (!backing_eth_dev)
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = -ENODEV;
+		return ret;
+	}
+	backing_bp = backing_eth_dev->data->dev_private;
+
+	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
+		PMD_DRV_LOG(ERR,
+			    "Not a PF or trusted VF. No Representor support\n");
+		/* Returning an error is not an option.
+		 * Applications are not handling this correctly
+		 */
+		return ret;
+	}
+
+	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+		struct bnxt_vf_representor representor = {
+			.vf_id = eth_da.representor_ports[i],
+			.switch_domain_id = backing_bp->switch_domain_id,
+			.parent_priv = backing_bp
+		};
+
+		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
+			PMD_DRV_LOG(ERR, "VF-Rep id %d >= %d MAX VF ID\n",
+				    representor.vf_id, BNXT_MAX_VF_REPS);
+			continue;
+		}
+
+		/* representor port net_bdf_port */
+		snprintf(name, sizeof(name), "net_%s_representor_%d",
+			 pci_dev->device.name, eth_da.representor_ports[i]);
+
+		ret = rte_eth_dev_create(&pci_dev->device, name,
+					 sizeof(struct bnxt_vf_representor),
+					 NULL, NULL,
+					 bnxt_vf_representor_init,
+					 &representor);
+
+		if (!ret) {
+			vf_rep_eth_dev = rte_eth_dev_allocated(name);
+			if (!vf_rep_eth_dev) {
+				PMD_DRV_LOG(ERR, "Failed to find the eth_dev"
+					    " for VF-Rep: %s.", name);
+				bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+				ret = -ENODEV;
+				return ret;
+			}
+			backing_bp->rep_info[representor.vf_id].vfr_eth_dev =
+				vf_rep_eth_dev;
+			backing_bp->num_reps++;
+		} else {
+			PMD_DRV_LOG(ERR, "failed to create bnxt vf "
+				    "representor %s.", name);
+			bnxt_pci_remove_dev_with_reps(backing_eth_dev);
+		}
+	}
+
+	return ret;
 }
 
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
-		return rte_eth_dev_pci_generic_remove(pci_dev,
-				bnxt_dev_uninit);
-	else
+	struct rte_eth_dev *eth_dev;
+
+	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!eth_dev)
+		return ENODEV; /* Typically invoked only by OVS-DPDK; by the
+				* time we get here, the eth_dev has already
+				* been deleted by rte_eth_dev_close(), so a
+				* positive return still allows proper cleanup
+				*/
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		if (eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_vf_representor_uninit);
+		else
+			return rte_eth_dev_destroy(eth_dev,
+						   bnxt_dev_uninit);
+	} else {
 		return rte_eth_dev_pci_generic_remove(pci_dev, NULL);
+	}
 }
 
 static struct rte_pci_driver bnxt_rte_pmd = {
 	.id_table = bnxt_pci_id_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+			RTE_PCI_DRV_PROBE_AGAIN, /* Needed in case of VF-REPs
+						  * and OVS-DPDK
+						  */
 	.probe = bnxt_pci_probe,
 	.remove = bnxt_pci_remove,
 };
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
new file mode 100644
index 000000000..21f1b0765
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -0,0 +1,287 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "bnxt_ring.h"
+#include "bnxt_reps.h"
+#include "hsi_struct_def_dpdk.h"
+
+static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
+	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
+	.dev_configure = bnxt_vf_rep_dev_configure_op,
+	.dev_start = bnxt_vf_rep_dev_start_op,
+	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.link_update = bnxt_vf_rep_link_update_op,
+	.dev_close = bnxt_vf_rep_dev_close_op,
+	.dev_stop = bnxt_vf_rep_dev_stop_op
+};
+
+static uint16_t
+bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
+		     __rte_unused struct rte_mbuf **rx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
+		     __rte_unused struct rte_mbuf **tx_pkts,
+		     __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
+{
+	struct bnxt_vf_representor *vf_rep_bp = eth_dev->data->dev_private;
+	struct bnxt_vf_representor *rep_params =
+				 (struct bnxt_vf_representor *)params;
+	struct rte_eth_link *link;
+	struct bnxt *parent_bp;
+
+	vf_rep_bp->vf_id = rep_params->vf_id;
+	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
+	vf_rep_bp->parent_priv = rep_params->parent_priv;
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+	eth_dev->data->representor_id = rep_params->vf_id;
+
+	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
+	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
+	       sizeof(vf_rep_bp->mac_addr));
+	eth_dev->data->mac_addrs =
+		(struct rte_ether_addr *)&vf_rep_bp->mac_addr;
+	eth_dev->dev_ops = &bnxt_vf_rep_dev_ops;
+
+	/* No data-path, but need stub Rx/Tx functions to avoid crash
+	 * when testing with ovs-dpdk
+	 */
+	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
+	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
+	/* Link state. Inherited from PF or trusted VF */
+	parent_bp = vf_rep_bp->parent_priv;
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+
+	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
+	bnxt_print_link_info(eth_dev);
+
+	/* Pass the information to the rte_eth_dev_close() that it should also
+	 * release the private port resources.
+	 */
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+	PMD_DRV_LOG(INFO,
+		    "Switch domain id %d: Representor Device %d init done\n",
+		    vf_rep_bp->switch_domain_id, vf_rep_bp->vf_id);
+
+	return 0;
+}
+
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+
+	uint16_t vf_id;
+
+	eth_dev->data->mac_addrs = NULL;
+
+	parent_bp = rep->parent_priv;
+	if (parent_bp) {
+		parent_bp->num_reps--;
+		vf_id = rep->vf_id;
+		if (parent_bp->rep_info) {
+			memset(&parent_bp->rep_info[vf_id], 0,
+			       sizeof(parent_bp->rep_info[vf_id]));
+			/* mark that this representor has been freed */
+		}
+	}
+	eth_dev->dev_ops = NULL;
+	return 0;
+}
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
+{
+	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *rep =
+		(struct bnxt_vf_representor *)eth_dev->data->dev_private;
+	struct rte_eth_link *link;
+	int rc;
+
+	parent_bp = rep->parent_priv;
+	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
+
+	/* Link state. Inherited from PF or trusted VF */
+	link = &parent_bp->eth_dev->data->dev_link;
+
+	eth_dev->data->dev_link.link_speed = link->link_speed;
+	eth_dev->data->dev_link.link_duplex = link->link_duplex;
+	eth_dev->data->dev_link.link_status = link->link_status;
+	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
+	bnxt_print_link_info(eth_dev);
+
+	return rc;
+}
+
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_rep_link_update_op(eth_dev, 1);
+
+	return 0;
+}
+
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
+{
+	eth_dev = eth_dev;
+}
+
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
+{
+	bnxt_vf_representor_uninit(eth_dev);
+}
+
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp;
+	uint16_t max_vnics, i, j, vpool, vrxq;
+	unsigned int max_rx_rings;
+	int rc = 0;
+
+	/* MAC Specifics */
+	parent_bp = rep_bp->parent_priv;
+	if (!parent_bp) {
+		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
+		return rc;
+	}
+	PMD_DRV_LOG(DEBUG, "Representor dev_info_get_op\n");
+	dev_info->max_mac_addrs = parent_bp->max_l2_ctx;
+	dev_info->max_hash_mac_addrs = 0;
+
+	max_rx_rings = BNXT_MAX_VF_REP_RINGS;
+	/* For the sake of symmetry, max_rx_queues = max_tx_queues */
+	dev_info->max_rx_queues = max_rx_rings;
+	dev_info->max_tx_queues = max_rx_rings;
+	dev_info->reta_size = bnxt_rss_hash_tbl_size(parent_bp);
+	dev_info->hash_key_size = 40;
+	max_vnics = parent_bp->max_vnics;
+
+	/* MTU specifics */
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+	dev_info->max_mtu = BNXT_MAX_MTU;
+
+	/* Fast path specifics */
+	dev_info->min_rx_bufsize = 1;
+	dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+
+	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
+	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
+		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
+	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
+
+	/* *INDENT-OFF* */
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 0,
+		},
+		.rx_free_thresh = 32,
+		/* If no descriptors available, pkts are dropped by default */
+		.rx_drop_en = 1,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = 32,
+			.hthresh = 0,
+			.wthresh = 0,
+		},
+		.tx_free_thresh = 32,
+		.tx_rs_thresh = 32,
+	};
+	eth_dev->data->dev_conf.intr_conf.lsc = 1;
+
+	eth_dev->data->dev_conf.intr_conf.rxq = 1;
+	dev_info->rx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->rx_desc_lim.nb_max = BNXT_MAX_RX_RING_DESC;
+	dev_info->tx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->tx_desc_lim.nb_max = BNXT_MAX_TX_RING_DESC;
+
+	/* *INDENT-ON* */
+
+	/*
+	 * TODO: default_rxconf, default_txconf, rx_desc_lim, and tx_desc_lim
+	 *       need further investigation.
+	 */
+
+	/* VMDq resources */
+	vpool = 64; /* ETH_64_POOLS */
+	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	for (i = 0; i < 4; vpool >>= 1, i++) {
+		if (max_vnics > vpool) {
+			for (j = 0; j < 5; vrxq >>= 1, j++) {
+				if (dev_info->max_rx_queues > vrxq) {
+					if (vpool > vrxq)
+						vpool = vrxq;
+					goto found;
+				}
+			}
+			/* Not enough resources to support VMDq */
+			break;
+		}
+	}
+	/* Not enough resources to support VMDq */
+	vpool = 0;
+	vrxq = 0;
+found:
+	dev_info->max_vmdq_pools = vpool;
+	dev_info->vmdq_queue_num = vrxq;
+
+	dev_info->vmdq_pool_base = 0;
+	dev_info->vmdq_queue_base = 0;
+
+	return 0;
+}
+
+int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
+{
+	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	return 0;
+}
+
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
+
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf)
+{
+	eth_dev = eth_dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
new file mode 100644
index 000000000..6048faf08
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_REPS_H_
+#define _BNXT_REPS_H_
+
+#include <rte_malloc.h>
+#include <rte_ethdev.h>
+
+int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
+int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
+				struct rte_eth_dev_info *dev_info);
+int bnxt_vf_rep_dev_configure_op(struct rte_eth_dev *eth_dev);
+
+int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl);
+int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_rxconf *
+				  rx_conf,
+				  __rte_unused struct rte_mempool *mp);
+int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
+				  __rte_unused uint16_t queue_idx,
+				  __rte_unused uint16_t nb_desc,
+				  __rte_unused unsigned int socket_id,
+				  __rte_unused const struct rte_eth_txconf *
+				  tx_conf);
+void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
+void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+#endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 4306c6039..5c7859cb5 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -21,6 +21,7 @@ sources = files('bnxt_cpr.c',
 	'bnxt_txr.c',
 	'bnxt_util.c',
 	'bnxt_vnic.c',
+	'bnxt_reps.c',
 
 	'tf_core/tf_core.c',
 	'tf_core/bitalloc.c',
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 02/51] net/bnxt: add support for VF-reps data path
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
                           ` (50 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Add code to support Tx/Rx from a VF representor port.
The VF-reps use the RX/TX rings of the Trusted VF/PF.
For each VF-rep, the Trusted VF/PF driver issues a VFR_ALLOC FW cmd that
returns "cfa_code" and "cfa_action" values.
The FW sets up the filter tables such that VF traffic, by default
(in the absence of other rules), gets punted to the parent function,
i.e., either the Trusted VF or the PF.
The cfa_code value in the Rx completion informs the driver of the
source VF.
For traffic transmitted from the VF-rep, the TX BD is tagged with a
cfa_action value that tells the HW to punt it to the corresponding VF.
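
As a rough illustration of the Rx demultiplexing described above, the
sketch below shows how a cfa_code taken from an Rx completion can be
mapped to a VF index and the mbuf handed to that representor's ring.
It is a simplified stand-in, not the driver code: struct rep_ring,
demux_to_representor() and VF_IDX_INVALID are illustrative names for
what the patch implements with cfa_code_map, bnxt_vfr_recv() and the
per-representor Rx rings.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define VF_IDX_INVALID 0xffff

struct rep_ring {
	void    **slots;  /* mbuf pointers owned by the representor RxQ */
	uint16_t  mask;   /* ring size - 1 */
	uint16_t  prod;
};

/* Return true if the packet was claimed by a representor ring. */
static bool
demux_to_representor(const uint16_t *cfa_code_map,
		     struct rep_ring **rep_rings,
		     uint16_t cfa_code, void *mbuf)
{
	uint16_t vf_id = cfa_code_map[cfa_code];
	struct rep_ring *r;

	if (vf_id == VF_IDX_INVALID || rep_rings[vf_id] == NULL)
		return false;  /* leave it on the parent port's Rx path */

	r = rep_rings[vf_id];
	r->slots[r->prod++ & r->mask] = mbuf; /* hand off to the VF-rep */
	return true;
}

On the Tx side the approach in this patch is simpler: the representor's
cfa_action is written into the parent TxQ (tx_cfa_action) under a lock
before calling the parent's transmit routine, so the long TX BD carries
the action that punts the packet to the corresponding VF.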

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  30 ++-
 drivers/net/bnxt/bnxt_ethdev.c | 150 ++++++++---
 drivers/net/bnxt/bnxt_reps.c   | 476 +++++++++++++++++++++++++++++++--
 drivers/net/bnxt/bnxt_reps.h   |  11 +
 drivers/net/bnxt/bnxt_rxr.c    |  22 +-
 drivers/net/bnxt/bnxt_rxr.h    |   1 +
 drivers/net/bnxt/bnxt_txq.h    |   1 +
 drivers/net/bnxt/bnxt_txr.c    |   4 +-
 8 files changed, 616 insertions(+), 79 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 9b7b87cee..443d9fee4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -495,6 +495,7 @@ struct bnxt_mark_info {
 
 struct bnxt_rep_info {
 	struct rte_eth_dev	*vfr_eth_dev;
+	pthread_mutex_t		vfr_lock;
 };
 
 /* address space location of register */
@@ -755,7 +756,8 @@ struct bnxt {
 	uint16_t		fw_reset_max_msecs;
 	uint16_t		switch_domain_id;
 	uint16_t		num_reps;
-	struct bnxt_rep_info	rep_info[BNXT_MAX_VF_REPS];
+	struct bnxt_rep_info	*rep_info;
+	uint16_t                *cfa_code_map;
 	/* Struct to hold adapter error recovery related info */
 	struct bnxt_error_recovery_info *recovery_info;
 #define BNXT_MARK_TABLE_SZ	(sizeof(struct bnxt_mark_info)  * 64 * 1024)
@@ -780,12 +782,28 @@ struct bnxt {
  * Structure to store private data for each VF representor instance
  */
 struct bnxt_vf_representor {
-	uint16_t switch_domain_id;
-	uint16_t vf_id;
+	uint16_t		switch_domain_id;
+	uint16_t		vf_id;
+	uint16_t		tx_cfa_action;
+	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
-	struct bnxt	*parent_priv;
-	uint8_t		mac_addr[RTE_ETHER_ADDR_LEN];
-	uint8_t		dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct rte_eth_dev	*parent_dev;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t			dflt_mac_addr[RTE_ETHER_ADDR_LEN];
+	struct bnxt_rx_queue	**rx_queues;
+	unsigned int		rx_nr_rings;
+	unsigned int		tx_nr_rings;
+	uint64_t                tx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                tx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_bytes[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_pkts[BNXT_MAX_VF_REP_RINGS];
+	uint64_t                rx_drop_bytes[BNXT_MAX_VF_REP_RINGS];
+};
+
+struct bnxt_vf_rep_tx_queue {
+	struct bnxt_tx_queue *txq;
+	struct bnxt_vf_representor *bp;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1455e60d7..cddba17bd 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -136,6 +136,7 @@ static void bnxt_cancel_fw_health_check(struct bnxt *bp);
 static int bnxt_restore_vlan_filters(struct bnxt *bp);
 static void bnxt_dev_recover(void *arg);
 static void bnxt_free_error_recovery_info(struct bnxt *bp);
+static void bnxt_free_rep_info(struct bnxt *bp);
 
 int is_bnxt_in_error(struct bnxt *bp)
 {
@@ -5242,7 +5243,7 @@ bnxt_init_locks(struct bnxt *bp)
 
 static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev)
 {
-	int rc;
+	int rc = 0;
 
 	rc = bnxt_init_fw(bp);
 	if (rc)
@@ -5641,6 +5642,8 @@ bnxt_uninit_locks(struct bnxt *bp)
 {
 	pthread_mutex_destroy(&bp->flow_lock);
 	pthread_mutex_destroy(&bp->def_cp_lock);
+	if (bp->rep_info)
+		pthread_mutex_destroy(&bp->rep_info->vfr_lock);
 }
 
 static int
@@ -5663,6 +5666,7 @@ bnxt_uninit_resources(struct bnxt *bp, bool reconfig_dev)
 
 	bnxt_uninit_locks(bp);
 	bnxt_free_flow_stats_info(bp);
+	bnxt_free_rep_info(bp);
 	rte_free(bp->ptp_cfg);
 	bp->ptp_cfg = NULL;
 	return rc;
@@ -5702,56 +5706,73 @@ static int bnxt_pci_remove_dev_with_reps(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-	struct rte_pci_device *pci_dev)
+static void bnxt_free_rep_info(struct bnxt *bp)
 {
-	char name[RTE_ETH_NAME_MAX_LEN];
-	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
-	struct rte_eth_dev *backing_eth_dev, *vf_rep_eth_dev;
-	uint16_t num_rep;
-	int i, ret = 0;
-	struct bnxt *backing_bp;
+	rte_free(bp->rep_info);
+	bp->rep_info = NULL;
+	rte_free(bp->cfa_code_map);
+	bp->cfa_code_map = NULL;
+}
 
-	if (pci_dev->device.devargs) {
-		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
-					    &eth_da);
-		if (ret)
-			return ret;
-	}
+static int bnxt_init_rep_info(struct bnxt *bp)
+{
+	int i = 0, rc;
 
-	num_rep = eth_da.nb_representor_ports;
-	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
-		    num_rep);
+	if (bp->rep_info)
+		return 0;
 
-	/* We could come here after first level of probe is already invoked
-	 * as part of an application bringup(OVS-DPDK vswitchd), so first check
-	 * for already allocated eth_dev for the backing device (PF/Trusted VF)
-	 */
-	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
-					 sizeof(struct bnxt),
-					 eth_dev_pci_specific_init, pci_dev,
-					 bnxt_dev_init, NULL);
+	bp->rep_info = rte_zmalloc("bnxt_rep_info",
+				   sizeof(bp->rep_info[0]) * BNXT_MAX_VF_REPS,
+				   0);
+	if (!bp->rep_info) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for rep info\n");
+		return -ENOMEM;
+	}
+	bp->cfa_code_map = rte_zmalloc("bnxt_cfa_code_map",
+				       sizeof(*bp->cfa_code_map) *
+				       BNXT_MAX_CFA_CODE, 0);
+	if (!bp->cfa_code_map) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for cfa_code_map\n");
+		bnxt_free_rep_info(bp);
+		return -ENOMEM;
+	}
 
-		if (ret || !num_rep)
-			return ret;
+	for (i = 0; i < BNXT_MAX_CFA_CODE; i++)
+		bp->cfa_code_map[i] = BNXT_VF_IDX_INVALID;
+
+	rc = pthread_mutex_init(&bp->rep_info->vfr_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Unable to initialize vfr_lock\n");
+		bnxt_free_rep_info(bp);
+		return rc;
 	}
+	return rc;
+}
+
+static int bnxt_rep_port_probe(struct rte_pci_device *pci_dev,
+			       struct rte_eth_devargs eth_da,
+			       struct rte_eth_dev *backing_eth_dev)
+{
+	struct rte_eth_dev *vf_rep_eth_dev;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct bnxt *backing_bp;
+	uint16_t num_rep;
+	int i, ret = 0;
 
+	num_rep = eth_da.nb_representor_ports;
 	if (num_rep > BNXT_MAX_VF_REPS) {
 		PMD_DRV_LOG(ERR, "nb_representor_ports = %d > %d MAX VF REPS\n",
-			    eth_da.nb_representor_ports, BNXT_MAX_VF_REPS);
-		ret = -EINVAL;
-		return ret;
+			    num_rep, BNXT_MAX_VF_REPS);
+		return -EINVAL;
 	}
 
-	/* probe representor ports now */
-	if (!backing_eth_dev)
-		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-	if (backing_eth_dev == NULL) {
-		ret = -ENODEV;
-		return ret;
+	if (num_rep > RTE_MAX_ETHPORTS) {
+		PMD_DRV_LOG(ERR,
+			    "nb_representor_ports = %d > %d MAX ETHPORTS\n",
+			    num_rep, RTE_MAX_ETHPORTS);
+		return -EINVAL;
 	}
+
 	backing_bp = backing_eth_dev->data->dev_private;
 
 	if (!(BNXT_PF(backing_bp) || BNXT_VF_IS_TRUSTED(backing_bp))) {
@@ -5760,14 +5781,17 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		/* Returning an error is not an option.
 		 * Applications are not handling this correctly
 		 */
-		return ret;
+		return 0;
 	}
 
-	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+	if (bnxt_init_rep_info(backing_bp))
+		return 0;
+
+	for (i = 0; i < num_rep; i++) {
 		struct bnxt_vf_representor representor = {
 			.vf_id = eth_da.representor_ports[i],
 			.switch_domain_id = backing_bp->switch_domain_id,
-			.parent_priv = backing_bp
+			.parent_dev = backing_eth_dev
 		};
 
 		if (representor.vf_id >= BNXT_MAX_VF_REPS) {
@@ -5808,6 +5832,48 @@ static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return ret;
 }
 
+static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			  struct rte_pci_device *pci_dev)
+{
+	struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+	struct rte_eth_dev *backing_eth_dev;
+	uint16_t num_rep;
+	int ret = 0;
+
+	if (pci_dev->device.devargs) {
+		ret = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+					    &eth_da);
+		if (ret)
+			return ret;
+	}
+
+	num_rep = eth_da.nb_representor_ports;
+	PMD_DRV_LOG(DEBUG, "nb_representor_ports = %d\n",
+		    num_rep);
+
+	/* We could come here after the first level of probe was already
+	 * invoked as part of an application bringup (OVS-DPDK vswitchd), so
+	 * first check for an already allocated eth_dev for the backing
+	 * device (PF/Trusted VF)
+	 */
+	backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (backing_eth_dev == NULL) {
+		ret = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+					 sizeof(struct bnxt),
+					 eth_dev_pci_specific_init, pci_dev,
+					 bnxt_dev_init, NULL);
+
+		if (ret || !num_rep)
+			return ret;
+
+		backing_eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	}
+
+	/* probe representor ports now */
+	ret = bnxt_rep_port_probe(pci_dev, eth_da, backing_eth_dev);
+
+	return ret;
+}
+
 static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct rte_eth_dev *eth_dev;
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 21f1b0765..777179558 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -6,6 +6,11 @@
 #include "bnxt.h"
 #include "bnxt_ring.h"
 #include "bnxt_reps.h"
+#include "bnxt_rxq.h"
+#include "bnxt_rxr.h"
+#include "bnxt_txq.h"
+#include "bnxt_txr.h"
+#include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
@@ -13,25 +18,128 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_configure = bnxt_vf_rep_dev_configure_op,
 	.dev_start = bnxt_vf_rep_dev_start_op,
 	.rx_queue_setup = bnxt_vf_rep_rx_queue_setup_op,
+	.rx_queue_release = bnxt_vf_rep_rx_queue_release_op,
 	.tx_queue_setup = bnxt_vf_rep_tx_queue_setup_op,
+	.tx_queue_release = bnxt_vf_rep_tx_queue_release_op,
 	.link_update = bnxt_vf_rep_link_update_op,
 	.dev_close = bnxt_vf_rep_dev_close_op,
-	.dev_stop = bnxt_vf_rep_dev_stop_op
+	.dev_stop = bnxt_vf_rep_dev_stop_op,
+	.stats_get = bnxt_vf_rep_stats_get_op,
+	.stats_reset = bnxt_vf_rep_stats_reset_op,
 };
 
-static uint16_t
-bnxt_vf_rep_rx_burst(__rte_unused void *rx_queue,
-		     __rte_unused struct rte_mbuf **rx_pkts,
-		     __rte_unused uint16_t nb_pkts)
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf)
 {
+	struct bnxt_sw_rx_bd *prod_rx_buf;
+	struct bnxt_rx_ring_info *rep_rxr;
+	struct bnxt_rx_queue *rep_rxq;
+	struct rte_eth_dev *vfr_eth_dev;
+	struct bnxt_vf_representor *vfr_bp;
+	uint16_t vf_id;
+	uint16_t mask;
+	uint8_t que;
+
+	vf_id = bp->cfa_code_map[cfa_code];
+	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
+	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
+		return 1;
+	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	if (!vfr_eth_dev)
+		return 1;
+	vfr_bp = vfr_eth_dev->data->dev_private;
+	if (vfr_bp->rx_cfa_code != cfa_code) {
+		/* cfa_code not meant for this VF rep!!?? */
+		return 1;
+	}
+	/* If rxq_id happens to be > max rep_queue, use rxq0 */
+	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
+	rep_rxq = vfr_bp->rx_queues[que];
+	rep_rxr = rep_rxq->rx_ring;
+	mask = rep_rxr->rx_ring_struct->ring_mask;
+
+	/* Put this mbuf on the RxQ of the Representor */
+	prod_rx_buf =
+		&rep_rxr->rx_buf_ring[rep_rxr->rx_prod++ & mask];
+	if (!prod_rx_buf->mbuf) {
+		prod_rx_buf->mbuf = mbuf;
+		vfr_bp->rx_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_pkts[que]++;
+	} else {
+		vfr_bp->rx_drop_bytes[que] += mbuf->pkt_len;
+		vfr_bp->rx_drop_pkts[que]++;
+		rte_pktmbuf_free(mbuf); /* Representor Rx ring full, drop pkt */
+	}
+
 	return 0;
 }
 
 static uint16_t
-bnxt_vf_rep_tx_burst(__rte_unused void *tx_queue,
-		     __rte_unused struct rte_mbuf **tx_pkts,
+bnxt_vf_rep_rx_burst(void *rx_queue,
+		     struct rte_mbuf **rx_pkts,
+		     uint16_t nb_pkts)
+{
+	struct bnxt_rx_queue *rxq = rx_queue;
+	struct bnxt_sw_rx_bd *cons_rx_buf;
+	struct bnxt_rx_ring_info *rxr;
+	uint16_t nb_rx_pkts = 0;
+	uint16_t mask, i;
+
+	if (!rxq)
+		return 0;
+
+	rxr = rxq->rx_ring;
+	mask = rxr->rx_ring_struct->ring_mask;
+	for (i = 0; i < nb_pkts; i++) {
+		cons_rx_buf = &rxr->rx_buf_ring[rxr->rx_cons & mask];
+		if (!cons_rx_buf->mbuf)
+			return nb_rx_pkts;
+		rx_pkts[nb_rx_pkts] = cons_rx_buf->mbuf;
+		rx_pkts[nb_rx_pkts]->port = rxq->port_id;
+		cons_rx_buf->mbuf = NULL;
+		nb_rx_pkts++;
+		rxr->rx_cons++;
+	}
+
+	return nb_rx_pkts;
+}
+
+static uint16_t
+bnxt_vf_rep_tx_burst(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
 		     __rte_unused uint16_t nb_pkts)
 {
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+	struct bnxt_tx_queue *ptxq;
+	struct bnxt *parent;
+	struct  bnxt_vf_representor *vf_rep_bp;
+	int qid;
+	int rc;
+	int i;
+
+	if (!vfr_txq)
+		return 0;
+
+	qid = vfr_txq->txq->queue_id;
+	vf_rep_bp = vfr_txq->bp;
+	parent = vf_rep_bp->parent_dev->data->dev_private;
+	pthread_mutex_lock(&parent->rep_info->vfr_lock);
+	ptxq = parent->tx_queues[qid];
+
+	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+
+	for (i = 0; i < nb_pkts; i++) {
+		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
+		vf_rep_bp->tx_pkts[qid]++;
+	}
+
+	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
+	ptxq->tx_cfa_action = 0;
+	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
+
+	return rc;
+
 	return 0;
 }
 
@@ -45,7 +153,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
-	vf_rep_bp->parent_priv = rep_params->parent_priv;
+	vf_rep_bp->parent_dev = rep_params->parent_dev;
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	eth_dev->data->representor_id = rep_params->vf_id;
@@ -63,7 +171,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->rx_pkt_burst = bnxt_vf_rep_rx_burst;
 	eth_dev->tx_pkt_burst = bnxt_vf_rep_tx_burst;
 	/* Link state. Inherited from PF or trusted VF */
-	parent_bp = vf_rep_bp->parent_priv;
+	parent_bp = vf_rep_bp->parent_dev->data->dev_private;
 	link = &parent_bp->eth_dev->data->dev_link;
 
 	eth_dev->data->dev_link.link_speed = link->link_speed;
@@ -94,18 +202,18 @@ int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev)
 	uint16_t vf_id;
 
 	eth_dev->data->mac_addrs = NULL;
-
-	parent_bp = rep->parent_priv;
-	if (parent_bp) {
-		parent_bp->num_reps--;
-		vf_id = rep->vf_id;
-		if (parent_bp->rep_info) {
-			memset(&parent_bp->rep_info[vf_id], 0,
-			       sizeof(parent_bp->rep_info[vf_id]));
-			/* mark that this representor has been freed */
-		}
-	}
 	eth_dev->dev_ops = NULL;
+
+	parent_bp = rep->parent_dev->data->dev_private;
+	if (!parent_bp)
+		return 0;
+
+	parent_bp->num_reps--;
+	vf_id = rep->vf_id;
+	if (parent_bp->rep_info)
+		memset(&parent_bp->rep_info[vf_id], 0,
+		       sizeof(parent_bp->rep_info[vf_id]));
+		/* mark that this representor has been freed */
 	return 0;
 }
 
@@ -117,7 +225,7 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	struct rte_eth_link *link;
 	int rc;
 
-	parent_bp = rep->parent_priv;
+	parent_bp = rep->parent_dev->data->dev_private;
 	rc = bnxt_link_update_op(parent_bp->eth_dev, wait_to_compl);
 
 	/* Link state. Inherited from PF or trusted VF */
@@ -132,16 +240,134 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
+static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already allocated in FW */
+	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * Alloc VF rep rules in CFA after default VNIC is created.
+	 * Otherwise the FW will create the VF-rep rules with
+	 * default drop action.
+	 */
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more/less the same job
+	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
+				     vfr->vf_id,
+				     &vfr->tx_cfa_action,
+				     &vfr->rx_cfa_code);
+	 */
+	if (!rc) {
+		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
+			    vfr->vf_id);
+	} else {
+		PMD_DRV_LOG(ERR,
+			    "Failed to alloc representor %d in FW\n",
+			    vfr->vf_id);
+	}
+
+	return rc;
+}
+
+static void bnxt_vf_rep_free_rx_mbufs(struct bnxt_vf_representor *rep_bp)
+{
+	struct bnxt_rx_queue *rxq;
+	unsigned int i;
+
+	for (i = 0; i < rep_bp->rx_nr_rings; i++) {
+		rxq = rep_bp->rx_queues[i];
+		bnxt_rx_queue_release_mbufs(rxq);
+	}
+}
+
 int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 {
-	bnxt_vf_rep_link_update_op(eth_dev, 1);
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int rc;
 
-	return 0;
+	rc = bnxt_vfr_alloc(rep_bp);
+
+	if (!rc) {
+		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
+		eth_dev->tx_pkt_burst = &bnxt_vf_rep_tx_burst;
+
+		bnxt_vf_rep_link_update_op(eth_dev, 1);
+	} else {
+		eth_dev->data->dev_link.link_status = 0;
+		bnxt_vf_rep_free_rx_mbufs(rep_bp);
+	}
+
+	return rc;
+}
+
+static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+	struct bnxt *parent_bp;
+
+	if (!vfr || !vfr->parent_dev) {
+		PMD_DRV_LOG(ERR,
+			    "No memory allocated for representor\n");
+		return -ENOMEM;
+	}
+
+	parent_bp = vfr->parent_dev->data->dev_private;
+
+	/* Check if representor has been already freed in FW */
+	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+		return 0;
+
+	/*
+	 * This is where we need to replace invoking an HWRM cmd
+	 * with the new TFLIB ULP API to do more/less the same job
+	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
+				    vfr->vf_id);
+	 */
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to free representor %d in FW\n",
+			    vfr->vf_id);
+		return rc;
+	}
+
+	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
+	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
+		    vfr->vf_id);
+	vfr->tx_cfa_action = 0;
+	vfr->rx_cfa_code = 0;
+
+	return rc;
 }
 
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *vfr_bp = eth_dev->data->dev_private;
+
+	/* Avoid crashes as we are about to free queues */
+	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
+	eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+
+	bnxt_vfr_free(vfr_bp);
+
+	if (eth_dev->data->dev_started)
+		eth_dev->data->dev_link.link_status = 0;
+
+	bnxt_vf_rep_free_rx_mbufs(vfr_bp);
 }
 
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev)
@@ -159,7 +385,7 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	int rc = 0;
 
 	/* MAC Specifics */
-	parent_bp = rep_bp->parent_priv;
+	parent_bp = rep_bp->parent_dev->data->dev_private;
 	if (!parent_bp) {
 		PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
 		return rc;
@@ -257,7 +483,13 @@ int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 int bnxt_vf_rep_dev_configure_op(__rte_unused struct rte_eth_dev *eth_dev)
 {
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+
 	PMD_DRV_LOG(DEBUG, "Representor dev_configure_op\n");
+	rep_bp->rx_queues = (void *)eth_dev->data->rx_queues;
+	rep_bp->tx_nr_rings = eth_dev->data->nb_tx_queues;
+	rep_bp->rx_nr_rings = eth_dev->data->nb_rx_queues;
+
 	return 0;
 }
 
@@ -269,9 +501,94 @@ int bnxt_vf_rep_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  rx_conf,
 				  __rte_unused struct rte_mempool *mp)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_rx_queue *parent_rxq;
+	struct bnxt_rx_queue *rxq;
+	struct bnxt_sw_rx_bd *buf_ring;
+	int rc = 0;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Rx ring %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_RX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid\n", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_rxq = parent_bp->rx_queues[queue_idx];
+	if (!parent_rxq) {
+		PMD_DRV_LOG(ERR, "Parent RxQ has not been configured yet\n");
+		return -EINVAL;
+	}
+
+	if (nb_desc != parent_rxq->nb_rx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d does not match parent rxq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->rx_queues) {
+		rxq = eth_dev->data->rx_queues[queue_idx];
+		if (rxq)
+			bnxt_rx_queue_release_op(rxq);
+	}
+
+	rxq = rte_zmalloc_socket("bnxt_vfr_rx_queue",
+				 sizeof(struct bnxt_rx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_rx_queue allocation failed!\n");
+		return -ENOMEM;
+	}
+
+	rxq->nb_rx_desc = nb_desc;
+
+	rc = bnxt_init_rx_ring_struct(rxq, socket_id);
+	if (rc)
+		goto out;
+
+	buf_ring = rte_zmalloc_socket("bnxt_rx_vfr_buf_ring",
+				      sizeof(struct bnxt_sw_rx_bd) *
+				      rxq->rx_ring->rx_ring_struct->ring_size,
+				      RTE_CACHE_LINE_SIZE, socket_id);
+	if (!buf_ring) {
+		PMD_DRV_LOG(ERR, "bnxt_rx_vfr_buf_ring allocation failed!\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	rxq->rx_ring->rx_buf_ring = buf_ring;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = eth_dev->data->port_id;
+	eth_dev->data->rx_queues[queue_idx] = rxq;
 
 	return 0;
+
+out:
+	if (rxq)
+		bnxt_rx_queue_release_op(rxq);
+
+	return rc;
+}
+
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue)
+{
+	struct bnxt_rx_queue *rxq = (struct bnxt_rx_queue *)rx_queue;
+
+	if (!rxq)
+		return;
+
+	bnxt_rx_queue_release_mbufs(rxq);
+
+	bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
+	bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
+	bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
+
+	rte_free(rxq);
 }
 
 int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
@@ -281,7 +598,112 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf)
 {
-	eth_dev = eth_dev;
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	struct bnxt *parent_bp = rep_bp->parent_dev->data->dev_private;
+	struct bnxt_tx_queue *parent_txq, *txq;
+	struct bnxt_vf_rep_tx_queue *vfr_txq;
+
+	if (queue_idx >= BNXT_MAX_VF_REP_RINGS) {
+		PMD_DRV_LOG(ERR,
+			    "Cannot create Tx rings %d. %d rings available\n",
+			    queue_idx, BNXT_MAX_VF_REP_RINGS);
+		return -EINVAL;
+	}
+
+	if (!nb_desc || nb_desc > MAX_TX_DESC_CNT) {
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	parent_txq = parent_bp->tx_queues[queue_idx];
+	if (!parent_txq) {
+		PMD_DRV_LOG(ERR, "Parent TxQ has not been configured yet\n");
+		return -EINVAL;
+	}
 
+	if (nb_desc != parent_txq->nb_tx_desc) {
+		PMD_DRV_LOG(ERR, "nb_desc %d does not match parent txq", nb_desc);
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->tx_queues) {
+		vfr_txq = eth_dev->data->tx_queues[queue_idx];
+		bnxt_vf_rep_tx_queue_release_op(vfr_txq);
+		vfr_txq = NULL;
+	}
+
+	vfr_txq = rte_zmalloc_socket("bnxt_vfr_tx_queue",
+				     sizeof(struct bnxt_vf_rep_tx_queue),
+				     RTE_CACHE_LINE_SIZE, socket_id);
+	if (!vfr_txq) {
+		PMD_DRV_LOG(ERR, "bnxt_vfr_tx_queue allocation failed!");
+		return -ENOMEM;
+	}
+	txq = rte_zmalloc_socket("bnxt_tx_queue",
+				 sizeof(struct bnxt_tx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "bnxt_tx_queue allocation failed!");
+		rte_free(vfr_txq);
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->queue_id = queue_idx;
+	txq->port_id = eth_dev->data->port_id;
+	vfr_txq->txq = txq;
+	vfr_txq->bp = rep_bp;
+	eth_dev->data->tx_queues[queue_idx] = vfr_txq;
+
+	return 0;
+}
+
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue)
+{
+	struct bnxt_vf_rep_tx_queue *vfr_txq = tx_queue;
+
+	if (!vfr_txq)
+		return;
+
+	rte_free(vfr_txq->txq);
+	rte_free(vfr_txq);
+}
+
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		stats->obytes += rep_bp->tx_bytes[i];
+		stats->opackets += rep_bp->tx_pkts[i];
+		stats->ibytes += rep_bp->rx_bytes[i];
+		stats->ipackets += rep_bp->rx_pkts[i];
+		stats->imissed += rep_bp->rx_drop_pkts[i];
+
+		stats->q_ipackets[i] = rep_bp->rx_pkts[i];
+		stats->q_ibytes[i] = rep_bp->rx_bytes[i];
+		stats->q_opackets[i] = rep_bp->tx_pkts[i];
+		stats->q_obytes[i] = rep_bp->tx_bytes[i];
+		stats->q_errors[i] = rep_bp->rx_drop_pkts[i];
+	}
+
+	return 0;
+}
+
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev)
+{
+	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
+	int i;
+
+	for (i = 0; i < BNXT_MAX_VF_REP_RINGS; i++) {
+		rep_bp->tx_pkts[i] = 0;
+		rep_bp->tx_bytes[i] = 0;
+		rep_bp->rx_pkts[i] = 0;
+		rep_bp->rx_bytes[i] = 0;
+		rep_bp->rx_drop_pkts[i] = 0;
+	}
 	return 0;
 }
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index 6048faf08..5c2e0a0b9 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -9,6 +9,12 @@
 #include <rte_malloc.h>
 #include <rte_ethdev.h>
 
+#define BNXT_MAX_CFA_CODE               65536
+#define BNXT_VF_IDX_INVALID             0xffff
+
+uint16_t
+bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
+	      struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
@@ -30,6 +36,11 @@ int bnxt_vf_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 				  __rte_unused unsigned int socket_id,
 				  __rte_unused const struct rte_eth_txconf *
 				  tx_conf);
+void bnxt_vf_rep_rx_queue_release_op(void *rx_queue);
+void bnxt_vf_rep_tx_queue_release_op(void *tx_queue);
 void bnxt_vf_rep_dev_stop_op(struct rte_eth_dev *eth_dev);
 void bnxt_vf_rep_dev_close_op(struct rte_eth_dev *eth_dev);
+int bnxt_vf_rep_stats_get_op(struct rte_eth_dev *eth_dev,
+			     struct rte_eth_stats *stats);
+int bnxt_vf_rep_stats_reset_op(struct rte_eth_dev *eth_dev);
 #endif /* _BNXT_REPS_H_ */
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 11807f409..37b534fc2 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -12,6 +12,7 @@
 #include <rte_memory.h>
 
 #include "bnxt.h"
+#include "bnxt_reps.h"
 #include "bnxt_ring.h"
 #include "bnxt_rxr.h"
 #include "bnxt_rxq.h"
@@ -539,7 +540,7 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp,
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
-			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
+		       struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
@@ -735,6 +736,20 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
+	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
+	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
+		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
+				   mbuf)) {
+			/* Now return an error so that nb_rx_pkts is not
+			 * incremented.
+			 * This packet was meant to be given to the representor.
+			 * So no need to account the packet and give it to
+			 * parent Rx burst function.
+			 */
+			rc = -ENODEV;
+		}
+	}
+
 next_rx:
 
 	*raw_cons = tmp_raw_cons;
@@ -751,6 +766,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint32_t raw_cons = cpr->cp_raw_cons;
 	uint32_t cons;
 	int nb_rx_pkts = 0;
+	int nb_rep_rx_pkts = 0;
 	struct rx_pkt_cmpl *rxcmp;
 	uint16_t prod = rxr->rx_prod;
 	uint16_t ag_prod = rxr->ag_prod;
@@ -784,6 +800,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				nb_rx_pkts++;
 			if (rc == -EBUSY)	/* partial completion */
 				break;
+			if (rc == -ENODEV)	/* completion for representor */
+				nb_rep_rx_pkts++;
 		} else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
 			evt =
 			bnxt_event_hwrm_resp_handler(rxq->bp,
@@ -802,7 +820,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 
 	cpr->cp_raw_cons = raw_cons;
-	if (!nb_rx_pkts && !evt) {
+	if (!nb_rx_pkts && !nb_rep_rx_pkts && !evt) {
 		/*
 		 * For PMD, there is no need to keep on pushing to REARM
 		 * the doorbell if there are no new completions
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 811dcd86b..e60c97fa1 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -188,6 +188,7 @@ struct bnxt_sw_rx_bd {
 struct bnxt_rx_ring_info {
 	uint16_t		rx_prod;
 	uint16_t		ag_prod;
+	uint16_t                rx_cons; /* Needed for representor */
 	struct bnxt_db_info     rx_db;
 	struct bnxt_db_info     ag_db;
 
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 37a3f9539..69ff89aab 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -29,6 +29,7 @@ struct bnxt_tx_queue {
 	struct bnxt		*bp;
 	int			index;
 	int			tx_wake_thresh;
+	uint32_t                tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 16021407e..d7e193d38 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT))
+				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +184,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = 0;
+		cfa_action = txq->tx_cfa_action;
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 03/51] net/bnxt: get IDs for VF-Rep endpoint
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
                           ` (49 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Use 'first_vf_id' and the 'vf_id' that is supplied when adding a
representor to obtain the PCI function ID (FID) of the VF (the VFR
endpoint).
Use the FID as the input to the FUNC_QCFG HWRM cmd to obtain the
default vNIC ID of the VF.
While fetching the default vNIC ID by supplying the FW FID of the
VF-rep endpoint to HWRM_FUNC_QCFG, also obtain and store its
function svif.
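
The FID derivation described above is just an offset from the parent's
first VF FID; a minimal sketch, with an illustrative helper name that
is not part of the driver:

#include <stdint.h>

/* FW FID of the VF behind representor index 'vf_id', assuming
 * 'first_vf_id' is the first VF FID owned by the parent PF/trusted VF.
 */
static inline uint16_t
vf_rep_fw_fid(uint16_t first_vf_id, uint16_t vf_id)
{
	return (uint16_t)(first_vf_id + vf_id);
}

The resulting FID is then placed in req.fid of HWRM_FUNC_QCFG (see
bnxt_hwrm_get_dflt_vnic_svif() below) to read back dflt_vnic_id and,
when the SVIF_VALID bit is set in svif_info, the function svif.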

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |  3 +++
 drivers/net/bnxt/bnxt_hwrm.c | 27 +++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h |  4 ++++
 drivers/net/bnxt/bnxt_reps.c | 12 ++++++++++++
 4 files changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 443d9fee4..7afbd5cab 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -784,6 +784,9 @@ struct bnxt {
 struct bnxt_vf_representor {
 	uint16_t		switch_domain_id;
 	uint16_t		vf_id;
+	uint16_t		fw_fid;
+	uint16_t		dflt_vnic_id;
+	uint16_t		svif;
 	uint16_t		tx_cfa_action;
 	uint16_t		rx_cfa_code;
 	/* Private data store of associated PF/Trusted VF */
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 945bc9018..ed42e58d4 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,33 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t svif_info;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	req.fid = rte_cpu_to_le_16(fid);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	if (vnic_id)
+		*vnic_id = rte_le_to_cpu_16(resp->dflt_vnic_id);
+
+	svif_info = rte_le_to_cpu_16(resp->svif_info);
+	if (svif && (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID))
+		*svif = svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 {
 	struct hwrm_port_mac_qcfg_input req = {0};
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 58b414d4f..8d19998df 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -270,4 +270,8 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 				 enum bnxt_flow_dir dir,
 				 uint16_t cntr,
 				 uint16_t num_entries);
+int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
+			       uint16_t *vnic_id);
+int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
+				 uint16_t *vnic_id, uint16_t *svif);
 #endif
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 777179558..ea6f0010f 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -150,6 +150,7 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 				 (struct bnxt_vf_representor *)params;
 	struct rte_eth_link *link;
 	struct bnxt *parent_bp;
+	int rc = 0;
 
 	vf_rep_bp->vf_id = rep_params->vf_id;
 	vf_rep_bp->switch_domain_id = rep_params->switch_domain_id;
@@ -179,6 +180,17 @@ int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_link.link_status = link->link_status;
 	eth_dev->data->dev_link.link_autoneg = link->link_autoneg;
 
+	vf_rep_bp->fw_fid = rep_params->vf_id + parent_bp->first_vf_id;
+	PMD_DRV_LOG(INFO, "vf_rep->fw_fid = %d\n", vf_rep_bp->fw_fid);
+	rc = bnxt_hwrm_get_dflt_vnic_svif(parent_bp, vf_rep_bp->fw_fid,
+					  &vf_rep_bp->dflt_vnic_id,
+					  &vf_rep_bp->svif);
+	if (rc)
+		PMD_DRV_LOG(ERR, "Failed to get default vnic id of VF\n");
+	else
+		PMD_DRV_LOG(INFO, "vf_rep->dflt_vnic_id = %d\n",
+			    vf_rep_bp->dflt_vnic_id);
+
 	PMD_DRV_LOG(INFO, "calling bnxt_print_link_info\n");
 	bnxt_print_link_info(eth_dev);
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 04/51] net/bnxt: initialize parent PF information
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (2 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
                           ` (48 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev
  Cc: Lance Richardson, Venkat Duvvuru, Somnath Kotur, Kalesh AP,
	Kishore Padmanabha

From: Lance Richardson <lance.richardson@broadcom.com>

Add support to query parent PF information (MAC address,
function ID, port ID and default VNIC) from firmware.

Current firmware returns zero for the parent default VNIC, so a
temporary Wh+-specific workaround is included until that can be
fixed.
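
The workaround is a small fallback applied to the VNIC value returned
by HWRM_FUNC_QCFG; the sketch below restates the logic added in
bnxt_hwrm_parent_pf_qcfg() using the hard-coded Wh+ values from this
patch (the helper name is illustrative only):

#include <stdint.h>

/* If firmware reports a zero parent default VNIC, substitute the
 * values known to be correct for current Wh+ firmware.
 */
static uint16_t
parent_dflt_vnic(uint16_t fw_vnic, uint16_t parent_fid)
{
	if (fw_vnic != 0)
		return fw_vnic;              /* firmware value is usable */

	return parent_fid == 2 ? 0x100 : 1;  /* temporary Wh+ defaults */
}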

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  9 ++++++++
 drivers/net/bnxt/bnxt_ethdev.c | 23 +++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 42 ++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 75 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 7afbd5cab..2b87899a4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -217,6 +217,14 @@ struct bnxt_child_vf_info {
 	bool			persist_stats;
 };
 
+struct bnxt_parent_info {
+#define	BNXT_PF_FID_INVALID	0xFFFF
+	uint16_t		fid;
+	uint16_t		vnic;
+	uint16_t		port_id;
+	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
+};
+
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	((bp)->pf->max_vfs)
@@ -738,6 +746,7 @@ struct bnxt {
 #define BNXT_OUTER_TPID_BD_SHFT	16
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
+	struct bnxt_parent_info	*parent;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index cddba17bd..b765cbadb 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -96,6 +96,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+
 static const char *const bnxt_dev_args[] = {
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
@@ -172,6 +173,11 @@ uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 	return bnxt_rss_ctxts(bp) * BNXT_RSS_ENTRIES_PER_CTX_THOR;
 }
 
+static void bnxt_free_parent_info(struct bnxt *bp)
+{
+	rte_free(bp->parent);
+}
+
 static void bnxt_free_pf_info(struct bnxt *bp)
 {
 	rte_free(bp->pf);
@@ -222,6 +228,16 @@ static void bnxt_free_mem(struct bnxt *bp, bool reconfig)
 	bp->grp_info = NULL;
 }
 
+static int bnxt_alloc_parent_info(struct bnxt *bp)
+{
+	bp->parent = rte_zmalloc("bnxt_parent_info",
+				 sizeof(struct bnxt_parent_info), 0);
+	if (bp->parent == NULL)
+		return -ENOMEM;
+
+	return 0;
+}
+
 static int bnxt_alloc_pf_info(struct bnxt *bp)
 {
 	bp->pf = rte_zmalloc("bnxt_pf_info", sizeof(struct bnxt_pf_info), 0);
@@ -1321,6 +1337,7 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	bnxt_free_cos_queues(bp);
 	bnxt_free_link_info(bp);
 	bnxt_free_pf_info(bp);
+	bnxt_free_parent_info(bp);
 
 	eth_dev->dev_ops = NULL;
 	eth_dev->rx_pkt_burst = NULL;
@@ -5209,6 +5226,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_port_mac_qcfg(bp);
 
+	bnxt_hwrm_parent_pf_qcfg(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
@@ -5527,6 +5546,10 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev, void *params __rte_unused)
 	if (rc)
 		goto error_free;
 
+	rc = bnxt_alloc_parent_info(bp);
+	if (rc)
+		goto error_free;
+
 	rc = bnxt_alloc_hwrm_resources(bp);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index ed42e58d4..347e1c71e 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3094,6 +3094,48 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp)
+{
+	struct hwrm_func_qcfg_input req = {0};
+	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	int rc;
+
+	if (!BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	if (!bp->parent)
+		return -EINVAL;
+
+	bp->parent->fid = BNXT_PF_FID_INVALID;
+
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
+
+	req.fid = rte_cpu_to_le_16(0xfffe); /* Request parent PF information. */
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	memcpy(bp->parent->mac_addr, resp->mac_address, RTE_ETHER_ADDR_LEN);
+	bp->parent->vnic = rte_le_to_cpu_16(resp->dflt_vnic_id);
+	bp->parent->fid = rte_le_to_cpu_16(resp->fid);
+	bp->parent->port_id = rte_le_to_cpu_16(resp->port_id);
+
+	/* FIXME: Temporary workaround - remove when firmware issue is fixed. */
+	if (bp->parent->vnic == 0) {
+		PMD_DRV_LOG(ERR, "Error: parent VNIC unavailable.\n");
+		/* Use hard-coded values appropriate for current Wh+ fw. */
+		if (bp->parent->fid == 2)
+			bp->parent->vnic = 0x100;
+		else
+			bp->parent->vnic = 1;
+	}
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 8d19998df..ef8997500 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -274,4 +274,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 			       uint16_t *vnic_id);
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
+int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 #endif
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 05/51] net/bnxt: modify port db dev interface
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (3 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 06/51] net/bnxt: get port and function info Ajit Khaparde
                           ` (47 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Modify ulp_port_db_dev_port_intf_update prototype to take
"struct rte_eth_dev *" as the second parameter.

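A minimal before/after sketch of the call path, using only the signatures
visible in the diff below; the port database now derives the bnxt private
data from the rte_eth_dev instead of receiving it directly:

	/* Caller side (bnxt_ulp_init): pass the ethdev, not the bnxt struct. */
	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);

	/* Callee side: recover the driver private data and port id locally. */
	struct bnxt *bp = eth_dev->data->dev_private;
	uint32_t port_id = eth_dev->data->port_id;
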
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    | 4 ++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 5 +++--
 drivers/net/bnxt/tf_ulp/ulp_port_db.h | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 0c3c638ce..c7281ab9a 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -548,7 +548,7 @@ bnxt_ulp_init(struct bnxt *bp)
 		}
 
 		/* update the port database */
-		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+		rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 		if (rc) {
 			BNXT_TF_DBG(ERR,
 				    "Failed to update port database\n");
@@ -584,7 +584,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	}
 
 	/* update the port database */
-	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp);
+	rc = ulp_port_db_dev_port_intf_update(bp->ulp_ctx, bp->eth_dev);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to update port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index e3b924289..66b584026 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -104,10 +104,11 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp)
+					 struct rte_eth_dev *eth_dev)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint32_t port_id = bp->eth_dev->data->port_id;
+	struct bnxt *bp = eth_dev->data->dev_private;
+	uint32_t port_id = eth_dev->data->port_id;
 	uint32_t ifindex;
 	struct ulp_interface_info *intf;
 	int32_t rc;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 271c29a47..929a5a510 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -71,7 +71,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
  * Returns 0 on success or negative number on failure.
  */
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
-					 struct bnxt *bp);
+					 struct rte_eth_dev *eth_dev);
 
 /*
  * Api to get the ulp ifindex for a given device port.
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 06/51] net/bnxt: get port and function info
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (4 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
                           ` (46 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kalesh AP, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Add helper functions to get port- and function-related information
such as parif, physical port id and vport id.
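
A hedged usage sketch of the new helpers, assuming only the prototypes
added to bnxt.h in this patch; the wrapper function below is hypothetical:

	/* Hypothetical helper: log the new per-port attributes of a port. */
	static void example_dump_port_info(uint16_t dpdk_port)
	{
		enum bnxt_ulp_intf_type type = bnxt_get_interface_type(dpdk_port);
		uint16_t parif = bnxt_get_parif(dpdk_port);
		uint16_t phy_port = bnxt_get_phy_port_id(dpdk_port);
		uint16_t vport = bnxt_get_vport(dpdk_port); /* 1 << phy_port */

		PMD_DRV_LOG(INFO, "type=%d parif=%u phy_port=%u vport=%u\n",
			    (int)type, parif, phy_port, vport);
	}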

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  8 ++++
 drivers/net/bnxt/bnxt_ethdev.c           | 58 ++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h | 10 ++++
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    | 10 ----
 4 files changed, 76 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 2b87899a4..0bdf8f5ba 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -855,6 +855,9 @@ extern const struct rte_flow_ops bnxt_flow_ops;
 	} \
 } while (0)
 
+#define	BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)	\
+		((eth_dev)->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+
 extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG_RAW(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
@@ -870,6 +873,11 @@ void bnxt_ulp_deinit(struct bnxt *bp);
 uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 uint16_t bnxt_get_fw_func_id(uint16_t port);
+uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_phy_port_id(uint16_t port);
+uint16_t bnxt_get_vport(uint16_t port);
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port);
 
 void bnxt_cancel_fc_thread(struct bnxt *bp);
 void bnxt_flow_cnt_alarm_cb(void *arg);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index b765cbadb..a2adf15b0 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -28,6 +28,7 @@
 #include "bnxt_vnic.h"
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
+#include "bnxt_tf_common.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -5100,6 +5101,63 @@ bnxt_get_fw_func_id(uint16_t port)
 	return bp->fw_fid;
 }
 
+enum bnxt_ulp_intf_type
+bnxt_get_interface_type(uint16_t port)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev))
+		return BNXT_ULP_INTF_TYPE_VF_REP;
+
+	bp = eth_dev->data->dev_private;
+	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
+			   : BNXT_ULP_INTF_TYPE_VF;
+}
+
+uint16_t
+bnxt_get_phy_port_id(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->pf->port_id : bp->parent->port_id;
+}
+
+uint16_t
+bnxt_get_parif(uint16_t port_id)
+{
+	struct bnxt_vf_representor *vfr;
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		vfr = eth_dev->data->dev_private;
+		eth_dev = vfr->parent_dev;
+	}
+
+	bp = eth_dev->data->dev_private;
+
+	return BNXT_PF(bp) ? bp->fw_fid - 1 : bp->parent->fid - 1;
+}
+
+uint16_t
+bnxt_get_vport(uint16_t port_id)
+{
+	return (1 << bnxt_get_phy_port_id(port_id));
+}
+
 static void bnxt_alloc_error_recovery_info(struct bnxt *bp)
 {
 	struct bnxt_error_recovery_info *info = bp->recovery_info;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f41757908..f772d4919 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -44,6 +44,16 @@ enum ulp_direction_type {
 	ULP_DIR_EGRESS,
 };
 
+/* enumeration of the interface types */
+enum bnxt_ulp_intf_type {
+	BNXT_ULP_INTF_TYPE_INVALID = 0,
+	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_VF,
+	BNXT_ULP_INTF_TYPE_PF_REP,
+	BNXT_ULP_INTF_TYPE_VF_REP,
+	BNXT_ULP_INTF_TYPE_LAST
+};
+
 struct bnxt_ulp_mark_tbl *
 bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 929a5a510..604c4385a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -10,16 +10,6 @@
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
 
-/* enumeration of the interface types */
-enum bnxt_ulp_intf_type {
-	BNXT_ULP_INTF_TYPE_INVALID = 0,
-	BNXT_ULP_INTF_TYPE_PF = 1,
-	BNXT_ULP_INTF_TYPE_VF,
-	BNXT_ULP_INTF_TYPE_PF_REP,
-	BNXT_ULP_INTF_TYPE_VF_REP,
-	BNXT_ULP_INTF_TYPE_LAST
-};
-
 /* Structure for the Port database resource information. */
 struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 07/51] net/bnxt: add support for hwrm port phy qcaps
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (5 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 06/51] net/bnxt: get port and function info Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
                           ` (45 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kalesh AP, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue HWRM_PORT_PHY_QCAPS to the firmware to get the physical
port count of the device.
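
A short sketch of how the new query is wired into init, following the diff
below; untrusted VFs skip the query and the helper simply returns 0:

	/* In bnxt_init_fw(): best-effort query of the physical port count. */
	bnxt_hwrm_port_phy_qcaps(bp);

	/* On success the count is cached in the new field: */
	uint8_t num_ports = bp->port_cnt;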

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c |  2 ++
 drivers/net/bnxt/bnxt_hwrm.c   | 22 ++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 26 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0bdf8f5ba..65862abdc 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -747,6 +747,7 @@ struct bnxt {
 	uint32_t		outer_tpid_bd;
 	struct bnxt_pf_info	*pf;
 	struct bnxt_parent_info	*parent;
+	uint8_t			port_cnt;
 	uint8_t			vxlan_port_cnt;
 	uint8_t			geneve_port_cnt;
 	uint16_t		vxlan_port;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index a2adf15b0..72cc2daa6 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5286,6 +5286,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 
 	bnxt_hwrm_parent_pf_qcfg(bp);
 
+	bnxt_hwrm_port_phy_qcaps(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 347e1c71e..e6a28d07c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1330,6 +1330,28 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	return rc;
 }
 
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp)
+{
+	int rc = 0;
+	struct hwrm_port_phy_qcaps_input req = {0};
+	struct hwrm_port_phy_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
+		return 0;
+
+	HWRM_PREP(&req, HWRM_PORT_PHY_QCAPS, BNXT_USE_CHIMP_MB);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	bp->port_cnt = resp->port_cnt;
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 static bool bnxt_find_lossy_profile(struct bnxt *bp)
 {
 	int i = 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index ef8997500..87cd40779 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -275,4 +275,5 @@ int bnxt_hwrm_get_dflt_vnic_id(struct bnxt *bp, uint16_t fid,
 int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
+int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
 #endif
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 08/51] net/bnxt: modify port db to handle more info
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (6 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 09/51] net/bnxt: add support for exact match Ajit Khaparde
                           ` (44 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur, Kishore Padmanabha

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

In addition to func_svif, func_id and vnic, port_db now stores and
retrieves func_spif, func_parif, phy_port_id, port_svif, port_spif,
port_parif and port_vport. New helper functions have been added to
retrieve these fields.
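
A hedged usage sketch of the new port database getters, assuming only the
prototypes added to ulp_port_db.h below; ulp_ctx and ifindex are assumed to
have been obtained earlier (for example via
ulp_port_db_dev_port_to_ulp_index()):

	uint16_t svif, spif, parif, vport;

	/* Direction-dependent lookups: egress resolves from the function
	 * entry, ingress resolves through the physical port entry.
	 */
	ulp_port_db_svif_get(ulp_ctx, ifindex, ULP_DIR_EGRESS, &svif);
	ulp_port_db_spif_get(ulp_ctx, ifindex, ULP_DIR_EGRESS, &spif);
	ulp_port_db_parif_get(ulp_ctx, ifindex, ULP_DIR_EGRESS, &parif);

	/* vport is direction-independent. */
	ulp_port_db_vport_get(ulp_ctx, ifindex, &vport);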

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_port_db.c | 145 +++++++++++++++++++++-----
 drivers/net/bnxt/tf_ulp/ulp_port_db.h |  72 ++++++++++---
 2 files changed, 179 insertions(+), 38 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 66b584026..ea27ef41f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -106,13 +106,12 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 					 struct rte_eth_dev *eth_dev)
 {
-	struct bnxt_ulp_port_db *port_db;
-	struct bnxt *bp = eth_dev->data->dev_private;
 	uint32_t port_id = eth_dev->data->port_id;
-	uint32_t ifindex;
+	struct ulp_phy_port_info *port_data;
+	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	uint32_t ifindex;
 	int32_t rc;
-	struct bnxt_vnic_info *vnic;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db) {
@@ -133,22 +132,22 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 
 	/* update the interface details */
 	intf = &port_db->ulp_intf_list[ifindex];
-	if (BNXT_PF(bp) || BNXT_VF(bp)) {
-		if (BNXT_PF(bp)) {
-			intf->type = BNXT_ULP_INTF_TYPE_PF;
-			intf->port_svif = bp->port_svif;
-		} else {
-			intf->type = BNXT_ULP_INTF_TYPE_VF;
-		}
-		intf->func_id = bp->fw_fid;
-		intf->func_svif = bp->func_svif;
-		vnic = BNXT_GET_DEFAULT_VNIC(bp);
-		if (vnic)
-			intf->default_vnic = vnic->fw_vnic_id;
-		intf->bp = bp;
-		memcpy(intf->mac_addr, bp->mac_addr, sizeof(intf->mac_addr));
-	} else {
-		BNXT_TF_DBG(ERR, "Invalid interface type\n");
+
+	intf->type = bnxt_get_interface_type(port_id);
+
+	intf->func_id = bnxt_get_fw_func_id(port_id);
+	intf->func_svif = bnxt_get_svif(port_id, 1);
+	intf->func_spif = bnxt_get_phy_port_id(port_id);
+	intf->func_parif = bnxt_get_parif(port_id);
+	intf->default_vnic = bnxt_get_vnic_id(port_id);
+	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+
+	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
+		port_data = &port_db->phy_port_list[intf->phy_port_id];
+		port_data->port_svif = bnxt_get_svif(port_id, 0);
+		port_data->port_spif = bnxt_get_phy_port_id(port_id);
+		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_vport = bnxt_get_vport(port_id);
 	}
 
 	return 0;
@@ -209,7 +208,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 }
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -225,16 +224,88 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS)
+	if (dir == ULP_DIR_EGRESS) {
 		*svif = port_db->ulp_intf_list[ifindex].func_svif;
-	else
-		*svif = port_db->ulp_intf_list[ifindex].port_svif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*svif = port_db->phy_port_list[phy_port_id].port_svif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *spif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*spif = port_db->phy_port_list[phy_port_id].port_spif;
+	}
+
+	return 0;
+}
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex,
+		     uint32_t dir,
+		     uint16_t *parif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (dir == ULP_DIR_EGRESS) {
+		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	} else {
+		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		*parif = port_db->phy_port_list[phy_port_id].port_parif;
+	}
+
 	return 0;
 }
 
@@ -262,3 +333,29 @@ ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
 	return 0;
 }
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint16_t *vport)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint16_t phy_port_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+	*vport = port_db->phy_port_list[phy_port_id].port_vport;
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 604c4385a..87de3bcbc 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -15,11 +15,17 @@ struct ulp_interface_info {
 	enum bnxt_ulp_intf_type	type;
 	uint16_t		func_id;
 	uint16_t		func_svif;
-	uint16_t		port_svif;
+	uint16_t		func_spif;
+	uint16_t		func_parif;
 	uint16_t		default_vnic;
-	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
-	/* back pointer to the bnxt driver, it is null for rep ports */
-	struct bnxt		*bp;
+	uint16_t		phy_port_id;
+};
+
+struct ulp_phy_port_info {
+	uint16_t	port_svif;
+	uint16_t	port_spif;
+	uint16_t	port_parif;
+	uint16_t	port_vport;
 };
 
 /* Structure for the Port database */
@@ -29,6 +35,7 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
 };
 
 /*
@@ -74,8 +81,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
-				  uint32_t port_id,
-				  uint32_t *ifindex);
+				  uint32_t port_id, uint32_t *ifindex);
 
 /*
  * Api to get the function id for a given ulp ifindex.
@@ -88,11 +94,10 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex,
-			    uint16_t *func_id);
+			    uint32_t ifindex, uint16_t *func_id);
 
 /*
- * Api to get the svid for a given ulp ifindex.
+ * Api to get the svif for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
@@ -103,9 +108,36 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
-		     uint32_t ifindex,
-		     uint32_t dir,
-		     uint16_t *svif);
+		     uint32_t ifindex, uint32_t dir, uint16_t *svif);
+
+/*
+ * Api to get the spif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * spif [out] the spif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
+		     uint32_t ifindex, uint32_t dir, uint16_t *spif);
+
+
+/*
+ * Api to get the parif for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * dir [in] the direction for the flow.
+ * parif [out] the parif of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex, uint32_t dir, uint16_t *parif);
 
 /*
  * Api to get the vnic id for a given ulp ifindex.
@@ -118,7 +150,19 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex,
-			     uint16_t *vnic);
+			     uint32_t ifindex, uint16_t *vnic);
+
+/*
+ * Api to get the vport id for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ * vport [out] the port of the given ifindex.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+		      uint32_t ifindex,	uint16_t *vport);
 
 #endif /* _ULP_PORT_DB_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 09/51] net/bnxt: add support for exact match
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (7 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 10/51] net/bnxt: use HWRM direct for EM insert and delete Ajit Khaparde
                           ` (43 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Add Exact Match support
- Create an EM table pool of memory indices (a simplified sketch of the
  index-pool idea follows this list)
- Add an API to insert internal exact match entries
- Send EM internal insert and delete requests to the firmware
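
A simplified, self-contained illustration of the index-pool idea referenced
above (this is not the driver's stack.h API; all names below are
hypothetical):

	#include <stdbool.h>
	#include <stdint.h>

	/* Hypothetical pool: free EM record indices are pre-loaded into a
	 * stack; an insert pops a free index and a delete pushes it back.
	 */
	struct em_idx_pool {
		uint32_t *items;	/* backing array of free indices */
		int	  top;		/* last valid entry, -1 if empty */
	};

	static bool em_idx_alloc(struct em_idx_pool *pool, uint32_t *idx)
	{
		if (pool->top < 0)
			return false;	/* pool exhausted */
		*idx = pool->items[pool->top--];
		return true;
	}

	static void em_idx_free(struct em_idx_pool *pool, uint32_t idx)
	{
		pool->items[++pool->top] = idx;
	}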

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++++++++---
 drivers/net/bnxt/tf_core/hwrm_tf.h            |    9 +
 drivers/net/bnxt/tf_core/lookup3.h            |    1 -
 drivers/net/bnxt/tf_core/stack.c              |    8 +
 drivers/net/bnxt/tf_core/stack.h              |   10 +
 drivers/net/bnxt/tf_core/tf_core.c            |  144 +-
 drivers/net/bnxt/tf_core/tf_core.h            |  383 +-
 drivers/net/bnxt/tf_core/tf_em.c              |   98 +-
 drivers/net/bnxt/tf_core/tf_em.h              |   31 +
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
 drivers/net/bnxt/tf_core/tf_msg.c             |   86 +-
 drivers/net/bnxt/tf_core/tf_msg.h             |   13 +
 drivers/net/bnxt/tf_core/tf_session.h         |   18 +
 drivers/net/bnxt/tf_core/tf_tbl.c             |   99 +-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   57 +-
 drivers/net/bnxt/tf_core/tfp.h                |  123 +-
 16 files changed, 3493 insertions(+), 690 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 7e30c9ffc..30516eb75 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -611,6 +611,10 @@ struct cmd_nums {
 	#define HWRM_FUNC_VF_BW_QCFG                      UINT32_C(0x196)
 	/* Queries pf ids belong to specified host(s) */
 	#define HWRM_FUNC_HOST_PF_IDS_QUERY               UINT32_C(0x197)
+	/* Queries extended stats per function */
+	#define HWRM_FUNC_QSTATS_EXT                      UINT32_C(0x198)
+	/* Queries extended statistics context */
+	#define HWRM_STAT_EXT_CTX_QUERY                   UINT32_C(0x199)
 	/* Experimental */
 	#define HWRM_SELFTEST_QLIST                       UINT32_C(0x200)
 	/* Experimental */
@@ -647,41 +651,49 @@ struct cmd_nums {
 	/* Experimental */
 	#define HWRM_TF_SESSION_ATTACH                    UINT32_C(0x2c7)
 	/* Experimental */
-	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2c8)
+	#define HWRM_TF_SESSION_REGISTER                  UINT32_C(0x2c8)
 	/* Experimental */
-	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2c9)
+	#define HWRM_TF_SESSION_UNREGISTER                UINT32_C(0x2c9)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2ca)
+	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2ca)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cb)
+	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2cb)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2cc)
+	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2cc)
 	/* Experimental */
-	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cd)
+	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cd)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2d0)
+	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2ce)
 	/* Experimental */
-	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2d1)
+	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cf)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2da)
+	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2da)
 	/* Experimental */
-	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2db)
+	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2db)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2dc)
+	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2e4)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2dd)
+	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2e5)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2de)
+	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2e6)
 	/* Experimental */
-	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2df)
+	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2e7)
 	/* Experimental */
-	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2ee)
+	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2e8)
 	/* Experimental */
-	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2ef)
+	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2e9)
 	/* Experimental */
-	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2f0)
+	#define HWRM_TF_EM_INSERT                         UINT32_C(0x2ea)
 	/* Experimental */
-	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2f1)
+	#define HWRM_TF_EM_DELETE                         UINT32_C(0x2eb)
+	/* Experimental */
+	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2f8)
+	/* Experimental */
+	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2f9)
+	/* Experimental */
+	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2fa)
+	/* Experimental */
+	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2fb)
 	/* Experimental */
 	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
@@ -715,6 +727,13 @@ struct cmd_nums {
 	#define HWRM_DBG_CRASHDUMP_ERASE                  UINT32_C(0xff1e)
 	/* Send driver debug information to firmware */
 	#define HWRM_DBG_DRV_TRACE                        UINT32_C(0xff1f)
+	/* Query debug capabilities of firmware */
+	#define HWRM_DBG_QCAPS                            UINT32_C(0xff20)
+	/* Retrieve debug settings of firmware */
+	#define HWRM_DBG_QCFG                             UINT32_C(0xff21)
+	/* Set destination parameters for crashdump medium */
+	#define HWRM_DBG_CRASHDUMP_MEDIUM_CFG             UINT32_C(0xff22)
+	#define HWRM_NVM_REQ_ARBITRATION                  UINT32_C(0xffed)
 	/* Experimental */
 	#define HWRM_NVM_FACTORY_DEFAULTS                 UINT32_C(0xffee)
 	#define HWRM_NVM_VALIDATE_OPTION                  UINT32_C(0xffef)
@@ -914,8 +933,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 30
-#define HWRM_VERSION_STR "1.10.1.30"
+#define HWRM_VERSION_RSVD 45
+#define HWRM_VERSION_STR "1.10.1.45"
 
 /****************
  * hwrm_ver_get *
@@ -2292,6 +2311,35 @@ struct cmpl_base {
 	 * Completion of TX packet. Length = 16B
 	 */
 	#define CMPL_BASE_TYPE_TX_L2             UINT32_C(0x0)
+	/*
+	 * NO-OP completion:
+	 * Completion of NO-OP. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_NO_OP             UINT32_C(0x1)
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of coalesced TX packet. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_COAL        UINT32_C(0x2)
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of PTP TX packet. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_TX_L2_PTP         UINT32_C(0x3)
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_TPA_START_V2   UINT32_C(0xd)
+	/*
+	 * RX L2 V2 completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
+	 */
+	#define CMPL_BASE_TYPE_RX_L2_V2          UINT32_C(0xf)
 	/*
 	 * RX L2 completion:
 	 * Completion of and L2 RX packet. Length = 32B
@@ -2321,6 +2369,24 @@ struct cmpl_base {
 	 * Length = 16B
 	 */
 	#define CMPL_BASE_TYPE_STAT_EJECT        UINT32_C(0x1a)
+	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by
+	 * the Primate and processed by the VEE hardware to ensure that
+	 * all completions on a VEE function have been processed by the
+	 * VEE hardware before FLR process is completed.
+	 */
+	#define CMPL_BASE_TYPE_VEE_FLUSH         UINT32_C(0x1c)
+	/*
+	 * Mid Path Short Completion :
+	 * Completion of a Mid Path Command. Length = 16B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_SHORT    UINT32_C(0x1e)
+	/*
+	 * Mid Path Long Completion :
+	 * Completion of a Mid Path Command. Length = 32B
+	 */
+	#define CMPL_BASE_TYPE_MID_PATH_LONG     UINT32_C(0x1f)
 	/*
 	 * HWRM Command Completion:
 	 * Completion of an HWRM command.
@@ -2398,7 +2464,9 @@ struct tx_cmpl {
 	uint16_t	unused_0;
 	/*
 	 * This is a copy of the opaque field from the first TX BD of this
-	 * transmitted packet.
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
 	 */
 	uint32_t	opaque;
 	uint16_t	errors_v;
@@ -2407,58 +2475,352 @@ struct tx_cmpl {
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	#define TX_CMPL_V                              UINT32_C(0x1)
-	#define TX_CMPL_ERRORS_MASK                    UINT32_C(0xfffe)
-	#define TX_CMPL_ERRORS_SFT                     1
+	#define TX_CMPL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_ERRORS_SFT                         1
 	/*
 	 * This error indicates that there was some sort of problem
 	 * with the BDs for the packet.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK        UINT32_C(0xe)
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_SFT             1
 	/* No error */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR      (UINT32_C(0x0) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
 	/*
 	 * Bad Format:
 	 * BDs were not formatted correctly.
 	 */
-	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT       (UINT32_C(0x2) << 1)
+	#define TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
 	#define TX_CMPL_ERRORS_BUFFER_ERROR_LAST \
 		TX_CMPL_ERRORS_BUFFER_ERROR_BAD_FMT
 	/*
 	 * When this bit is '1', it indicates that the length of
 	 * the packet was zero. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT          UINT32_C(0x10)
+	#define TX_CMPL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
 	/*
 	 * When this bit is '1', it indicates that the packet
 	 * was longer than the programmed limit in TDI. No
 	 * packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH      UINT32_C(0x20)
+	#define TX_CMPL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
 	/*
 	 * When this bit is '1', it indicates that one or more of the
 	 * BDs associated with this packet generated a PCI error.
 	 * This probably means the address was not valid.
 	 */
-	#define TX_CMPL_ERRORS_DMA_ERROR                UINT32_C(0x40)
+	#define TX_CMPL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
 	/*
 	 * When this bit is '1', it indicates that the packet was longer
 	 * than indicated by the hint. No packet was transmitted.
 	 */
-	#define TX_CMPL_ERRORS_HINT_TOO_SHORT           UINT32_C(0x80)
+	#define TX_CMPL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
 	/*
 	 * When this bit is '1', it indicates that the packet was
 	 * dropped due to Poison TLP error on one or more of the
 	 * TLPs in the PXP completion.
 	 */
-	#define TX_CMPL_ERRORS_POISON_TLP_ERROR         UINT32_C(0x100)
+	#define TX_CMPL_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such completions
+	 * are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
 	/* unused2 is 16 b */
 	uint16_t	unused_1;
 	/* unused3 is 32 b */
 	uint32_t	unused_2;
 } __rte_packed;
 
+/* tx_cmpl_coal (size:128b/16B) */
+struct tx_cmpl_coal {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_COAL_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_COAL_TYPE_SFT        0
+	/*
+	 * TX L2 coalesced completion:
+	 * Completion of TX packet. Length = 16B
+	 */
+	#define TX_CMPL_COAL_TYPE_TX_L2_COAL   UINT32_C(0x2)
+	#define TX_CMPL_COAL_TYPE_LAST        TX_CMPL_COAL_TYPE_TX_L2_COAL
+	#define TX_CMPL_COAL_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_COAL_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_COAL_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had not push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_COAL_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of the packet
+	 * which corresponds with the reported sq_cons_idx. Note that, with
+	 * coalesced completions, completions are generated for only some of the
+	 * packets. The driver will see the opaque field for only those packets.
+	 * Note that, if the packet was described by a short CSO or short CSO
+	 * inline BD, then the 16-bit opaque field from the short CSO BD will
+	 * appear in the bottom 16 bits of this field. For TX rings with
+	 * completion coalescing enabled (which would use the coalesced
+	 * completion record), it is suggested that the driver populate the
+	 * opaque field to indicate the specific TX ring with which the
+	 * completion is associated, then utilize the opaque and sq_cons_idx
+	 * fields in the coalesced completion record to determine the specific
+	 * packets that are to be completed on that ring.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_COAL_V                                  UINT32_C(0x1)
+	#define TX_CMPL_COAL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define TX_CMPL_COAL_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_COAL_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_COAL_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_COAL_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_COAL_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_COAL_ERRORS_POISON_TLP_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped
+	 * due to a transient internal error in TDC. The packet or LSO can
+	 * be retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_COAL_ERRORS_INTERNAL_ERROR \
+		UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_COAL_ERRORS_TIMESTAMP_INVALID_ERROR \
+		UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	uint32_t	sq_cons_idx;
+	/*
+	 * This value is SQ index for the start of the packet following the
+	 * last completed packet.
+	 */
+	#define TX_CMPL_COAL_SQ_CONS_IDX_MASK UINT32_C(0xffffff)
+	#define TX_CMPL_COAL_SQ_CONS_IDX_SFT 0
+} __rte_packed;
+
+/* tx_cmpl_ptp (size:128b/16B) */
+struct tx_cmpl_ptp {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define TX_CMPL_PTP_TYPE_MASK       UINT32_C(0x3f)
+	#define TX_CMPL_PTP_TYPE_SFT        0
+	/*
+	 * TX L2 PTP completion:
+	 * Completion of TX packet. Length = 32B
+	 */
+	#define TX_CMPL_PTP_TYPE_TX_L2_PTP    UINT32_C(0x2)
+	#define TX_CMPL_PTP_TYPE_LAST        TX_CMPL_PTP_TYPE_TX_L2_PTP
+	#define TX_CMPL_PTP_FLAGS_MASK      UINT32_C(0xffc0)
+	#define TX_CMPL_PTP_FLAGS_SFT       6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define TX_CMPL_PTP_FLAGS_ERROR      UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet completed
+	 * was transmitted using the push acceleration data provided
+	 * by the driver. When this bit is '0', it indicates that the
+	 * packet had not push acceleration data written or was executed
+	 * as a normal packet even though push data was provided.
+	 */
+	#define TX_CMPL_PTP_FLAGS_PUSH       UINT32_C(0x80)
+	/* unused1 is 16 b */
+	uint16_t	unused_0;
+	/*
+	 * This is a copy of the opaque field from the first TX BD of this
+	 * transmitted packet. Note that, if the packet was described by a short
+	 * CSO or short CSO inline BD, then the 16-bit opaque field from the
+	 * short CSO BD will appear in the bottom 16 bits of this field.
+	 */
+	uint32_t	opaque;
+	uint16_t	errors_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define TX_CMPL_PTP_V                                  UINT32_C(0x1)
+	#define TX_CMPL_PTP_ERRORS_MASK                        UINT32_C(0xfffe)
+	#define TX_CMPL_PTP_ERRORS_SFT                         1
+	/*
+	 * This error indicates that there was some sort of problem
+	 * with the BDs for the packet.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_MASK            UINT32_C(0xe)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_SFT             1
+	/* No error */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT \
+		(UINT32_C(0x2) << 1)
+	#define TX_CMPL_PTP_ERRORS_BUFFER_ERROR_LAST \
+		TX_CMPL_PTP_ERRORS_BUFFER_ERROR_BAD_FMT
+	/*
+	 * When this bit is '1', it indicates that the length of
+	 * the packet was zero. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_ZERO_LENGTH_PKT              UINT32_C(0x10)
+	/*
+	 * When this bit is '1', it indicates that the packet
+	 * was longer than the programmed limit in TDI. No
+	 * packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_EXCESSIVE_BD_LENGTH          UINT32_C(0x20)
+	/*
+	 * When this bit is '1', it indicates that one or more of the
+	 * BDs associated with this packet generated a PCI error.
+	 * This probably means the address was not valid.
+	 */
+	#define TX_CMPL_PTP_ERRORS_DMA_ERROR                    UINT32_C(0x40)
+	/*
+	 * When this bit is '1', it indicates that the packet was longer
+	 * than indicated by the hint. No packet was transmitted.
+	 */
+	#define TX_CMPL_PTP_ERRORS_HINT_TOO_SHORT               UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that the packet was
+	 * dropped due to Poison TLP error on one or more of the
+	 * TLPs in the PXP completion.
+	 */
+	#define TX_CMPL_PTP_ERRORS_POISON_TLP_ERROR             UINT32_C(0x100)
+	/*
+	 * When this bit is '1', it indicates that the packet was dropped due
+	 * to a transient internal error in TDC. The packet or LSO can be
+	 * retried and may transmit successfully on a subsequent attempt.
+	 */
+	#define TX_CMPL_PTP_ERRORS_INTERNAL_ERROR               UINT32_C(0x200)
+	/*
+	 * When this bit is '1', it was not possible to collect a timestamp
+	 * for a PTP completion, in which case the timestamp_hi and
+	 * timestamp_lo fields are invalid. When this bit is '0' for a PTP
+	 * completion, the timestamp_hi and timestamp_lo fields are valid.
+	 * RJRN will copy the value of this bit into the field of the same
+	 * name in all TX completions, regardless of whether such
+	 * completions are PTP completions or other TX completions.
+	 */
+	#define TX_CMPL_PTP_ERRORS_TIMESTAMP_INVALID_ERROR      UINT32_C(0x400)
+	/* unused2 is 16 b */
+	uint16_t	unused_1;
+	/*
+	 * This is timestamp value (lower 32bits) read from PM for the PTP
+	 * timestamp enabled packet.
+	 */
+	uint32_t	timestamp_lo;
+} __rte_packed;
+
+/* tx_cmpl_ptp_hi (size:128b/16B) */
+struct tx_cmpl_ptp_hi {
+	/*
+	 * This is timestamp value (lower 32bits) read from PM for the PTP
+	 * timestamp enabled packet.
+	 */
+	uint16_t	timestamp_hi[3];
+	uint16_t	reserved16;
+	uint64_t	v2;
+	/*
+	 * This value is written by the NIC such that it will be different for
+	 * each pass through the completion queue.The even passes will write 1.
+	 * The odd passes will write 0
+	 */
+	#define TX_CMPL_PTP_HI_V2     UINT32_C(0x1)
+} __rte_packed;
+
 /* rx_pkt_cmpl (size:128b/16B) */
 struct rx_pkt_cmpl {
 	uint16_t	flags_type;
@@ -3003,12 +3365,8 @@ struct rx_pkt_cmpl_hi {
 	#define RX_PKT_CMPL_REORDER_SFT 0
 } __rte_packed;
 
-/*
- * This TPA completion structure is used on devices where the
- * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
- */
-/* rx_tpa_start_cmpl (size:128b/16B) */
-struct rx_tpa_start_cmpl {
+/* rx_pkt_v2_cmpl (size:128b/16B) */
+struct rx_pkt_v2_cmpl {
 	uint16_t	flags_type;
 	/*
 	 * This field indicates the exact type of the completion.
@@ -3017,84 +3375,143 @@ struct rx_tpa_start_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	#define RX_PKT_V2_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_PKT_V2_CMPL_TYPE_SFT                       0
 	/*
-	 * RX L2 TPA Start Completion:
-	 * Completion at the beginning of a TPA operation.
-	 * Length = 32B
+	 * RX L2 V2 completion:
+	 * Completion of an L2 RX packet. Length = 32B
+	 * This is the new version of the RX_L2 completion used in SR2
+	 * and later chips.
 	 */
-	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
-	#define RX_TPA_START_CMPL_TYPE_LAST \
-		RX_TPA_START_CMPL_TYPE_RX_TPA_START
-	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_START_CMPL_FLAGS_SFT                6
-	/* This bit will always be '0' for TPA start completions. */
-	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_PKT_V2_CMPL_TYPE_RX_L2_V2                    UINT32_C(0xf)
+	#define RX_PKT_V2_CMPL_TYPE_LAST \
+		RX_PKT_V2_CMPL_TYPE_RX_L2_V2
+	#define RX_PKT_V2_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_PKT_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an
+	 * error of some type. Type of error is indicated in
+	 * error_flags.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Normal:
+	 * Packet was placed using normal algorithm.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_NORMAL \
+		(UINT32_C(0x0) << 7)
 	/*
 	 * Jumbo:
-	 * TPA Packet was placed using jumbo algorithm. This means
-	 * that the first buffer will be filled with data before
-	 * moving to aggregation buffers. Each aggregation buffer
-	 * will be filled before moving to the next aggregation
-	 * buffer.
+	 * Packet was placed using jumbo algorithm.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
 		(UINT32_C(0x1) << 7)
 	/*
 	 * Header/Data Separation:
 	 * Packet was placed using Header/Data separation algorithm.
 	 * The separation location is indicated by the itype field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
 	/*
-	 * GRO/Jumbo:
-	 * Packet will be placed using GRO/Jumbo where the first
-	 * packet is filled with data. Subsequent packets will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * Truncation:
+	 * Packet was placed using truncation algorithm. The
+	 * placed (truncated) length is indicated in the payload_offset
+	 * field. The original length is indicated in the len field.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
-		(UINT32_C(0x5) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION \
+		(UINT32_C(0x3) << 7)
+	#define RX_PKT_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_PKT_V2_CMPL_FLAGS_PLACEMENT_TRUNCATION
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_PKT_V2_CMPL_FLAGS_RSS_VALID                 UINT32_C(0x400)
 	/*
-	 * GRO/Header-Data Separation:
-	 * Packet will be placed using GRO/HDS where the header
-	 * is in the first packet.
-	 * Payload of each packet will be
-	 * placed such that any one packet does not span two
-	 * aggregation buffers unless it starts at the beginning of
-	 * an aggregation buffer.
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
-		(UINT32_C(0x6) << 7)
-	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* This bit is '1' if the RSS field in this completion is valid. */
-	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
-	/* unused is 1 b */
-	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	#define RX_PKT_V2_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_MASK                UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * Not Known:
+	 * Indicates that the packet type was not known.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_NOT_KNOWN \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * IP Packet:
+	 * Indicates that the packet was an IP packet, but further
+	 * classification was not possible.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_IP \
+		(UINT32_C(0x1) << 12)
 	/*
 	 * TCP Packet:
 	 * Indicates that the packet was IP and TCP.
+	 * This indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_TCP \
 		(UINT32_C(0x2) << 12)
-	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
-		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
 	/*
-	 * This value indicates the amount of packet data written to the
-	 * buffer the opaque field in this completion corresponds to.
+	 * UDP Packet:
+	 * Indicates that the packet was IP and UDP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_UDP \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * FCoE Packet:
+	 * Indicates that the packet was recognized as a FCoE.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_FCOE \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * RoCE Packet:
+	 * Indicates that the packet was recognized as a RoCE.
+	 * This also indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ROCE \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * ICMP Packet:
+	 * Indicates that the packet was recognized as ICMP.
+	 * This indicates that the payload_offset field is valid.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_ICMP \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * PtP packet wo/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_WO_TIMESTAMP \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * PtP packet w/timestamp:
+	 * Indicates that the packet was recognized as a PtP
+	 * packet and that a timestamp was taken for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP \
+		(UINT32_C(0x9) << 12)
+	#define RX_PKT_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_PKT_V2_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP
+	/*
+	 * This is the length of the data for the packet stored in the
+	 * buffer(s) identified by the opaque value. This includes
+	 * the packet BD and any associated buffer BDs. This does not include
+	 * the length of any data places in aggregation BDs.
 	 */
 	uint16_t	len;
 	/*
@@ -3102,19 +3519,597 @@ struct rx_tpa_start_cmpl {
 	 * corresponds to.
 	 */
 	uint32_t	opaque;
+	uint8_t	agg_bufs_v1;
 	/*
 	 * This value is written by the NIC such that it will be different
 	 * for each pass through the completion queue. The even passes
 	 * will write 1. The odd passes will write 0.
 	 */
-	uint8_t	v1;
+	#define RX_PKT_V2_CMPL_V1           UINT32_C(0x1)
 	/*
-	 * This value is written by the NIC such that it will be different
-	 * for each pass through the completion queue. The even passes
-	 * will write 1. The odd passes will write 0.
+	 * This value is the number of aggregation buffers that follow this
+	 * entry in the completion ring that are a part of this packet.
+	 * If the value is zero, then the packet is completely contained
+	 * in the buffer space provided for the packet in the RX ring.
 	 */
-	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
-	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
+	#define RX_PKT_V2_CMPL_AGG_BUFS_MASK UINT32_C(0x3e)
+	#define RX_PKT_V2_CMPL_AGG_BUFS_SFT 1
+	/* unused1 is 2 b */
+	#define RX_PKT_V2_CMPL_UNUSED1_MASK UINT32_C(0xc0)
+	#define RX_PKT_V2_CMPL_UNUSED1_SFT  6
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extrac_op[1:0],rss_profile_id[4:0],tuple_extrac_op[2]}.
+	 *
+	 * The value of tuple_extrac_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that 4-tuples values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	uint16_t	metadata1_payload_offset;
+	/*
+	 * This is data from the CFA as indicated by the meta_format field.
+	 * If truncation placement is not used, this value indicates the offset
+	 * in bytes from the beginning of the packet where the inner payload
+	 * starts. This value is valid for TCP, UDP, FCoE, and RoCE packets. If
+	 * truncation placement is used, this value represents the placed
+	 * (truncated) length of the packet.
+	 */
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_MASK    UINT32_C(0x1ff)
+	#define RX_PKT_V2_CMPL_PAYLOAD_OFFSET_SFT     0
+	/* This is data from the CFA as indicated by the meta_format field. */
+	#define RX_PKT_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_PKT_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN valid. */
+	#define RX_PKT_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC. When vee_cmpl_mode
+	 * is set in VNIC context, this is the lower 32b of the host address
+	 * from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
+
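As a quick orientation for readers new to these completion records, the following minimal sketch decodes a few fields of the low 16 bytes using the masks and shifts defined above. It assumes the patched hsi_struct_def_dpdk.h is on the include path and, for brevity, a little-endian host (a real consumer would convert with rte_le_to_cpu_*()); it is illustrative only, not the driver's receive path.

#include <stdint.h>
#include <stdio.h>
#include "hsi_struct_def_dpdk.h"	/* assumed name/location of these definitions */

/* Minimal sketch: decode a few fields of the low half of an RX packet V2
 * completion. Endianness conversion is omitted for brevity. */
static void decode_rx_pkt_v2(const struct rx_pkt_v2_cmpl *c)
{
	unsigned int itype = (c->flags_type & RX_PKT_V2_CMPL_FLAGS_ITYPE_MASK) >>
			     RX_PKT_V2_CMPL_FLAGS_ITYPE_SFT;
	/* Number of aggregation buffers that follow this entry. */
	unsigned int agg_bufs = (c->agg_bufs_v1 & RX_PKT_V2_CMPL_AGG_BUFS_MASK) >>
				RX_PKT_V2_CMPL_AGG_BUFS_SFT;
	/* payload_offset and metadata1 share one 16-bit word. */
	unsigned int payload_off = c->metadata1_payload_offset &
				   RX_PKT_V2_CMPL_PAYLOAD_OFFSET_MASK;
	unsigned int metadata1 = (c->metadata1_payload_offset &
				  RX_PKT_V2_CMPL_METADATA1_MASK) >>
				 RX_PKT_V2_CMPL_METADATA1_SFT;

	printf("itype=%u agg_bufs=%u payload_offset=%u metadata1=0x%x len=%u\n",
	       itype, agg_bufs, payload_off, metadata1, (unsigned int)c->len);
	printf("rss_hash=0x%08x rss_hash_type=0x%02x\n",
	       (unsigned int)c->rss_hash, (unsigned int)c->rss_hash_type);
}
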
+/* Last 16 bytes of RX Packet V2 Completion Record */
+/* rx_pkt_v2_cmpl_hi (size:128b/16B) */
+struct rx_pkt_v2_cmpl_hi {
+	uint32_t	flags2;
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:-
+	 * ip_cs_ok[2:0] = The number of header groups with a valid IP checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. -
+	 * l4_cs_ok[5:3] = The number of header groups with a valid L4 checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. When this
+	 * bit is '1', the cs_ok field has the following definition: -
+	 * hdr_cnt[2:0] = The number of header groups that were parsed by the
+	 * chip and passed in the delivered packet. - ip_cs_all_ok[3] = This bit
+	 * will be '1' if all the parsed header groups with an IP checksum are
+	 * valid. - l4_cs_all_ok[4] = This bit will be '1' if all the parsed
+	 * header groups with an L4 checksum are valid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information: - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],
+	 * de, vid[11:0]} The metadata2 field contains the table scope
+	 * and action record pointer. - metadata2[25:0] contains the
+	 * action record pointer. - metadata2[31:26] contains the table
+	 * scope.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_LAST \
+		RX_PKT_V2_CMPL_HI_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set.
+	 */
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_PKT_V2_CMPL_HI_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_PKT_V2_CMPL_HI_V2 \
+		UINT32_C(0x1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_SFT                               1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet that was found after part of the
+	 * packet was already placed. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_SFT                   1
+	/* No buffer error */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit: Packet did not fit into packet buffer provided.
+	 * For regular placement, this means the packet did not fit in
+	 * the buffer provided. For HDS and jumbo placement, this means
+	 * that the packet could not be placed into 8 physical buffers
+	 * (if fixed-size buffers are used), or that the packet could
+	 * not be placed in the number of physical buffers configured
+	 * for the VNIC (if variable-size buffers are used)
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Not On Chip: All BDs needed for the packet were not on-chip
+	 * when the packet arrived. For regular placement, this error is
+	 * not valid. For HDS and jumbo placement, this means that not
+	 * enough agg BDs were posted to place the packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_NOT_ON_CHIP \
+		(UINT32_C(0x2) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This indicates that there was an error in the outer tunnel
+	 * portion of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_MASK \
+		UINT32_C(0x70)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_SFT                   4
+	/*
+	 * No additional error occurred on the outer tunnel portion
+	 * of the packet or the packet does not have an outer tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the outer tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * Indicates that header length is out of range in the outer
+	 * tunnel header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * Indicates that physical packet is shorter than that claimed
+	 * by the outer tunnel l3 header length. Valid for IPv4, or
+	 * IPv6 outer tunnel packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the outer tunnel UDP header length for a outer
+	 * tunnel UDP packet that is not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 4)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has
+	 * failed (e.g. TTL = 0) in the outer tunnel header. Valid for
+	 * IPv4, and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L3_BAD_TTL \
+		(UINT32_C(0x5) << 4)
+	/*
+	 * Indicates that the IP checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_IP_CS_ERROR \
+		(UINT32_C(0x6) << 4)
+	/*
+	 * Indicates that the L4 checksum failed its check in the outer
+	 * tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR \
+		(UINT32_C(0x7) << 4)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_OT_PKT_ERROR_OT_L4_CS_ERROR
+	/*
+	 * This indicates that there was a CRC error on either an FCoE
+	 * or RoCE packet. The itype indicates the packet type.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_CRC_ERROR \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that there was an error in the tunnel portion
+	 * of the packet when this field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_MASK \
+		UINT32_C(0xe00)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_SFT                    9
+	/*
+	 * No additional error occurred on the tunnel portion
+	 * of the packet or the packet does not have a tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 9)
+	/*
+	 * Indicates that IP header version does not match expectation
+	 * from L2 Ethertype for IPv4 and IPv6 in the tunnel header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 9)
+	/*
+	 * Indicates that header length is out of range in the tunnel
+	 * header. Valid for IPv4.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 9)
+	/*
+	 * Indicates that physical packet is shorter than that claimed
+	 * by the tunnel l3 header length. Valid for IPv4, or IPv6 tunnel
+	 * packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_TOTAL_ERROR \
+		(UINT32_C(0x3) << 9)
+	/*
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel UDP header length for a tunnel UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_UDP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 9)
+	/*
+	 * Indicates that the IPv4 TTL or IPv6 hop limit check has failed
+	 * (e.g. TTL = 0) in the tunnel header. Valid for IPv4, and IPv6.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL \
+		(UINT32_C(0x5) << 9)
+	/*
+	 * Indicates that the IP checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_CS_ERROR \
+		(UINT32_C(0x6) << 9)
+	/*
+	 * Indicates that the L4 checksum failed its check in the tunnel
+	 * header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR \
+		(UINT32_C(0x7) << 9)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR
+	/*
+	 * This indicates that there was an error in the inner
+	 * portion of the packet when this
+	 * field is non-zero.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_MASK \
+		UINT32_C(0xf000)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_SFT                      12
+	/*
+	 * No additional error occurred on the tunnel portion
+	 * of the packet or the packet does not have a tunnel.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_NO_ERROR \
+		(UINT32_C(0x0) << 12)
+	/*
+	 * Indicates that IP header version does not match
+	 * expectation from L2 Ethertype for IPv4 and IPv6 or that
+	 * option other than VFT was parsed on
+	 * FCoE packet.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_VERSION \
+		(UINT32_C(0x1) << 12)
+	/*
+	 * indicates that header length is out of range. Valid for
+	 * IPv4 and RoCE
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_HDR_LEN \
+		(UINT32_C(0x2) << 12)
+	/*
+	 * indicates that the IPv4 TTL or IPv6 hop limit check
+	 * has failed (e.g. TTL = 0). Valid for IPv4, and IPv6
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L3_BAD_TTL \
+		(UINT32_C(0x3) << 12)
+	/*
+	 * Indicates that physical packet is shorter than that
+	 * claimed by the l3 header length. Valid for IPv4,
+	 * IPv6 packet or RoCE packets.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_TOTAL_ERROR \
+		(UINT32_C(0x4) << 12)
+	/*
+	 * Indicates that the physical packet is shorter than that
+	 * claimed by the UDP header length for a UDP packet that is
+	 * not fragmented.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_UDP_TOTAL_ERROR \
+		(UINT32_C(0x5) << 12)
+	/*
+	 * Indicates that TCP header length > IP payload. Valid for
+	 * TCP packets only.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN \
+		(UINT32_C(0x6) << 12)
+	/* Indicates that TCP header length < 5. Valid for TCP. */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_HDR_LEN_TOO_SMALL \
+		(UINT32_C(0x7) << 12)
+	/*
+	 * Indicates that TCP option headers result in a TCP header
+	 * size that does not match data offset in TCP header. Valid
+	 * for TCP.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN \
+		(UINT32_C(0x8) << 12)
+	/*
+	 * Indicates that the IP checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_IP_CS_ERROR \
+		(UINT32_C(0x9) << 12)
+	/*
+	 * Indicates that the L4 checksum failed its check in the
+	 * inner header.
+	 */
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR \
+		(UINT32_C(0xa) << 12)
+	#define RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_LAST \
+		RX_PKT_V2_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format=1, this value is the VLAN VID. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_VID_SFT 0
+	/* When meta_format=1, this value is the VLAN DE. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format=1, this value is the VLAN PRI. */
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_PKT_V2_CMPL_HI_METADATA0_PRI_SFT 13
+	/*
+	 * The timestamp field contains the 32b timestamp for the packet from
+	 * the MAC.
+	 */
+	uint32_t	timestamp;
+} __rte_packed;
+
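The dual interpretation of cs_ok and the grouped error fields above are easier to see in code. A minimal sketch, under the same header and endianness assumptions as the previous example:

#include <stdint.h>
#include <stdio.h>
#include "hsi_struct_def_dpdk.h"	/* assumed name/location of these definitions */

/* Minimal sketch: interpret cs_ok according to cs_all_ok_mode and check the
 * buffer error group of the high half of an RX packet V2 completion. */
static void decode_rx_pkt_v2_hi(const struct rx_pkt_v2_cmpl_hi *h)
{
	uint32_t cs_ok = (h->flags2 & RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_MASK) >>
			 RX_PKT_V2_CMPL_HI_FLAGS2_CS_OK_SFT;
	uint16_t buf_err = (h->errors_v2 &
			    RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_MASK) >>
			   RX_PKT_V2_CMPL_HI_ERRORS_BUFFER_ERROR_SFT;

	if (h->flags2 & RX_PKT_V2_CMPL_HI_FLAGS2_CS_ALL_OK_MODE)
		/* hdr_cnt[2:0], ip_cs_all_ok[3], l4_cs_all_ok[4] */
		printf("hdr_cnt=%u ip_cs_all_ok=%u l4_cs_all_ok=%u\n",
		       cs_ok & 0x7, (cs_ok >> 3) & 0x1, (cs_ok >> 4) & 0x1);
	else
		/* ip_cs_ok[2:0], l4_cs_ok[5:3] */
		printf("ip_cs_ok=%u l4_cs_ok=%u\n",
		       cs_ok & 0x7, (cs_ok >> 3) & 0x7);

	/* A non-zero group means the BDs were unusable; drop the packet. */
	if (buf_err != 0)
		printf("buffer error group %u, packet invalid\n",
		       (unsigned int)buf_err);
}
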
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_cmpl (size:128b/16B) */
+struct rx_tpa_start_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_CMPL_TYPE_MASK                UINT32_C(0x3f)
+	#define RX_TPA_START_CMPL_TYPE_SFT                 0
+	/*
+	 * RX L2 TPA Start Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 */
+	#define RX_TPA_START_CMPL_TYPE_RX_TPA_START          UINT32_C(0x13)
+	#define RX_TPA_START_CMPL_TYPE_LAST \
+		RX_TPA_START_CMPL_TYPE_RX_TPA_START
+	#define RX_TPA_START_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
+	#define RX_TPA_START_CMPL_FLAGS_SFT                6
+	/* This bit will always be '0' for TPA start completions. */
+	#define RX_TPA_START_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_SFT       7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	#define RX_TPA_START_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_CMPL_FLAGS_RSS_VALID           UINT32_C(0x400)
+	/* unused is 1 b */
+	#define RX_TPA_START_CMPL_FLAGS_UNUSED              UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_SFT           12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_CMPL_LAST RX_TPA_START_CMPL_V1
 	/*
 	 * This is the RSS hash type for the packet. The value is packed
 	 * {tuple_extrac_op[1:0],rss_profile_id[4:0],tuple_extrac_op[2]}.
@@ -3285,6 +4280,430 @@ struct rx_tpa_start_cmpl_hi {
 	#define RX_TPA_START_CMPL_INNER_L4_SIZE_SFT   27
 } __rte_packed;
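
The repeated "even passes write 1, odd passes write 0" description above is the usual valid-bit scheme for these completion rings. A minimal sketch of the check a consumer might perform on a TPA start completion, assuming the same header as above; a real driver would also order the read with a barrier before consuming the rest of the entry.

#include <stdbool.h>
#include <stdint.h>
#include "hsi_struct_def_dpdk.h"	/* assumed name/location of these definitions */

/*
 * Minimal sketch: an entry is new when its valid bit matches the parity the
 * consumer expects for the current pass through the completion ring. The
 * expected parity flips each time the ring index wraps.
 */
static bool rx_tpa_start_cmpl_is_valid(const struct rx_tpa_start_cmpl *c,
				       bool expected_valid)
{
	bool v = (c->v1 & RX_TPA_START_CMPL_V1) != 0;

	return v == expected_valid;
}
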
 
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ * RX L2 TPA Start V2 Completion Record (32 bytes split to 2 16-byte
+ * struct)
+ */
+/* rx_tpa_start_v2_cmpl (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl {
+	uint16_t	flags_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define RX_TPA_START_V2_CMPL_TYPE_SFT                       0
+	/*
+	 * RX L2 TPA Start V2 Completion:
+	 * Completion at the beginning of a TPA operation.
+	 * Length = 32B
+	 * This is the new version of the RX_TPA_START completion used
+	 * in SR2 and later chips.
+	 */
+	#define RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2 \
+		UINT32_C(0xd)
+	#define RX_TPA_START_V2_CMPL_TYPE_LAST \
+		RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2
+	#define RX_TPA_START_V2_CMPL_FLAGS_MASK \
+		UINT32_C(0xffc0)
+	#define RX_TPA_START_V2_CMPL_FLAGS_SFT                      6
+	/*
+	 * When this bit is '1', it indicates a packet that has an error
+	 * of some type. Type of error is indicated in error_flags.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ERROR \
+		UINT32_C(0x40)
+	/* This field indicates how the packet was placed in the buffer. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_MASK \
+		UINT32_C(0x380)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_SFT             7
+	/*
+	 * Jumbo:
+	 * TPA Packet was placed using jumbo algorithm. This means
+	 * that the first buffer will be filled with data before
+	 * moving to aggregation buffers. Each aggregation buffer
+	 * will be filled before moving to the next aggregation
+	 * buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_JUMBO \
+		(UINT32_C(0x1) << 7)
+	/*
+	 * Header/Data Separation:
+	 * Packet was placed using Header/Data separation algorithm.
+	 * The separation location is indicated by the itype field.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_HDS \
+		(UINT32_C(0x2) << 7)
+	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
+	/*
+	 * GRO/Jumbo:
+	 * Packet will be placed using GRO/Jumbo where the first
+	 * packet is filled with data. Subsequent packets will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+		(UINT32_C(0x5) << 7)
+	/*
+	 * GRO/Header-Data Separation:
+	 * Packet will be placed using GRO/HDS where the header
+	 * is in the first packet.
+	 * Payload of each packet will be
+	 * placed such that any one packet does not span two
+	 * aggregation buffers unless it starts at the beginning of
+	 * an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
+	#define RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* This bit is '1' if the RSS field in this completion is valid. */
+	#define RX_TPA_START_V2_CMPL_FLAGS_RSS_VALID \
+		UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement. It
+	 * starts at the first 32B boundary after the end of the header for
+	 * HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_PKT_METADATA_PRESENT \
+		UINT32_C(0x800)
+	/*
+	 * This value indicates what the inner packet determined for the
+	 * packet was.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_SFT                 12
+	/*
+	 * TCP Packet:
+	 * Indicates that the packet was IP and TCP.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP \
+		(UINT32_C(0x2) << 12)
+	#define RX_TPA_START_V2_CMPL_FLAGS_ITYPE_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS_ITYPE_TCP
+	/*
+	 * This value indicates the amount of packet data written to the
+	 * buffer the opaque field in this completion corresponds to.
+	 */
+	uint16_t	len;
+	/*
+	 * This is a copy of the opaque field from the RX BD this completion
+	 * corresponds to. If the VNIC is configured to not use an Rx BD for
+	 * the TPA Start completion, then this is a copy of the opaque field
+	 * from the first BD used to place the TPA Start packet.
+	 */
+	uint32_t	opaque;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	uint8_t	v1;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V1 UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_LAST RX_TPA_START_V2_CMPL_V1
+	/*
+	 * This is the RSS hash type for the packet. The value is packed
+	 * {tuple_extrac_op[1:0],rss_profile_id[4:0],tuple_extrac_op[2]}.
+	 *
+	 * The value of tuple_extrac_op provides the information about
+	 * what fields the hash was computed on.
+	 * * 0: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of inner
+	 * IP and TCP or UDP headers. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 1: The RSS hash was computed over source IP address and destination
+	 * IP address of inner IP header. Note: For non-tunneled packets,
+	 * the packet headers are considered inner packet headers for the RSS
+	 * hash computation purpose.
+	 * * 2: The RSS hash was computed over source IP address,
+	 * destination IP address, source port, and destination port of
+	 * IP and TCP or UDP headers of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 * * 3: The RSS hash was computed over source IP address and
+	 * destination IP address of IP header of outer tunnel headers.
+	 * Note: For non-tunneled packets, this value is not applicable.
+	 *
+	 * Note that 4-tuples values listed above are applicable
+	 * for layer 4 protocols supported and enabled for RSS in the hardware,
+	 * HWRM firmware, and drivers. For example, if RSS hash is supported and
+	 * enabled for TCP traffic only, then the values of tuple_extract_op
+	 * corresponding to 4-tuples are only valid for TCP traffic.
+	 */
+	uint8_t	rss_hash_type;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	uint16_t	agg_id;
+	/*
+	 * This is the aggregation ID that the completion is associated
+	 * with. Use this number to correlate the TPA start completion
+	 * with the TPA end completion.
+	 */
+	#define RX_TPA_START_V2_CMPL_AGG_ID_MASK            UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_AGG_ID_SFT             0
+	#define RX_TPA_START_V2_CMPL_METADATA1_MASK         UINT32_C(0xf000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_SFT          12
+	/* When meta_format != 0, this value is the VLAN TPID_SEL. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_MASK UINT32_C(0x7000)
+	#define RX_TPA_START_V2_CMPL_METADATA1_TPID_SEL_SFT  12
+	/* When meta_format != 0, this value is the VLAN valid. */
+	#define RX_TPA_START_V2_CMPL_METADATA1_VALID         UINT32_C(0x8000)
+	/*
+	 * This value is the RSS hash value calculated for the packet
+	 * based on the mode bits and key value in the VNIC.
+	 * When vee_cmpl_mode is set in VNIC context, this is the lower
+	 * 32b of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	rss_hash;
+} __rte_packed;
+
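Since TPA start and end completions are correlated through agg_id, a consumer typically extracts it as sketched below; the helpers are illustrative only and ignore endianness conversion.

#include <stdint.h>
#include "hsi_struct_def_dpdk.h"	/* assumed name/location of these definitions */

/* Minimal sketch: pull the aggregation ID out of a TPA start V2 completion so
 * it can later be matched with the corresponding TPA end completion. */
static uint16_t tpa_start_v2_agg_id(const struct rx_tpa_start_v2_cmpl *c)
{
	return (c->agg_id & RX_TPA_START_V2_CMPL_AGG_ID_MASK) >>
	       RX_TPA_START_V2_CMPL_AGG_ID_SFT;
}

/* The metadata1 bits live in the upper part of the same 16-bit word. */
static int tpa_start_v2_vlan_valid(const struct rx_tpa_start_v2_cmpl *c)
{
	return (c->agg_id & RX_TPA_START_V2_CMPL_METADATA1_VALID) != 0;
}
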
+/*
+ * Last 16 bytes of RX L2 TPA Start V2 Completion Record
+ *
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
+/* rx_tpa_start_v2_cmpl_hi (size:128b/16B) */
+struct rx_tpa_start_v2_cmpl_hi {
+	uint32_t	flags2;
+	/* This indicates that the aggregation was done using GRO rules. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_AGG_GRO \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '0', the cs_ok field has the following definition:-
+	 * ip_cs_ok[2:0] = The number of header groups with a valid IP checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. -
+	 * l4_cs_ok[5:3] = The number of header groups with a valid L4 checksum
+	 * in the delivered packet, counted from the outer-most header group to
+	 * the inner-most header group, stopping at the first error. When this
+	 * bit is '1', the cs_ok field has the following definition: -
+	 * hdr_cnt[2:0] = The number of header groups that were parsed by the
+	 * chip and passed in the delivered packet. - ip_cs_all_ok[3] = This bit
+	 * will be '1' if all the parsed header groups with an IP checksum are
+	 * valid. - l4_cs_all_ok[4] = This bit will be '1' if all the parsed
+	 * header groups with an L4 checksum are valid.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_ALL_OK_MODE \
+		UINT32_C(0x8)
+	/* This value indicates what format the metadata field is. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_MASK \
+		UINT32_C(0xf0)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_SFT            4
+	/* There is no metadata information. Values are zero. */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_NONE \
+		(UINT32_C(0x0) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information: - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],
+	 * de, vid[11:0]} The metadata2 field contains the table scope
+	 * and action record pointer. - metadata2[25:0] contains the
+	 * action record pointer. - metadata2[31:26] contains the table
+	 * scope.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_ACT_REC_PTR \
+		(UINT32_C(0x1) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the Tunnel ID
+	 * value, justified to LSB.
+	 * - VXLAN = VNI[23:0] -> VXLAN Network ID
+	 * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier
+	 * - NVGRE = TNI[23:0] -> Tenant Network ID
+	 * - GRE = KEY[31:0] -> key field with bit mask. zero if K=0
+	 * - IPv4 = 0 (not populated)
+	 * - IPv6 = Flow Label[19:0]
+	 * - PPPoE = sessionID[15:0]
+	 * - MPLS = Outer label[19:0]
+	 * - UPAR = Selected[31:0] with bit mask
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_TUNNEL_ID \
+		(UINT32_C(0x2) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0],de, vid[11:0]}
+	 * The metadata2 field contains the 32b metadata from the prepended
+	 * header (chdr_data).
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_CHDR_DATA \
+		(UINT32_C(0x3) << 4)
+	/*
+	 * The {metadata1, metadata0} fields contain the vtag
+	 * information:
+	 * - vtag[19:0] = {valid, tpid_sel[2:0], pri[2:0], de, vid[11:0]}
+	 * The metadata2 field contains the outer_l3_offset,
+	 * inner_l2_offset, inner_l3_offset, and inner_l4_size.
+	 * - metadata2[8:0] contains the outer_l3_offset.
+	 * - metadata2[17:9] contains the inner_l2_offset.
+	 * - metadata2[26:18] contains the inner_l3_offset.
+	 * - metadata2[31:27] contains the inner_l4_size.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET \
+		(UINT32_C(0x4) << 4)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_LAST \
+		RX_TPA_START_V2_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET
+	/*
+	 * This field indicates the IP type for the inner-most IP header.
+	 * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+	 * This value is only valid if itype indicates a packet
+	 * with an IP header.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_IP_TYPE \
+		UINT32_C(0x100)
+	/*
+	 * This indicates that the complete 1's complement checksum was
+	 * calculated for the packet in the aggregation.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_CALC \
+		UINT32_C(0x200)
+	/*
+	 * This field indicates the status of IP and L4 CS calculations done
+	 * by the chip. The format of this field is indicated by the
+	 * cs_all_ok_mode bit.
+	 * CS status for TPA packets is always valid. This means that "all_ok"
+	 * status will always be set. The ok count status will be set
+	 * appropriately for the packet header, such that all existing CS
+	 * values are ok.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_MASK \
+		UINT32_C(0xfc00)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_CS_OK_SFT                  10
+	/*
+	 * This value is the complete 1's complement checksum calculated from
+	 * the start of the outer L3 header to the end of the packet (not
+	 * including the ethernet crc). It is valid when the
+	 * 'complete_checksum_calc' flag is set. For TPA Start completions,
+	 * the complete checksum is calculated for the first packet in the
+	 * aggregation only.
+	 */
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_MASK \
+		UINT32_C(0xffff0000)
+	#define RX_TPA_START_V2_CMPL_FLAGS2_COMPLETE_CHECKSUM_SFT      16
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 * - meta_format 0 - none - metadata2 = 0 - not valid/not stripped
+	 * - meta_format 1 - act_rec_ptr - metadata2 = {table_scope[5:0],
+	 *   act_rec_ptr[25:0]}
+	 * - meta_format 2 - tunnel_id - metadata2 = tunnel_id[31:0]
+	 * - meta_format 3 - chdr_data - metadata2 = updated_chdr_data[31:0]
+	 * - meta_format 4 - hdr_offsets - metadata2 = hdr_offsets[31:0]
+	 * When vee_cmpl_mode is set in VNIC context, this is the upper 32b
+	 * of the host address from the first BD used to place the packet.
+	 */
+	uint32_t	metadata2;
+	uint16_t	errors_v2;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define RX_TPA_START_V2_CMPL_V2 \
+		UINT32_C(0x1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_MASK \
+		UINT32_C(0xfffe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_SFT                     1
+	/*
+	 * This error indicates that there was some sort of problem with
+	 * the BDs for the packet. The packet should be treated as
+	 * invalid.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_MASK \
+		UINT32_C(0xe)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_SFT         1
+	/* No buffer error */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER \
+		(UINT32_C(0x0) << 1)
+	/*
+	 * Did Not Fit:
+	 * Packet did not fit into packet buffer provided. This means
+	 * that the TPA Start packet was too big to be placed into the
+	 * per-packet maximum number of physical buffers configured for
+	 * the VNIC, or that it was too big to be placed into the
+	 * per-aggregation maximum number of physical buffers configured
+	 * for the VNIC. This error only occurs when the VNIC is
+	 * configured for variable size receive buffers.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_DID_NOT_FIT \
+		(UINT32_C(0x1) << 1)
+	/*
+	 * Bad Format:
+	 * BDs were not formatted correctly.
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+		(UINT32_C(0x3) << 1)
+	/*
+	 * Flush:
+	 * There was a bad_format error on the previous operation
+	 */
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH \
+		(UINT32_C(0x5) << 1)
+	#define RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_LAST \
+		RX_TPA_START_V2_CMPL_ERRORS_BUFFER_ERROR_FLUSH
+	/*
+	 * This is data from the CFA block as indicated by the meta_format
+	 * field.
+	 */
+	uint16_t	metadata0;
+	/* When meta_format != 0, this value is the VLAN VID. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_MASK UINT32_C(0xfff)
+	#define RX_TPA_START_V2_CMPL_METADATA0_VID_SFT 0
+	/* When meta_format != 0, this value is the VLAN DE. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_DE      UINT32_C(0x1000)
+	/* When meta_format != 0, this value is the VLAN PRI. */
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_MASK UINT32_C(0xe000)
+	#define RX_TPA_START_V2_CMPL_METADATA0_PRI_SFT 13
+	/*
+	 * This field contains the outer_l3_offset, inner_l2_offset,
+	 * inner_l3_offset, and inner_l4_size.
+	 *
+	 * hdr_offsets[8:0] contains the outer_l3_offset.
+	 * hdr_offsets[17:9] contains the inner_l2_offset.
+	 * hdr_offsets[26:18] contains the inner_l3_offset.
+	 * hdr_offsets[31:27] contains the inner_l4_size.
+	 */
+	uint32_t	hdr_offsets;
+} __rte_packed;
+
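The packed hdr_offsets word documented above splits into four sub-fields; a minimal sketch of that arithmetic, with bit positions taken from the comment and endianness conversion omitted:

#include <stdint.h>
#include <stdio.h>
#include "hsi_struct_def_dpdk.h"	/* assumed name/location of these definitions */

/* Minimal sketch: decode hdr_offsets of a TPA start V2 completion. */
static void tpa_start_v2_decode_hdr_offsets(const struct rx_tpa_start_v2_cmpl_hi *h)
{
	uint32_t v = h->hdr_offsets;
	unsigned int outer_l3_offset = v & 0x1ff;          /* bits [8:0]   */
	unsigned int inner_l2_offset = (v >> 9) & 0x1ff;   /* bits [17:9]  */
	unsigned int inner_l3_offset = (v >> 18) & 0x1ff;  /* bits [26:18] */
	unsigned int inner_l4_size   = (v >> 27) & 0x1f;   /* bits [31:27] */

	printf("outer_l3=%u inner_l2=%u inner_l3=%u inner_l4_size=%u\n",
	       outer_l3_offset, inner_l2_offset, inner_l3_offset, inner_l4_size);
}
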
 /*
  * This TPA completion structure is used on devices where the
  * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
@@ -3299,27 +4718,27 @@ struct rx_tpa_end_cmpl {
 	 * records. Odd values indicate 32B
 	 * records.
 	 */
-	#define RX_TPA_END_CMPL_TYPE_MASK                UINT32_C(0x3f)
-	#define RX_TPA_END_CMPL_TYPE_SFT                 0
+	#define RX_TPA_END_CMPL_TYPE_MASK                      UINT32_C(0x3f)
+	#define RX_TPA_END_CMPL_TYPE_SFT                       0
 	/*
 	 * RX L2 TPA End Completion:
 	 * Completion at the end of a TPA operation.
 	 * Length = 32B
 	 */
-	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END            UINT32_C(0x15)
+	#define RX_TPA_END_CMPL_TYPE_RX_TPA_END                  UINT32_C(0x15)
 	#define RX_TPA_END_CMPL_TYPE_LAST \
 		RX_TPA_END_CMPL_TYPE_RX_TPA_END
-	#define RX_TPA_END_CMPL_FLAGS_MASK               UINT32_C(0xffc0)
-	#define RX_TPA_END_CMPL_FLAGS_SFT                6
+	#define RX_TPA_END_CMPL_FLAGS_MASK                     UINT32_C(0xffc0)
+	#define RX_TPA_END_CMPL_FLAGS_SFT                      6
 	/*
 	 * When this bit is '1', it indicates a packet that has an
 	 * error of some type. Type of error is indicated in
 	 * error_flags.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ERROR               UINT32_C(0x40)
+	#define RX_TPA_END_CMPL_FLAGS_ERROR                     UINT32_C(0x40)
 	/* This field indicates how the packet was placed in the buffer. */
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK      UINT32_C(0x380)
-	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT       7
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_MASK            UINT32_C(0x380)
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_SFT             7
 	/*
 	 * Jumbo:
 	 * TPA Packet was placed using jumbo algorithm. This means
@@ -3337,6 +4756,15 @@ struct rx_tpa_end_cmpl {
 	 */
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_HDS \
 		(UINT32_C(0x2) << 7)
+	/*
+	 * IOC/Jumbo:
+	 * Packet will be placed using In-Order Completion/Jumbo where
+	 * the first packet of the aggregation is placed using Jumbo
+	 * Placement. Subsequent packets will be placed such that each
+	 * packet starts at the beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_JUMBO \
+		(UINT32_C(0x4) << 7)
 	/*
 	 * GRO/Jumbo:
 	 * Packet will be placed using GRO/Jumbo where the first
@@ -3358,11 +4786,28 @@ struct rx_tpa_end_cmpl {
 	 */
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS \
 		(UINT32_C(0x6) << 7)
+	/*
+	 * IOC/Header-Data Separation:
+	 * Packet will be placed using In-Order Completion/HDS where
+	 * the header is in the first packet buffer. Payload of each
+	 * packet will be placed such that each packet starts at the
+	 * beginning of an aggregation buffer.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS \
+		(UINT32_C(0x7) << 7)
 	#define RX_TPA_END_CMPL_FLAGS_PLACEMENT_LAST \
-		RX_TPA_END_CMPL_FLAGS_PLACEMENT_GRO_HDS
-	/* unused is 2 b */
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_MASK         UINT32_C(0xc00)
-	#define RX_TPA_END_CMPL_FLAGS_UNUSED_SFT          10
+		RX_TPA_END_CMPL_FLAGS_PLACEMENT_IOC_HDS
+	/* unused is 1 b */
+	#define RX_TPA_END_CMPL_FLAGS_UNUSED                    UINT32_C(0x400)
+	/*
+	 * This bit is '1' if metadata has been added to the end of the
+	 * packet in host memory. Metadata starts at the first 32B boundary
+	 * after the end of the packet for regular and jumbo placement.
+	 * It starts at the first 32B boundary after the end of the header
+	 * for HDS placement. The length of the metadata is indicated in the
+	 * metadata itself.
+	 */
+	#define RX_TPA_END_CMPL_FLAGS_PKT_METADATA_PRESENT      UINT32_C(0x800)
 	/*
 	 * This value indicates what the inner packet determined for the
 	 * packet was.
@@ -3372,8 +4817,9 @@ struct rx_tpa_end_cmpl {
 	 *     field is valid and contains the TCP checksum.
 	 *     This also indicates that the payload_offset field is valid.
 	 */
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK          UINT32_C(0xf000)
-	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT           12
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_MASK \
+		UINT32_C(0xf000)
+	#define RX_TPA_END_CMPL_FLAGS_ITYPE_SFT                 12
 	/*
 	 * This value is zero for TPA End completions.
 	 * There is no data in the buffer that corresponds to the opaque
@@ -4243,6 +5689,52 @@ struct rx_abuf_cmpl {
 	uint32_t	unused_2;
 } __rte_packed;
 
+/* VEE FLUSH Completion Record (16 bytes) */
+/* vee_flush (size:128b/16B) */
+struct vee_flush {
+	uint32_t	downstream_path_type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define VEE_FLUSH_TYPE_MASK           UINT32_C(0x3f)
+	#define VEE_FLUSH_TYPE_SFT            0
+	/*
+	 * VEE Flush Completion:
+	 * This completion is inserted manually by the Primate and processed
+	 * by the VEE hardware to ensure that all completions on a VEE
+	 * function have been processed by the VEE hardware before FLR
+	 * process is completed.
+	 */
+	#define VEE_FLUSH_TYPE_VEE_FLUSH        UINT32_C(0x1c)
+	#define VEE_FLUSH_TYPE_LAST            VEE_FLUSH_TYPE_VEE_FLUSH
+	/* downstream_path is 1 b */
+	#define VEE_FLUSH_DOWNSTREAM_PATH     UINT32_C(0x40)
+	/* This completion is associated with VEE Transmit */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_TX    (UINT32_C(0x0) << 6)
+	/* This completion is associated with VEE Receive */
+	#define VEE_FLUSH_DOWNSTREAM_PATH_RX    (UINT32_C(0x1) << 6)
+	#define VEE_FLUSH_DOWNSTREAM_PATH_LAST VEE_FLUSH_DOWNSTREAM_PATH_RX
+	/*
+	 * This is an opaque value that is passed through the completion
+	 * to the VEE handler SW and is used to indicate what VEE VQ or
+	 * function has completed FLR processing.
+	 */
+	uint32_t	opaque;
+	uint32_t	v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes will
+	 * write 1. The odd passes will write 0.
+	 */
+	#define VEE_FLUSH_V     UINT32_C(0x1)
+	/* unused3 is 32 b */
+	uint32_t	unused_3;
+} __rte_packed;
+
 /* eject_cmpl (size:128b/16B) */
 struct eject_cmpl {
 	uint16_t	type;
@@ -6562,7 +8054,7 @@ struct hwrm_async_event_cmpl_deferred_response {
 	/*
 	 * The PF's mailbox is clear to issue another command.
 	 * A command with this seq_id is still in progress
-	 * and will return a regular HWRM completion when done.
+	 * and will return a regular HWRM completion when done.
 	 * 'event_data1' field, if non-zero, contains the estimated
 	 * execution time for the command.
 	 */
@@ -7476,6 +8968,8 @@ struct hwrm_func_qcaps_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID): This is a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -7729,6 +9223,12 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PFC_WD_STATS_SUPPORTED \
 		UINT32_C(0x40000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * the DBG_QCAPS command.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_DBG_QCAPS_CMD_SUPPORTED \
+		UINT32_C(0x80000000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -7854,6 +9354,19 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_STATS_SUPPORTED \
 		UINT32_C(0x2)
+	/*
+	 * If 1, the device can report extended hw statistics (including
+	 * additional tpa statistics).
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_EXT_HW_STATS_SUPPORTED \
+		UINT32_C(0x4)
+	/*
+	 * If set to 1, then the core firmware has support to enable/
+	 * disable hot reset support for the interface dynamically through
+	 * HWRM_FUNC_CFG.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x8)
 	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -7904,6 +9417,8 @@ struct hwrm_func_qcfg_input {
 	 * Function ID of the function that is being queried.
 	 * 0xFF... (All Fs) if the query is for the requesting
 	 * function.
+	 * 0xFFFE (REQUESTING_PARENT_FID): This is a special FID
+	 * to be used by a trusted VF to query its parent PF.
 	 */
 	uint16_t	fid;
 	uint8_t	unused_0[6];
@@ -8013,6 +9528,15 @@ struct hwrm_func_qcfg_output {
 	 */
 	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x100)
+	/*
+	 * If set to 1, then the firmware and all currently registered driver
+	 * instances support hot reset. The hot reset support will be updated
+	 * dynamically based on the driver interface advertisement.
+	 * If set to 0, then the adapter is not currently able to initiate
+	 * hot reset.
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_HOT_RESET_ALLOWED \
+		UINT32_C(0x200)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -8565,6 +10089,17 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
 		UINT32_C(0x2000000)
+	/*
+	 * If this bit is set to 0, then the interface does not support the
+	 * hot reset capability that it advertised with the hot_reset_support
+	 * flag in HWRM_FUNC_DRV_RGTR. If any of the functions has set this
+	 * flag to 0, the adapter cannot do the hot reset. In this state, if
+	 * the firmware receives a hot reset request, firmware must fail the
+	 * request. If this bit is set to 1, then the interface is
+	 * re-enabling the hot reset capability.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_FLAGS_HOT_RESET_IF_EN_DIS \
+		UINT32_C(0x4000000)
 	uint32_t	enables;
 	/*
 	 * This bit must be '1' for the mtu field to be
@@ -8704,6 +10239,12 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_ENABLES_ADMIN_LINK_STATE \
 		UINT32_C(0x400000)
+	/*
+	 * This bit must be '1' for the hot_reset_if_en_dis field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES_HOT_RESET_IF_SUPPORT \
+		UINT32_C(0x800000)
 	/*
 	 * The maximum transmission unit of the function.
 	 * The HWRM should make sure that the mtu of
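
Putting the new flag and enables bit together, a driver that wants to withdraw or re-advertise hot reset support would fill HWRM_FUNC_CFG roughly as sketched below. Only the two fields involved are shown; the request header, endianness conversion and the actual HWRM send path are omitted, and the helper name is illustrative.

#include <stdbool.h>
#include "hsi_struct_def_dpdk.h"	/* assumed name/location of these definitions */

/* Illustrative sketch: toggle the hot reset capability of this interface. */
static void func_cfg_set_hot_reset(struct hwrm_func_cfg_input *req, bool enable)
{
	/* Tell firmware that the hot_reset_if_en_dis flag is meaningful. */
	req->enables |= HWRM_FUNC_CFG_INPUT_ENABLES_HOT_RESET_IF_SUPPORT;
	if (enable)
		req->flags |= HWRM_FUNC_CFG_INPUT_FLAGS_HOT_RESET_IF_EN_DIS;
	/* Leaving the flag at 0 withdraws this interface's hot reset support. */
}
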
@@ -9036,15 +10577,21 @@ struct hwrm_func_qstats_input {
 	/* This flags indicates the type of statistics request. */
 	uint8_t	flags;
 	/* This value is not used to avoid backward compatibility issues. */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED    UINT32_C(0x0)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
 	/*
 	 * flags should be set to 1 when request is for only RoCE statistics.
 	 * This will be honored only if the caller_fid is a privileged PF.
 	 * In all other cases FID and caller_fid should be the same.
 	 */
-	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY UINT32_C(0x1)
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
 	#define HWRM_FUNC_QSTATS_INPUT_FLAGS_LAST \
-		HWRM_FUNC_QSTATS_INPUT_FLAGS_ROCE_ONLY
+		HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK
 	uint8_t	unused_0[5];
 } __rte_packed;
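
The new COUNTER_MASK value changes the meaning of the response from counter values to counter widths. A minimal sketch of how a caller might request the masks; the header fields, fid and the HWRM send path are handled by the usual request plumbing and are not shown.

#include "hsi_struct_def_dpdk.h"	/* assumed name/location of these definitions */

/* Illustrative sketch: request counter masks instead of counter values. */
static void func_qstats_request_masks(struct hwrm_func_qstats_input *req)
{
	req->flags = HWRM_FUNC_QSTATS_INPUT_FLAGS_COUNTER_MASK;
}
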
 
@@ -9130,6 +10677,132 @@ struct hwrm_func_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/************************
+ * hwrm_func_qstats_ext *
+ ************************/
+
+
+/* hwrm_func_qstats_ext_input (size:192b/24B) */
+struct hwrm_func_qstats_ext_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Function ID of the function that is being queried.
+	 * 0xFF... (All Fs) if the query is for the requesting
+	 * function.
+	 * A privileged PF can query for other function's statistics.
+	 */
+	uint16_t	fid;
+	/* This flag indicates the type of statistics request. */
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * flags should be set to 1 when request is for only RoCE statistics.
+	 * This will be honored only if the caller_fid is a privileged PF.
+	 * In all other cases FID and caller_fid should be the same.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_ROCE_ONLY    UINT32_C(0x1)
+	/*
+	 * flags should be set to 2 when request is for the counter mask
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
+} __rte_packed;
+
+/* hwrm_func_qstats_ext_output (size:1472b/184B) */
+struct hwrm_func_qstats_ext_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on received path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /***********************
  * hwrm_func_clr_stats *
  ***********************/
@@ -10116,7 +11789,7 @@ struct hwrm_func_backing_store_qcaps_output {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
@@ -11039,7 +12712,7 @@ struct hwrm_func_backing_store_cfg_input {
 	 *
 	 * TQM slowpath rings should be sized as follows:
 	 *
-	 * num_entries = num_vnics + num_l2_tx_rings + num_roce_qps + tqm_min_size
+	 * num_entries = num_vnics + num_l2_tx_rings + 2 * num_roce_qps + tqm_min_size
 	 *
 	 * Where:
 	 *   num_vnics is the number of VNICs allocated in the VNIC backing store
@@ -16149,7 +17822,18 @@ struct hwrm_port_qstats_input {
 	uint64_t	resp_addr;
 	/* Port ID of port that is being queried. */
 	uint16_t	port_id;
-	uint8_t	unused_0[6];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0[5];
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -16382,7 +18066,7 @@ struct rx_port_stats_ext {
  * Port Rx Statistics extended PFC WatchDog Format.
  * StormDetect and StormRevert event determination is based
  * on an integration period and a percentage threshold.
- * StormDetect event - when percentage of XOFF frames received
+ * StormDetect event - when percentage of XOFF frames received
  * within an integration period exceeds the configured threshold.
  * StormRevert event - when percentage of XON frames received
  * within an integration period exceeds the configured threshold.
@@ -16843,7 +18527,18 @@ struct hwrm_port_qstats_ext_input {
 	 * statistics block in bytes
 	 */
 	uint16_t	rx_stat_size;
-	uint8_t	unused_0[2];
+	uint8_t	flags;
+	/* This value is not used to avoid backward compatibility issues. */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_UNUSED       UINT32_C(0x0)
+	/*
+	 * This bit is set to 1 when request is for the counter mask,
+	 * representing width of each of the stats counters, rather than
+	 * counters themselves.
+	 */
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x1)
+	#define HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_LAST \
+		HWRM_PORT_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
+	uint8_t	unused_0;
 	/*
 	 * This is the host address where
 	 * Tx port statistics will be stored
@@ -25312,95 +27007,104 @@ struct hwrm_ring_free_input {
 	/* Ring Type. */
 	uint8_t	ring_type;
 	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
+	/* TX Ring (TR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	/* RX Ring (RR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	/* RoCE Notification Completion Ring (ROCE_CR) */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	/* RX Aggregation Ring */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
+	/* Notification Queue */
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
+	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
+		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
+	uint8_t	unused_0;
+	/* Physical number of ring allocated. */
+	uint16_t	ring_id;
+	uint8_t	unused_1[4];
+} __rte_packed;
+
+/* hwrm_ring_free_output (size:128b/16B) */
+struct hwrm_ring_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/*******************
+ * hwrm_ring_reset *
+ *******************/
+
+
+/* hwrm_ring_reset_input (size:192b/24B) */
+struct hwrm_ring_reset_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Ring Type. */
+	uint8_t	ring_type;
+	/* L2 Completion Ring (CR) */
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL     UINT32_C(0x0)
 	/* TX Ring (TR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_TX        UINT32_C(0x1)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX          UINT32_C(0x1)
 	/* RX Ring (RR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX        UINT32_C(0x2)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX          UINT32_C(0x2)
 	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
-	/* RX Aggregation Ring */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG    UINT32_C(0x4)
-	/* Notification Queue */
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_NQ        UINT32_C(0x5)
-	#define HWRM_RING_FREE_INPUT_RING_TYPE_LAST \
-		HWRM_RING_FREE_INPUT_RING_TYPE_NQ
-	uint8_t	unused_0;
-	/* Physical number of ring allocated. */
-	uint16_t	ring_id;
-	uint8_t	unused_1[4];
-} __rte_packed;
-
-/* hwrm_ring_free_output (size:128b/16B) */
-struct hwrm_ring_free_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	uint8_t	unused_0[7];
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL   UINT32_C(0x3)
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM.  This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * Rx Ring Group.  This is to reset rx and aggregation in an atomic
+	 * operation. Completion ring associated with this ring group is
+	 * not reset.
 	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/*******************
- * hwrm_ring_reset *
- *******************/
-
-
-/* hwrm_ring_reset_input (size:192b/24B) */
-struct hwrm_ring_reset_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
-	 */
-	uint64_t	resp_addr;
-	/* Ring Type. */
-	uint8_t	ring_type;
-	/* L2 Completion Ring (CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_L2_CMPL   UINT32_C(0x0)
-	/* TX Ring (TR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_TX        UINT32_C(0x1)
-	/* RX Ring (RR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX        UINT32_C(0x2)
-	/* RoCE Notification Completion Ring (ROCE_CR) */
-	#define HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL UINT32_C(0x3)
+	#define HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP UINT32_C(0x6)
 	#define HWRM_RING_RESET_INPUT_RING_TYPE_LAST \
-		HWRM_RING_RESET_INPUT_RING_TYPE_ROCE_CMPL
+		HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP
 	uint8_t	unused_0;
-	/* Physical number of the ring. */
+	/*
+	 * Physical number of the ring. When the ring type is rx_ring_grp,
+	 * this field refers to the ring group ID.
+	 */
 	uint16_t	ring_id;
 	uint8_t	unused_1[4];
 } __rte_packed;
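+
+/*
+ * Illustrative sketch only: with the new rx_ring_grp type, a hypothetical
+ * caller can reset an RX ring and its aggregation ring atomically by
+ * passing the ring *group* ID in ring_id ('grp_id' is a placeholder).
+ *
+ *	struct hwrm_ring_reset_input req = { 0 };
+ *
+ *	req.ring_type = HWRM_RING_RESET_INPUT_RING_TYPE_RX_RING_GRP;
+ *	req.ring_id = rte_cpu_to_le_16(grp_id);
+ */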
@@ -25615,7 +27319,18 @@ struct hwrm_ring_cmpl_ring_qaggint_params_input {
 	uint64_t	resp_addr;
 	/* Physical number of completion ring. */
 	uint16_t	ring_id;
-	uint8_t	unused_0[6];
+	uint16_t	flags;
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_MASK \
+		UINT32_C(0x3)
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_UNUSED_0_SFT 0
+	/*
+	 * Set this flag to 1 when querying parameters on a notification
+	 * queue. Set this flag to 0 when querying parameters on a
+	 * completion queue or completion ring.
+	 */
+	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
+		UINT32_C(0x4)
+	uint8_t	unused_0[4];
 } __rte_packed;
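+
+/*
+ * Illustrative sketch only: to query aggregation parameters on a
+ * notification queue rather than a completion ring, the new IS_NQ flag
+ * is set ('nq_ring_id' is a placeholder for the NQ's ring ID).
+ *
+ *	req.ring_id = rte_cpu_to_le_16(nq_ring_id);
+ *	req.flags = rte_cpu_to_le_16(
+ *		HWRM_RING_CMPL_RING_QAGGINT_PARAMS_INPUT_FLAGS_IS_NQ);
+ */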
 
 /* hwrm_ring_cmpl_ring_qaggint_params_output (size:256b/32B) */
@@ -25652,19 +27367,19 @@ struct hwrm_ring_cmpl_ring_qaggint_params_output {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA when in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
+	 * Maximum wait time spent aggregating
 	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
@@ -25738,7 +27453,7 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	/*
 	 * Set this flag to 1 when configuring parameters on a
 	 * notification queue. Set this flag to 0 when configuring
-	 * parameters on a completion queue.
+	 * parameters on a completion queue or completion ring.
 	 */
 	#define HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_IS_NQ \
 		UINT32_C(0x4)
@@ -25753,20 +27468,20 @@ struct hwrm_ring_cmpl_ring_cfg_aggint_params_input {
 	 */
 	uint16_t	num_cmpl_dma_aggr_during_int;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
+	 * Timer used to aggregate completions before
 	 * DMA during the normal mode (not in interrupt mode).
 	 */
 	uint16_t	cmpl_aggr_dma_tmr;
 	/*
-	 * Timer in unit of 80-nsec used to aggregate completions before
-	 * DMA during the interrupt mode.
+	 * Timer used to aggregate completions before
+	 * DMA while in interrupt mode.
 	 */
 	uint16_t	cmpl_aggr_dma_tmr_during_int;
-	/* Minimum time (in unit of 80-nsec) between two interrupts. */
+	/* Minimum time between two interrupts. */
 	uint16_t	int_lat_tmr_min;
 	/*
-	 * Maximum wait time (in unit of 80-nsec) spent aggregating
-	 * cmpls before signaling the interrupt after the
+	 * Maximum wait time spent aggregating
+	 * completions before signaling the interrupt after the
 	 * interrupt is enabled.
 	 */
 	uint16_t	int_lat_tmr_max;
@@ -33339,78 +35054,246 @@ struct hwrm_tf_version_get_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-} __rte_packed;
-
-/* hwrm_tf_version_get_output (size:128b/16B) */
-struct hwrm_tf_version_get_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	/* Version Major number. */
-	uint8_t	major;
-	/* Version Minor number. */
-	uint8_t	minor;
-	/* Version Update number. */
-	uint8_t	update;
-	/* unused. */
-	uint8_t	unused0[4];
+} __rte_packed;
+
+/* hwrm_tf_version_get_output (size:128b/16B) */
+struct hwrm_tf_version_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Version Major number. */
+	uint8_t	major;
+	/* Version Minor number. */
+	uint8_t	minor;
+	/* Version Update number. */
+	uint8_t	update;
+	/* unused. */
+	uint8_t	unused0[4];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/************************
+ * hwrm_tf_session_open *
+ ************************/
+
+
+/* hwrm_tf_session_open_input (size:640b/80B) */
+struct hwrm_tf_session_open_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Name of the session. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_open_output (size:192b/24B) */
+struct hwrm_tf_session_open_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware.
+	 */
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the first client on
+	 * the newly created session.
+	 */
+	uint32_t	fw_session_client_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* unused. */
+	uint8_t	unused1[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
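+/*
+ * Illustrative sketch only: the two identifiers returned above feed the
+ * later register/unregister messages; a hypothetical consumer of the
+ * DMA'd response could read them as
+ *
+ *	uint32_t sid = rte_le_to_cpu_32(resp->fw_session_id);
+ *	uint32_t cid = rte_le_to_cpu_32(resp->fw_session_client_id);
+ */
+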
+/**************************
+ * hwrm_tf_session_attach *
+ **************************/
+
+
+/* hwrm_tf_session_attach_input (size:704b/88B) */
+struct hwrm_tf_session_attach_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Unique session identifier for the session that the attach
+	 * request wants to attach to. This value originates from the
+	 * shared session memory that the attach request opened by
+	 * way of the 'attach name' that was passed in to the core
+	 * attach API.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
+	 */
+	uint32_t	attach_fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session itself. */
+	uint8_t	session_name[64];
+} __rte_packed;
+
+/* hwrm_tf_session_attach_output (size:128b/16B) */
+struct hwrm_tf_session_attach_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session. This fw_session_id is unique to the attach
+	 * request.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/****************************
+ * hwrm_tf_session_register *
+ ****************************/
+
+
+/* hwrm_tf_session_register_input (size:704b/88B) */
+struct hwrm_tf_session_register_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal
-	 * processor, the order of writes has to be such that this field is
-	 * written last.
-	 */
-	uint8_t	valid;
-} __rte_packed;
-
-/************************
- * hwrm_tf_session_open *
- ************************/
-
-
-/* hwrm_tf_session_open_input (size:640b/80B) */
-struct hwrm_tf_session_open_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
-	 */
-	uint16_t	target_id;
-	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
+	 * Unique session identifier for the session that the
+	 * register request want to create a new client on. This
+	 * value originates from the first open request.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
 	 */
-	uint64_t	resp_addr;
-	/* Name of the session. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session client. */
+	uint8_t	session_client_name[64];
 } __rte_packed;
 
-/* hwrm_tf_session_open_output (size:128b/16B) */
-struct hwrm_tf_session_open_output {
+/* hwrm_tf_session_register_output (size:128b/16B) */
+struct hwrm_tf_session_register_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33420,12 +35303,11 @@ struct hwrm_tf_session_open_output {
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
 	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session.
+	 * Unique session client identifier for the session created
+	 * by the firmware. It includes the session the client is
+	 * attached to and the session client info.
 	 */
-	uint32_t	fw_session_id;
+	uint32_t	fw_session_client_id;
 	/* unused. */
 	uint8_t	unused0[3];
 	/*
@@ -33439,13 +35321,13 @@ struct hwrm_tf_session_open_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/**************************
- * hwrm_tf_session_attach *
- **************************/
+/******************************
+ * hwrm_tf_session_unregister *
+ ******************************/
 
 
-/* hwrm_tf_session_attach_input (size:704b/88B) */
-struct hwrm_tf_session_attach_input {
+/* hwrm_tf_session_unregister_input (size:192b/24B) */
+struct hwrm_tf_session_unregister_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -33475,24 +35357,19 @@ struct hwrm_tf_session_attach_input {
 	 */
 	uint64_t	resp_addr;
 	/*
-	 * Unique session identifier for the session that the attach
-	 * request want to attach to. This value originates from the
-	 * shared session memory that the attach request opened by
-	 * way of the 'attach name' that was passed in to the core
-	 * attach API.
-	 * The fw_session_id of the attach session includes PCIe bus
-	 * info to distinguish the PF and session info to identify
-	 * the associated TruFlow session.
+	 * Unique session identifier for the session that the
+	 * unregister request wants to close a session client on.
 	 */
-	uint32_t	attach_fw_session_id;
-	/* unused. */
-	uint32_t	unused0;
-	/* Name of the session it self. */
-	uint8_t	session_name[64];
+	uint32_t	fw_session_id;
+	/*
+	 * Unique session client identifier for the session that the
+	 * unregister request wants to close.
+	 */
+	uint32_t	fw_session_client_id;
 } __rte_packed;
 
-/* hwrm_tf_session_attach_output (size:128b/16B) */
-struct hwrm_tf_session_attach_output {
+/* hwrm_tf_session_unregister_output (size:128b/16B) */
+struct hwrm_tf_session_unregister_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -33501,16 +35378,8 @@ struct hwrm_tf_session_attach_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	/*
-	 * Unique session identifier for the session created by the
-	 * firmware. It includes PCIe bus info to distinguish the PF
-	 * and session info to identify the associated TruFlow
-	 * session. This fw_session_id is unique to the attach
-	 * request.
-	 */
-	uint32_t	fw_session_id;
 	/* unused. */
-	uint8_t	unused0[3];
+	uint8_t	unused0[7];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33746,15 +35615,17 @@ struct hwrm_tf_session_resc_qcaps_input {
 	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided qcaps_addr
+	 * Defines the size of the provided qcaps_addr array
 	 * buffer. The size should be set to the Resource Manager
-	 * provided max qcaps value that is device specific. This is
-	 * the max size possible.
+	 * provided max number of qcaps entries which is device
+	 * specific. Resource Manager gets the max size from HCAPI
+	 * RM.
 	 */
-	uint16_t	size;
+	uint16_t	qcaps_size;
 	/*
-	 * This is the DMA address for the qcaps output data
-	 * array. Array is of tf_rm_cap type and is device specific.
+	 * This is the DMA address for the qcaps output data array
+	 * buffer. Array is of tf_rm_resc_req_entry type and is
+	 * device specific.
 	 */
 	uint64_t	qcaps_addr;
 } __rte_packed;
@@ -33772,29 +35643,28 @@ struct hwrm_tf_session_resc_qcaps_output {
 	/* Control flags. */
 	uint32_t	flags;
 	/* Session reservation strategy. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_SFT \
 		0
 	/* Static partitioning. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_STATIC \
 		UINT32_C(0x0)
 	/* Strategy 1. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_1 \
 		UINT32_C(0x1)
 	/* Strategy 2. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_2 \
 		UINT32_C(0x2)
 	/* Strategy 3. */
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3 \
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3 \
 		UINT32_C(0x3)
-	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_LAST \
-		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_3
 	/*
-	 * Size of the returned tf_rm_cap data array. The value
-	 * cannot exceed the size defined by the input msg. The data
-	 * array is returned using the qcaps_addr specified DMA
-	 * address also provided by the input msg.
+	 * Size of the returned qcaps_addr data array buffer. The
+	 * value cannot exceed the size defined by the input msg,
+	 * qcaps_size.
 	 */
 	uint16_t	size;
 	/* unused. */
@@ -33817,7 +35687,7 @@ struct hwrm_tf_session_resc_qcaps_output {
  ******************************/
 
 
-/* hwrm_tf_session_resc_alloc_input (size:256b/32B) */
+/* hwrm_tf_session_resc_alloc_input (size:320b/40B) */
 struct hwrm_tf_session_resc_alloc_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -33860,16 +35730,25 @@ struct hwrm_tf_session_resc_alloc_input {
 	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_LAST \
 		HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX
 	/*
-	 * Defines the size, in bytes, of the provided num_addr
-	 * buffer.
+	 * Defines the array size of the provided req_addr and
+	 * resv_addr array buffers. Should be set to the number of
+	 * request entries.
 	 */
-	uint16_t	size;
+	uint16_t	req_size;
+	/*
+	 * This is the DMA address for the request input data array
+	 * buffer. Array is of tf_rm_resc_req_entry type. Size of the
+	 * array buffer is provided by the 'req_size' field in this
+	 * message.
+	 */
+	uint64_t	req_addr;
 	/*
-	 * This is the DMA address for the num input data array
-	 * buffer. Array is of tf_rm_num type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * This is the DMA address for the resc output data array
+	 * buffer. Array is of tf_rm_resc_entry type. Size of the array
+	 * buffer is provided by the 'req_size' field in this
+	 * message.
 	 */
-	uint64_t	num_addr;
+	uint64_t	resc_addr;
 } __rte_packed;
 
 /* hwrm_tf_session_resc_alloc_output (size:128b/16B) */
@@ -33882,8 +35761,15 @@ struct hwrm_tf_session_resc_alloc_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
+	/*
+	 * Size of the returned tf_rm_resc_entry data array. The value
+	 * cannot exceed the req_size defined by the input msg. The data
+	 * array is returned using the resv_addr specified DMA
+	 * address also provided by the input msg.
+	 */
+	uint16_t	size;
 	/* unused. */
-	uint8_t	unused0[7];
+	uint8_t	unused0[5];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM. This field should be read as '1'
@@ -33946,11 +35832,12 @@ struct hwrm_tf_session_resc_free_input {
 	 * Defines the size, in bytes, of the provided free_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	free_size;
 	/*
 	 * This is the DMA address for the free input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size field of this message.
+	 * buffer.  Array is of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'free_size' field of this
+	 * message.
 	 */
 	uint64_t	free_addr;
 } __rte_packed;
@@ -34029,11 +35916,12 @@ struct hwrm_tf_session_resc_flush_input {
 	 * Defines the size, in bytes, of the provided flush_addr
 	 * buffer.
 	 */
-	uint16_t	size;
+	uint16_t	flush_size;
 	/*
 	 * This is the DMA address for the flush input data array
-	 * buffer.  Array of tf_rm_res type. Size of the buffer is
-	 * provided by the 'size' field in this message.
+	 * buffer.  Array of tf_rm_resc_entry type. Size of the
+	 * buffer is provided by the 'flush_size' field in this
+	 * message.
 	 */
 	uint64_t	flush_addr;
 } __rte_packed;
@@ -34062,12 +35950,9 @@ struct hwrm_tf_session_resc_flush_output {
 } __rte_packed;
 
 /* TruFlow RM capability of a resource. */
-/* tf_rm_cap (size:64b/8B) */
-struct tf_rm_cap {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_req_entry (size:64b/8B) */
+struct tf_rm_resc_req_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Minimum value. */
 	uint16_t	min;
@@ -34075,25 +35960,10 @@ struct tf_rm_cap {
 	uint16_t	max;
 } __rte_packed;
 
-/* TruFlow RM number of a resource. */
-/* tf_rm_num (size:64b/8B) */
-struct tf_rm_num {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
-	uint32_t	type;
-	/* Number of resources. */
-	uint32_t	num;
-} __rte_packed;
-
 /* TruFlow RM reservation information. */
-/* tf_rm_res (size:64b/8B) */
-struct tf_rm_res {
-	/*
-	 * Type of the resource, defined globally in the
-	 * hwrm_tf_resc_type enum.
-	 */
+/* tf_rm_resc_entry (size:64b/8B) */
+struct tf_rm_resc_entry {
+	/* Type of the resource, defined globally in HCAPI RM. */
 	uint32_t	type;
 	/* Start offset. */
 	uint16_t	start;
@@ -34925,6 +36795,162 @@ struct hwrm_tf_ext_em_qcfg_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/*********************
+ * hwrm_tf_em_insert *
+ *********************/
+
+
+/* hwrm_tf_em_insert_input (size:832b/104B) */
+struct hwrm_tf_em_insert_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware Session Id. */
+	uint32_t	fw_session_id;
+	/* Control Flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit set to 0, then it indicates rx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates tx flow. */
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX
+	/* Reported match strength. */
+	uint16_t	strength;
+	/* Index to action. */
+	uint32_t	action_ptr;
+	/* Index of EM record. */
+	uint32_t	em_record_idx;
+	/* EM Key value. */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
+/* hwrm_tf_em_insert_output (size:128b/16B) */
+struct hwrm_tf_em_insert_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* EM record pointer index. */
+	uint16_t	rptr_index;
+	/* EM record offset 0~3. */
+	uint8_t	rptr_entry;
+	/* Number of word entries consumed by the key. */
+	uint8_t	num_of_entries;
+	/* unused. */
+	uint32_t	unused0;
+} __rte_packed;
+
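+/*
+ * Illustrative sketch only: a hypothetical TX-direction insert supplies
+ * the session, the key and its bit length, and the action record index
+ * ('fw_sid', 'key', 'key_bits' and 'act_idx' are placeholders).
+ *
+ *	struct hwrm_tf_em_insert_input req = { 0 };
+ *
+ *	req.fw_session_id = rte_cpu_to_le_32(fw_sid);
+ *	req.flags = rte_cpu_to_le_16(HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX);
+ *	req.action_ptr = rte_cpu_to_le_32(act_idx);
+ *	memcpy(req.em_key, key, sizeof(req.em_key));
+ *	req.em_key_bitlen = rte_cpu_to_le_16(key_bits);
+ */
+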
+/*********************
+ * hwrm_tf_em_delete *
+ *********************/
+
+
+/* hwrm_tf_em_delete_input (size:832b/104B) */
+struct hwrm_tf_em_delete_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Session Id. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit set to 0, then it indicates rx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates tx flow. */
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX
+	/* Unused0 */
+	uint16_t	unused0;
+	/* EM internal flow handle. */
+	uint64_t	flow_handle;
+	/* EM Key value */
+	uint64_t	em_key[8];
+	/* Number of bits in em_key. */
+	uint16_t	em_key_bitlen;
+	/* unused. */
+	uint16_t	unused1[3];
+} __rte_packed;
+
+/* hwrm_tf_em_delete_output (size:128b/16B) */
+struct hwrm_tf_em_delete_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Original stack allocation index. */
+	uint16_t	em_index;
+	/* unused. */
+	uint16_t	unused0[3];
+} __rte_packed;
+
 /********************
  * hwrm_tf_tcam_set *
  ********************/
@@ -35582,10 +37608,10 @@ struct ctx_hw_stats {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35598,10 +37624,10 @@ struct ctx_hw_stats {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35618,7 +37644,11 @@ struct ctx_hw_stats {
 	uint64_t	tpa_aborts;
 } __rte_packed;
 
-/* Periodic statistics context DMA to host. */
+/*
+ * Extended periodic statistics context DMA to host. On cards that
+ * support TPA v2, additional TPA related stats exist and can be retrieved
+ * by DMA of ctx_hw_stats_ext, rather than legacy ctx_hw_stats structure.
+ */
 /* ctx_hw_stats_ext (size:1344b/168B) */
 struct ctx_hw_stats_ext {
 	/* Number of received unicast packets */
@@ -35627,10 +37657,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	rx_mcast_pkts;
 	/* Number of received broadcast packets */
 	uint64_t	rx_bcast_pkts;
-	/* Number of discarded packets on received path */
+	/* Number of discarded packets on receive path */
 	uint64_t	rx_discard_pkts;
-	/* Number of dropped packets on received path */
-	uint64_t	rx_drop_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
 	/* Number of received bytes for multicast traffic */
@@ -35643,10 +37673,10 @@ struct ctx_hw_stats_ext {
 	uint64_t	tx_mcast_pkts;
 	/* Number of transmitted broadcast packets */
 	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
 	/* Number of discarded packets on transmit path */
 	uint64_t	tx_discard_pkts;
-	/* Number of dropped packets on transmit path */
-	uint64_t	tx_drop_pkts;
 	/* Number of transmitted bytes for unicast traffic */
 	uint64_t	tx_ucast_bytes;
 	/* Number of transmitted bytes for multicast traffic */
@@ -35912,7 +37942,14 @@ struct hwrm_stat_ctx_query_input {
 	uint64_t	resp_addr;
 	/* ID of the statistics context that is being queried. */
 	uint32_t	stat_ctx_id;
-	uint8_t	unused_0[4];
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_STAT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK     UINT32_C(0x1)
+	uint8_t	unused_0[3];
 } __rte_packed;
 
 /* hwrm_stat_ctx_query_output (size:1408b/176B) */
@@ -35949,7 +37986,7 @@ struct hwrm_stat_ctx_query_output {
 	uint64_t	rx_bcast_pkts;
 	/* Number of received packets with error */
 	uint64_t	rx_err_pkts;
-	/* Number of dropped packets on received path */
+	/* Number of dropped packets on receive path */
 	uint64_t	rx_drop_pkts;
 	/* Number of received bytes for unicast traffic */
 	uint64_t	rx_ucast_bytes;
@@ -35976,6 +38013,117 @@ struct hwrm_stat_ctx_query_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/***************************
+ * hwrm_stat_ext_ctx_query *
+ ***************************/
+
+
+/* hwrm_stat_ext_ctx_query_input (size:192b/24B) */
+struct hwrm_stat_ext_ctx_query_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* ID of the extended statistics context that is being queried. */
+	uint32_t	stat_ctx_id;
+	uint8_t	flags;
+	/*
+	 * This bit is set to 1 when request is for a counter mask,
+	 * representing the width of each of the stats counters, rather
+	 * than counters themselves.
+	 */
+	#define HWRM_STAT_EXT_CTX_QUERY_INPUT_FLAGS_COUNTER_MASK \
+		UINT32_C(0x1)
+	uint8_t	unused_0[3];
+} __rte_packed;
+
+/* hwrm_stat_ext_ctx_query_output (size:1472b/184B) */
+struct hwrm_stat_ext_ctx_query_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of received unicast packets */
+	uint64_t	rx_ucast_pkts;
+	/* Number of received multicast packets */
+	uint64_t	rx_mcast_pkts;
+	/* Number of received broadcast packets */
+	uint64_t	rx_bcast_pkts;
+	/* Number of discarded packets on receive path */
+	uint64_t	rx_discard_pkts;
+	/* Number of packets on receive path with error */
+	uint64_t	rx_error_pkts;
+	/* Number of received bytes for unicast traffic */
+	uint64_t	rx_ucast_bytes;
+	/* Number of received bytes for multicast traffic */
+	uint64_t	rx_mcast_bytes;
+	/* Number of received bytes for broadcast traffic */
+	uint64_t	rx_bcast_bytes;
+	/* Number of transmitted unicast packets */
+	uint64_t	tx_ucast_pkts;
+	/* Number of transmitted multicast packets */
+	uint64_t	tx_mcast_pkts;
+	/* Number of transmitted broadcast packets */
+	uint64_t	tx_bcast_pkts;
+	/* Number of packets on transmit path with error */
+	uint64_t	tx_error_pkts;
+	/* Number of discarded packets on transmit path */
+	uint64_t	tx_discard_pkts;
+	/* Number of transmitted bytes for unicast traffic */
+	uint64_t	tx_ucast_bytes;
+	/* Number of transmitted bytes for multicast traffic */
+	uint64_t	tx_mcast_bytes;
+	/* Number of transmitted bytes for broadcast traffic */
+	uint64_t	tx_bcast_bytes;
+	/* Number of TPA eligible packets */
+	uint64_t	rx_tpa_eligible_pkt;
+	/* Number of TPA eligible bytes */
+	uint64_t	rx_tpa_eligible_bytes;
+	/* Number of TPA packets */
+	uint64_t	rx_tpa_pkt;
+	/* Number of TPA bytes */
+	uint64_t	rx_tpa_bytes;
+	/* Number of TPA errors */
+	uint64_t	rx_tpa_errors;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /***************************
  * hwrm_stat_ctx_eng_query *
  ***************************/
@@ -37565,6 +39713,13 @@ struct hwrm_nvm_install_update_input {
 	 */
 	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_ALLOWED_TO_DEFRAG \
 		UINT32_C(0x4)
+	/*
+	 * If set to 1, FW will verify the package in the "UPDATE" NVM item
+	 * without installing it. This flag is for FW internal use only.
+	 * Users should not set this flag. The request will otherwise fail.
+	 */
+	#define HWRM_NVM_INSTALL_UPDATE_INPUT_FLAGS_VERIFY_ONLY \
+		UINT32_C(0x8)
 	uint8_t	unused_0[2];
 } __rte_packed;
 
@@ -38115,6 +40270,72 @@ struct hwrm_nvm_validate_option_cmd_err {
 	uint8_t	unused_0[7];
 } __rte_packed;
 
+/****************
+ * hwrm_oem_cmd *
+ ****************/
+
+
+/* hwrm_oem_cmd_input (size:1024b/128B) */
+struct hwrm_oem_cmd_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific command data. */
+	uint32_t	oem_data[26];
+} __rte_packed;
+
+/* hwrm_oem_cmd_output (size:768b/96B) */
+struct hwrm_oem_cmd_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	IANA;
+	uint32_t	unused_0;
+	/* This field contains the vendor specific response data. */
+	uint32_t	oem_data[18];
+	uint8_t	unused_1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /*****************
  * hwrm_fw_reset *
  ******************/
@@ -38338,6 +40559,55 @@ struct hwrm_port_ts_query_output {
 	uint8_t		valid;
 } __rte_packed;
 
+/*
+ * This structure is fixed at the beginning of the ChiMP SRAM (GRC
+ * offset: 0x31001F0). Host software is expected to read from this
+ * location for a defined signature. If it exists, the software can
+ * assume the presence of this structure and the validity of the
+ * FW_STATUS location in the next field.
+ */
+/* hcomm_status (size:64b/8B) */
+struct hcomm_status {
+	uint32_t	sig_ver;
+	/*
+	 * This field defines the version of the structure. The latest
+	 * version value is 1.
+	 */
+	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
+	#define HCOMM_STATUS_VER_SFT		0
+	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
+	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
+	/*
+	 * This field is to store the signature value to indicate the
+	 * presence of the structure.
+	 */
+	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
+	#define HCOMM_STATUS_SIGNATURE_SFT	8
+	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
+	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
+	uint32_t	fw_status_loc;
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
+	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
+	/* PCIE configuration space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
+	/* GRC space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
+	/* BAR0 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
+	/* BAR1 space */
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
+	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
+		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
+	/*
+	 * This is the offset where the fw_status register is located.
+	 * The value is generally 4-byte aligned.
+	 */
+	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
+	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
+} __rte_packed;
+/* This is the GRC offset where the hcomm_status struct resides. */
+#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
+
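+/*
+ * Illustrative decode only: after reading the structure at
+ * HCOMM_STATUS_STRUCT_LOC and checking the signature, a hypothetical
+ * reader recovers the fw_status location from fw_status_loc as
+ *
+ *	space  = loc & HCOMM_STATUS_TRUE_ADDR_SPACE_MASK;
+ *	offset = loc & HCOMM_STATUS_TRUE_OFFSET_MASK;
+ */
+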
 /**************************
  * hwrm_cfa_counter_qcaps *
  **************************/
@@ -38622,53 +40892,4 @@ struct hwrm_cfa_counter_qstats_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/*
- * This structure is fixed at the beginning of the ChiMP SRAM (GRC
- * offset: 0x31001F0). Host software is expected to read from this
- * location for a defined signature. If it exists, the software can
- * assume the presence of this structure and the validity of the
- * FW_STATUS location in the next field.
- */
-/* hcomm_status (size:64b/8B) */
-struct hcomm_status {
-	uint32_t	sig_ver;
-	/*
-	 * This field defines the version of the structure. The latest
-	 * version value is 1.
-	 */
-	#define HCOMM_STATUS_VER_MASK		UINT32_C(0xff)
-	#define HCOMM_STATUS_VER_SFT		0
-	#define HCOMM_STATUS_VER_LATEST		UINT32_C(0x1)
-	#define HCOMM_STATUS_VER_LAST		HCOMM_STATUS_VER_LATEST
-	/*
-	 * This field is to store the signature value to indicate the
-	 * presence of the structure.
-	 */
-	#define HCOMM_STATUS_SIGNATURE_MASK	UINT32_C(0xffffff00)
-	#define HCOMM_STATUS_SIGNATURE_SFT	8
-	#define HCOMM_STATUS_SIGNATURE_VAL	(UINT32_C(0x484353) << 8)
-	#define HCOMM_STATUS_SIGNATURE_LAST	HCOMM_STATUS_SIGNATURE_VAL
-	uint32_t	fw_status_loc;
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_MASK	UINT32_C(0x3)
-	#define HCOMM_STATUS_TRUE_ADDR_SPACE_SFT	0
-	/* PCIE configuration space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_PCIE_CFG	UINT32_C(0x0)
-	/* GRC space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_GRC	UINT32_C(0x1)
-	/* BAR0 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR0	UINT32_C(0x2)
-	/* BAR1 space */
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1	UINT32_C(0x3)
-	#define HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_LAST	\
-		HCOMM_STATUS_FW_STATUS_LOC_ADDR_SPACE_BAR1
-	/*
-	 * This offset where the fw_status register is located. The value
-	 * is generally 4-byte aligned.
-	 */
-	#define HCOMM_STATUS_TRUE_OFFSET_MASK		UINT32_C(0xfffffffc)
-	#define HCOMM_STATUS_TRUE_OFFSET_SFT		2
-} __rte_packed;
-/* This is the GRC offset where the hcomm_status struct resides. */
-#define HCOMM_STATUS_STRUCT_LOC		0x31001F0UL
-
 #endif /* _HSI_STRUCT_DEF_DPDK_H_ */
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 341909573..439950e02 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -86,6 +86,7 @@ struct tf_tbl_type_get_output;
 struct tf_em_internal_insert_input;
 struct tf_em_internal_insert_output;
 struct tf_em_internal_delete_input;
+struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -949,6 +950,8 @@ typedef struct tf_em_internal_insert_output {
 	uint16_t			 rptr_index;
 	/* EM record offset 0~3 */
 	uint8_t			  rptr_entry;
+	/* Number of word entries consumed by the key */
+	uint8_t			  num_of_entries;
 } tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
 
 /* Input params for EM INTERNAL rule delete */
@@ -969,4 +972,10 @@ typedef struct tf_em_internal_delete_input {
 	uint16_t			 em_key_bitlen;
 } tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
 
+/* Output params for EM INTERNAL rule delete */
+typedef struct tf_em_internal_delete_output {
+	/* Original stack allocation index */
+	uint16_t			 em_index;
+} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
index e5abcc2f2..b1fd2cd43 100644
--- a/drivers/net/bnxt/tf_core/lookup3.h
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -152,7 +152,6 @@ static inline uint32_t hashword(const uint32_t *k,
 		final(a, b, c);
 		/* Falls through. */
 	case 0:	    /* case 0: nothing left to add */
-		/* FALLTHROUGH */
 		break;
 	}
 	/*------------------------------------------------- report the result */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
index 9cfbd244f..954806377 100644
--- a/drivers/net/bnxt/tf_core/stack.c
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -27,6 +27,14 @@ stack_init(int num_entries, uint32_t *items, struct stack *st)
 	return 0;
 }
 
+/*
+ * Return the address of the items
+ */
+uint32_t *stack_items(struct stack *st)
+{
+	return st->items;
+}
+
 /* Return the size of the stack
  */
 int32_t
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
index ebd055592..6732e0313 100644
--- a/drivers/net/bnxt/tf_core/stack.h
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -36,6 +36,16 @@ int stack_init(int num_entries,
 	       uint32_t *items,
 	       struct stack *st);
 
+/** Return the address of the stack contents
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    pointer to the stack contents
+ */
+uint32_t *stack_items(struct stack *st);
+
 /** Return the size of the stack
  *
  *  [in] st
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index cf9f36adb..1f6c33ab5 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -45,6 +45,100 @@ static void tf_seeds_init(struct tf_session *session)
 	}
 }
 
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   number of entries to write
+ *
+ * Return:
+ *  0       - Success, pool created and filled with indexes
+ *  -ENOMEM - Failure, pool memory could not be allocated
+ *  -EINVAL - Failure, stack could not be initialized or filled
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(-ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ *
+ * Return:
+ *   none
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	tfp_free(ptr);
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -54,6 +148,7 @@ tf_open_session(struct tf                    *tfp,
 	struct tfp_calloc_parms alloc_parms;
 	unsigned int domain, bus, slot, device;
 	uint8_t fw_session_id;
+	int dir;
 
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
@@ -110,7 +205,7 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup;
 	}
 
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
+	tfp->session = alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -175,6 +270,16 @@ tf_open_session(struct tf                    *tfp,
 	/* Setup hash seeds */
 	tf_seeds_init(session);
 
+	/* Initialize EM pool */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EM Pool initialization failed\n");
+			goto cleanup_close;
+		}
+	}
+
 	session->ref_count++;
 
 	/* Return session ID */
@@ -239,6 +344,7 @@ tf_close_session(struct tf *tfp)
 	int rc_close = 0;
 	struct tf_session *tfs;
 	union tf_session_id session_id;
+	int dir;
 
 	if (tfp == NULL || tfp->session == NULL)
 		return -EINVAL;
@@ -268,6 +374,10 @@ tf_close_session(struct tf *tfp)
 
 	/* Final cleanup as we're last user of the session */
 	if (tfs->ref_count == 0) {
+		/* Free EM pool */
+		for (dir = 0; dir < TF_DIR_MAX; dir++)
+			tf_free_em_pool(tfs, dir);
+
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
 		tfp->session = NULL;
@@ -301,16 +411,25 @@ int tf_insert_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
+					 (tfp->session->core_data),
+					 parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
 	/* Process the EM entry per Table Scope type */
-	return tf_insert_eem_entry((struct tf_session *)tfp->session->core_data,
-				   tbl_scope_cb,
-				   parms);
+	if (parms->mem == TF_MEM_EXTERNAL) {
+		/* External EEM */
+		return tf_insert_eem_entry((struct tf_session *)
+					   (tfp->session->core_data),
+					   tbl_scope_cb,
+					   parms);
+	} else if (parms->mem == TF_MEM_INTERNAL) {
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp,	parms);
+	}
+
+	return -EINVAL;
 }
 
 /** Delete EM hash entry API
@@ -327,13 +446,16 @@ int tf_delete_em_entry(struct tf *tfp,
 	if (tfp == NULL || parms == NULL)
 		return -EINVAL;
 
-	tbl_scope_cb =
-		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
-				  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
+					 (tfp->session->core_data),
+					 parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL)
 		return -EINVAL;
 
-	return tf_delete_eem_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tfp, parms);
+	else
+		return tf_delete_em_internal_entry(tfp, parms);
 }
 
 /** allocate identifier resource
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 1eedd80e7..81ff7602f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -44,44 +44,7 @@ enum tf_mem {
 };
 
 /**
- * The size of the external action record (Wh+/Brd2)
- *
- * Currently set to 512.
- *
- * AR (16B) + encap (256B) + stats_ptrs (8) + resvd (8)
- * + stats (16) = 304 aligned on a 16B boundary
- *
- * Theoretically, the size should be smaller. ~304B
- */
-#define TF_ACTION_RECORD_SZ 512
-
-/**
- * External pool size
- *
- * Defines a single pool of external action records of
- * fixed size.  Currently, this is an index.
- */
-#define TF_EXT_POOL_ENTRY_SZ_BYTES 1
-
-/**
- *  External pool entry count
- *
- *  Defines the number of entries in the external action pool
- */
-#define TF_EXT_POOL_ENTRY_CNT (1 * 1024)
-
-/**
- * Number of external pools
- */
-#define TF_EXT_POOL_CNT_MAX 1
-
-/**
- * External pool Id
- */
-#define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
-#define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
-
-/** EEM record AR helper
+ * EEM record AR helper
  *
  * Helper to handle the Action Record Pointer in the EEM Record Entry.
  *
@@ -109,7 +72,8 @@ enum tf_mem {
  */
 
 
-/** Session Version defines
+/**
+ * Session Version defines
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -119,7 +83,8 @@ enum tf_mem {
 #define TF_SESSION_VER_MINOR  0   /**< Minor Version */
 #define TF_SESSION_VER_UPDATE 0   /**< Update Version */
 
-/** Session Name
+/**
+ * Session Name
  *
  * Name of the TruFlow control channel interface.  Expects
  * format to be RTE Name specific, i.e. rte_eth_dev_get_name_by_port()
@@ -128,7 +93,8 @@ enum tf_mem {
 
 #define TF_FW_SESSION_ID_INVALID  0xFF  /**< Invalid FW Session ID define */
 
-/** Session Identifier
+/**
+ * Session Identifier
  *
  * Unique session identifier which includes PCIe bus info to
  * distinguish the PF and session info to identify the associated
@@ -146,7 +112,8 @@ union tf_session_id {
 	} internal;
 };
 
-/** Session Version
+/**
+ * Session Version
  *
  * The version controls the format of the tf_session and
  * tf_session_info structure. This is to assure upgrade between
@@ -160,8 +127,8 @@ struct tf_session_version {
 	uint8_t update;
 };
 
-/** Session supported device types
- *
+/**
+ * Session supported device types
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
@@ -171,6 +138,147 @@ enum tf_device_type {
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
+/** Identifier resource types
+ */
+enum tf_identifier_type {
+	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	 *  and can be used in WC TCAM or EM keys to virtualize further
+	 *  lookups.
+	 */
+	TF_IDENT_TYPE_L2_CTXT,
+	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	 *  to enable virtualization of the profile TCAM.
+	 */
+	TF_IDENT_TYPE_PROF_FUNC,
+	/** The WC profile ID is included in the WC lookup key
+	 *  to enable virtualization of the WC TCAM hardware.
+	 */
+	TF_IDENT_TYPE_WC_PROF,
+	/** The EM profile ID is included in the EM lookup key
+	 *  to enable virtualization of the EM hardware. (not required for SR2
+	 *  as it has table scope)
+	 */
+	TF_IDENT_TYPE_EM_PROF,
+	/** The L2 func is included in the ILT result and from recycling to
+	 *  enable virtualization of further lookups.
+	 */
+	TF_IDENT_TYPE_L2_FUNC,
+	TF_IDENT_TYPE_MAX
+};
+
+/**
+ * Enumeration of TruFlow table types. A table type is used to identify a
+ * resource object.
+ *
+ * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
+ * the only table type that is connected with a table scope.
+ */
+enum tf_tbl_type {
+	/* Internal */
+
+	/** Wh+/SR Action Record */
+	TF_TBL_TYPE_FULL_ACT_RECORD,
+	/** Wh+/SR/Th Multicast Groups */
+	TF_TBL_TYPE_MCAST_GROUPS,
+	/** Wh+/SR Action Encap 8 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_8B,
+	/** Wh+/SR Action Encap 16 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_16B,
+	/** Action Encap 32 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_32B,
+	/** Wh+/SR Action Encap 64 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_64B,
+	/** Action Source Properties SMAC */
+	TF_TBL_TYPE_ACT_SP_SMAC,
+	/** Wh+/SR Action Source Properties SMAC IPv4 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	/** Action Source Properties SMAC IPv6 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
+	/** Wh+/SR Action Statistics 64 Bits */
+	TF_TBL_TYPE_ACT_STATS_64,
+	/** Wh+/SR Action Modify L4 Src Port */
+	TF_TBL_TYPE_ACT_MODIFY_SPORT,
+	/** Wh+/SR Action Modify L4 Dest Port */
+	TF_TBL_TYPE_ACT_MODIFY_DPORT,
+	/** Wh+/SR Action Modify IPv4 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
+	/** Wh+/SR Action Modify IPv4 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
+	/** Action Modify IPv6 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
+	/** Action Modify IPv6 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
+	/** Meter Profiles */
+	TF_TBL_TYPE_METER_PROF,
+	/** Meter Instance */
+	TF_TBL_TYPE_METER_INST,
+	/** Mirror Config */
+	TF_TBL_TYPE_MIRROR_CONFIG,
+	/** UPAR */
+	TF_TBL_TYPE_UPAR,
+	/** SR2 Epoch 0 table */
+	TF_TBL_TYPE_EPOCH0,
+	/** SR2 Epoch 1 table  */
+	TF_TBL_TYPE_EPOCH1,
+	/** SR2 Metadata  */
+	TF_TBL_TYPE_METADATA,
+	/** SR2 CT State  */
+	TF_TBL_TYPE_CT_STATE,
+	/** SR2 Range Profile  */
+	TF_TBL_TYPE_RANGE_PROF,
+	/** SR2 Range Entry  */
+	TF_TBL_TYPE_RANGE_ENTRY,
+	/** SR2 LAG Entry  */
+	TF_TBL_TYPE_LAG,
+	/** SR2 VNIC/SVIF Table */
+	TF_TBL_TYPE_VNIC_SVIF,
+	/** Th/SR2 EM Flexible Key builder */
+	TF_TBL_TYPE_EM_FKB,
+	/** Th/SR2 WC Flexible Key builder */
+	TF_TBL_TYPE_WC_FKB,
+
+	/* External */
+
+	/** External table type - initially 1 poolsize entries.
+	 * All External table types are associated with a table
+	 * scope. Internal types are not.
+	 */
+	TF_TBL_TYPE_EXT,
+	TF_TBL_TYPE_MAX
+};
+
+/**
+ * TCAM table type
+ */
+enum tf_tcam_tbl_type {
+	/** L2 Context TCAM */
+	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	/** Profile TCAM */
+	TF_TCAM_TBL_TYPE_PROF_TCAM,
+	/** Wildcard TCAM */
+	TF_TCAM_TBL_TYPE_WC_TCAM,
+	/** Source Properties TCAM */
+	TF_TCAM_TBL_TYPE_SP_TCAM,
+	/** Connection Tracking Rule TCAM */
+	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
+	/** Virtual Edge Bridge TCAM */
+	TF_TCAM_TBL_TYPE_VEB_TCAM,
+	TF_TCAM_TBL_TYPE_MAX
+};
+
+/**
+ * EM Resources
+ * These defines are provisioned during
+ * tf_open_session()
+ */
+enum tf_em_tbl_type {
+	/** The number of internal EM records for the session */
+	TF_EM_TBL_TYPE_EM_RECORD,
+	/** The number of table scopes requested */
+	TF_EM_TBL_TYPE_TBL_SCOPE,
+	TF_EM_TBL_TYPE_MAX
+};
+
 /** TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
@@ -309,6 +417,30 @@ struct tf_open_session_parms {
 	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
 	 */
 	enum tf_device_type device_type;
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
 };
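
/*
 * Illustrative sketch (not part of the patch, counts arbitrary): a
 * caller would size the session by indexing the arrays above with the
 * new enums before calling tf_open_session(), e.g.
 *
 *   struct tf_open_session_parms parms = { 0 };
 *
 *   parms.device_type = TF_DEVICE_TYPE_WH;
 *   parms.identifer_cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
 *   parms.tbl_cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 1024;
 *   parms.tcam_tbl_cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 64;
 *   parms.em_tbl_cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;
 *   rc = tf_open_session(tfp, &parms);
 */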
 
 /**
@@ -417,31 +549,6 @@ int tf_close_session(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
-	 *  and can be used in WC TCAM or EM keys to virtualize further
-	 *  lookups.
-	 */
-	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
-	 *  to enable virtualization of the profile TCAM.
-	 */
-	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
-	 *  to enable virtualization of the WC TCAM hardware.
-	 */
-	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
-	 *  to enable virtualization of the EM hardware. (not required for Brd4
-	 *  as it has table scope)
-	 */
-	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
-	 *  enable virtualization of further lookups.
-	 */
-	TF_IDENT_TYPE_L2_FUNC
-};
-
 /** tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
@@ -631,19 +738,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
 
-/**
- * TCAM table type
- */
-enum tf_tcam_tbl_type {
-	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	TF_TCAM_TBL_TYPE_PROF_TCAM,
-	TF_TCAM_TBL_TYPE_WC_TCAM,
-	TF_TCAM_TBL_TYPE_SP_TCAM,
-	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
-	TF_TCAM_TBL_TYPE_VEB_TCAM,
-	TF_TCAM_TBL_TYPE_MAX
-
-};
 
 /**
  * @page tcam TCAM Access
@@ -813,7 +907,8 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** get TCAM entry
+/*
+ * get TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -824,7 +919,8 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/** tf_free_tcam_entry parameter definition
+/*
+ * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
 	/**
@@ -845,8 +941,7 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free TCAM entry
- *
+/*
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -873,84 +968,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  */
 
 /**
- * Enumeration of TruFlow table types. A table type is used to identify a
- * resource object.
- *
- * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
- * the only table type that is connected with a table scope.
- */
-enum tf_tbl_type {
-	/** Wh+/Brd2 Action Record */
-	TF_TBL_TYPE_FULL_ACT_RECORD,
-	/** Multicast Groups */
-	TF_TBL_TYPE_MCAST_GROUPS,
-	/** Action Encap 8 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_8B,
-	/** Action Encap 16 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_16B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_32B,
-	/** Action Encap 64 Bytes */
-	TF_TBL_TYPE_ACT_ENCAP_64B,
-	/** Action Source Properties SMAC */
-	TF_TBL_TYPE_ACT_SP_SMAC,
-	/** Action Source Properties SMAC IPv4 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
-	/** Action Source Properties SMAC IPv6 */
-	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
-	/** Action Statistics 64 Bits */
-	TF_TBL_TYPE_ACT_STATS_64,
-	/** Action Modify L4 Src Port */
-	TF_TBL_TYPE_ACT_MODIFY_SPORT,
-	/** Action Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_DPORT,
-	/** Action Modify IPv4 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
-	/** Action _Modify L4 Dest Port */
-	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
-
-	/* HW */
-
-	/** Meter Profiles */
-	TF_TBL_TYPE_METER_PROF,
-	/** Meter Instance */
-	TF_TBL_TYPE_METER_INST,
-	/** Mirror Config */
-	TF_TBL_TYPE_MIRROR_CONFIG,
-	/** UPAR */
-	TF_TBL_TYPE_UPAR,
-	/** Brd4 Epoch 0 table */
-	TF_TBL_TYPE_EPOCH0,
-	/** Brd4 Epoch 1 table  */
-	TF_TBL_TYPE_EPOCH1,
-	/** Brd4 Metadata  */
-	TF_TBL_TYPE_METADATA,
-	/** Brd4 CT State  */
-	TF_TBL_TYPE_CT_STATE,
-	/** Brd4 Range Profile  */
-	TF_TBL_TYPE_RANGE_PROF,
-	/** Brd4 Range Entry  */
-	TF_TBL_TYPE_RANGE_ENTRY,
-	/** Brd4 LAG Entry  */
-	TF_TBL_TYPE_LAG,
-	/** Brd4 only VNIC/SVIF Table */
-	TF_TBL_TYPE_VNIC_SVIF,
-
-	/* External */
-
-	/** External table type - initially 1 poolsize entries.
-	 * All External table types are associated with a table
-	 * scope. Internal types are not.
-	 */
-	TF_TBL_TYPE_EXT,
-	TF_TBL_TYPE_MAX
-};
-
-/** tf_alloc_tbl_entry parameter definition
+ * tf_alloc_tbl_entry parameter definition
  */
 struct tf_alloc_tbl_entry_parms {
 	/**
@@ -993,7 +1011,8 @@ struct tf_alloc_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** allocate index table entries
+/**
+ * allocate index table entries
  *
  * Internal types:
  *
@@ -1023,7 +1042,8 @@ struct tf_alloc_tbl_entry_parms {
 int tf_alloc_tbl_entry(struct tf *tfp,
 		       struct tf_alloc_tbl_entry_parms *parms);
 
-/** tf_free_tbl_entry parameter definition
+/**
+ * tf_free_tbl_entry parameter definition
  */
 struct tf_free_tbl_entry_parms {
 	/**
@@ -1049,7 +1069,8 @@ struct tf_free_tbl_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/** free index table entry
+/**
+ * free index table entry
  *
  * Used to free a previously allocated table entry.
  *
@@ -1075,7 +1096,8 @@ struct tf_free_tbl_entry_parms {
 int tf_free_tbl_entry(struct tf *tfp,
 		      struct tf_free_tbl_entry_parms *parms);
 
-/** tf_set_tbl_entry parameter definition
+/**
+ * tf_set_tbl_entry parameter definition
  */
 struct tf_set_tbl_entry_parms {
 	/**
@@ -1104,7 +1126,8 @@ struct tf_set_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** set index table entry
+/**
+ * set index table entry
  *
  * Used to insert an application programmed index table entry into a
  * previous allocated table location.  A shadow copy of the table
@@ -1115,7 +1138,8 @@ struct tf_set_tbl_entry_parms {
 int tf_set_tbl_entry(struct tf *tfp,
 		     struct tf_set_tbl_entry_parms *parms);
 
-/** tf_get_tbl_entry parameter definition
+/**
+ * tf_get_tbl_entry parameter definition
  */
 struct tf_get_tbl_entry_parms {
 	/**
@@ -1140,7 +1164,8 @@ struct tf_get_tbl_entry_parms {
 	uint32_t idx;
 };
 
-/** get index table entry
+/**
+ * get index table entry
  *
  * Used to retrieve a previous set index table entry.
  *
@@ -1163,7 +1188,8 @@ int tf_get_tbl_entry(struct tf *tfp,
  * @ref tf_search_em_entry
  *
  */
-/** tf_insert_em_entry parameter definition
+/**
+ * tf_insert_em_entry parameter definition
  */
 struct tf_insert_em_entry_parms {
 	/**
@@ -1239,6 +1265,10 @@ struct tf_delete_em_entry_parms {
 	 * 2 element array with 2 ids. (Brd4 only)
 	 */
 	uint16_t *epochs;
+	/**
+	 * [out] The index of the entry
+	 */
+	uint16_t index;
 	/**
 	 * [in] structure containing flow delete handle information
 	 */
@@ -1291,7 +1321,8 @@ struct tf_search_em_entry_parms {
 	uint64_t flow_handle;
 };
 
-/** insert em hash entry in internal table memory
+/**
+ * insert em hash entry in internal table memory
  *
  * Internal:
  *
@@ -1328,7 +1359,8 @@ struct tf_search_em_entry_parms {
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms);
 
-/** delete em hash entry table memory
+/**
+ * delete em hash entry table memory
  *
  * Internal:
  *
@@ -1353,7 +1385,8 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms);
 
-/** search em hash entry table memory
+/**
+ * search em hash entry table memory
  *
  * Internal:
 
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index bd8e2ba8a..fd1797e39 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -287,7 +287,7 @@ static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
 }
 
 static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t	       *in_key,
+				    uint8_t *in_key,
 				    struct tf_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
@@ -308,7 +308,7 @@ static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
  * EEXIST  - Key does exist in table at "index" in table "table".
  * TF_ERR     - Something went horribly wrong.
  */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 					  enum tf_dir dir,
 					  struct tf_eem_64b_entry *entry,
 					  uint32_t key0_hash,
@@ -368,8 +368,8 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session	   *session,
-			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
@@ -457,6 +457,96 @@ int tf_insert_eem_entry(struct tf_session	   *session,
 	return -EINVAL;
 }
 
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success
+ */
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms)
+{
+	int       rc;
+	uint32_t  gfid;
+	uint16_t  rptr_index = 0;
+	uint8_t   rptr_entry = 0;
+	uint8_t   num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG
+		   (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc != 0)
+		return -1;
+
+	PMD_DRV_LOG
+		   (ERR,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0
+ * -EINVAL
+ */
+int tf_delete_em_internal_entry(struct tf *tfp,
+				struct tf_delete_em_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
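
/*
 * Note on the index round trip in the two functions above: insert pops
 * a free pool index and programs the record at
 * rptr_index = index * TF_SESSION_EM_ENTRY_SIZE, while delete gets the
 * record index back in parms->index and pushes
 * parms->index / TF_SESSION_EM_ENTRY_SIZE, so the index value is
 * returned to the pool and reused by a later insert.
 */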
+
+
 /** delete EEM hash entry API
  *
  * returns:
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 8a3584fbd..c1805df73 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -12,6 +12,20 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+/*
+ * Used to build GFID:
+ *
+ *   15           2  0
+ *  +--------------+--+
+ *  |   Index      |E |
+ *  +--------------+--+
+ *
+ * E = Entry (bucket index)
+ */
+#define TF_EM_INTERNAL_INDEX_SHIFT 2
+#define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
+#define TF_EM_INTERNAL_ENTRY_MASK  0x3
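
/*
 * Worked example (illustrative values): an EM record index of 5 in
 * bucket entry 2 packs as (5 << TF_EM_INTERNAL_INDEX_SHIFT) | 2 = 0x16;
 * the entry is recovered as 0x16 & TF_EM_INTERNAL_ENTRY_MASK = 2 and
 * the index as (0x16 & TF_EM_INTERNAL_INDEX_MASK) >> 2 = 5.
 */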
+
 /** EEM Entry header
  *
  */
@@ -53,6 +67,17 @@ struct tf_eem_64b_entry {
 	struct tf_eem_entry_hdr hdr;
 };
 
+/** EM Entry
+ *  Each EM entry is 512-bit (64-bytes) but ordered differently to
+ *  EEM.
+ */
+struct tf_em_64b_entry {
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+};
+
 /**
  * Allocates EEM Table scope
  *
@@ -106,9 +131,15 @@ int tf_insert_eem_entry(struct tf_session *session,
 			struct tf_tbl_scope_cb *tbl_scope_cb,
 			struct tf_insert_em_entry_parms *parms);
 
+int tf_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *parms);
+
 int tf_delete_eem_entry(struct tf *tfp,
 			struct tf_delete_em_entry_parms *parms);
 
+int tf_delete_em_internal_entry(struct tf                       *tfp,
+				struct tf_delete_em_entry_parms *parms);
+
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
index 417a99cda..1491539ca 100644
--- a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -90,6 +90,18 @@ do {									\
 		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
 } while (0)
 
+#define TF_GET_NUM_KEY_ENTRIES_FROM_FLOW_HANDLE(flow_handle,		\
+					  num_key_entries)		\
+	(num_key_entries =						\
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		     TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT))		\
+
+#define TF_GET_ENTRY_NUM_FROM_FLOW_HANDLE(flow_handle,		\
+					  entry_num)		\
+	(entry_num =						\
+		(((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT))		\
+
 /*
  * 32 bit Flow ID handlers
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index beecafdeb..554a8491d 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -16,6 +16,7 @@
 #include "tf_msg.h"
 #include "hsi_struct_def_dpdk.h"
 #include "hwrm_tf.h"
+#include "tf_em.h"
 
 /**
  * Endian converts min and max values from the HW response to the query
@@ -1013,15 +1014,94 @@ int tf_msg_em_cfg(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_insert_input req = { 0 };
+	struct tf_em_internal_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
+		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_INSERT,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends EM internal delete request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_em_internal_delete_input req = { 0 };
+	struct tf_em_internal_delete_output resp = { 0 };
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.tf_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_EM_RULE_DELETE,
+		 req,
+		resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 /**
  * Sends EM operation request to Firmware
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op)
+		 int dir,
+		 uint16_t op)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_input req = {0};
 	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
 	struct tfp_send_msg_parms parms = { 0 };
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 030d1881e..89f7370cc 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -121,6 +121,19 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends EM internal insert request to Firmware
+ */
+int tf_msg_insert_em_internal_entry(struct tf *tfp,
+				    struct tf_insert_em_entry_parms *params,
+				    uint16_t *rptr_index,
+				    uint8_t *rptr_entry,
+				    uint8_t *num_of_entries);
+/**
+ * Sends EM internal delete request to Firmware
+ */
+int tf_msg_delete_em_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *em_parms);
 /**
  * Sends EM mem register request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 50ef2d530..c9f4f8f04 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -13,12 +13,25 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "stack.h"
 
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
+/**
+ * Number of EM entries. Static for now; this will be removed
+ * once a parameter is added at a later date. At this stage we
+ * are using fixed size entries so that each stack entry
+ * represents 4 RT (f/n)blocks. So we take the total block
+ * allocation for truflow and divide that by 4.
+ */
+#define TF_SESSION_TOTAL_FN_BLOCKS (1024 * 8) /* 8K blocks */
+#define TF_SESSION_EM_ENTRY_SIZE 4 /* 4 blocks per entry */
+#define TF_SESSION_EM_POOL_SIZE \
+	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
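
/*
 * With the values above this evaluates to (1024 * 8) / 4 = 2048 pool
 * entries per direction, each entry standing for one 4-block EM record
 * slot.
 */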
+
 /** Session
  *
  * Shared memory containing private TruFlow session information.
@@ -289,6 +302,11 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+	/**
+	 * EM Pools
+	 */
+	struct stack em_pool[TF_DIR_MAX];
 };
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d900c9c09..dda72c3d5 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -156,7 +156,7 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
 		if (tfp_calloc(&parms) != 0)
 			goto cleanup;
 
-		tp->pg_pa_tbl[i] = (uint64_t)(uintptr_t)parms.mem_pa;
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
 		tp->pg_va_tbl[i] = parms.mem_va;
 
 		memset(tp->pg_va_tbl[i], 0, pg_size);
@@ -792,7 +792,8 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	index = parms->idx;
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1179,7 +1180,8 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1330,7 +1332,8 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
 			    parms->dir,
@@ -1801,3 +1804,91 @@ tf_free_tbl_entry(struct tf *tfp,
 			    rc);
 	return rc;
 }
+
+
+static void
+tf_dump_link_page_table(struct tf_em_page_tbl *tp,
+			struct tf_em_page_tbl *tp_next)
+{
+	uint64_t *pg_va;
+	uint32_t i;
+	uint32_t j;
+	uint32_t k = 0;
+
+	printf("pg_count:%d pg_size:0x%x\n",
+	       tp->pg_count,
+	       tp->pg_size);
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+		printf("\t%p\n", (void *)pg_va);
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
+			if (((pg_va[j] & 0x7) ==
+			     tfp_cpu_to_le_64(PTU_PTE_LAST |
+					      PTU_PTE_VALID)))
+				return;
+
+			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
+				printf("** Invalid entry **\n");
+				return;
+			}
+
+			if (++k >= tp_next->pg_count) {
+				printf("** Shouldn't get here **\n");
+				return;
+			}
+		}
+	}
+}
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
+
+void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
+{
+	struct tf_session      *session;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_page_tbl *tp;
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_table *tbl;
+	int i;
+	int j;
+	int dir;
+
+	printf("called %s\n", __func__);
+
+	/* find session struct */
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* find control block for table scope */
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR, "No table scope\n");
+		return;
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
+
+		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
+			printf
+	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
+			       j,
+			       tbl->type,
+			       tbl->num_entries,
+			       tbl->entry_size,
+			       tbl->num_lvl);
+			if (tbl->pg_tbl[0].pg_va_tbl &&
+			    tbl->pg_tbl[0].pg_pa_tbl)
+				printf("%p %p\n",
+			       tbl->pg_tbl[0].pg_va_tbl[0],
+			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
+			for (i = 0; i < tbl->num_lvl - 1; i++) {
+				printf("Level:%d\n", i);
+				tp = &tbl->pg_tbl[i];
+				tp_next = &tbl->pg_tbl[i + 1];
+				tf_dump_link_page_table(tp, tp_next);
+			}
+			printf("\n");
+		}
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index bdc6288ee..7a5443678 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -76,38 +76,51 @@ struct tf_tbl_scope_cb {
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Round-down other page sizes to the lower hardware page
+ * size supported.
  */
-#define BNXT_PAGE_SHIFT 22 /** 2M */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
 
-#if (BNXT_PAGE_SHIFT < 12)				/** < 4K >> 4K */
-#define TF_EM_PAGE_SHIFT 12
+#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (BNXT_PAGE_SHIFT <= 13)			/** 4K, 8K */
-#define TF_EM_PAGE_SHIFT 13
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (BNXT_PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
-#define TF_EM_PAGE_SHIFT 15
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
-#elif (BNXT_PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
-#define TF_EM_PAGE_SHIFT 16
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (BNXT_PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
-#define TF_EM_PAGE_SHIFT 18
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (BNXT_PAGE_SHIFT <= 21)			/** 1M */
-#define TF_EM_PAGE_SHIFT 20
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (BNXT_PAGE_SHIFT <= 22)			/** 2M, 4M */
-#define TF_EM_PAGE_SHIFT 21
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (BNXT_PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
-#define TF_EM_PAGE_SHIFT 22
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#else						/** >= 1G >> 1G */
-#define TF_EM_PAGE_SHIFT	30
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
 #define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
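
/*
 * With the default BNXT_TF_PAGE_SIZE of TF_EM_PAGE_SIZE_2M above,
 * TF_EM_PAGE_SHIFT is 21 and TF_EM_PAGE_SIZE evaluates to 1 << 21,
 * i.e. 2 MB pages. Picking another supported size is a one-line
 * change, for example (illustrative only):
 *
 *   #define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_64K
 */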
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8d5e94e1a..fe49b6304 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -3,14 +3,23 @@
  * All rights reserved.
  */
 
-/* This header file defines the Portability structures and APIs for
+/*
+ * This header file defines the Portability structures and APIs for
  * TruFlow.
  */
 
 #ifndef _TFP_H_
 #define _TFP_H_
 
+#include <rte_config.h>
 #include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_byteorder.h>
+
+/**
+ * DPDK/Driver specific log level for the BNXT Eth driver.
+ */
+extern int bnxt_logtype_driver;
 
 /** Spinlock
  */
@@ -18,13 +27,21 @@ struct tfp_spinlock_parms {
 	rte_spinlock_t slock;
 };
 
+#define TFP_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, bnxt_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define TFP_DRV_LOG(level, fmt, args...) \
+	TFP_DRV_LOG_RAW(level, fmt, ## args)
+
 /**
  * @file
  *
  * TrueFlow Portability API Header File
  */
 
-/** send message parameter definition
+/**
+ * send message parameter definition
  */
 struct tfp_send_msg_parms {
 	/**
@@ -62,7 +79,8 @@ struct tfp_send_msg_parms {
 	uint32_t *resp_data;
 };
 
-/** calloc parameter definition
+/**
+ * calloc parameter definition
  */
 struct tfp_calloc_parms {
 	/**
@@ -96,43 +114,15 @@ struct tfp_calloc_parms {
  * @ref tfp_send_msg_tunneled
  *
  * @ref tfp_calloc
- * @ref tfp_free
  * @ref tfp_memcpy
+ * @ref tfp_free
  *
  * @ref tfp_spinlock_init
  * @ref tfp_spinlock_lock
  * @ref tfp_spinlock_unlock
  *
- * @ref tfp_cpu_to_le_16
- * @ref tfp_le_to_cpu_16
- * @ref tfp_cpu_to_le_32
- * @ref tfp_le_to_cpu_32
- * @ref tfp_cpu_to_le_64
- * @ref tfp_le_to_cpu_64
- * @ref tfp_cpu_to_be_16
- * @ref tfp_be_to_cpu_16
- * @ref tfp_cpu_to_be_32
- * @ref tfp_be_to_cpu_32
- * @ref tfp_cpu_to_be_64
- * @ref tfp_be_to_cpu_64
  */
 
-#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
-#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
-#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
-#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
-#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
-#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
-#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
-#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
-#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
-#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
-#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
-#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
-#define tfp_bswap_16(val) rte_bswap16(val)
-#define tfp_bswap_32(val) rte_bswap32(val)
-#define tfp_bswap_64(val) rte_bswap64(val)
-
 /**
  * Provides communication capability from the TrueFlow API layer to
  * the TrueFlow firmware. The portability layer internally provides
@@ -162,9 +152,24 @@ int tfp_send_msg_direct(struct tf *tfp,
  *   -1             - Global error like not supported
  *   -EINVAL        - Parameter Error
  */
-int tfp_send_msg_tunneled(struct tf                 *tfp,
+int tfp_send_msg_tunneled(struct tf *tfp,
 			  struct tfp_send_msg_parms *parms);
 
+/**
+ * Sends OEM command message to Chimp
+ *
+ * [in] session, pointer to session handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
 /**
  * Allocates zero'ed memory from the heap.
  *
@@ -179,10 +184,58 @@ int tfp_send_msg_tunneled(struct tf                 *tfp,
  *   -EINVAL        - Parameter error
  */
 int tfp_calloc(struct tfp_calloc_parms *parms);
-
-void tfp_free(void *addr);
 void tfp_memcpy(void *dest, void *src, size_t n);
+void tfp_free(void *addr);
+
 void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
 void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
+
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
+
+/*
+ * @ref tfp_cpu_to_le_16
+ * @ref tfp_le_to_cpu_16
+ * @ref tfp_cpu_to_le_32
+ * @ref tfp_le_to_cpu_32
+ * @ref tfp_cpu_to_le_64
+ * @ref tfp_le_to_cpu_64
+ * @ref tfp_cpu_to_be_16
+ * @ref tfp_be_to_cpu_16
+ * @ref tfp_cpu_to_be_32
+ * @ref tfp_be_to_cpu_32
+ * @ref tfp_cpu_to_be_64
+ * @ref tfp_be_to_cpu_64
+ */
+
+#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
+#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
+#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
+#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
+#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
+#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
+#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
+#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
+#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
+#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
+#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
+#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
+#define tfp_bswap_16(val) rte_bswap16(val)
+#define tfp_bswap_32(val) rte_bswap32(val)
+#define tfp_bswap_64(val) rte_bswap64(val)
+
 #endif /* _TFP_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 10/51] net/bnxt: use HWRM direct for EM insert and delete
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (8 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 09/51] net/bnxt: add support for exact match Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-06 18:47           ` Ferruh Yigit
  2020-07-06 19:11           ` Ferruh Yigit
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 11/51] net/bnxt: add multi device support Ajit Khaparde
                           ` (42 subsequent siblings)
  52 siblings, 2 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

Modify Exact Match insert and delete to use the HWRM messages directly.
Remove tunneled EM insert and delete message types.
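
A condensed sketch of the resulting send path in tf_msg.c (values and
error handling elided): the tunneled MSG_PREP()/tfp_send_msg_tunneled()
pair is replaced by filling struct tfp_send_msg_parms and calling
tfp_send_msg_direct():

	struct tfp_send_msg_parms parms = { 0 };
	struct hwrm_tf_em_insert_input req = { 0 };
	struct hwrm_tf_em_insert_output resp = { 0 };

	/* ... populate req from the insert parameters as before ... */

	parms.tf_type = HWRM_TF_EM_INSERT;
	parms.req_data = (uint32_t *)&req;
	parms.req_size = sizeof(req);
	parms.resp_data = (uint32_t *)&resp;
	parms.resp_size = sizeof(resp);
	parms.mailbox = TF_KONG_MB;

	rc = tfp_send_msg_direct(tfp, &parms);

With the direct call there is no tunneled tf_resp_code left to
translate, so the success path simply returns 0.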

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/hwrm_tf.h | 70 ++----------------------------
 drivers/net/bnxt/tf_core/tf_msg.c  | 66 ++++++++++++++++------------
 2 files changed, 43 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 439950e02..d342c695c 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -23,8 +23,6 @@ typedef enum tf_subtype {
 	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
 	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
 	HWRM_TFT_TBL_SCOPE_CFG = 731,
-	HWRM_TFT_EM_RULE_INSERT = 739,
-	HWRM_TFT_EM_RULE_DELETE = 740,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
@@ -83,10 +81,6 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_em_internal_insert_input;
-struct tf_em_internal_insert_output;
-struct tf_em_internal_delete_input;
-struct tf_em_internal_delete_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -351,7 +345,7 @@ typedef struct tf_session_hw_resc_alloc_output {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -453,7 +447,7 @@ typedef struct tf_session_hw_resc_free_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -555,7 +549,7 @@ typedef struct tf_session_hw_resc_flush_input {
 	uint16_t			 range_prof_start;
 	/* Number range profiles allocated */
 	uint16_t			 range_prof_stride;
-	/* Starting index of range entries allocated to the session */
+	/* Starting index of range enntries allocated to the session */
 	uint16_t			 range_entries_start;
 	/* Number of range entries allocated */
 	uint16_t			 range_entries_stride;
@@ -922,60 +916,4 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
-/* Input params for EM internal rule insert */
-typedef struct tf_em_internal_insert_input {
-	/* Firmware Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* strength */
-	uint16_t			 strength;
-	/* index to action */
-	uint32_t			 action_ptr;
-	/* index of em record */
-	uint32_t			 em_record_idx;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_insert_input_t, *ptf_em_internal_insert_input_t;
-
-/* Output params for EM internal rule insert */
-typedef struct tf_em_internal_insert_output {
-	/* EM record pointer index */
-	uint16_t			 rptr_index;
-	/* EM record offset 0~3 */
-	uint8_t			  rptr_entry;
-	/* Number of word entries consumed by the key */
-	uint8_t			  num_of_entries;
-} tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_input {
-	/* Session Id */
-	uint32_t			 tf_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* EM internal flow hanndle */
-	uint64_t			 flow_handle;
-	/* EM Key value */
-	uint64_t			 em_key[8];
-	/* number of bits in em_key */
-	uint16_t			 em_key_bitlen;
-} tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
-
-/* Input params for EM INTERNAL rule delete */
-typedef struct tf_em_internal_delete_output {
-	/* Original stack allocation index */
-	uint16_t			 em_index;
-} tf_em_internal_delete_output_t, *ptf_em_internal_delete_output_t;
-
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 554a8491d..c8f6b88d3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1023,32 +1023,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				uint8_t *rptr_entry,
 				uint8_t *num_of_entries)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_insert_input req = { 0 };
-	struct tf_em_internal_insert_output resp = { 0 };
+	int                         rc;
+	struct tfp_send_msg_parms        parms = { 0 };
+	struct hwrm_tf_em_insert_input   req = { 0 };
+	struct hwrm_tf_em_insert_output  resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
 		TF_LKUP_RECORD_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_INSERT,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1056,7 +1062,7 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 	*rptr_index = resp.rptr_index;
 	*num_of_entries = resp.num_of_entries;
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
@@ -1065,32 +1071,38 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms)
 {
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_em_internal_delete_input req = { 0 };
-	struct tf_em_internal_delete_output resp = { 0 };
+	int                             rc;
+	struct tfp_send_msg_parms       parms = { 0 };
+	struct hwrm_tf_em_delete_input  req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
-	req.tf_session_id =
+	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(em_parms->dir);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
 	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_EM_RULE_DELETE,
-		 req,
-		resp);
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
 	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
 
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
+	return 0;
 }
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 11/51] net/bnxt: add multi device support
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (9 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 10/51] net/bnxt: use HWRM direct for EM insert and delete Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
                           ` (41 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Randy Schacher, Venkat Duvvuru

From: Michael Wildt <michael.wildt@broadcom.com>

Introduce new modules for Device, Resource Manager, Identifier,
Table Types, and TCAM for multi device support.
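
A device abstraction of this kind is typically built around a
per-device table of function pointers that the core layer dispatches
through; the sketch below only illustrates that general shape (struct
and handler names are placeholders, not taken from the patch):

	/* Hypothetical per-device dispatch table */
	struct tf_dev_ops_sketch {
		int (*alloc_ident)(struct tf *tfp, uint16_t type, uint16_t *id);
		int (*alloc_tcam_entry)(struct tf *tfp, uint16_t type, uint16_t *idx);
	};

	/* A P4/Whitney+ binding supplies its own handlers */
	static const struct tf_dev_ops_sketch tf_dev_ops_p4_sketch = {
		.alloc_ident      = p4_alloc_ident,      /* placeholder */
		.alloc_tcam_entry = p4_alloc_tcam_entry, /* placeholder */
	};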

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   8 +
 drivers/net/bnxt/tf_core/Makefile             |   9 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 266 +++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c            |   2 +
 drivers/net/bnxt/tf_core/tf_core.h            |  56 +--
 drivers/net/bnxt/tf_core/tf_device.c          |  50 +++
 drivers/net/bnxt/tf_core/tf_device.h          | 331 ++++++++++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  24 ++
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  64 +++
 drivers/net/bnxt/tf_core/tf_identifier.c      |  47 +++
 drivers/net/bnxt/tf_core/tf_identifier.h      | 140 +++++++
 drivers/net/bnxt/tf_core/tf_rm.c              |  54 +--
 drivers/net/bnxt/tf_core/tf_rm.h              |  18 -
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 102 +++++
 drivers/net/bnxt/tf_core/tf_rm_new.h          | 368 ++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_session.c         |  31 ++
 drivers/net/bnxt/tf_core/tf_session.h         |  54 +++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |  63 +++
 drivers/net/bnxt/tf_core/tf_shadow_tbl.h      | 240 ++++++++++++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |  63 +++
 drivers/net/bnxt/tf_core/tf_shadow_tcam.h     | 239 ++++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.c             |   1 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  78 ++++
 drivers/net/bnxt/tf_core/tf_tbl_type.h        | 309 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_tcam.c            |  78 ++++
 drivers/net/bnxt/tf_core/tf_tcam.h            | 314 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_util.c            | 145 +++++++
 drivers/net/bnxt/tf_core/tf_util.h            |  41 ++
 28 files changed, 3101 insertions(+), 94 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_util.h
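
The core of this patch is the device indirection added in tf_core/tf_device.[ch]: dev_bind()
selects a tf_dev_ops table for the requested device type (only TF_DEVICE_TYPE_WH is wired up
here, to tf_dev_ops_p4), and callers dispatch through that table, treating a NULL hook as an
unsupported capability. A minimal usage sketch, assuming nothing beyond what the patch defines
except the parameter choices and the error handling:

	/* Sketch only; assumes tf_device.h and tf_identifier.h are included. */
	static int
	bind_and_alloc_ident(struct tf *tfp, struct tf_session_resources *res)
	{
		struct tf_dev_info dev;
		struct tf_ident_alloc_parms iparms = { 0 };
		int rc;

		rc = dev_bind(tfp, TF_DEVICE_TYPE_WH, res, &dev);
		if (rc)
			return rc;

		/* A NULL hook means the device does not support the capability. */
		if (dev.ops->tf_dev_alloc_ident == NULL)
			return -ENOTSUP;

		iparms.dir = TF_DIR_RX;
		iparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
		rc = dev.ops->tf_dev_alloc_ident(tfp, &iparms);
		if (rc)
			return rc;

		/* iparms.id now holds the allocated identifier. */
		return dev_unbind(tfp, &dev);
	}

In the series itself the bound tf_dev_info is kept on the session (see the tf_session.h hunk
below) and retrieved via tf_session_get_device() rather than held on the stack as in this sketch.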

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 5c7859cb5..a50cb261d 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,14 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_device_p4.c',
+	'tf_core/tf_identifier.c',
+	'tf_core/tf_shadow_tbl.c',
+	'tf_core/tf_shadow_tcam.c',
+	'tf_core/tf_tbl_type.c',
+	'tf_core/tf_tcam.c',
+	'tf_core/tf_util.c',
+	'tf_core/tf_rm_new.c',
 
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index aa2d964e9..7a3c325a6 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,3 +14,12 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
new file mode 100644
index 000000000..c0c1e754e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -0,0 +1,266 @@
+/*
+ * Copyright(c) 2001-2020, Broadcom. All rights reserved. The
+ * term Broadcom refers to Broadcom Inc. and/or its subsidiaries.
+ * Proprietary and Confidential Information.
+ *
+ * This source file is the property of Broadcom Corporation, and
+ * may not be copied or distributed in any isomorphic form without
+ * the prior written consent of Broadcom Corporation.
+ *
+ * DO NOT MODIFY!!! This file is automatically generated.
+ */
+
+#ifndef _CFA_RESOURCE_TYPES_H_
+#define _CFA_RESOURCE_TYPES_H_
+
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+/* Wildcard TCAM Profile Id */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+/* Meter Profile */
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+/* Source Properties TCAM */
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+/* L2 Func */
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+/* EPOCH */
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+/* Metadata */
+#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+/* Connection Tracking Rule TCAM */
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+/* Range Profile */
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+/* Range */
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+/* Link Aggregation */
+#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+/* Exact Match Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+/* Wildcard Flexible Key Builder */
+#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+/* VEB TCAM */
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
+#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+
+
+/* SRAM Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
+/* SRAM Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
+/* SRAM Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
+/* SRAM Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
+/* SRAM Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
+/* SRAM Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
+/* SRAM Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
+/* SRAM 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
+/* SRAM Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
+/* SRAM Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
+/* SRAM Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
+/* SRAM Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
+/* SRAM Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
+/* SRAM Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+/* Flow State */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+/* Full Action Records */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+/* Action Record Format 0 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+/* Action Record Format 2 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+/* Action Record Format 3 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+/* Action Record Format 4 */
+#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+/* L2 Context TCAM */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+/* Profile Func */
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+/* Profile TCAM */
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+/* Exact Match Profile Id */
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+/* Wildcard Profile Id */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+/* Wildcard TCAM */
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+/* Meter profile */
+#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
+/* Meter */
+#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+/* Mirror */
+#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+/* Source Property TCAM */
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+
+
+#endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 1f6c33ab5..6e15a4c5c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -6,6 +6,7 @@
 #include <stdio.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
 #include "tf_em.h"
@@ -229,6 +230,7 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize Session */
 	session->device_type = parms->device_type;
+	session->dev = NULL;
 	tf_rm_init(tfp);
 
 	/* Construct the Session ID */
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 81ff7602f..becc50c7f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -371,6 +371,35 @@ struct tf {
 	struct tf_session_info *session;
 };
 
+/**
+ * tf_session_resources parameter definition.
+ */
+struct tf_session_resources {
+	/** [in] Requested Identifier Resources
+	 *
+	 * The number of identifier resources requested for the session.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	/** [in] Requested Index Table resource counts
+	 *
+	 * The number of index table resources requested for the session.
+	 * The index used is tf_tbl_type.
+	 */
+	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested TCAM Table resource counts
+	 *
+	 * The number of TCAM table resources requested for the session.
+	 * The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
+	/** [in] Requested EM resource counts
+	 *
+	 * The number of internal EM table resources requested for the session
+	 * The index used is tf_em_tbl_type.
+	 */
+	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+};
 
 /**
  * tf_open_session parameters definition.
@@ -414,33 +443,14 @@ struct tf_open_session_parms {
 	union tf_session_id session_id;
 	/** [in] device type
 	 *
-	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
+	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
 	enum tf_device_type device_type;
-	/** [in] Requested Identifier Resources
-	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
-	 */
-	uint16_t identifer_cnt[TF_IDENT_TYPE_MAX];
-	/** [in] Requested Index Table resource counts
-	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
-	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX];
-	/** [in] Requested TCAM Table resource counts
-	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
-	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX];
-	/** [in] Requested EM resource counts
+	/** [in] resources
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * Resource allocation
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX];
+	struct tf_session_resources resources;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
new file mode 100644
index 000000000..3b368313e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_device_p4.h"
+#include "tfp.h"
+#include "bnxt.h"
+
+struct tf;
+
+/**
+ * Device specific bind function
+ */
+static int
+dev_bind_p4(struct tf *tfp __rte_unused,
+	    struct tf_session_resources *resources __rte_unused,
+	    struct tf_dev_info *dev_info)
+{
+	/* Initialize the modules */
+
+	dev_info->ops = &tf_dev_ops_p4;
+	return 0;
+}
+
+int
+dev_bind(struct tf *tfp __rte_unused,
+	 enum tf_device_type type,
+	 struct tf_session_resources *resources,
+	 struct tf_dev_info *dev_info)
+{
+	switch (type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_bind_p4(tfp,
+				   resources,
+				   dev_info);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "Device type not supported\n");
+		return -ENOTSUP;
+	}
+}
+
+int
+dev_unbind(struct tf *tfp __rte_unused,
+	   struct tf_dev_info *dev_handle __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
new file mode 100644
index 000000000..8b63ff178
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -0,0 +1,331 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_H_
+#define _TF_DEVICE_H_
+
+#include "tf_core.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+struct tf;
+struct tf_session;
+
+/**
+ * The Device module provides a general device template. A supported
+ * device type should implement one or more of the listed function
+ * pointers according to its capabilities.
+ *
+ * If a device function pointer is NULL the device capability is not
+ * supported.
+ */
+
+/**
+ * TF device information
+ */
+struct tf_dev_info {
+	const struct tf_dev_ops *ops;
+};
+
+/**
+ * @page device Device
+ *
+ * @ref tf_dev_bind
+ *
+ * @ref tf_dev_unbind
+ */
+
+/**
+ * Device bind handles the initialization of the specified device
+ * type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] type
+ *   Device type
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int dev_bind(struct tf *tfp,
+	     enum tf_device_type type,
+	     struct tf_session_resources *resources,
+	     struct tf_dev_info *dev_handle);
+
+/**
+ * Device release handles cleanup of the device specific information.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dev_handle
+ *   Device handle
+ */
+int dev_unbind(struct tf *tfp,
+	       struct tf_dev_info *dev_handle);
+
+/**
+ * Truflow device specific function hooks structure
+ *
+ * The following device hooks can be defined; unless noted otherwise,
+ * they are optional and can be filled with a null pointer. The
+ * purpose of these hooks is to support Truflow device operations for
+ * different device variants.
+ */
+struct tf_dev_ops {
+	/**
+	 * Allocation of an identifier element.
+	 *
+	 * This API allocates the specified identifier element from a
+	 * device specific identifier DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ident)(struct tf *tfp,
+				  struct tf_ident_alloc_parms *parms);
+
+	/**
+	 * Free of an identifier element.
+	 *
+	 * This API frees a previously allocated identifier element from a
+	 * device specific identifier DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to identifier free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ident)(struct tf *tfp,
+				 struct tf_ident_free_parms *parms);
+
+	/**
+	 * Allocation of a table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
+				     struct tf_tbl_type_alloc_parms *parms);
+
+	/**
+	 * Free of a table type element.
+	 *
+	 * This API frees a previously allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tbl_type)(struct tf *tfp,
+				    struct tf_tbl_type_free_parms *parms);
+
+	/**
+	 * Searches for the specified table type element in a shadow DB.
+	 *
+	 * This API searches for the specified table type element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the table type
+	 * DB and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tbl_type)
+			(struct tf *tfp,
+			struct tf_tbl_type_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_set_parms *parms);
+
+	/**
+	 * Retrieves the specified table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table type get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tbl_type)(struct tf *tfp,
+				   struct tf_tbl_type_get_parms *parms);
+
+	/**
+	 * Allocation of a tcam element.
+	 *
+	 * This API allocates the specified tcam element from a device
+	 * specific tcam DB. The allocated element is returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_tcam)(struct tf *tfp,
+				 struct tf_tcam_alloc_parms *parms);
+
+	/**
+	 * Free of a tcam element.
+	 *
+	 * This API frees a previously allocated tcam element from a
+	 * device specific tcam DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_tcam)(struct tf *tfp,
+				struct tf_tcam_free_parms *parms);
+
+	/**
+	 * Searches for the specified tcam element in a shadow DB.
+	 *
+	 * This API searches for the specified tcam element in a
+	 * device specific shadow DB. If the element is found the
+	 * reference count for the element is updated. If the element
+	 * is not found a new element is allocated from the tcam DB
+	 * and then inserted into the shadow DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam allocation and search parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_search_tcam)
+			(struct tf *tfp,
+			struct tf_tcam_alloc_search_parms *parms);
+
+	/**
+	 * Sets the specified tcam element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_tcam)(struct tf *tfp,
+			       struct tf_tcam_set_parms *parms);
+
+	/**
+	 * Retrieves the specified tcam element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to tcam get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_tcam)(struct tf *tfp,
+			       struct tf_tcam_get_parms *parms);
+};
+
+/**
+ * Supported device operation structures
+ */
+extern const struct tf_dev_ops tf_dev_ops_p4;
+
+#endif /* _TF_DEVICE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
new file mode 100644
index 000000000..c3c4d1e05
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "tf_device.h"
+#include "tf_identifier.h"
+#include "tf_tbl_type.h"
+#include "tf_tcam.h"
+
+const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_alloc_ident = tf_ident_alloc,
+	.tf_dev_free_ident = tf_ident_free,
+	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
+	.tf_dev_free_tbl_type = tf_tbl_type_free,
+	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
+	.tf_dev_set_tbl_type = tf_tbl_type_set,
+	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tcam = tf_tcam_alloc,
+	.tf_dev_free_tcam = tf_tcam_free,
+	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_set_tcam = tf_tcam_set,
+	.tf_dev_get_tcam = tf_tcam_get,
+};
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
new file mode 100644
index 000000000..84d90e3a7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_DEVICE_P4_H_
+#define _TF_DEVICE_P4_H_
+
+#include <cfa_resource_types.h>
+
+#include "tf_core.h"
+#include "tf_rm_new.h"
+
+struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+};
+
+struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
+	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+};
+
+struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
+	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
+	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+};
+
+#endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
new file mode 100644
index 000000000..726d0b406
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_identifier.h"
+
+struct tf;
+
+/**
+ * Identifier DBs.
+ */
+/* static void *ident_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+int
+tf_ident_bind(struct tf *tfp __rte_unused,
+	      struct tf_ident_cfg *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_alloc(struct tf *tfp __rte_unused,
+	       struct tf_ident_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_ident_free(struct tf *tfp __rte_unused,
+	      struct tf_ident_free_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
new file mode 100644
index 000000000..b77c91b9d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_IDENTIFIER_H_
+#define _TF_IDENTIFIER_H_
+
+#include "tf_core.h"
+
+/**
+ * The Identifier module provides processing of Identifiers.
+ */
+
+struct tf_ident_cfg {
+	/**
+	 * Number of identifier types in each of the configuration
+	 * arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Identifier allocation parameter definition
+ */
+struct tf_ident_alloc_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [out] Identifier allocated
+	 */
+	uint16_t id;
+};
+
+/**
+ * Identifier free parameter definition
+ */
+struct tf_ident_free_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [in] ID to free
+	 */
+	uint16_t id;
+};
+
+/**
+ * @page ident Identity Management
+ *
+ * @ref tf_ident_bind
+ *
+ * @ref tf_ident_unbind
+ *
+ * @ref tf_ident_alloc
+ *
+ * @ref tf_ident_free
+ */
+
+/**
+ * Initializes the Identifier module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_bind(struct tf *tfp,
+		  struct tf_ident_cfg *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_unbind(struct tf *tfp);
+
+/**
+ * Allocates a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_alloc(struct tf *tfp,
+		   struct tf_ident_alloc_parms *parms);
+
+/**
+ * Frees a single identifier type.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_ident_free(struct tf *tfp,
+		  struct tf_ident_free_parms *parms);
+
+#endif /* _TF_IDENTIFIER_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 38b1e71cd..2264704d2 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -9,6 +9,7 @@
 
 #include "tf_rm.h"
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_session.h"
 #include "tf_resources.h"
 #include "tf_msg.h"
@@ -76,59 +77,6 @@
 			(dtype) = type ## _TX;	\
 	} while (0)
 
-const char
-*tf_dir_2_str(enum tf_dir dir)
-{
-	switch (dir) {
-	case TF_DIR_RX:
-		return "RX";
-	case TF_DIR_TX:
-		return "TX";
-	default:
-		return "Invalid direction";
-	}
-}
-
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
-{
-	switch (id_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		return "l2_ctxt_remap";
-	case TF_IDENT_TYPE_PROF_FUNC:
-		return "prof_func";
-	case TF_IDENT_TYPE_WC_PROF:
-		return "wc_prof";
-	case TF_IDENT_TYPE_EM_PROF:
-		return "em_prof";
-	case TF_IDENT_TYPE_L2_FUNC:
-		return "l2_func";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
-{
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		return "l2_ctxt_tcam";
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		return "prof_tcam";
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		return "wc_tcam";
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		return "veb_tcam";
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		return "sp_tcam";
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		return "ct_rule_tcam";
-	default:
-		return "Invalid tcam table type";
-	}
-}
-
 const char
 *tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index e69d443a8..1a09f13a7 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -124,24 +124,6 @@ struct tf_rm_db {
 	struct tf_rm_resc tx;
 };
 
-/**
- * Helper function converting direction to text string
- */
-const char
-*tf_dir_2_str(enum tf_dir dir);
-
-/**
- * Helper function converting identifier to text string
- */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
-
-/**
- * Helper function converting tcam type to text string
- */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
-
 /**
  * Helper function used to convert HW HCAPI resource type to a string.
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
new file mode 100644
index 000000000..51bb9ba3a
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_rm_new.h"
+
+/**
+ * Resource query single entry. Used when accessing HCAPI RM on the
+ * firmware.
+ */
+struct tf_rm_query_entry {
+	/** Minimum guaranteed number of elements */
+	uint16_t min;
+	/** Maximum non-guaranteed number of elements */
+	uint16_t max;
+};
+
+/**
+ * Generic RM Element data type that an RM DB is built upon.
+ */
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type type;
+
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
+
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
+
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
+
+/**
+ * TF RM DB definition
+ */
+struct tf_rm_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
+
+int
+tf_rm_create_db(struct tf *tfp __rte_unused,
+		struct tf_rm_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free_db(struct tf *tfp __rte_unused,
+	      struct tf_rm_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
new file mode 100644
index 000000000..72dba0984
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -0,0 +1,368 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_RM_H_
+#define TF_RM_H_
+
+#include "tf_core.h"
+#include "bitalloc.h"
+
+struct tf;
+
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exists within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
+ *
+ * The RM DB will work on its initially allocated sizes, so the
+ * capability of dynamically growing a particular resource is not
+ * possible. If this capability later becomes a requirement then the
+ * MAX pool size of the Chip needs to be added to the tf_rm_elem_info
+ * structure and several new APIs would need to be added to allow for
+ * growth of a single TF resource type.
+ */
+
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
+
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements the DB consists of are to be
+ * configured at time of DB creation. The TF may present types to the
+ * ULP layer that are not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
+	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	TF_RM_TYPE_MAX
+};
+
+/**
+ * RM Element configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI RM
+ * of same type.
+ */
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Allocation information for a single element.
+ */
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_entry entry;
+};
+
+/**
+ * Create RM DB parameters
+ */
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements in the parameter structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure
+	 */
+	struct tf_rm_element_cfg *parms;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Free RM DB parameters
+ */
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+};
+
+/**
+ * Allocate RM parameters for a single element
+ */
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Pointer to the allocated index in normalized
+	 * form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+};
+
+/**
+ * Free RM parameters for a single element
+ */
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t index;
+};
+
+/**
+ * Is Allocated parameters for a single element
+ */
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [in] Pointer to flag that indicates the state of the query
+	 */
+	uint8_t *allocated;
+};
+
+/**
+ * Get Allocation information for a single element
+ */
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *tf_rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * @page rm Resource Manager
+ *
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ */
+
+/**
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if DB already exist
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
+
+/**
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
+
+/**
+ * Allocates a single element for the type specified, within the DB.
+ *
+ * [in] parms
+ *   Pointer to allocate parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
+
+/**
+ * Frees a single element for the type specified, within the DB.
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_free(struct tf_rm_free_parms *parms);
+
+/**
+ * Performs an allocation verification check on a specified element.
+ *
+ * [in] parms
+ *   Pointer to is allocated parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
+
+/**
+ * Retrieves an element's allocation information from the Resource
+ * Manager (RM) DB.
+ *
+ * [in] parms
+ *   Pointer to get info parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
+
+/**
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
+
+#endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
new file mode 100644
index 000000000..c74994546
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session *tfs)
+{
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		TFP_DRV_LOG(ERR, "Session not created\n");
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	return 0;
+}
+
+int
+tf_session_get_device(struct tf_session *tfs,
+		      struct tf_device *tfd)
+{
+	if (tfs->dev == NULL) {
+		TFP_DRV_LOG(ERR, "Device not created\n");
+		return -EINVAL;
+	}
+	tfd = tfs->dev;
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index c9f4f8f04..b1cc7a4a7 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -11,10 +11,21 @@
 
 #include "bitalloc.h"
 #include "tf_core.h"
+#include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
 #include "stack.h"
 
+/**
+ * The Session module provides session control support. A session is
+ * known to the ULP layer as a session_info instance. The session
+ * private data is the actual session.
+ *
+ * Session manages:
+ *   - The device and all the resources related to the device.
+ *   - Any session sharing between ULP applications
+ */
+
 /** Session defines
  */
 #define TF_SESSIONS_MAX	          1          /** max # sessions */
@@ -90,6 +101,9 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
+	/** Device */
+	struct tf_dev_info *dev;
+
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
 
@@ -309,4 +323,44 @@ struct tf_session {
 	struct stack em_pool[TF_DIR_MAX];
 };
 
+/**
+ * @page session Session Management
+ *
+ * @ref tf_session_get_session
+ *
+ * @ref tf_session_get_device
+ */
+
+/**
+ * Looks up the private session information from the TF session info.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer to the session
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session(struct tf *tfp,
+			   struct tf_session *tfs);
+
+/**
+ * Looks up the device information from the TF Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfd
+ *   Pointer to the device
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_device(struct tf_session *tfs,
+			  struct tf_dev_info *tfd);
+
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
new file mode 100644
index 000000000..8f2b6de70
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tbl.h"
+
+/**
+ * Shadow table DB element
+ */
+struct tf_shadow_tbl_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count, array of number of table type entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow table DB definition
+ */
+struct tf_shadow_tbl_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tbl_element *db;
+};
+
+int
+tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
new file mode 100644
index 000000000..dfd336e53
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TBL_H_
+#define _TF_SHADOW_TBL_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow Table module provides shadow DB handling for table based
+ * TF types. A shadow DB provides the capability that allows for reuse
+ * of TF resources.
+ *
+ * A Shadow table DB is intended to be used by the Table Type module
+ * only.
+ */
+
+/**
+ * Shadow DB configuration information for a single table type.
+ *
+ * During Device initialization the HCAPI device specifics are learned
+ * and the RM DB is created. From those initial steps
+ * this structure can be populated.
+ *
+ * NOTE:
+ * If used in an array of table types then such array must be ordered
+ * by the TF type it represents.
+ */
+struct tf_shadow_tbl_cfg_parms {
+	/**
+	 * TF Table type
+	 */
+	enum tf_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow table DB creation parameters
+ */
+struct tf_shadow_tbl_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tbl_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table DB free parameters
+ */
+struct tf_shadow_tbl_free_db_parms {
+	/**
+	 * Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+};
+
+/**
+ * Shadow table search parameters
+ */
+struct tf_shadow_tbl_search_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Table type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table insert parameters
+ */
+struct tf_shadow_tbl_insert_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow table remove parameters
+ */
+struct tf_shadow_tbl_remove_parms {
+	/**
+	 * [in] Shadow table DB handle
+	 */
+	void *tf_shadow_tbl_db;
+	/**
+	 * [in] Tbl type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tbl Shadow table DB
+ *
+ * @ref tf_shadow_tbl_create_db
+ *
+ * @ref tf_shadow_tbl_free_db
+ *
+ * @ref tf_shadow_tbl_search
+ *
+ * @ref tf_shadow_tbl_insert
+ *
+ * @ref tf_shadow_tbl_remove
+ */
+
+/**
+ * Creates and fills a Shadow table DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms);
+
+/**
+ * Closes the Shadow table DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms);
+
+/**
+ * Search Shadow table db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow table DB. Will fail if the
+ * element's ref_count is different from 0. Ref_count after insert will
+ * be incremented.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow table DB. Will fail if the
+ * element's ref_count is 0. Ref_count after removal will be
+ * decremented.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TBL_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.c b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
new file mode 100644
index 000000000..c61b833d7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_shadow_tcam.h"
+
+/**
+ * Shadow tcam DB element
+ */
+struct tf_shadow_tcam_element {
+	/**
+	 * Hash table
+	 */
+	void *hash;
+
+	/**
+	 * Reference count, array of number of tcam entries
+	 */
+	uint16_t *ref_count;
+};
+
+/**
+ * Shadow tcam DB definition
+ */
+struct tf_shadow_tcam_db {
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_shadow_tcam_element *db;
+};
+
+int
+tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.h b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
new file mode 100644
index 000000000..e2c4e06c0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SHADOW_TCAM_H_
+#define _TF_SHADOW_TCAM_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Shadow tcam module provides shadow DB handling for tcam based
+ * TF types. A shadow DB provides the capability that allows for reuse
+ * of TF resources.
+ *
+ * A Shadow tcam DB is intended to be used by the Tcam module only.
+ */
+
+/**
+ * Shadow DB configuration information for a single tcam type.
+ *
+ * During Device initialization the HCAPI device specifics are learned
+ * and the RM DB is created. From those initial steps this structure
+ * can be populated.
+ *
+ * NOTE:
+ * If used in an array of tcam types then such array must be ordered
+ * by the TF type it represents.
+ */
+struct tf_shadow_tcam_cfg_parms {
+	/**
+	 * TF tcam type
+	 */
+	enum tf_tcam_tbl_type type;
+
+	/**
+	 * Number of entries the Shadow DB needs to hold
+	 */
+	int num_entries;
+
+	/**
+	 * Element width for this table type
+	 */
+	int element_width;
+};
+
+/**
+ * Shadow tcam DB creation parameters
+ */
+struct tf_shadow_tcam_create_db_parms {
+	/**
+	 * [in] Configuration information for the shadow db
+	 */
+	struct tf_shadow_tcam_cfg_parms *cfg;
+	/**
+	 * [in] Number of elements in the parms structure
+	 */
+	uint16_t num_elements;
+	/**
+	 * [out] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam DB free parameters
+ */
+struct tf_shadow_tcam_free_db_parms {
+	/**
+	 * Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+};
+
+/**
+ * Shadow tcam search parameters
+ */
+struct tf_shadow_tcam_search_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [out] Index of the found element returned if hit
+	 */
+	uint16_t *index;
+	/**
+	 * [out] Reference count incremented if hit
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam insert parameters
+ */
+struct tf_shadow_tcam_insert_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Pointer to entry blob value in remap table to match
+	 */
+	uint8_t *entry;
+	/**
+	 * [in] Size of the entry blob passed in bytes
+	 */
+	uint16_t entry_sz;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after insert
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * Shadow tcam remove parameters
+ */
+struct tf_shadow_tcam_remove_parms {
+	/**
+	 * [in] Shadow tcam DB handle
+	 */
+	void *tf_shadow_tcam_db;
+	/**
+	 * [in] TCAM tbl type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry to update
+	 */
+	uint16_t index;
+	/**
+	 * [out] Reference count after removal
+	 */
+	uint16_t *ref_cnt;
+};
+
+/**
+ * @page shadow_tcam Shadow tcam DB
+ *
+ * @ref tf_shadow_tcam_create_db
+ *
+ * @ref tf_shadow_tcam_free_db
+ *
+ * @ref tf_shadow_tcam_search
+ *
+ * @ref tf_shadow_tcam_insert
+ *
+ * @ref tf_shadow_tcam_remove
+ */
+
+/**
+ * Creates and fills a Shadow tcam DB. The DB is indexed per the
+ * parms structure.
+ *
+ * [in] parms
+ *   Pointer to create db parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_create_db(struct tf_shadow_tcam_create_db_parms *parms);
+
+/**
+ * Closes the Shadow tcam DB and frees all allocated
+ * resources per the associated database.
+ *
+ * [in] parms
+ *   Pointer to the free DB parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_free_db(struct tf_shadow_tcam_free_db_parms *parms);
+
+/**
+ * Search Shadow tcam db for matching result
+ *
+ * [in] parms
+ *   Pointer to the search parameters
+ *
+ * Returns
+ *   - (0) if successful, element was found.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_search(struct tf_shadow_tcam_search_parms *parms);
+
+/**
+ * Inserts an element into the Shadow tcam DB. Will fail if the
+ * element's ref_count is different from 0. The ref_count is
+ * incremented after the insert.
+ *
+ * [in] parms
+ *   Pointer to insert parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_insert(struct tf_shadow_tcam_insert_parms *parms);
+
+/**
+ * Removes an element from the Shadow tcam DB. Will fail if the
+ * element's ref_count is 0. The ref_count is decremented after the
+ * removal.
+ *
+ * [in] parms
+ *   Pointer to remove parameter
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_shadow_tcam_remove(struct tf_shadow_tcam_remove_parms *parms);
+
+#endif /* _TF_SHADOW_TCAM_H_ */
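
A small sketch of building the per-type configuration array that
tf_shadow_tcam_create_db() consumes (illustrative only, not part of the
patch; the entry counts and element widths are placeholder values). As the
header notes, the array must be ordered by the TF type each element
represents:

#include "tf_shadow_tcam.h"

static int
shadow_tcam_db_create_example(void **shadow_db)
{
	/* One cfg entry per tcam type, ordered by tf_tcam_tbl_type */
	struct tf_shadow_tcam_cfg_parms cfg[] = {
		{ .type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
		  .num_entries = 1024, .element_width = 16 },
		{ .type = TF_TCAM_TBL_TYPE_PROF_TCAM,
		  .num_entries = 1024, .element_width = 16 },
		{ .type = TF_TCAM_TBL_TYPE_WC_TCAM,
		  .num_entries = 512, .element_width = 32 },
	};
	struct tf_shadow_tcam_create_db_parms parms = { 0 };
	int rc;

	parms.cfg = cfg;
	parms.num_elements = 3;	/* must match the cfg[] array above */

	rc = tf_shadow_tcam_create_db(&parms);
	if (rc)
		return rc;

	*shadow_db = parms.tf_shadow_tcam_db;
	return 0;
}
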
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index dda72c3d5..17399a5b2 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,6 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
+#include "tf_util.h"
 #include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
new file mode 100644
index 000000000..a57a5ddf2
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tbl_type.h"
+
+struct tf;
+
+/**
+ * Table Type DBs.
+ */
+/* static void *tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Table Type Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_type_bind(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc(struct tf *tfp __rte_unused,
+		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_free(struct tf *tfp __rte_unused,
+		 struct tf_tbl_type_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
+			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_set(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_type_get(struct tf *tfp __rte_unused,
+		struct tf_tbl_type_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
new file mode 100644
index 000000000..c880b368b
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -0,0 +1,309 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
+
+#include "tf_core.h"
+
+struct tf;
+
+/**
+ * The Table Type module provides processing of Internal TF table types.
+ */
+
+/**
+ * Table Type configuration parameters
+ */
+struct tf_tbl_type_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * Table Type allocation parameters
+ */
+struct tf_tbl_type_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type free parameters
+ */
+struct tf_tbl_type_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+struct tf_tbl_type_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type set parameters
+ */
+struct tf_tbl_type_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table Type get parameters
+ */
+struct tf_tbl_type_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tbl_type Table Type
+ *
+ * @ref tf_tbl_type_bind
+ *
+ * @ref tf_tbl_type_unbind
+ *
+ * @ref tf_tbl_type_alloc
+ *
+ * @ref tf_tbl_type_free
+ *
+ * @ref tf_tbl_type_alloc_search
+ *
+ * @ref tf_tbl_type_set
+ *
+ * @ref tf_tbl_type_get
+ */
+
+/**
+ * Initializes the Table Type module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_bind(struct tf *tfp,
+		     struct tf_tbl_type_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc(struct tf *tfp,
+		      struct tf_tbl_type_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the
+ * shadow DB is enabled it is searched first and, if found, the element
+ * refcount is decremented. If the refcount goes to 0 the entry is
+ * returned to the table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_free(struct tf *tfp,
+		     struct tf_tbl_type_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_alloc_search(struct tf *tfp,
+			     struct tf_tbl_type_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_set(struct tf *tfp,
+		    struct tf_tbl_type_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_type_get(struct tf *tfp,
+		    struct tf_tbl_type_get_parms *parms);
+
+#endif /* TF_TBL_TYPE_H_ */
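
A caller-side sketch of the alloc-search flow described above, where
search_enable causes the shadow copy to be consulted before a new internal
entry is allocated (illustrative only, not part of the patch; the tfp
handle, direction and table type are placeholders and the session is
assumed to have been opened with shadow copy enabled):

#include "tf_tbl_type.h"

static int
tbl_type_alloc_or_reuse(struct tf *tfp, uint8_t *act_rec,
			uint16_t act_rec_sz, uint32_t *idx)
{
	struct tf_tbl_type_alloc_search_parms parms = { 0 };
	int rc;

	parms.dir = TF_DIR_RX;
	parms.type = TF_TBL_TYPE_FULL_ACT_RECORD;
	parms.search_enable = 1;
	parms.result = act_rec;
	parms.result_sz_in_bytes = act_rec_sz;

	rc = tf_tbl_type_alloc_search(tfp, &parms);
	if (rc)
		return rc;

	/*
	 * parms.hit != 0 means an existing record was reused; either
	 * way parms.idx holds the entry and parms.ref_cnt the current
	 * reference count.
	 */
	*idx = parms.idx;
	return 0;
}
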
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
new file mode 100644
index 000000000..3ad99dd0d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_tcam.h"
+
+struct tf;
+
+/**
+ * TCAM DBs.
+ */
+/* static void *tcam_db[TF_DIR_MAX]; */
+
+/**
+ * TCAM Shadow DBs
+ */
+/* static void *shadow_tcam_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t init; */
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+int
+tf_tcam_bind(struct tf *tfp __rte_unused,
+	     struct tf_tcam_cfg_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_unbind(struct tf *tfp __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc(struct tf *tfp __rte_unused,
+	      struct tf_tcam_alloc_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_free(struct tf *tfp __rte_unused,
+	     struct tf_tcam_free_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_alloc_search(struct tf *tfp __rte_unused,
+		     struct tf_tcam_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_set(struct tf *tfp __rte_unused,
+	    struct tf_tcam_set_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tcam_get(struct tf *tfp __rte_unused,
+	    struct tf_tcam_get_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
new file mode 100644
index 000000000..1420c9ed5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -0,0 +1,314 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_TCAM_H_
+#define _TF_TCAM_H_
+
+#include "tf_core.h"
+
+/**
+ * The TCAM module provides processing of Internal TCAM types.
+ */
+
+/**
+ * TCAM configuration parameters
+ */
+struct tf_tcam_cfg_parms {
+	/**
+	 * Number of tcam types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+
+	/**
+	 * TCAM configuration array
+	 */
+	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
+
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+};
+
+/**
+ * TCAM allocation parameters
+ */
+struct tf_tcam_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM free parameters
+ */
+struct tf_tcam_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/**
+ * TCAM allocate search parameters
+ */
+struct tf_tcam_alloc_search_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Enable search for matching entry
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Key data to match on (if search)
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] Mask data to match on (if search)
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
+	 * [out] If search, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current refcnt after allocation
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx allocated
+	 *
+	 */
+	uint16_t idx;
+};
+
+/**
+ * TCAM set parameters
+ */
+struct tf_tcam_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * TCAM get parameters
+ */
+struct tf_tcam_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tcam_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page tcam TCAM
+ *
+ * @ref tf_tcam_bind
+ *
+ * @ref tf_tcam_unbind
+ *
+ * @ref tf_tcam_alloc
+ *
+ * @ref tf_tcam_free
+ *
+ * @ref tf_tcam_alloc_search
+ *
+ * @ref tf_tcam_set
+ *
+ * @ref tf_tcam_get
+ *
+ */
+
+/**
+ * Initializes the TCAM module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_bind(struct tf *tfp,
+		 struct tf_tcam_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested tcam type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc(struct tf *tfp,
+		  struct tf_tcam_alloc_parms *parms);
+
+/**
+ * Frees the requested tcam type and returns it to the DB. If the
+ * shadow DB is enabled it is searched first and, if found, the element
+ * refcount is decremented. If the refcount goes to 0 the entry is
+ * returned to the tcam DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_free(struct tf *tfp,
+		 struct tf_tcam_free_parms *parms);
+
+/**
+ * Supported if Shadow DB is configured. Searches the Shadow DB for
+ * any matching element. If found the refcount in the shadow DB is
+ * updated accordingly. If not found a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_alloc_search(struct tf *tfp,
+			 struct tf_tcam_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_set(struct tf *tfp,
+		struct tf_tcam_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tcam_get(struct tf *tfp,
+		struct tf_tcam_get_parms *parms);
+
+#endif /* _TF_TCAM_H_ */
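
A caller-side sketch of a wildcard TCAM alloc-search followed by a set of
the entry payload (illustrative only, not part of the patch; the key, mask
and data buffers, the direction and the zero priority are placeholders):

#include "tf_tcam.h"

static int
tcam_install_example(struct tf *tfp, uint8_t *key, uint8_t *mask,
		     uint16_t key_sz_in_bits, uint8_t *data,
		     uint16_t data_sz)
{
	struct tf_tcam_alloc_search_parms aparms = { 0 };
	struct tf_tcam_set_parms sparms = { 0 };
	int rc;

	aparms.dir = TF_DIR_RX;
	aparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	aparms.search_enable = 1;
	aparms.key = key;
	aparms.key_sz_in_bits = key_sz_in_bits;
	aparms.mask = mask;
	aparms.priority = 0;

	rc = tf_tcam_alloc_search(tfp, &aparms);
	if (rc)
		return rc;
	if (aparms.hit)
		return 0;	/* entry already programmed, refcount bumped */

	/* New index allocated: program the entry via firmware */
	sparms.dir = TF_DIR_RX;
	sparms.type = TF_TCAM_TBL_TYPE_WC_TCAM;
	sparms.data = data;
	sparms.data_sz_in_bytes = data_sz;
	sparms.idx = aparms.idx;

	return tf_tcam_set(tfp, &sparms);
}
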
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
new file mode 100644
index 000000000..a9010543d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+
+#include "tf_util.h"
+
+const char
+*tf_dir_2_str(enum tf_dir dir)
+{
+	switch (dir) {
+	case TF_DIR_RX:
+		return "RX";
+	case TF_DIR_TX:
+		return "TX";
+	default:
+		return "Invalid direction";
+	}
+}
+
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type)
+{
+	switch (id_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		return "l2_ctxt_remap";
+	case TF_IDENT_TYPE_PROF_FUNC:
+		return "prof_func";
+	case TF_IDENT_TYPE_WC_PROF:
+		return "wc_prof";
+	case TF_IDENT_TYPE_EM_PROF:
+		return "em_prof";
+	case TF_IDENT_TYPE_L2_FUNC:
+		return "l2_func";
+	default:
+		return "Invalid identifier";
+	}
+}
+
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+{
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		return "l2_ctxt_tcam";
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		return "prof_tcam";
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		return "wc_tcam";
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		return "veb_tcam";
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		return "sp_tcam";
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		return "ct_rule_tcam";
+	default:
+		return "Invalid tcam table type";
+	}
+}
+
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+{
+	switch (tbl_type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		return "Full Action record";
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		return "Multicast Groups";
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		return "Encap 8B";
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		return "Encap 16B";
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+		return "Encap 32B";
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		return "Encap 64B";
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		return "Source Properties SMAC";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		return "Source Properties SMAC IPv4";
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		return "Source Properties SMAC IPv6";
+	case TF_TBL_TYPE_ACT_STATS_64:
+		return "Stats 64B";
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		return "NAT Source Port";
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+		return "NAT Destination Port";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		return "NAT IPv4 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		return "NAT IPv4 Destination";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+		return "NAT IPv6 Source";
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+		return "NAT IPv6 Destination";
+	case TF_TBL_TYPE_METER_PROF:
+		return "Meter Profile";
+	case TF_TBL_TYPE_METER_INST:
+		return "Meter";
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		return "Mirror";
+	case TF_TBL_TYPE_UPAR:
+		return "UPAR";
+	case TF_TBL_TYPE_EPOCH0:
+		return "EPOCH0";
+	case TF_TBL_TYPE_EPOCH1:
+		return "EPOCH1";
+	case TF_TBL_TYPE_METADATA:
+		return "Metadata";
+	case TF_TBL_TYPE_CT_STATE:
+		return "Connection State";
+	case TF_TBL_TYPE_RANGE_PROF:
+		return "Range Profile";
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		return "Range";
+	case TF_TBL_TYPE_LAG:
+		return "Link Aggregation";
+	case TF_TBL_TYPE_VNIC_SVIF:
+		return "VNIC SVIF";
+	case TF_TBL_TYPE_EM_FKB:
+		return "EM Flexible Key Builder";
+	case TF_TBL_TYPE_WC_FKB:
+		return "WC Flexible Key Builder";
+	case TF_TBL_TYPE_EXT:
+		return "External";
+	default:
+		return "Invalid tbl type";
+	}
+}
+
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+{
+	switch (em_type) {
+	case TF_EM_TBL_TYPE_EM_RECORD:
+		return "EM Record";
+	case TF_EM_TBL_TYPE_TBL_SCOPE:
+		return "Table Scope";
+	default:
+		return "Invalid EM type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
new file mode 100644
index 000000000..4099629ea
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_UTIL_H_
+#define _TF_UTIL_H_
+
+#include "tf_core.h"
+
+/**
+ * Helper function converting direction to text string
+ */
+const char
+*tf_dir_2_str(enum tf_dir dir);
+
+/**
+ * Helper function converting identifier to text string
+ */
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type);
+
+/**
+ * Helper function converting tcam type to text string
+ */
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+
+/**
+ * Helper function converting tbl type to text string
+ */
+const char
+*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+
+/**
+ * Helper function converting em tbl type to text string
+ */
+const char
+*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+
+#endif /* _TF_UTIL_H_ */
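
These helpers are intended for log formatting; a minimal sketch of their
use (illustrative only, not part of the patch; TFP_DRV_LOG is assumed to
come from tfp.h, as used elsewhere in this series):

#include "tfp.h"
#include "tf_util.h"

static void
log_tbl_error(enum tf_dir dir, enum tf_tbl_type type, int rc)
{
	/* Render the enums as readable strings in the driver log */
	TFP_DRV_LOG(ERR, "%s, %s table operation failed, rc:%d\n",
		    tf_dir_2_str(dir), tf_tbl_type_2_str(type), rc);
}
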
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 12/51] net/bnxt: support bulk table get and mirror
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (10 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 11/51] net/bnxt: add multi device support Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 13/51] net/bnxt: update multi device design support Ajit Khaparde
                           ` (40 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Venkat Duvvuru

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add new bulk table type get, using FW to DMA the data
  back to the host.
- Add flag to allow records to be cleared if possible
- Set mirror using tf_alloc_tbl_entry

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/hwrm_tf.h      |  37 ++++++++-
 drivers/net/bnxt/tf_core/tf_common.h    |  54 +++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c      |   2 +
 drivers/net/bnxt/tf_core/tf_core.h      |  55 ++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.c       |  70 ++++++++++++----
 drivers/net/bnxt/tf_core/tf_msg.h       |  15 ++++
 drivers/net/bnxt/tf_core/tf_resources.h |   5 +-
 drivers/net/bnxt/tf_core/tf_tbl.c       | 103 ++++++++++++++++++++++++
 8 files changed, 319 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_common.h
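
Before the diff, a caller-side sketch of the new bulk get (illustrative
only, not part of the patch; the read_stats_bulk name and the entry size
are placeholders, and the DMA buffer is assumed to come from the
tfp_calloc()/tfp_free() helpers used elsewhere in tf_msg.c):

#include <stdbool.h>
#include <stdint.h>

#include "tfp.h"
#include "tf_core.h"

static int
read_stats_bulk(struct tf *tfp, uint32_t start_idx, uint16_t num,
		uint16_t entry_sz)
{
	struct tfp_calloc_parms mem = { 0 };
	struct tf_get_bulk_tbl_entry_parms parms = { 0 };
	int rc;

	/* DMA-able buffer the firmware copies the entries into */
	mem.nitems = 1;
	mem.size = num * entry_sz;
	mem.alignment = 4096;
	rc = tfp_calloc(&mem);
	if (rc)
		return rc;

	parms.dir = TF_DIR_RX;
	parms.type = TF_TBL_TYPE_ACT_STATS_64;
	parms.clear_on_read = true;	/* only supported for stats */
	parms.starting_idx = start_idx;
	parms.num_entries = num;
	parms.entry_sz_in_bytes = entry_sz;
	parms.physical_mem_addr = (uint64_t)(uintptr_t)mem.mem_pa;

	rc = tf_get_bulk_tbl_entry(tfp, &parms);

	/* ... on success, consume the counters from mem.mem_va ... */

	tfp_free(mem.mem_va);
	return rc;
}
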

diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index d342c695c..c04d1034a 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Broadcom
+ * Copyright(c) 2019-2020 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -27,7 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET,
+	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -81,6 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
+struct tf_tbl_type_get_bulk_input;
+struct tf_tbl_type_get_bulk_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -902,6 +905,8 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* When set to 1, indicates the entry is cleared on read */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -916,4 +921,32 @@ typedef struct tf_tbl_type_get_output {
 	uint8_t			  data[TF_BULK_RECV];
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
+/* Input params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get apply to RX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+	/* When set to 1, indicates the get apply to TX */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+	/* When set to 1, indicates the entry is cleared on read */
+#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+	/* Type of the object to set */
+	uint32_t			 type;
+	/* Starting index to get from */
+	uint32_t			 start_index;
+	/* Number of entries to get */
+	uint32_t			 num_entries;
+	/* Host memory where data will be stored */
+	uint64_t			 host_addr;
+} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+
+/* Output params for table type bulk get */
+typedef struct tf_tbl_type_get_bulk_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
new file mode 100644
index 000000000..2aa4b8640
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_COMMON_H_
+#define _TF_COMMON_H_
+
+/* Helper to check the parms */
+#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "%s: session error\n", \
+				    tf_dir_2_str((parms)->dir)); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_PARMS(tfp, parms) do {	\
+		if ((parms) == NULL || (tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#define TF_CHECK_TFP_SESSION(tfp) do { \
+		if ((tfp) == NULL) { \
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
+			return -EINVAL; \
+		} \
+		if ((tfp)->session == NULL || \
+		    (tfp)->session->core_data == NULL) { \
+			TFP_DRV_LOG(ERR, "Session error\n"); \
+			return -EINVAL; \
+		} \
+	} while (0)
+
+#endif /* _TF_COMMON_H_ */
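
A minimal sketch of how these macros are meant to sit at the top of the
public APIs, mirroring their use in tf_tbl.c later in this patch
(illustrative only, not part of the patch; tf_example_api is a
hypothetical function, and TFP_DRV_LOG/tf_dir_2_str are assumed to be
visible via tfp.h and tf_util.h):

#include "tfp.h"
#include "tf_util.h"
#include "tf_core.h"
#include "tf_common.h"

int
tf_example_api(struct tf *tfp, struct tf_get_bulk_tbl_entry_parms *parms)
{
	/*
	 * Bails out with -EINVAL and a log message if tfp, parms or
	 * the session core data are missing; parms->dir is used in
	 * the log, so the parms type must carry a dir member.
	 */
	TF_CHECK_PARMS_SESSION(tfp, parms);

	/* ... normal processing ... */
	return 0;
}
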
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6e15a4c5c..a8236aec9 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -16,6 +16,8 @@
 #include "bitalloc.h"
 #include "bnxt.h"
 #include "rand.h"
+#include "tf_common.h"
+#include "hwrm_tf.h"
 
 static inline uint32_t SWAP_WORDS32(uint32_t val32)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index becc50c7f..96a1a794f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1165,7 +1165,7 @@ struct tf_get_tbl_entry_parms {
 	 */
 	uint8_t *data;
 	/**
-	 * [out] Entry size
+	 * [in] Entry size
 	 */
 	uint16_t data_sz_in_bytes;
 	/**
@@ -1188,6 +1188,59 @@ struct tf_get_tbl_entry_parms {
 int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
+/**
+ * tf_get_bulk_tbl_entry parameter definition
+ */
+struct tf_get_bulk_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Clear hardware entries on reads; only
+	 * supported for TF_TBL_TYPE_ACT_STATS_64
+	 */
+	bool clear_on_read;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [out] Host physical address, where the data
+	 * will be copied to by the firmware.
+	 * Use tfp_calloc() API and mem_pa
+	 * variable of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * Bulk get index table entry
+ *
+ * Used to retrieve a previously set index table entry.
+ *
+ * Reads and compares with the shadow table copy (if enabled) (only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_bulk_tbl_entry(struct tf *tfp,
+		     struct tf_get_bulk_tbl_entry_parms *parms);
+
 /**
  * @page exact_match Exact Match Table
  *
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c8f6b88d3..c755c8555 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1216,12 +1216,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 static int
-tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
 {
 	struct tfp_calloc_parms alloc_parms;
 	int rc;
@@ -1229,15 +1225,10 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 	/* Allocate session */
 	alloc_parms.nitems = 1;
 	alloc_parms.size = size;
-	alloc_parms.alignment = 0;
+	alloc_parms.alignment = 4096;
 	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate tcam dma entry, rc:%d\n",
-			    rc);
+	if (rc)
 		return -ENOMEM;
-	}
 
 	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
 	buf->va_addr = alloc_parms.mem_va;
@@ -1245,6 +1236,52 @@ tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
 	return 0;
 }
 
+int
+tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *params)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_get_bulk_input req = { 0 };
+	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	int data_size = 0;
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16((params->dir) |
+		((params->clear_on_read) ?
+		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+
+	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	if (resp.size < data_size)
+		return -EINVAL;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+#define TF_BYTES_PER_SLICE(tfp) 12
+#define NUM_SLICES(tfp, bytes) \
+	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
+
 int
 tf_msg_tcam_entry_set(struct tf *tfp,
 		      struct tf_set_tcam_entry_parms *parms)
@@ -1282,9 +1319,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	} else {
 		/* use dma buffer */
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_get_dma_buf(&buf, data_size);
-		if (rc != 0)
-			return rc;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
 		data = buf.va_addr;
 		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
 	}
@@ -1303,8 +1340,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
+cleanup:
 	if (buf.va_addr != NULL)
 		tfp_free(buf.va_addr);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 89f7370cc..8d050c402 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -267,4 +267,19 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 			 uint8_t *data,
 			 uint32_t index);
 
+/**
+ * Sends bulk get message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to table get bulk parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 05e131f8b..9b7f5a069 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -149,11 +149,10 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
+#define TF_RSVD_MIRROR_RX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
+#define TF_RSVD_MIRROR_TX                         1
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 17399a5b2..26313ed3c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
@@ -794,6 +795,7 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -915,6 +917,76 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Internal function to bulk get Table Entries. Supports all Table
+ * Types except TF_TBL_TYPE_EXT as that is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_bulk_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->starting_idx;
+
+	/*
+	 * Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->starting_idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
 #if (TF_SHADOW == 1)
 /**
  * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
@@ -1182,6 +1254,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
+	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
 		PMD_DRV_LOG(ERR,
 			    "dir:%d, Type not supported, type:%d\n",
@@ -1663,6 +1736,36 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
+/* API defined in tf_core.h */
+int
+tf_get_bulk_tbl_entry(struct tf *tfp,
+		 struct tf_get_bulk_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type,
+				    strerror(-rc));
+	}
+
+	return rc;
+}
+
 /* API defined in tf_core.h */
 int
 tf_alloc_tbl_scope(struct tf *tfp,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 13/51] net/bnxt: update multi device design support
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (11 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
                           ` (39 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Implement the modules RM, Device (WH+), Identifier.
- Update Session module.
- Implement new HWRMs for RM direct messaging.
- Add new parameter check macros and clean up the header includes,
  e.g. for tfp, such that bnxt.h is not directly included in the new modules.
- Add cfa_resource_types, required for RM design.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |   2 +
 drivers/net/bnxt/tf_core/Makefile             |   1 +
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 291 ++++++------
 drivers/net/bnxt/tf_core/tf_common.h          |  24 +
 drivers/net/bnxt/tf_core/tf_core.c            | 286 +++++++++++-
 drivers/net/bnxt/tf_core/tf_core.h            |  12 +-
 drivers/net/bnxt/tf_core/tf_device.c          | 150 +++++-
 drivers/net/bnxt/tf_core/tf_device.h          |  79 +++-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |  78 +++-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |  79 ++--
 drivers/net/bnxt/tf_core/tf_identifier.c      | 142 +++++-
 drivers/net/bnxt/tf_core/tf_identifier.h      |  25 +-
 drivers/net/bnxt/tf_core/tf_msg.c             | 268 +++++++++--
 drivers/net/bnxt/tf_core/tf_msg.h             |  59 +++
 drivers/net/bnxt/tf_core/tf_rm_new.c          | 434 ++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h          |  72 ++-
 drivers/net/bnxt/tf_core/tf_session.c         | 256 ++++++++++-
 drivers/net/bnxt/tf_core/tf_session.h         | 118 ++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   4 +
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |  30 +-
 drivers/net/bnxt/tf_core/tf_tbl_type.h        |  95 ++--
 drivers/net/bnxt/tf_core/tf_tcam.h            |  14 +-
 22 files changed, 2120 insertions(+), 399 deletions(-)

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index a50cb261d..1f7df9d06 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -32,6 +32,8 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
+	'tf_core/tf_session.c',
+	'tf_core/tf_device.c',
 	'tf_core/tf_device_p4.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 7a3c325a6..2c02e29e7 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -14,6 +14,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index c0c1e754e..11e8892f4 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -12,6 +12,11 @@
 
 #ifndef _CFA_RESOURCE_TYPES_H_
 #define _CFA_RESOURCE_TYPES_H_
+/*
+ * This is the constant used to define invalid CFA
+ * resource types across all devices.
+ */
+#define CFA_RESOURCE_TYPE_INVALID 65535
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
@@ -58,209 +63,205 @@
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_SPORT       0x7UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_DPORT       0x8UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV4      0x9UL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV4      0xaUL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_S_IPV6      0xbUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_SRAM_NAT_D_IPV6      0xcUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_SRAM_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_SRAM_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM         0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC            0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM            0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID           0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+/* Exact Match Record */
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM              0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF           0x1aUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER                0x1bUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR               0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM              0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB               0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB               0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P58_LAST                CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM              0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM             0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST                CFA_RESOURCE_TYPE_P45_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
-/* SRAM Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_SRAM_MCG             0x0UL
-/* SRAM Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B        0x1UL
-/* SRAM Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B       0x2UL
-/* SRAM Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B       0x3UL
-/* SRAM Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC          0x4UL
-/* SRAM Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV4     0x5UL
-/* SRAM Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC_IPV6     0x6UL
-/* SRAM 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B     0x7UL
-/* SRAM Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT       0x8UL
-/* SRAM Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT       0x9UL
-/* SRAM Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4      0xaUL
-/* SRAM Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4      0xbUL
-/* SRAM Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6      0xcUL
-/* SRAM Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6      0xdUL
+/* Multicast Group */
+#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+/* Encap 8 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+/* Encap 16 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+/* Encap 64 byte record */
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+/* Source Property MAC */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+/* Source Property MAC and IPv4 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+/* Source Property MAC and IPv6 */
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+/* 64B Counters */
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+/* Network Address Translation Source Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+/* Network Address Translation Destination Port */
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+/* Network Address Translation Source IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+/* Network Address Translation Destination IPv4 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
+/* Network Address Translation Source IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
+/* Network Address Translation Destination IPv6 address */
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_SRAM_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_SRAM_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM         0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC            0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM            0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID           0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC               0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID      0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM              0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF           0x1cUL
-/* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER                0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR               0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM              0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST                CFA_RESOURCE_TYPE_P4_SP_TCAM
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index 2aa4b8640..ec3bca835 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -51,4 +51,28 @@
 		} \
 	} while (0)
 
+
+#define TF_CHECK_PARMS1(parms) do {					\
+		if ((parms) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS2(parms1, parms2) do {				\
+		if ((parms1) == NULL || (parms2) == NULL) {		\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
+#define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
+		if ((parms1) == NULL ||					\
+		    (parms2) == NULL ||					\
+		    (parms3) == NULL) {					\
+			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
+			return -EINVAL;					\
+		}							\
+	} while (0)
+
 #endif /* _TF_COMMON_H_ */
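
For illustration, a minimal sketch of how the new TF_CHECK_PARMSn macros are meant to be used at an API entry point; tf_example_op() is a hypothetical name, not part of this series:

static int
tf_example_op(struct tf *tfp, struct tf_free_identifier_parms *parms)
{
	/* Logs "Invalid Argument(s)" and returns -EINVAL when either
	 * pointer is NULL.
	 */
	TF_CHECK_PARMS2(tfp, parms);

	return 0;
}
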
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index a8236aec9..81a88e211 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -85,7 +85,7 @@ tf_create_em_pool(struct tf_session *session,
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
 
 	if (rc != 0) {
 		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
@@ -231,7 +231,6 @@ tf_open_session(struct tf                    *tfp,
 		   TF_SESSION_NAME_MAX);
 
 	/* Initialize Session */
-	session->device_type = parms->device_type;
 	session->dev = NULL;
 	tf_rm_init(tfp);
 
@@ -276,7 +275,9 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session, dir, TF_SESSION_EM_POOL_SIZE);
+		rc = tf_create_em_pool(session,
+				       (enum tf_dir)dir,
+				       TF_SESSION_EM_POOL_SIZE);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "EM Pool initialization failed\n");
@@ -313,6 +314,64 @@ tf_open_session(struct tf                    *tfp,
 	return -EINVAL;
 }
 
+int
+tf_open_session_new(struct tf *tfp,
+		    struct tf_open_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_open_session_parms oparms;
+
+	TF_CHECK_PARMS(tfp, parms);
+
+	/* Filter out any non-supported device types on the Core
+	 * side. It is assumed that the Firmware supports the device
+	 * type if the firmware open session succeeds.
+	 */
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
+		return -ENOTSUP;
+	}
+
+	/* Verify control channel and build the beginning of session_id */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	oparms.open_cfg = parms;
+
+	rc = tf_session_open_session(tfp, &oparms);
+	/* Logging handled by tf_session_open_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Session created, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return 0;
+}
+
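
A sketch of the ctrl_chan_name parsing above, using a hypothetical PCI address string for illustration; note that the scanned slot value is not stored into the session id:

	unsigned int domain, bus, slot, device;

	/* "0000:af:00.0" is an illustrative DBDF string */
	if (sscanf("0000:af:00.0", "%x:%x:%x.%d",
		   &domain, &bus, &slot, &device) == 4) {
		/* domain = 0, bus = 0xaf, slot = 0, device = 0 */
	}
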
 int
 tf_attach_session(struct tf *tfp __rte_unused,
 		  struct tf_attach_session_parms *parms __rte_unused)
@@ -341,6 +400,69 @@ tf_attach_session(struct tf *tfp __rte_unused,
 	return -1;
 }
 
+int
+tf_attach_session_new(struct tf *tfp,
+		      struct tf_attach_session_parms *parms)
+{
+	int rc;
+	unsigned int domain, bus, slot, device;
+	struct tf_session_attach_session_parms aparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Verify control channel */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Verify 'attach' channel */
+	rc = sscanf(parms->attach_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to scan device attach_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* Prepare the session_id return value using the ctrl_chan_name
+	 * device values, as they become the session id.
+	 */
+	parms->session_id.internal.domain = domain;
+	parms->session_id.internal.bus = bus;
+	parms->session_id.internal.device = device;
+	aparms.attach_cfg = parms;
+	rc = tf_session_attach_session(tfp,
+				       &aparms);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Attached to session, session_id:%d\n",
+		    parms->session_id.id);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return rc;
+}
+
 int
 tf_close_session(struct tf *tfp)
 {
@@ -380,7 +502,7 @@ tf_close_session(struct tf *tfp)
 	if (tfs->ref_count == 0) {
 		/* Free EM pool */
 		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, dir);
+			tf_free_em_pool(tfs, (enum tf_dir)dir);
 
 		tfp_free(tfp->session->core_data);
 		tfp_free(tfp->session);
@@ -401,6 +523,39 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+int
+tf_close_session_new(struct tf *tfp)
+{
+	int rc;
+	struct tf_session_close_session_parms cparms = { 0 };
+	union tf_session_id session_id = { 0 };
+	uint8_t ref_count;
+
+	TF_CHECK_PARMS1(tfp);
+
+	cparms.ref_count = &ref_count;
+	cparms.session_id = &session_id;
+	rc = tf_session_close_session(tfp,
+				      &cparms);
+	/* Logging handled by tf_session_close_session */
+	if (rc)
+		return rc;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    cparms.session_id->id,
+		    *cparms.ref_count);
+
+	TFP_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    cparms.session_id->internal.domain,
+		    cparms.session_id->internal.bus,
+		    cparms.session_id->internal.device,
+		    cparms.session_id->internal.fw_session_id);
+
+	return rc;
+}
+
 /** insert EM hash entry API
  *
  *    returns:
@@ -539,10 +694,67 @@ int tf_alloc_identifier(struct tf *tfp,
 	return 0;
 }
 
-/** free identifier resource
- *
- * Returns success or failure code.
- */
+int
+tf_alloc_identifier_new(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_alloc_parms aparms;
+	uint16_t id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_ident_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.ident_type = parms->ident_type;
+	aparms.id = &id;
+	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->id = id;
+
+	return 0;
+}
+
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms)
 {
@@ -618,6 +830,64 @@ int tf_free_identifier(struct tf *tfp,
 	return 0;
 }
 
+int
+tf_free_identifier_new(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_ident_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_ident_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_ident == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.ident_type = parms->ident_type;
+	fparms.id = parms->id;
+	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Identifier free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
 int
 tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 96a1a794f..74ed24e5a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -380,7 +380,7 @@ struct tf_session_resources {
 	 * The number of identifier resources requested for the session.
 	 * The index used is tf_identifier_type.
 	 */
-	uint16_t identifer_cnt[TF_DIR_MAX][TF_IDENT_TYPE_MAX];
+	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
 	/** [in] Requested Index Table resource counts
 	 *
 	 * The number of index table resources requested for the session.
@@ -480,6 +480,9 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+int tf_open_session_new(struct tf *tfp,
+			struct tf_open_session_parms *parms);
+
 struct tf_attach_session_parms {
 	/** [in] ctrl_chan_name
 	 *
@@ -542,6 +545,8 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
+int tf_attach_session_new(struct tf *tfp,
+			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -551,6 +556,7 @@ int tf_attach_session(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
+int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -602,6 +608,8 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
+int tf_alloc_identifier_new(struct tf *tfp,
+			    struct tf_alloc_identifier_parms *parms);
 
 /** free identifier resource
  *
@@ -613,6 +621,8 @@ int tf_alloc_identifier(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
+int tf_free_identifier_new(struct tf *tfp,
+			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 3b368313e..4c46cadc6 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,45 +6,169 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
-#include "bnxt.h"
 
 struct tf;
 
+/* Forward declarations */
+static int dev_unbind_p4(struct tf *tfp);
+
 /**
- * Device specific bind function
+ * Device specific bind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] shadow_copy
+ *   Flag controlling shadow copy DB creation
+ *
+ * [in] resources
+ *   Pointer to resource allocation information
+ *
+ * [out] dev_handle
+ *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp __rte_unused,
-	    struct tf_session_resources *resources __rte_unused,
-	    struct tf_dev_info *dev_info)
+dev_bind_p4(struct tf *tfp,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
+	int rc;
+	int frc;
+	struct tf_ident_cfg_parms ident_cfg;
+	struct tf_tbl_cfg_parms tbl_cfg;
+	struct tf_tcam_cfg_parms tcam_cfg;
+
 	/* Initialize the modules */
 
-	dev_info->ops = &tf_dev_ops_p4;
+	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
+	ident_cfg.cfg = tf_ident_p4;
+	ident_cfg.shadow_copy = shadow_copy;
+	ident_cfg.resources = resources;
+	rc = tf_ident_bind(tfp, &ident_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier initialization failure\n");
+		goto fail;
+	}
+
+	tbl_cfg.num_elements = TF_TBL_TYPE_MAX;
+	tbl_cfg.cfg = tf_tbl_p4;
+	tbl_cfg.shadow_copy = shadow_copy;
+	tbl_cfg.resources = resources;
+	rc = tf_tbl_bind(tfp, &tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Table initialization failure\n");
+		goto fail;
+	}
+
+	tcam_cfg.num_elements = TF_TCAM_TBL_TYPE_MAX;
+	tcam_cfg.cfg = tf_tcam_p4;
+	tcam_cfg.shadow_copy = shadow_copy;
+	tcam_cfg.resources = resources;
+	rc = tf_tcam_bind(tfp, &tcam_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM initialization failure\n");
+		goto fail;
+	}
+
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	dev_handle->ops = &tf_dev_ops_p4;
+
 	return 0;
+
+ fail:
+	/* Cleanup of already created modules */
+	frc = dev_unbind_p4(tfp);
+	if (frc)
+		return frc;
+
+	return rc;
+}
+
+/**
+ * Device specific unbind function, WH+
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+dev_unbind_p4(struct tf *tfp)
+{
+	int rc = 0;
+	bool fail = false;
+
+	/* Unbind all the support modules. As this is only done on
+	 * close, we only report errors; everything has to be cleaned
+	 * up regardless.
+	 */
+	rc = tf_ident_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Identifier\n");
+		fail = true;
+	}
+
+	rc = tf_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Table Type\n");
+		fail = true;
+	}
+
+	rc = tf_tcam_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, TCAM\n");
+		fail = true;
+	}
+
+	if (fail)
+		return -1;
+
+	return rc;
 }
 
 int
 dev_bind(struct tf *tfp __rte_unused,
 	 enum tf_device_type type,
+	 bool shadow_copy,
 	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_info)
+	 struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
 		return dev_bind_p4(tfp,
+				   shadow_copy,
 				   resources,
-				   dev_info);
+				   dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
-			    "Device type not supported\n");
-		return -ENOTSUP;
+			    "No such device\n");
+		return -ENODEV;
 	}
 }
 
 int
-dev_unbind(struct tf *tfp __rte_unused,
-	   struct tf_dev_info *dev_handle __rte_unused)
+dev_unbind(struct tf *tfp,
+	   struct tf_dev_info *dev_handle)
 {
-	return 0;
+	switch (dev_handle->type) {
+	case TF_DEVICE_TYPE_WH:
+		return dev_unbind_p4(tfp);
+	default:
+		TFP_DRV_LOG(ERR,
+			    "No such device\n");
+		return -ENODEV;
+	}
 }
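
A hypothetical caller-side sketch of the new bind/unbind flow as it might appear inside a caller function (in this series the session module is expected to drive this); values are illustrative:

	struct tf_dev_info dev_handle;
	struct tf_session_resources res = { 0 };
	int rc;

	rc = dev_bind(tfp, TF_DEVICE_TYPE_WH, false /* shadow_copy */,
		      &res, &dev_handle);
	if (rc)
		return rc;	/* -ENODEV for unsupported device types */

	/* ... dispatch through dev_handle.ops ... */

	rc = dev_unbind(tfp, &dev_handle);
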
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 8b63ff178..6aeb6fedb 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -27,6 +27,7 @@ struct tf_session;
  * TF device information
  */
 struct tf_dev_info {
+	enum tf_device_type type;
 	const struct tf_dev_ops *ops;
 };
 
@@ -56,10 +57,12 @@ struct tf_dev_info {
  *
  * Returns
  *   - (0) if successful.
- *   - (-EINVAL) on failure.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_bind(struct tf *tfp,
 	     enum tf_device_type type,
+	     bool shadow_copy,
 	     struct tf_session_resources *resources,
 	     struct tf_dev_info *dev_handle);
 
@@ -71,6 +74,11 @@ int dev_bind(struct tf *tfp,
  *
  * [in] dev_handle
  *   Device handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) parameter failure.
+ *   - (-ENODEV) no such device supported.
  */
 int dev_unbind(struct tf *tfp,
 	       struct tf_dev_info *dev_handle);
@@ -84,6 +92,44 @@ int dev_unbind(struct tf *tfp,
  * different device variants.
  */
 struct tf_dev_ops {
+	/**
+	 * Retrieves the MAX number of resource types that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] max_types
+	 *   Pointer to MAX number of types the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_max_types)(struct tf *tfp,
+				    uint16_t *max_types);
+
+	/**
+	 * Retrieves the WC TCAM slice information that the device
+	 * supports.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [out] slice_size
+	 *   Pointer to slice size the device supports
+	 *
+	 * [out] num_slices_per_row
+	 *   Pointer to number of slices per row the device supports
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
+					 uint16_t *slice_size,
+					 uint16_t *num_slices_per_row);
+
 	/**
 	 * Allocation of an identifier element.
 	 *
@@ -134,14 +180,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation parameters
+	 *   Pointer to table allocation parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_tbl_type)(struct tf *tfp,
-				     struct tf_tbl_type_alloc_parms *parms);
+	int (*tf_dev_alloc_tbl)(struct tf *tfp,
+				struct tf_tbl_alloc_parms *parms);
 
 	/**
 	 * Free of a table type element.
@@ -153,14 +199,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type free parameters
+	 *   Pointer to table free parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_free_tbl_type)(struct tf *tfp,
-				    struct tf_tbl_type_free_parms *parms);
+	int (*tf_dev_free_tbl)(struct tf *tfp,
+			       struct tf_tbl_free_parms *parms);
 
 	/**
 	 * Searches for the specified table type element in a shadow DB.
@@ -175,15 +221,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type allocation and search parameters
+	 *   Pointer to table allocation and search parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_alloc_search_tbl_type)
-			(struct tf *tfp,
-			struct tf_tbl_type_alloc_search_parms *parms);
+	int (*tf_dev_alloc_search_tbl)(struct tf *tfp,
+				       struct tf_tbl_alloc_search_parms *parms);
 
 	/**
 	 * Sets the specified table type element.
@@ -195,14 +240,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type set parameters
+	 *   Pointer to table set parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_set_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_set_parms *parms);
+	int (*tf_dev_set_tbl)(struct tf *tfp,
+			      struct tf_tbl_set_parms *parms);
 
 	/**
 	 * Retrieves the specified table type element.
@@ -214,14 +259,14 @@ struct tf_dev_ops {
 	 *   Pointer to TF handle
 	 *
 	 * [in] parms
-	 *   Pointer to table type get parameters
+	 *   Pointer to table get parameters
 	 *
 	 * Returns
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_tbl_type)(struct tf *tfp,
-				   struct tf_tbl_type_get_parms *parms);
+	int (*tf_dev_get_tbl)(struct tf *tfp,
+			       struct tf_tbl_get_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c3c4d1e05..c235976fe 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -3,19 +3,87 @@
  * All rights reserved.
  */
 
+#include <rte_common.h>
+#include <cfa_resource_types.h>
+
 #include "tf_device.h"
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
 
+/**
+ * Device specific function that retrieves the MAX number of HCAPI
+ * types the device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] max_types
+ *   Pointer to the MAX number of HCAPI types supported
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
+			uint16_t *max_types)
+{
+	if (max_types == NULL)
+		return -EINVAL;
+
+	*max_types = CFA_RESOURCE_TYPE_P4_LAST + 1;
+
+	return 0;
+}
+
+/**
+ * Device specific function that retrieves the WC TCAM slices the
+ * device supports.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] slice_size
+ *   Pointer to the WC TCAM slice size
+ *
+ * [out] num_slices_per_row
+ *   Pointer to the WC TCAM row slice configuration
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+static int
+tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
+			     uint16_t *slice_size,
+			     uint16_t *num_slices_per_row)
+{
+#define CFA_P4_WC_TCAM_SLICE_SIZE       12
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+
+	if (slice_size == NULL || num_slices_per_row == NULL)
+		return -EINVAL;
+
+	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
+	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+
+	return 0;
+}
+
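
A sketch of how a caller would retrieve the WC TCAM geometry through the ops table, assuming dev points at a bound struct tf_dev_info:

	uint16_t slice_size, num_slices_per_row;
	int rc;

	rc = dev->ops->tf_dev_get_wc_tcam_slices(tfp,
						 &slice_size,
						 &num_slices_per_row);
	/* For P4/WH+: slice_size = 12 and num_slices_per_row = 2, so a
	 * row holds 2 slices of size 12, i.e. 24 when fully combined.
	 */
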
+/**
+ * Truflow P4 device specific functions
+ */
 const struct tf_dev_ops tf_dev_ops_p4 = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
-	.tf_dev_alloc_tbl_type = tf_tbl_type_alloc,
-	.tf_dev_free_tbl_type = tf_tbl_type_free,
-	.tf_dev_alloc_search_tbl_type = tf_tbl_type_alloc_search,
-	.tf_dev_set_tbl_type = tf_tbl_type_set,
-	.tf_dev_get_tbl_type = tf_tbl_type_get,
+	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 84d90e3a7..5cd02b298 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,11 +12,12 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, 0 /* CFA_RESOURCE_TYPE_P4_INVALID */ },
+	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-	{ TF_RM_ELEM_CFG_NULL, 0    /* CFA_RESOURCE_TYPE_P4_L2_FUNC */ }
+	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
@@ -24,41 +25,57 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */ },
-	{ TF_RM_ELEM_CFG_NULL, 0  /* CFA_RESOURCE_TYPE_P4_VEB_TCAM */ }
+	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_16B },
-	{ TF_RM_ELEM_CFG_NULL, 0, /* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_SP_MAC },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */ },
-	{ TF_RM_ELEM_CFG_NULL, 0 /* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */ },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SRAM_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_UPAR */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EPOC */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_METADATA */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_CT_STATE */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_PROF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_LAG */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EM_FBK */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_WC_FKB */ },
-	{ TF_RM_ELEM_CFG_NULL, /* CFA_RESOURCE_TYPE_P4_EXT */ }
+	/* CFA_RESOURCE_TYPE_P4_UPAR */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EPOC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_METADATA */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_CT_STATE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_PROF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_RANGE_ENTRY */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_LAG */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_VNIC_SVIF */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EM_FBK */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_WC_FKB */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	/* CFA_RESOURCE_TYPE_P4_EXT */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 726d0b406..e89f9768b 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -6,42 +6,172 @@
 #include <rte_common.h>
 
 #include "tf_identifier.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Identifier DBs.
  */
-/* static void *ident_db[TF_DIR_MAX]; */
+static void *ident_db[TF_DIR_MAX];
 
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 int
-tf_ident_bind(struct tf *tfp __rte_unused,
-	      struct tf_ident_cfg *parms __rte_unused)
+tf_ident_bind(struct tf *tfp,
+	      struct tf_ident_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
+		db_cfg.rm_db = ident_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Identifier DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_ident_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized. Done silently to
+	 * allow for creation cleanup.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = ident_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		ident_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_ident_alloc(struct tf *tfp __rte_unused,
-	       struct tf_ident_alloc_parms *parms __rte_unused)
+	       struct tf_ident_alloc_parms *parms)
 {
+	int rc;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = (uint32_t *)&parms->id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type);
+		return rc;
+	}
+
 	return 0;
 }
 
 int
 tf_ident_free(struct tf *tfp __rte_unused,
-	      struct tf_ident_free_parms *parms __rte_unused)
+	      struct tf_ident_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Identifier DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = ident_db[parms->dir];
+	aparms.db_index = parms->ident_type;
+	aparms.index = parms->id;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = ident_db[parms->dir];
+	fparms.db_index = parms->ident_type;
+	fparms.index = parms->id;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->ident_type,
+			    parms->id);
+		return rc;
+	}
+
 	return 0;
 }
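
A short usage sketch of the reworked identifier alloc/free parameters; the direction and type are illustrative, and TF_IDENT_TYPE_PROF_FUNC is assumed from the tf_ident_p4 ordering:

	uint16_t id;
	struct tf_ident_alloc_parms aparms = { 0 };
	struct tf_ident_free_parms fparms = { 0 };
	int rc;

	aparms.dir = TF_DIR_RX;
	aparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	aparms.id = &id;		/* id is now returned via a pointer */
	rc = tf_ident_alloc(tfp, &aparms);
	if (rc)
		return rc;

	fparms.dir = TF_DIR_RX;
	fparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	fparms.id = id;
	rc = tf_ident_free(tfp, &fparms);
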
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index b77c91b9d..1c5319b5e 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -12,21 +12,28 @@
  * The Identifier module provides processing of Identifiers.
  */
 
-struct tf_ident_cfg {
+struct tf_ident_cfg_parms {
 	/**
-	 * Number of identifier types in each of the configuration
-	 * arrays
+	 * [in] Number of identifier types in each of the
+	 * configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
-	 * TCAM configuration array
+	 * [in] Identifier configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Boolean controlling the request shadow copy.
 	 */
-	struct tf_rm_element_cfg *ident_cfg[TF_DIR_MAX];
+	bool shadow_copy;
+	/**
+	 * [in] Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Identifier allcoation parameter definition
+ * Identifier allocation parameter definition
  */
 struct tf_ident_alloc_parms {
 	/**
@@ -40,7 +47,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [out] Identifier allocated
 	 */
-	uint16_t id;
+	uint16_t *id;
 };
 
 /**
@@ -88,7 +95,7 @@ struct tf_ident_free_parms {
  *   - (-EINVAL) on failure.
  */
 int tf_ident_bind(struct tf *tfp,
-		  struct tf_ident_cfg *parms);
+		  struct tf_ident_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c755c8555..e08a96f23 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -6,15 +6,13 @@
 #include <inttypes.h>
 #include <stdbool.h>
 #include <stdlib.h>
-
-#include "bnxt.h"
-#include "tf_core.h"
-#include "tf_session.h"
-#include "tfp.h"
+#include <string.h>
 
 #include "tf_msg_common.h"
 #include "tf_msg.h"
-#include "hsi_struct_def_dpdk.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tfp.h"
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
@@ -140,6 +138,51 @@ tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
 	return rc;
 }
 
+/**
+ * Allocates a DMA buffer that can be used for message transfer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ *
+ * [in] size
+ *   Requested size of the buffer in bytes
+ *
+ * Returns:
+ *    0      - Success
+ *   -ENOMEM - Unable to allocate buffer, no memory
+ */
+static int
+tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
+{
+	struct tfp_calloc_parms alloc_parms;
+	int rc;
+
+	/* Allocate the DMA buffer */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = size;
+	alloc_parms.alignment = 4096;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc)
+		return -ENOMEM;
+
+	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
+	buf->va_addr = alloc_parms.mem_va;
+
+	return 0;
+}
+
+/**
+ * Frees a previously allocated DMA buffer.
+ *
+ * [in] buf
+ *   Pointer to DMA buffer structure
+ */
+static void
+tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
+{
+	tfp_free(buf->va_addr);
+}
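
The intended alloc/use/free pattern for these DMA helpers, as a fragment inside a caller with an illustrative size:

	struct tf_msg_dma_buf buf = { 0 };
	int rc;

	rc = tf_msg_alloc_dma_buf(&buf, 64 * sizeof(struct tf_rm_resc_entry));
	if (rc)
		return rc;

	/* buf.va_addr is used for host access; buf.pa_addr is what is
	 * handed to firmware in the HWRM request.
	 */

	tf_msg_free_dma_buf(&buf);
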
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -154,7 +197,7 @@ tf_msg_session_open(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
+	tfp_memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
 
 	parms.tf_type = HWRM_TF_SESSION_OPEN;
 	parms.req_data = (uint32_t *)&req;
@@ -870,6 +913,180 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+int
+tf_msg_session_resc_qcaps(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *query,
+			  enum tf_rm_resc_resv_strategy *resv_strategy)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_qcaps_input req = { 0 };
+	struct hwrm_tf_session_resc_qcaps_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf qcaps_buf = { 0 };
+	struct tf_rm_resc_req_entry *data;
+	int dma_size;
+
+	if (size == 0 || query == NULL || resv_strategy == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Resource QCAPS parameter error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffer */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&qcaps_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.qcaps_size = size;
+	req.qcaps_addr = qcaps_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: QCAPS message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].min = tfp_le_to_cpu_16(data[i].min);
+		query[i].max = tfp_le_to_cpu_16(data[i].max);
+	}
+
+	*resv_strategy = resp.flags &
+	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
+
+	tf_msg_free_dma_buf(&qcaps_buf);
+
+	return rc;
+}
+
+int
+tf_msg_session_resc_alloc(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_req_entry *request,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_alloc_input req = { 0 };
+	struct hwrm_tf_session_resc_alloc_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf req_buf = { 0 };
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_req_entry *req_data;
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_req_entry);
+	rc = tf_msg_alloc_dma_buf(&req_buf, dma_size);
+	if (rc)
+		return rc;
+
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.req_size = size;
+
+	req_data = (struct tf_rm_resc_req_entry *)req_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		req_data[i].type = tfp_cpu_to_le_32(request[i].type);
+		req_data[i].min = tfp_cpu_to_le_16(request[i].min);
+		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
+	}
+
+	req.req_addr = req_buf.pa_addr;
+	req.resp_addr = resv_buf.pa_addr;
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response
+	 * Should always get expected number of entries
+	 */
+	if (resp.size != size) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc message error, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-EINVAL));
+		return -EINVAL;
+	}
+
+	/* Post process the response */
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
+		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
+		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+	}
+
+	tf_msg_free_dma_buf(&req_buf);
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
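
Taken together, the two messages form a query-then-reserve handshake. A sketch of the expected call sequence, with WH+ sizing assumed and the clamping step elided:

	uint16_t max_types = CFA_RESOURCE_TYPE_P4_LAST + 1;
	struct tf_rm_resc_req_entry query[CFA_RESOURCE_TYPE_P4_LAST + 1];
	struct tf_rm_resc_req_entry req[CFA_RESOURCE_TYPE_P4_LAST + 1];
	struct tf_rm_resc_entry resv[CFA_RESOURCE_TYPE_P4_LAST + 1];
	enum tf_rm_resc_resv_strategy strategy;
	int rc;

	rc = tf_msg_session_resc_qcaps(tfp, TF_DIR_RX, max_types,
				       query, &strategy);
	/* ... clamp each requested min/max against query[i].max ... */
	rc = tf_msg_session_resc_alloc(tfp, TF_DIR_RX, max_types,
				       req, resv);
	/* resv[i].start/stride now describe the reserved HCAPI ranges */
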
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1034,7 +1251,9 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	memcpy(req.em_key, em_parms->key, ((em_parms->key_sz_in_bits + 7) / 8));
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
@@ -1216,26 +1435,6 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-static int
-tf_msg_alloc_dma_buf(struct tf_msg_dma_buf *buf, int size)
-{
-	struct tfp_calloc_parms alloc_parms;
-	int rc;
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = size;
-	alloc_parms.alignment = 4096;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc)
-		return -ENOMEM;
-
-	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
-	buf->va_addr = alloc_parms.mem_va;
-
-	return 0;
-}
-
 int
 tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 			  struct tf_get_bulk_tbl_entry_parms *params)
@@ -1323,12 +1522,14 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		if (rc)
 			goto cleanup;
 		data = buf.va_addr;
-		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
 	}
 
-	memcpy(&data[0], parms->key, key_bytes);
-	memcpy(&data[key_bytes], parms->mask, key_bytes);
-	memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, key_bytes);
+	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
+	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1343,8 +1544,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 		goto cleanup;
 
 cleanup:
-	if (buf.va_addr != NULL)
-		tfp_free(buf.va_addr);
+	tf_msg_free_dma_buf(&buf);
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8d050c402..06f52ef00 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -6,8 +6,12 @@
 #ifndef _TF_MSG_H_
 #define _TF_MSG_H_
 
+#include <rte_common.h>
+#include <hsi_struct_def_dpdk.h>
+
 #include "tf_tbl.h"
 #include "tf_rm.h"
+#include "tf_rm_new.h"
 
 struct tf;
 
@@ -121,6 +125,61 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the query. Should be set to the max
+ *   elements for the device type
+ *
+ * [out] query
+ *   Pointer to an array of query elements
+ *
+ * [out] resv_strategy
+ *   Pointer to the reservation strategy
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_qcaps(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *query,
+			      enum tf_rm_resc_resv_strategy *resv_strategy);
+
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the req and resv arrays
+ *
+ * [in] req
+ *   Pointer to an array of request elements
+ *
+ * [out] resv
+ *   Pointer to an array of reserved elements
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_resc_alloc(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_req_entry *request,
+			      struct tf_rm_resc_entry *resv);
+
 /**
  * Sends EM internal insert request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 51bb9ba3a..7cadb231f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -3,20 +3,18 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
 #include <rte_common.h>
 
-#include "tf_rm_new.h"
+#include <cfa_resource_types.h>
 
-/**
- * Resource query single entry. Used when accessing HCAPI RM on the
- * firmware.
- */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
-};
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_session.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_msg.h"
 
 /**
  * Generic RM Element data type that an RM DB is build upon.
@@ -27,7 +25,7 @@ struct tf_rm_element {
 	 * hcapi_type can be ignored. If Null then the element is not
 	 * valid for the device.
 	 */
-	enum tf_rm_elem_cfg_type type;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/**
 	 * HCAPI RM Type for the element.
@@ -50,53 +48,435 @@ struct tf_rm_element {
 /**
  * TF RM DB definition
  */
-struct tf_rm_db {
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
+
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
+
 	/**
 	 * The DB consists of an array of elements
 	 */
 	struct tf_rm_element *db;
 };
 
+
+/**
+ * Resource Manager Adjust of base index definitions.
+ */
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
+
+/**
+ * Adjust an index according to the allocation information.
+ *
+ * All resources are controlled in a 0 based pool. Some resources,
+ * by design, are not 0 based, e.g. Full Action Records (SRAM), thus
+ * they need to be adjusted before they are handed out.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
+{
+	int rc = 0;
+	uint32_t base_index;
+
+	base_index = db[db_index].alloc.entry.start;
+
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return rc;
+}
+
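
A worked example of the adjustment above, with illustrative numbers:

	uint32_t adj_index;

	/* Suppose db[db_index].alloc.entry.start == 1000. A 0 based
	 * pool index of 5 then maps to HCAPI index 1005:
	 */
	tf_rm_adjust_index(db, TF_RM_ADJUST_ADD_BASE, db_index, 5, &adj_index);
	/* adj_index == 1005; TF_RM_ADJUST_RM_BASE performs the inverse */
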
 int
-tf_rm_create_db(struct tf *tfp __rte_unused,
-		struct tf_rm_create_db_parms *parms __rte_unused)
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
+
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against db requirements */
+
+	/* Alloc request, alignment already set */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
+
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0; i < parms->num_elements; i++) {
+		/* Skip any non HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			req[i].type = parms->cfg[i].hcapi_type;
+			/* Check that we can get the full amount allocated */
+			if (parms->alloc_num[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[i].min = parms->alloc_num[i];
+				req[i].max = parms->alloc_num[i];
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_num[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		} else {
+			/* Skip the element */
+			req[i].type = CFA_RESOURCE_TYPE_INVALID;
+		}
+	}
+
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       parms->num_elements,
+				       req,
+				       resv);
+	if (rc)
+		return rc;
+
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db = (void *)cparms.mem_va;
+
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
+
+	db = rm_db->db;
+	for (i = 0; i < parms->num_elements; i++) {
+		/* If allocation failed for a single entry the DB
+		 * creation is considered a failure.
+		 */
+		if (parms->alloc_num[i] != resv[i].stride) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    i);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_num[i],
+				    resv[i].stride);
+			goto fail;
+		}
+
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+		db[i].alloc.entry.start = resv[i].start;
+		db[i].alloc.entry.stride = resv[i].stride;
+
+		/* Create pool */
+		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
+			     sizeof(struct bitalloc));
+		/* Alloc request, alignment already set */
+		cparms.nitems = pool_size;
+		cparms.size = sizeof(struct bitalloc);
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			return rc;
+		db[i].pool = (struct bitalloc *)cparms.mem_va;
+	}
+
+	rm_db->num_entries = i;
+	rm_db->dir = parms->dir;
+	parms->rm_db = (void *)rm_db;
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
 	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
 
 int
 tf_rm_free_db(struct tf *tfp __rte_unused,
-	      struct tf_rm_free_db_parms *parms __rte_unused)
+	      struct tf_rm_free_db_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int i;
+	struct tf_rm_new_db *rm_db;
+
+	/* Traverse the DB and clear each pool.
+	 * NOTE:
+	 *   Firmware is not cleared. It will be cleared on close only.
+	 */
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db[i].pool);
+
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
 
 int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms __rte_unused)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	int id;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -ENOMEM;
+	}
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				parms->index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -1;
+	}
+
+	return rc;
 }
 
 int
-tf_rm_free(struct tf_rm_free_parms *parms __rte_unused)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging; direction is not available here for the message */
+	if (rc)
+		return rc;
+
+	return rc;
 }
 
 int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms __rte_unused)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
 
 int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms __rte_unused)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	parms->info = &rm_db->db[parms->db_index].alloc;
+
+	return rc;
 }
 
 int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms __rte_unused)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	return 0;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	if (parms == NULL || parms->rm_db == NULL)
+		return -EINVAL;
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 72dba0984..6d8234ddc 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
 #include "tf_core.h"
 #include "bitalloc.h"
@@ -32,13 +32,16 @@ struct tf;
  * MAX pool size of the Chip needs to be added to the tf_rm_elem_info
  * structure and several new APIs would need to be added to allow for
  * growth of a single TF resource type.
+ *
+ * The access functions do not check for NULL pointers as this is a
+ * support module that is not called directly.
  */
 
 /**
  * Resource reservation single entry result. Used when accessing HCAPI
  * RM on the firmware.
  */
-struct tf_rm_entry {
+struct tf_rm_new_entry {
 	/** Starting index of the allocated resource */
 	uint16_t start;
 	/** Number of allocated elements */
@@ -52,12 +55,32 @@ struct tf_rm_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	TF_RM_ELEM_CFG_NULL,    /**< No configuration */
-	TF_RM_ELEM_CFG_HCAPI,   /**< HCAPI 'controlled' */
-	TF_RM_ELEM_CFG_PRIVATE, /**< Private thus not HCAPI 'controlled' */
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled' */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
 	TF_RM_TYPE_MAX
 };
 
+/**
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
+ */
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
+};
+
 /**
  * RM Element configuration structure, used by the Device to configure
  * how an individual TF type is configured in regard to the HCAPI RM
@@ -68,7 +91,7 @@ struct tf_rm_element_cfg {
 	 * RM Element config controls how the DB for that element is
 	 * processed.
 	 */
-	enum tf_rm_elem_cfg_type cfg;
+	enum tf_rm_elem_cfg_type cfg_type;
 
 	/* If a HCAPI to TF type conversion is required then TF type
 	 * can be added here.
@@ -92,7 +115,7 @@ struct tf_rm_alloc_info {
 	 * In case of dynamic allocation support this would have
 	 * to be changed to linked list of tf_rm_entry instead.
 	 */
-	struct tf_rm_entry entry;
+	struct tf_rm_new_entry entry;
 };
 
 /**
@@ -104,17 +127,21 @@ struct tf_rm_create_db_parms {
 	 */
 	enum tf_dir dir;
 	/**
-	 * [in] Number of elements in the parameter structure
+	 * [in] Number of elements.
 	 */
 	uint16_t num_elements;
 	/**
-	 * [in] Parameter structure
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Allocation number array. Array size is num_elements.
 	 */
-	struct tf_rm_element_cfg *parms;
+	uint16_t *alloc_num;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -128,7 +155,7 @@ struct tf_rm_free_db_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 };
 
 /**
@@ -138,7 +165,7 @@ struct tf_rm_allocate_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -159,7 +186,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -168,7 +195,7 @@ struct tf_rm_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t index;
+	uint16_t index;
 };
 
 /**
@@ -178,7 +205,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -191,7 +218,7 @@ struct tf_rm_is_allocated_parms {
 	/**
 	 * [in] Pointer to flag that indicates the state of the query
 	 */
-	uint8_t *allocated;
+	int *allocated;
 };
 
 /**
@@ -201,7 +228,7 @@ struct tf_rm_get_alloc_info_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -221,7 +248,7 @@ struct tf_rm_get_hcapi_parms {
 	/**
 	 * [in] RM DB Handle
 	 */
-	void *tf_rm_db;
+	void *rm_db;
 	/**
 	 * [in] DB Index, indicates which DB entry to perform the
 	 * action on.
@@ -306,6 +333,7 @@ int tf_rm_free_db(struct tf *tfp,
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
 int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
@@ -317,7 +345,7 @@ int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
  *
  * Returns
  *   - (0) if successful.
- *   - (-EpINVAL) on failure.
+ *   - (-EINVAL) on failure.
  */
 int tf_rm_free(struct tf_rm_free_parms *parms);
 
@@ -365,4 +393,4 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index c74994546..1917f8100 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -3,29 +3,269 @@
  * All rights reserved.
  */
 
+#include <string.h>
+
+#include <rte_common.h>
+
+#include "tf_session.h"
+#include "tf_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session *session = NULL;
+	struct tfp_calloc_parms cparms;
+	uint8_t fw_session_id;
+	union tf_session_id *session_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Open FW session and get a new session_id */
+	rc = tf_msg_session_open(tfp,
+				 parms->open_cfg->ctrl_chan_name,
+				 &fw_session_id);
+	if (rc) {
+		/* Log error */
+		if (rc == -EEXIST)
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
+		else
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
+
+		parms->open_cfg->session_id.id = TF_FW_SESSION_ID_INVALID;
+		return rc;
+	}
+
+	/* Allocate session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_info);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session = (struct tf_session_info *)cparms.mem_va;
+
+	/* Allocate core data for the session */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	tfp->session->core_data = cparms.mem_va;
+
+	/* Initialize Session and Device */
+	session = (struct tf_session *)tfp->session->core_data;
+	session->ver.major = 0;
+	session->ver.minor = 0;
+	session->ver.update = 0;
+
+	session_id = &parms->open_cfg->session_id;
+	session->session_id.internal.domain = session_id->internal.domain;
+	session->session_id.internal.bus = session_id->internal.bus;
+	session->session_id.internal.device = session_id->internal.device;
+	session->session_id.internal.fw_session_id = fw_session_id;
+	/* Return the allocated fw session id */
+	session_id->internal.fw_session_id = fw_session_id;
+
+	session->shadow_copy = parms->open_cfg->shadow_copy;
+
+	tfp_memcpy(session->ctrl_chan_name,
+		   parms->open_cfg->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	rc = dev_bind(tfp,
+		      parms->open_cfg->device_type,
+		      session->shadow_copy,
+		      &parms->open_cfg->resources,
+		      session->dev);
+	/* Logging handled by dev_bind */
+	if (rc)
+		return rc;
+
+	/* Query for Session Config
+	 */
+	rc = tf_msg_session_qcfg(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup_close;
+	}
+
+	session->ref_count++;
+
+	return 0;
+
+ cleanup:
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
+	return rc;
+
+ cleanup_close:
+	tf_close_session(tfp);
+	return -EINVAL;
+}
+
+int
+tf_session_attach_session(struct tf *tfp __rte_unused,
+			  struct tf_session_attach_session_parms *parms __rte_unused)
+{
+	int rc = -EOPNOTSUPP;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	TFP_DRV_LOG(ERR,
+		    "Attach not yet supported, rc:%s\n",
+		    strerror(-rc));
+	return rc;
+}
+
+int
+tf_session_close_session(struct tf *tfp,
+			 struct tf_session_close_session_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs = NULL;
+	struct tf_dev_info *tfd;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (tfs->session_id.id == TF_SESSION_ID_INVALID) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid session id, unable to close, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Record the session we're closing so the caller knows the
+	 * details.
+	 */
+	*parms->session_id = tfs->session_id;
+
+	rc = tf_session_get_device(tfs, &tfd);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* In case we're attached, only the session client gets closed */
+	rc = tf_msg_session_close(tfp);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "FW Session close failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	tfs->ref_count--;
+
+	/* Final cleanup as we're last user of the session */
+	if (tfs->ref_count == 0) {
+		/* Unbind the device */
+		rc = dev_unbind(tfp, tfd);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "Device unbind failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		tfp_free(tfp->session->core_data);
+		tfp_free(tfp->session);
+		tfp->session = NULL;
+	}
+
+	return 0;
+}
+
 int
 tf_session_get_session(struct tf *tfp,
-		       struct tf_session *tfs)
+		       struct tf_session **tfs)
 {
+	int rc;
+
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		TFP_DRV_LOG(ERR, "Session not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	*tfs = (struct tf_session *)(tfp->session->core_data);
 
 	return 0;
 }
 
 int
 tf_session_get_device(struct tf_session *tfs,
-		      struct tf_device *tfd)
+		      struct tf_dev_info **tfd)
 {
+	int rc;
+
 	if (tfs->dev == NULL) {
-		TFP_DRV_LOG(ERR, "Device not created\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Device not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	*tfd = tfs->dev;
+
+	return 0;
+}
+
+int
+tf_session_get_fw_session_id(struct tf *tfp,
+			     uint8_t *fw_session_id)
+{
+	int rc;
+	struct tf_session *tfs = NULL;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
 	}
-	tfd = tfs->dev;
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*fw_session_id = tfs->session_id.internal.fw_session_id;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index b1cc7a4a7..92792518b 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -63,12 +63,7 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Device type, provided by tf_open_session().
-	 */
-	enum tf_device_type device_type;
-
-	/** Session ID, allocated by FW on tf_open_session().
-	 */
+	/** Session ID, allocated by FW on tf_open_session() */
 	union tf_session_id session_id;
 
 	/**
@@ -101,7 +96,7 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
-	/** Device */
+	/** Device handle */
 	struct tf_dev_info *dev;
 
 	/** Session HW and SRAM resources */
@@ -323,13 +318,97 @@ struct tf_session {
 	struct stack em_pool[TF_DIR_MAX];
 };
 
+/**
+ * Session open parameter definition
+ */
+struct tf_session_open_session_parms {
+	/**
+	 * [in] Pointer to the TF open session configuration
+	 */
+	struct tf_open_session_parms *open_cfg;
+};
+
+/**
+ * Session attach parameter definition
+ */
+struct tf_session_attach_session_parms {
+	/**
+	 * [in] Pointer to the TF attach session configuration
+	 */
+	struct tf_attach_session_parms *attach_cfg;
+};
+
+/**
+ * Session close parameter definition
+ */
+struct tf_session_close_session_parms {
+	uint8_t *ref_count;
+	union tf_session_id *session_id;
+};
+
 /**
  * @page session Session Management
  *
+ * @ref tf_session_open_session
+ *
+ * @ref tf_session_attach_session
+ *
+ * @ref tf_session_close_session
+ *
  * @ref tf_session_get_session
  *
  * @ref tf_session_get_device
+ *
+ * @ref tf_session_get_fw_session_id
+ */
+
+/**
+ * Creates a host session with a corresponding firmware session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
+int tf_session_open_session(struct tf *tfp,
+			    struct tf_session_open_session_parms *parms);
+
+/**
+ * Attaches a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session attach parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_attach_session(struct tf *tfp,
+			      struct tf_session_attach_session_parms *parms);
+
+/**
+ * Closes a previously created session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in/out] parms
+ *   Pointer to the session close parameters.
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_close_session(struct tf *tfp,
+			     struct tf_session_close_session_parms *parms);
 
 /**
  * Looks up the private session information from the TF session info.
@@ -338,14 +417,14 @@ struct tf_session {
  *   Pointer to TF handle
  *
  * [out] tfs
- *   Pointer to the session
+ *   Pointer to a pointer to the session
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_session(struct tf *tfp,
-			   struct tf_session *tfs);
+			   struct tf_session **tfs);
 
 /**
  * Looks up the device information from the TF Session.
@@ -354,13 +433,30 @@ int tf_session_get_session(struct tf *tfp,
  *   Pointer to TF handle
  *
  * [out] tfd
- *   Pointer to the device
+ *   Pointer to a pointer to the device
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
 int tf_session_get_device(struct tf_session *tfs,
-			  struct tf_dev_info *tfd);
+			  struct tf_dev_info **tfd);
+
+/**
+ * Looks up the FW session id of the firmware connection for the
+ * requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] fw_session_id
+ *   Pointer to the FW session id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_fw_session_id(struct tf *tfp,
+				 uint8_t *fw_session_id);
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 7a5443678..a8bb0edab 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,8 +7,12 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+
+#include "tf_core.h"
 #include "stack.h"
 
+struct tf_session;
+
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
 	PT_LVL_1,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index a57a5ddf2..b79706f97 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -10,12 +10,12 @@
 struct tf;
 
 /**
- * Table Type DBs.
+ * Table DBs.
  */
 /* static void *tbl_db[TF_DIR_MAX]; */
 
 /**
- * Table Type Shadow DBs
+ * Table Shadow DBs
  */
 /* static void *shadow_tbl_db[TF_DIR_MAX]; */
 
@@ -30,49 +30,49 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_type_bind(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp __rte_unused,
+	    struct tf_tbl_cfg_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_unbind(struct tf *tfp __rte_unused)
+tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc(struct tf *tfp __rte_unused,
-		  struct tf_tbl_type_alloc_parms *parms __rte_unused)
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_free(struct tf *tfp __rte_unused,
-		 struct tf_tbl_type_free_parms *parms __rte_unused)
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_alloc_search(struct tf *tfp __rte_unused,
-			 struct tf_tbl_type_alloc_search_parms *parms __rte_unused)
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_set(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp __rte_unused,
+	   struct tf_tbl_set_parms *parms __rte_unused)
 {
 	return 0;
 }
 
 int
-tf_tbl_type_get(struct tf *tfp __rte_unused,
-		struct tf_tbl_type_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp __rte_unused,
+	   struct tf_tbl_get_parms *parms __rte_unused)
 {
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index c880b368b..11f2aa333 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -11,33 +11,39 @@
 struct tf;
 
 /**
- * The Table Type module provides processing of Internal TF table types.
+ * The Table module provides processing of Internal TF table types.
  */
 
 /**
- * Table Type configuration parameters
+ * Table configuration parameters
  */
-struct tf_tbl_type_cfg_parms {
+struct tf_tbl_cfg_parms {
 	/**
 	 * Number of table types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * Table Type element configuration array
 	 */
-	struct tf_rm_element_cfg *tbl_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tbl_type_cfg *tbl_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
- * Table Type allocation parameters
+ * Table allocation parameters
  */
-struct tf_tbl_type_alloc_parms {
+struct tf_tbl_alloc_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -53,9 +59,9 @@ struct tf_tbl_type_alloc_parms {
 };
 
 /**
- * Table Type free parameters
+ * Table free parameters
  */
-struct tf_tbl_type_free_parms {
+struct tf_tbl_free_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -75,7 +81,10 @@ struct tf_tbl_type_free_parms {
 	uint16_t ref_cnt;
 };
 
-struct tf_tbl_type_alloc_search_parms {
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -117,9 +126,9 @@ struct tf_tbl_type_alloc_search_parms {
 };
 
 /**
- * Table Type set parameters
+ * Table set parameters
  */
-struct tf_tbl_type_set_parms {
+struct tf_tbl_set_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -143,9 +152,9 @@ struct tf_tbl_type_set_parms {
 };
 
 /**
- * Table Type get parameters
+ * Table get parameters
  */
-struct tf_tbl_type_get_parms {
+struct tf_tbl_get_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -169,39 +178,39 @@ struct tf_tbl_type_get_parms {
 };
 
 /**
- * @page tbl_type Table Type
+ * @page tbl Table
  *
- * @ref tf_tbl_type_bind
+ * @ref tf_tbl_bind
  *
- * @ref tf_tbl_type_unbind
+ * @ref tf_tbl_unbind
  *
- * @ref tf_tbl_type_alloc
+ * @ref tf_tbl_alloc
  *
- * @ref tf_tbl_type_free
+ * @ref tf_tbl_free
  *
- * @ref tf_tbl_type_alloc_search
+ * @ref tf_tbl_alloc_search
  *
- * @ref tf_tbl_type_set
+ * @ref tf_tbl_set
  *
- * @ref tf_tbl_type_get
+ * @ref tf_tbl_get
  */
 
 /**
- * Initializes the Table Type module with the requested DBs. Must be
+ * Initializes the Table module with the requested DBs. Must be
  * invoked as the first thing before any of the access functions.
  *
  * [in] tfp
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table configuration parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_bind(struct tf *tfp,
-		     struct tf_tbl_type_cfg_parms *parms);
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
 
 /**
  * Cleans up the private DBs and releases all the data.
@@ -216,7 +225,7 @@ int tf_tbl_type_bind(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_unbind(struct tf *tfp);
+int tf_tbl_unbind(struct tf *tfp);
 
 /**
  * Allocates the requested table type from the internal RM DB.
@@ -225,14 +234,14 @@ int tf_tbl_type_unbind(struct tf *tfp);
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table allocation parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc(struct tf *tfp,
-		      struct tf_tbl_type_alloc_parms *parms);
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
 
 /**
  * Frees the requested table type and returns it to the DB. If shadow
@@ -244,14 +253,14 @@ int tf_tbl_type_alloc(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_free(struct tf *tfp,
-		     struct tf_tbl_type_free_parms *parms);
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
 
 /**
  * Supported if Shadow DB is configured. Searches the Shadow DB for
@@ -269,8 +278,8 @@ int tf_tbl_type_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_alloc_search(struct tf *tfp,
-			     struct tf_tbl_type_alloc_search_parms *parms);
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
 
 /**
  * Configures the requested element by sending a firmware request which
@@ -280,14 +289,14 @@ int tf_tbl_type_alloc_search(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table set parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_set(struct tf *tfp,
-		    struct tf_tbl_type_set_parms *parms);
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
 
 /**
  * Retrieves the requested element by sending a firmware request to get
@@ -297,13 +306,13 @@ int tf_tbl_type_set(struct tf *tfp,
  *   Pointer to TF handle, used for HCAPI communication
  *
  * [in] parms
- *   Pointer to parameters
+ *   Pointer to Table get parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_type_get(struct tf *tfp,
-		    struct tf_tbl_type_get_parms *parms);
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
 
 #endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 1420c9ed5..68c25eb1b 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -20,16 +20,22 @@ struct tf_tcam_cfg_parms {
 	 * Number of tcam types in each of the configuration arrays
 	 */
 	uint16_t num_elements;
-
 	/**
 	 * TCAM configuration array
 	 */
-	struct tf_rm_element_cfg *tcam_cfg[TF_DIR_MAX];
-
+	struct tf_rm_element_cfg *cfg;
 	/**
 	 * Shadow table type configuration array
 	 */
-	struct tf_shadow_tcam_cfg *tcam_shadow_cfg[TF_DIR_MAX];
+	struct tf_shadow_tcam_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
 };
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 14/51] net/bnxt: support two-level priority for TCAMs
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (12 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 13/51] net/bnxt: update multi device design support Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
                           ` (38 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Venkat Duvvuru, Randy Schacher

From: Shahaji Bhosle <sbhosle@broadcom.com>

Allow TCAM indexes to be allocated from the top or the bottom of the
table. If the priority is set to 0, allocate from the lowest TCAM
indexes, i.e. from the top. For any other value, allocate from the
highest TCAM indexes, i.e. from the bottom.
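
A minimal sketch of the bottom-up path (illustrative only, not part of
the patch), assuming the bitalloc helpers (pool->size, ba_inuse(),
ba_alloc_index(), BA_ENTRY_FREE, BA_FAIL) behave as they are used in
the diff that follows:

	/* Illustrative helper: claim the highest free index in a pool,
	 * mirroring the priority != 0 branch of tf_alloc_tcam_entry().
	 */
	static int tcam_alloc_from_bottom(struct bitalloc *pool)
	{
		int index;

		for (index = pool->size - 1; index >= 0; index--) {
			if (ba_inuse(pool, index) == BA_ENTRY_FREE)
				break;
		}
		/* Extra guard, not in the patch: pool fully in use. */
		if (index < 0 || ba_alloc_index(pool, index) == BA_FAIL)
			return -ENOMEM;

		return index;
	}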

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 36 ++++++++++++++++++++++++------
 drivers/net/bnxt/tf_core/tf_core.h |  4 +++-
 drivers/net/bnxt/tf_core/tf_em.c   |  6 ++---
 drivers/net/bnxt/tf_core/tf_tbl.c  |  2 +-
 4 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 81a88e211..eac57e7bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -893,7 +893,7 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
+	int index = 0;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
 
@@ -916,12 +916,34 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+	/*
+	 * priority  0: allocate from the tcam top, i.e. highest priority
+	 * priority !0: allocate from the tcam bottom, i.e. lowest priority
+	 */
+	if (parms->priority) {
+		for (index = session_pool->size - 1; index >= 0; index--) {
+			if (ba_inuse(session_pool,
+					  index) == BA_ENTRY_FREE) {
+				break;
+			}
+		}
+		if (ba_alloc_index(session_pool,
+				   index) == BA_FAIL) {
+			TFP_DRV_LOG(ERR,
+				    "%s: %s: ba_alloc index %d failed\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+				    index);
+			return -ENOMEM;
+		}
+	} else {
+		index = ba_alloc(session_pool);
+		if (index == BA_FAIL) {
+			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+			return -ENOMEM;
+		}
 	}
 
 	parms->idx = index;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 74ed24e5a..f1ef00b30 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -799,7 +799,9 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested (definition TBD)
+	 * [in] Priority of entry requested
+	 * 0: index from top, i.e. highest priority first
+	 * !0: index from bottom, i.e. lowest priority first
 	 */
 	uint32_t priority;
 	/**
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index fd1797e39..91cbc6299 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -479,8 +479,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG
-		   (ERR,
+		TFP_DRV_LOG(ERR,
 		   "dir:%d, EM entry index allocation failed\n",
 		   parms->dir);
 		return rc;
@@ -495,8 +494,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	PMD_DRV_LOG
-		   (ERR,
+	TFP_DRV_LOG(INFO,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 26313ed3c..4e236d56c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -1967,7 +1967,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
+		TFP_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 15/51] net/bnxt: add HCAPI interface support
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (13 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-07  8:03           ` Ferruh Yigit
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
                           ` (37 subsequent siblings)
  52 siblings, 1 reply; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

Add new hardware shim APIs to support multiple
device generations
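
A hypothetical usage sketch of the generic field accessors introduced in
hcapi_cfa_common.c below; the demo_* layout and field ids are invented
for illustration, while hcapi_cfa_put_field()/hcapi_cfa_get_field() and
the hcapi_cfa_layout/hcapi_cfa_field structures come from this patch:

	/* Two made-up fields packed LSB-first into a small object. */
	static const struct hcapi_cfa_field demo_fields[] = {
		{ .bitpos = 0, .bitlen = 8 },	/* field id 0 */
		{ .bitpos = 8, .bitlen = 16 },	/* field id 1 */
	};

	static const struct hcapi_cfa_layout demo_layout = {
		.is_msb_order = false,
		.total_sz_in_bits = 24,
		.field_array = demo_fields,
		.array_sz = 2,
	};

	static void demo_shim_usage(void)
	{
		uint64_t obj[1] = { 0 };
		uint64_t val = 0;

		/* Write field 1, then read it back through the same layout. */
		hcapi_cfa_put_field(obj, &demo_layout, 1, 0xABCD);
		hcapi_cfa_get_field(obj, &demo_layout, 1, &val); /* val == 0xABCD */
	}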

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/Makefile           |  10 +
 drivers/net/bnxt/hcapi/hcapi_cfa.h        | 271 +++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 +++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h   | 672 ++++++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     | 399 +++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h     | 451 +++++++++++++++
 drivers/net/bnxt/meson.build              |   2 +
 drivers/net/bnxt/tf_core/tf_em.c          |  28 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  94 +--
 drivers/net/bnxt/tf_core/tf_tbl.h         |  24 +-
 10 files changed, 1970 insertions(+), 73 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/Makefile
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
 create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h

diff --git a/drivers/net/bnxt/hcapi/Makefile b/drivers/net/bnxt/hcapi/Makefile
new file mode 100644
index 000000000..65cddd789
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/Makefile
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019-2020 Broadcom Limited.
+# All rights reserved.
+
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += hcapi/hcapi_cfa_p4.c
+
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_defs.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_p4.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/cfa_p40_hw.h
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
new file mode 100644
index 000000000..f60af4e56
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -0,0 +1,271 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _HCAPI_CFA_H_
+#define _HCAPI_CFA_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#include "hcapi_cfa_defs.h"
+
+#define SUPPORT_CFA_HW_P4  1
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL  1
+#endif
+
+/**
+ * Index used for the sram_entries field
+ */
+enum hcapi_cfa_resc_type_sram {
+	HCAPI_CFA_RESC_TYPE_SRAM_FULL_ACTION,
+	HCAPI_CFA_RESC_TYPE_SRAM_MCG,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_8B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_16B,
+	HCAPI_CFA_RESC_TYPE_SRAM_ENCAP_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	HCAPI_CFA_RESC_TYPE_SRAM_COUNTER_64B,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_SPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_DPORT,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_S_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_NAT_D_IPV4,
+	HCAPI_CFA_RESC_TYPE_SRAM_MAX
+};
+
+/**
+ * Index used for the hw_entries field in struct cfa_rm_db
+ */
+enum hcapi_cfa_resc_type_hw {
+	/* common HW resources for all chip variants */
+	HCAPI_CFA_RESC_TYPE_HW_L2_CTXT_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_PROF_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_EM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_EM_REC,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+	HCAPI_CFA_RESC_TYPE_HW_WC_TCAM,
+	HCAPI_CFA_RESC_TYPE_HW_METER_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_METER_INST,
+	HCAPI_CFA_RESC_TYPE_HW_MIRROR,
+	HCAPI_CFA_RESC_TYPE_HW_UPAR,
+	/* Wh+/SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_SP_TCAM,
+	/* Thor, SR2 common HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_FKB,
+	/* SR specific HW resources */
+	HCAPI_CFA_RESC_TYPE_HW_TBL_SCOPE,
+	HCAPI_CFA_RESC_TYPE_HW_L2_FUNC,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH0,
+	HCAPI_CFA_RESC_TYPE_HW_EPOCH1,
+	HCAPI_CFA_RESC_TYPE_HW_METADATA,
+	HCAPI_CFA_RESC_TYPE_HW_CT_STATE,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_PROF,
+	HCAPI_CFA_RESC_TYPE_HW_RANGE_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_LAG_ENTRY,
+	HCAPI_CFA_RESC_TYPE_HW_MAX
+};
+
+struct hcapi_cfa_key_result {
+	uint64_t bucket_mem_ptr;
+	uint8_t bucket_idx;
+};
+
+/* common CFA register access macros */
+#define CFA_REG(x)		OFFSETOF(cfa_reg_t, cfa_##x)
+
+#ifndef REG_WR
+#define REG_WR(_p, x, y)  (*((uint32_t volatile *)(x)) = (y))
+#endif
+#ifndef REG_RD
+#define REG_RD(_p, x)  (*((uint32_t volatile *)(x)))
+#endif
+#define CFA_REG_RD(_p, x)	\
+	REG_RD(0, (uint32_t)(_p)->base_addr + CFA_REG(x))
+#define CFA_REG_WR(_p, x, y)	\
+	REG_WR(0, (uint32_t)(_p)->base_addr + CFA_REG(x), y)
+
+
+/* Constants used by Resource Manager Registration*/
+#define RM_CLIENT_NAME_MAX_LEN          32
+
+/**
+ *  Resource Manager Data Structures used for resource requests
+ */
+struct hcapi_cfa_resc_req_entry {
+	uint16_t min;
+	uint16_t max;
+};
+
+struct hcapi_cfa_resc_req {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_req_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of
+	 * resources in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_req_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_req_db {
+	struct hcapi_cfa_resc_req rx;
+	struct hcapi_cfa_resc_req tx;
+};
+
+struct hcapi_cfa_resc_entry {
+	uint16_t start;
+	uint16_t stride;
+	uint16_t tag;
+};
+
+struct hcapi_cfa_resc {
+	/* Wh+/SR specific onchip Action SRAM resources */
+	/* Validity of each sram type is indicated by the
+	 * corresponding sram type bit in the sram_resc_flags. When
+	 * set to 1, the CFA sram resource type is valid and amount of
+	 * resources for this type is reserved. Each sram resource
+	 * pool is identified by the starting index and number of
+	 * resources in the pool.
+	 */
+	uint32_t sram_resc_flags;
+	struct hcapi_cfa_resc_entry sram_resc[HCAPI_CFA_RESC_TYPE_SRAM_MAX];
+
+	/* Validity of each resource type is indicated by the
+	 * corresponding resource type bit in the hw_resc_flags. When
+	 * set to 1, the CFA resource type is valid and amount of
+	 * resource of this type is reserved. Each resource pool is
+	 * identified by the starting index and the number of resources
+	 * in the pool.
+	 */
+	uint32_t hw_resc_flags;
+	struct hcapi_cfa_resc_entry hw_resc[HCAPI_CFA_RESC_TYPE_HW_MAX];
+};
+
+struct hcapi_cfa_resc_db {
+	struct hcapi_cfa_resc rx;
+	struct hcapi_cfa_resc tx;
+};
+
+/**
+ * This is the main data structure used by the CFA Resource
+ * Manager.  This data structure holds all the state and table
+ * management information.
+ */
+typedef struct hcapi_cfa_rm_data {
+	uint32_t dummy_data;
+} hcapi_cfa_rm_data_t;
+
+/* End RM support */
+
+struct hcapi_cfa_devops;
+
+struct hcapi_cfa_devinfo {
+	uint8_t global_cfg_data[CFA_GLOBAL_CFG_DATA_SZ];
+	struct hcapi_cfa_layout_tbl layouts;
+	struct hcapi_cfa_devops *devops;
+};
+
+int hcapi_cfa_dev_bind(enum hcapi_cfa_ver hw_ver,
+		       struct hcapi_cfa_devinfo *dev_info);
+
+int hcapi_cfa_key_compile_layout(struct hcapi_cfa_key_template *key_template,
+				 struct hcapi_cfa_key_layout *key_layout);
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data, uint16_t bitlen);
+int
+hcapi_cfa_action_compile_layout(struct hcapi_cfa_action_template *act_template,
+				struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_init_obj(uint64_t *act_obj,
+			      struct hcapi_cfa_action_layout *act_layout);
+int hcapi_cfa_action_compute_ptr(uint64_t *act_obj,
+				 struct hcapi_cfa_action_layout *act_layout,
+				 uint32_t base_ptr);
+
+int hcapi_cfa_action_hw_op(struct hcapi_cfa_hwop *op,
+			   uint8_t *act_tbl,
+			   struct hcapi_cfa_data *act_obj);
+int hcapi_cfa_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_rm_register_client(hcapi_cfa_rm_data_t *data,
+				 const char *client_name,
+				 int *client_id);
+int hcapi_cfa_rm_unregister_client(hcapi_cfa_rm_data_t *data,
+				   int client_id);
+int hcapi_cfa_rm_query_resources(hcapi_cfa_rm_data_t *data,
+				 int client_id,
+				 uint16_t chnl_id,
+				 struct hcapi_cfa_resc_req_db *req_db);
+int hcapi_cfa_rm_query_resources_one(hcapi_cfa_rm_data_t *data,
+				     int clien_id,
+				     struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_reserve_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_release_resources(hcapi_cfa_rm_data_t *data,
+				   int client_id,
+				   struct hcapi_cfa_resc_req_db *resc_req,
+				   struct hcapi_cfa_resc_db *resc_db);
+int hcapi_cfa_rm_initialize(hcapi_cfa_rm_data_t *data);
+
+#if SUPPORT_CFA_HW_P4
+
+int hcapi_cfa_p4_dev_hw_op(struct hcapi_cfa_hwop *op, uint16_t tbl_id,
+			    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxt_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_l2ctxtrmp_hwop(struct hcapi_cfa_hwop *op,
+				      struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcam_hwop(struct hcapi_cfa_hwop *op,
+				 struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_prof_tcamrmp_hwop(struct hcapi_cfa_hwop *op,
+				    struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
+			       struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_data *obj_data);
+#endif /* SUPPORT_CFA_HW_P4 */
+/**
+ *  HCAPI CFA device HW operation function callback definition
+ *  This is standardized function callback hook to install different
+ *  CFA HW table programming function callback.
+ */
+
+struct hcapi_cfa_tbl_cb {
+	/**
+	 * This function callback provides the functionality to read/write
+	 * HW table entry from a HW table.
+	 *
+	 * @param[in] op
+	 *   A pointer to the Hardware operation parameter
+	 *
+	 * @param[in] obj_data
+	 *   A pointer to the HW data object for the hardware operation
+	 *
+	 * @return
+	 *   0 for SUCCESS, negative value for FAILURE
+	 */
+	int (*hwop_cb)(struct hcapi_cfa_hwop *op,
+		       struct hcapi_cfa_data *obj_data);
+};
+
+#endif  /* _HCAPI_CFA_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
new file mode 100644
index 000000000..39afd4dbc
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
@@ -0,0 +1,92 @@
+/*
+ *   Copyright(c) 2019-2020 Broadcom Limited.
+ *   All rights reserved.
+ */
+
+#include "bitstring.h"
+#include "hcapi_cfa_defs.h"
+#include <errno.h>
+#include "assert.h"
+
+/* HCAPI CFA common PUT APIs */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val)
+{
+	assert(layout);
+
+	if (field_id > layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		bs_put_msb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	else
+		bs_put_lsb(data_buf,
+			   layout->field_array[field_id].bitpos,
+			   layout->field_array[field_id].bitlen, val);
+	return 0;
+}
+
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz)
+{
+	int i;
+	uint16_t bitpos;
+	uint8_t bitlen;
+	uint16_t field_id;
+
+	assert(layout);
+	assert(field_tbl);
+
+	if (layout->is_msb_order) {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id > layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_msb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	} else {
+		for (i = 0; i < field_tbl_sz; i++) {
+			field_id = field_tbl[i].field_id;
+			if (field_id > layout->array_sz)
+				return -EINVAL;
+			bitpos = layout->field_array[field_id].bitpos;
+			bitlen = layout->field_array[field_id].bitlen;
+			bs_put_lsb(obj_data, bitpos, bitlen,
+				   field_tbl[i].val);
+		}
+	}
+	return 0;
+}
+
+/* HCAPI CFA common GET APIs */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id,
+			uint64_t *val)
+{
+	assert(layout);
+	assert(val);
+
+	if (field_id > layout->array_sz)
+		/* Invalid field_id */
+		return -EINVAL;
+
+	if (layout->is_msb_order)
+		*val = bs_get_msb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	else
+		*val = bs_get_lsb(obj_data,
+				  layout->field_array[field_id].bitpos,
+				  layout->field_array[field_id].bitlen);
+	return 0;
+}
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
new file mode 100644
index 000000000..ea8d99d01
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -0,0 +1,672 @@
+
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+/*!
+ *   \file
+ *   \brief Exported functions for CFA HW programming
+ */
+#ifndef _HCAPI_CFA_DEFS_H_
+#define _HCAPI_CFA_DEFS_H_
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stddef.h>
+
+#define SUPPORT_CFA_HW_ALL 0
+#define SUPPORT_CFA_HW_P4  1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+
+#define CFA_BITS_PER_BYTE (8)
+#define __CFA_ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
+#define CFA_ALIGN(x, a) __CFA_ALIGN_MASK(x, (a) - 1)
+#define CFA_ALIGN_128(x) CFA_ALIGN(x, 128)
+#define CFA_ALIGN_32(x) CFA_ALIGN(x, 32)
+
+#define NUM_WORDS_ALIGN_32BIT(x)                                               \
+	(CFA_ALIGN_32(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+#define NUM_WORDS_ALIGN_128BIT(x)                                              \
+	(CFA_ALIGN_128(x) / (sizeof(uint32_t) * CFA_BITS_PER_BYTE))
+
+#define CFA_GLOBAL_CFG_DATA_SZ (100)
+
+#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
+#define SUPPORT_CFA_HW_ALL (1)
+#endif
+
+#include "hcapi_cfa_p4.h"
+#define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+#define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
+#define CFA_PROF_MAX_KEY_CFG_SZ sizeof(struct cfa_p4_prof_key_cfg)
+#define CFA_KEY_MAX_FIELD_CNT 41
+#define CFA_ACT_MAX_TEMPLATE_SZ sizeof(struct cfa_p4_action_template)
+
+/**
+ * CFA HW version definition
+ */
+enum hcapi_cfa_ver {
+	HCAPI_CFA_P40 = 0, /**< CFA phase 4.0 */
+	HCAPI_CFA_P45 = 1, /**< CFA phase 4.5 */
+	HCAPI_CFA_P58 = 2, /**< CFA phase 5.8 */
+	HCAPI_CFA_P59 = 3, /**< CFA phase 5.9 */
+	HCAPI_CFA_PMAX = 4
+};
+
+/**
+ * CFA direction definition
+ */
+enum hcapi_cfa_dir {
+	HCAPI_CFA_DIR_RX = 0, /**< Receive */
+	HCAPI_CFA_DIR_TX = 1, /**< Transmit */
+	HCAPI_CFA_DIR_MAX = 2
+};
+
+/**
+ * CFA HW OPCODE definition
+ */
+enum hcapi_cfa_hwops {
+	HCAPI_CFA_HWOPS_PUT, /**< Write to HW operation */
+	HCAPI_CFA_HWOPS_GET, /**< Read from HW operation */
+	HCAPI_CFA_HWOPS_ADD, /**< For operations which require more than simple
+			      * writes to HW, this operation is used. The
+			      * distinction with this operation when compared
+			      * to the PUT ops is that this operation is used
+			      * in conjunction with the HCAPI_CFA_HWOPS_DEL
+			      * op to remove the operations issued by the
+			      * ADD OP.
+			      */
+	HCAPI_CFA_HWOPS_DEL, /**< This issues operations to clear the hardware.
+			      * This operation is used in conjunction
+			      * with the HCAPI_CFA_HWOPS_ADD op and is the
+			      * way to undo/clear the ADD op.
+			      */
+	HCAPI_CFA_HWOPS_MAX
+};
+
+/**
+ * CFA HW KEY CONTROL OPCODE definition
+ */
+enum hcapi_cfa_key_ctrlops {
+	HCAPI_CFA_KEY_CTRLOPS_INSERT, /**< insert control bits */
+	HCAPI_CFA_KEY_CTRLOPS_STRIP, /**< strip control bits */
+	HCAPI_CFA_KEY_CTRLOPS_MAX
+};
+
+/**
+ * CFA HW field structure definition
+ */
+struct hcapi_cfa_field {
+	/** [in] Starting bit position of the HW field within a HW table
+	 *  entry.
+	 */
+	uint16_t bitpos;
+	/** [in] Number of bits for the HW field. */
+	uint8_t bitlen;
+};
+
+/**
+ * CFA HW table entry layout structure definition
+ */
+struct hcapi_cfa_layout {
+	/** [out] Bit order of layout */
+	bool is_msb_order;
+	/** [out] Size in bits of entry */
+	uint32_t total_sz_in_bits;
+	/** [out] data pointer of the HW layout fields array */
+	const struct hcapi_cfa_field *field_array;
+	/** [out] number of HW field entries in the HW layout field array */
+	uint32_t array_sz;
+};
+
+/**
+ * CFA HW data object definition
+ */
+struct hcapi_cfa_data_obj {
+	/** [in] HW field identifier. Used as an index to a HW table layout */
+	uint16_t field_id;
+	/** [in] Value of the HW field */
+	uint64_t val;
+};
+
+/**
+ * CFA HW definition
+ */
+struct hcapi_cfa_hw {
+	/** [in] HW table base address for the operation with optional device
+	 *  handle. For on-chip HW table operation, this is the either the TX
+	 *  or RX CFA HW base address. For off-chip table, this field is the
+	 *  base memory address of the off-chip table.
+	 */
+	uint64_t base_addr;
+	/** [in] Optional opaque device handle. It is generally used to access
+	 *  a GRC register space through PCIE BAR and passed to the BAR memory
+	 *  accessor routine.
+	 */
+	void *handle;
+};
+
+/**
+ * CFA HW operation definition
+ *
+ */
+struct hcapi_cfa_hwop {
+	/** [in] HW opcode */
+	enum hcapi_cfa_hwops opcode;
+	/** [in] CFA HW information used by accessor routines.
+	 */
+	struct hcapi_cfa_hw hw;
+};
+
+/**
+ * CFA HW data structure definition
+ */
+struct hcapi_cfa_data {
+	/** [in] physical offset to the HW table for the data to be
+	 *  written to.  If this is an array of registers, this is the
+	 *  index into the array of registers.  For writing keys, this
+	 *  is the byte offset into the memory where the key should be
+	 *  written.
+	 */
+	union {
+		uint32_t index;
+		uint32_t byte_offset;
+	} u;
+	/** [in] HW data buffer pointer */
+	uint8_t *data;
+	/** [in] HW data mask buffer pointer */
+	uint8_t *data_mask;
+	/** [in] size of the HW data buffer in bytes */
+	uint16_t data_sz;
+};
+
+/*********************** Truflow start ***************************/
+enum hcapi_cfa_pg_tbl_lvl {
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
+};
+
+enum hcapi_cfa_em_table_type {
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
+};
+
+struct hcapi_cfa_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct hcapi_cfa_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct hcapi_cfa_em_page_tbl    pg_tbl[TF_PT_LVL_MAX];
+};
+
+struct hcapi_cfa_em_ctx_mem_info {
+	struct hcapi_cfa_em_table		em_tables[TF_MAX_TABLE];
+};
+
+/*********************** Truflow end ****************************/
+
+/**
+ * CFA HW key table definition
+ *
+ * Applicable to EEM and off-chip EM table only.
+ */
+struct hcapi_cfa_key_tbl {
+	/** [in] For EEM, this is the KEY0 base mem pointer. For off-chip EM,
+	 *  this is the base mem pointer of the key table.
+	 */
+	uint8_t *base0;
+	/** [in] total size of the key table in bytes. For EEM, this size is
+	 *  the same for both the KEY0 and KEY1 tables.
+	 */
+	uint32_t size;
+	/** [in] number of key buckets, applicable for newer chips */
+	uint32_t num_buckets;
+	/** [in] For EEM, this is the KEY1 base mem pointer. For off-chip EM,
+	 *  this is the key record memory base pointer within the key table,
+	 *  applicable for newer chips
+	 */
+	uint8_t *base1;
+};
+
+/**
+ * CFA HW key buffer definition
+ */
+struct hcapi_cfa_key_obj {
+	/** [in] pointer to the key data buffer */
+	uint32_t *data;
+	/** [in] buffer len in bits */
+	uint32_t len;
+	/** [in] Pointer to the key layout */
+	struct hcapi_cfa_key_layout *layout;
+};
+
+/**
+ * CFA HW key data definition
+ */
+struct hcapi_cfa_key_data {
+	/** [in] For on-chip key table, it is the offset in unit of smallest
+	 *  key. For off-chip key table, it is the byte offset relative
+	 *  to the key record memory base.
+	 */
+	uint32_t offset;
+	/** [in] HW key data buffer pointer */
+	uint8_t *data;
+	/** [in] size of the key in bytes */
+	uint16_t size;
+};
+
+/**
+ * CFA HW key location definition
+ */
+struct hcapi_cfa_key_loc {
+	/** [out] on-chip EM bucket offset or off-chip EM bucket mem pointer */
+	uint64_t bucket_mem_ptr;
+	/** [out] index within the EM bucket */
+	uint8_t bucket_idx;
+};
+
+/**
+ * CFA HW layout table definition
+ */
+struct hcapi_cfa_layout_tbl {
+	/** [out] data pointer to an array of the fixed-format layouts supported.
+	 *  The index to the array is the CFA HW table ID
+	 */
+	const struct hcapi_cfa_layout *tbl;
+	/** [out] number of fixed-format layouts in the layout array */
+	uint16_t num_layouts;
+};
+
+/**
+ * Key template consists of key fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_key_template {
+	/** [in] key field enable array; set the corresponding field
+	 *  enable to 1 to make a field valid
+	 */
+	uint8_t field_en[CFA_KEY_MAX_FIELD_CNT];
+	/** [in] Identifies whether the key template is for TCAM. If false,
+	 *  the key template is for EM. This field is mandatory for devices
+	 *  that only support fixed key formats.
+	 */
+	bool is_wc_tcam_key;
+};
+
+/**
+ * Key layout consists of a field array, key bit length, key ID, and other
+ * metadata pertaining to a key
+ */
+struct hcapi_cfa_key_layout {
+	/** [out] key layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual key size in number of bits */
+	uint16_t bitlen;
+	/** [out] key identifier; this field is only valid for devices
+	 *  that support fixed key formats
+	 */
+	uint16_t id;
+	/** [out] Identifies whether the key layout is a WC TCAM key */
+	bool is_wc_tcam_key;
+	/** [out] total slices size, valid for WC TCAM key only. It can be
+	 *  used by the user to determine the total size of WC TCAM key slices
+	 *  in bytes.
+	 */
+	uint16_t slices_size;
+};
+
+/**
+ * key layout memory contents
+ */
+struct hcapi_cfa_key_layout_contents {
+	/** key layouts */
+	struct hcapi_cfa_key_layout key_layout;
+
+	/** layout */
+	struct hcapi_cfa_layout layout;
+
+	/** fields */
+	struct hcapi_cfa_field field_array[CFA_KEY_MAX_FIELD_CNT];
+};
+
+/**
+ * Action template consists of action fields that can be enabled/disabled
+ * individually.
+ */
+struct hcapi_cfa_action_template {
+	/** [in] CFA version for the action template */
+	enum hcapi_cfa_ver hw_ver;
+	/** [in] action field enable array; set the corresponding field
+	 *  enable to 1 to make a field valid
+	 */
+	uint8_t data[CFA_ACT_MAX_TEMPLATE_SZ];
+};
+
+/**
+ * Action layout consists of a field array, action word length, and format ID
+ */
+struct hcapi_cfa_action_layout {
+	/** [in] action identifier */
+	uint16_t id;
+	/** [out] action layout data */
+	struct hcapi_cfa_layout *layout;
+	/** [out] actual action record size in number of bits */
+	uint16_t wordlen;
+};
+
+/**
+ *  \defgroup CFA_HCAPI_PUT_API
+ *  HCAPI used for writing to the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to program a specified value to a
+ * HW field based on the provided programming layout.
+ *
+ * @param[in,out] data_buf
+ *   A data pointer to CFA HW key/mask data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be programmed
+ *
+ * @param[in] val
+ *   Value of the HW field to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field(uint64_t *data_buf,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t val);
+
+/**
+ * This API provides the functionality to program an array of field values
+ * with corresponding field IDs to a number of profiler sub-block fields
+ * based on the fixed profiler sub-block hardware programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * This API provides the functionality to write a value to a
+ * field within the bit position and bit length of a HW data
+ * object based on a provided programming layout.
+ *
+ * @param[in, out] obj_data
+ *   A pointer to the HW data object to be written
+ *
+ * @param[in] layout
+ *   A pointer to the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[in] val
+ *   HW field value to be programmed
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_put_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t val);
+
+/*@}*/
+
+/**
+ *  \defgroup CFA_HCAPI_GET_API
+ *  HCAPI used for reading from the hardware
+ *  @{
+ */
+
+/**
+ * This API provides the functionality to get the word length of
+ * a layout object.
+ *
+ * @param[in] layout
+ *   A pointer of the HW layout
+ *
+ * @return
+ *   Word length of the layout object
+ */
+uint16_t hcapi_cfa_get_wordlen(const struct hcapi_cfa_layout *layout);
+
+/**
+ * The API provides the functionality to get bit offset and bit
+ * length information of a field from a programming layout.
+ *
+ * @param[in] layout
+ *   A pointer to the layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[out] slice
+ *   A pointer to the field offset/length info data structure
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_slice(const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, struct hcapi_cfa_field *slice);
+
+/**
+ * This API provides the functionality to read the value of a
+ * CFA HW field from CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA HW key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in] field_id
+ *   ID of the HW field to be programmed
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field(uint64_t *obj_data,
+			const struct hcapi_cfa_layout *layout,
+			uint16_t field_id, uint64_t *val);
+
+/**
+ * This API provides the functionality to read a number of
+ * HW fields from a CFA HW data object based on the hardware
+ * programming layout.
+ *
+ * @param[in] obj_data
+ *   A pointer to a CFA profiler key/mask object data
+ *
+ * @param[in] layout
+ *   A pointer to CFA HW programming layout
+ *
+ * @param[in, out] field_tbl
+ *   A pointer to an array that consists of the object field
+ *   ID/value pairs
+ *
+ * @param[in] field_tbl_sz
+ *   Number of entries in the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_fields(uint64_t *obj_data,
+			 const struct hcapi_cfa_layout *layout,
+			 struct hcapi_cfa_data_obj *field_tbl,
+			 uint16_t field_tbl_sz);
+
+/**
+ * Get a value from a specific location relative to a HW field
+ *
+ * This API provides the functionality to read HW field from
+ * a section of a HW data object identified by the bit position
+ * and bit length from a given programming layout in order to avoid
+ * reading the entire HW data object.
+ *
+ * @param[in] obj_data
+ *   A pointer of the data object to read from
+ *
+ * @param[in] layout
+ *   A pointer of the programming layout
+ *
+ * @param[in] field_id
+ *   Identifier of the HW field
+ *
+ * @param[in] bitpos_adj
+ *   Bit position adjustment value
+ *
+ * @param[in] bitlen_adj
+ *   Bit length adjustment value
+ *
+ * @param[out] val
+ *   Value of the HW field
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_get_field_rel(uint64_t *obj_data,
+			    const struct hcapi_cfa_layout *layout,
+			    uint16_t field_id, int16_t bitpos_adj,
+			    int16_t bitlen_adj, uint64_t *val);
+
+/**
+ * This function is used to initialize a layout_contents structure
+ *
+ * The struct hcapi_cfa_key_layout_contents is complex as there are
+ * three layers of abstraction.  Each of those layers needs to be
+ * properly initialized.
+ *
+ * @param[in] cont
+ *  A pointer to the layout contents to initialize
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int
+hcapi_cfa_init_key_layout_contents(struct hcapi_cfa_key_layout_contents *cont);
+
+/**
+ * This function is used to validate a key template
+ *
+ * The struct hcapi_cfa_key_template is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_template
+ *  A pointer of the key template contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int
+hcapi_cfa_is_valid_key_template(struct hcapi_cfa_key_template *key_template);
+
+/**
+ * This function is used to validate a key layout
+ *
+ * The struct hcapi_cfa_key_layout is complex as there are three
+ * layers of abstraction.  Each of those layers needs to be properly
+ * validated.
+ *
+ * @param[in] key_layout
+ *  A pointer of the key layout contents to validate
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_is_valid_key_layout(struct hcapi_cfa_key_layout *key_layout);
+
+/**
+ * This function is used to hash E/EM keys
+ *
+ *
+ * @param[in] key_data
+ *  A pointer of the key
+ *
+ * @param[in] bitlen
+ *  Number of bits in the key
+ *
+ * @return
+ *   CRC32 and Lookup3 hashes of the input key
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen);
+
+/**
+ * This function is used to execute an operation
+ *
+ *
+ * @param[in] op
+ *  HW operation to execute
+ *
+ * @param[in] key_tbl
+ *  Key table on which to operate
+ *
+ * @param[in] key_obj
+ *  Key data for the operation
+ *
+ * @param[out] key_loc
+ *  Location of the key within the table
+ *
+ * @return
+ *   0 for SUCCESS, negative value for FAILURE
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc);
+
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset);
+#endif /* HCAPI_CFA_DEFS_H_ */
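
For readers new to the layout-driven accessors declared in this header, a
minimal usage sketch follows. It is illustrative only and not part of the
patch: the layout and values are invented, and it assumes that field IDs
index the layout's field_array (as the generated CFA_P40_*_FLD enums in
cfa_p40_hw.h suggest); the accessor implementations live elsewhere in the
series.

#include <stdint.h>
#include <stdbool.h>
#include "hcapi_cfa_defs.h"

/* Hypothetical two-field layout: an 8-bit flag at bit 0 and a 16-bit
 * pointer at bit 8. Field IDs 0 and 1 are assumed to be indices into
 * field_array.
 */
static const struct hcapi_cfa_field ex_fields[] = {
	{ .bitpos = 0, .bitlen = 8 },	/* field 0 */
	{ .bitpos = 8, .bitlen = 16 },	/* field 1 */
};

static const struct hcapi_cfa_layout ex_layout = {
	.is_msb_order = false,
	.total_sz_in_bits = 24,
	.field_array = ex_fields,
	.array_sz = 2,
};

static void ex_program_entry(void)
{
	uint64_t obj[1] = { 0 };
	uint64_t val = 0;

	/* Program field 1 to 0x1234, then read it back. */
	hcapi_cfa_put_field(obj, &ex_layout, 1, 0x1234);
	hcapi_cfa_get_field(obj, &ex_layout, 1, &val);
}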
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
new file mode 100644
index 000000000..ca0b1c923
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <string.h>
+#include "lookup3.h"
+#include "rand.h"
+
+#include "hcapi_cfa_defs.h"
+
+#define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
+#define TF_EM_PAGE_SIZE (1 << 21)
+uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
+uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
+bool hcapi_cfa_lkup_init;
+
+static inline uint32_t SWAP_WORDS32(uint32_t val32)
+{
+	return (((val32 & 0x0000ffff) << 16) |
+		((val32 & 0xffff0000) >> 16));
+}
+
+static void hcapi_cfa_seeds_init(void)
+{
+	int i;
+	uint32_t r;
+
+	if (hcapi_cfa_lkup_init)
+		return;
+
+	hcapi_cfa_lkup_init = true;
+
+	/* Initialize the lfsr */
+	rand_init();
+
+	/* RX and TX use the same seed values */
+	hcapi_cfa_lkup_lkup3_init_cfg = SWAP_WORDS32(rand32());
+
+	for (i = 0; i < HCAPI_CFA_LKUP_SEED_MEM_SIZE / 2; i++) {
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2] = r;
+		r = SWAP_WORDS32(rand32());
+		hcapi_cfa_lkup_em_seed_mem[i * 2 + 1] = (r & 0x1);
+	}
+}
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t hcapi_cfa_crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "CRC2:");
+#endif
+	for (l = (len - 1); l >= 0; l--) {
+		crc = ucrc32(buf[l], crc);
+#ifdef TF_EEM_DEBUG
+		TFP_DRV_LOG(DEBUG,
+			    "%02X %08X %08X\n",
+			    (buf[l] & 0xff),
+			    crc,
+			    ~crc);
+#endif
+	}
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "\n");
+#endif
+
+	return ~crc;
+}
+
+static uint32_t hcapi_cfa_crc32_hash(uint8_t *key)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* Do byte-wise XOR of the 52-byte HASH key first. */
+	index = *key;
+	kptr--;
+
+	for (i = CFA_P4_EEM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = hcapi_cfa_lkup_em_seed_mem[index * 2];
+	val2 = hcapi_cfa_lkup_em_seed_mem[index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	val1 = hcapi_cfa_crc32i(~val1,
+		      (key - (CFA_P4_EEM_KEY_MAX_SIZE - 1)),
+		      CFA_P4_EEM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = hcapi_cfa_crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((const uint32_t *)(uintptr_t *)in_key) + 1,
+			 CFA_P4_EEM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 hcapi_cfa_lkup_lkup3_init_cfg);
+
+	return val1;
+}
+
+
+uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
+			      uint32_t offset)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	uint64_t addr;
+
+	if (mem == NULL)
+		return 0;
+
+	/*
+	 * Use the level according to the num_level of page table
+	 */
+	level = mem->num_lvl - 1;
+
+	addr = (uintptr_t)mem->pg_tbl[level].pg_va_tbl[page];
+
+	return addr;
+}
+
+/** Approximation of HCAPI hcapi_cfa_key_hash()
+ *
+ * Return:
+ *   64-bit hash with the CRC32 (Key0) hash in the upper 32 bits and the
+ *   Lookup3 (Key1) hash in the lower 32 bits.
+ */
+uint64_t hcapi_cfa_key_hash(uint64_t *key_data,
+			    uint16_t bitlen)
+{
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+
+	/*
+	 * Init the seeds if needed
+	 */
+	if (!hcapi_cfa_lkup_init)
+		hcapi_cfa_seeds_init();
+
+	key0_hash = hcapi_cfa_crc32_hash(((uint8_t *)key_data) +
+					      (bitlen / 8) - 1);
+
+	key1_hash = hcapi_cfa_lookup3_hash((uint8_t *)key_data);
+
+	return ((uint64_t)key0_hash) << 32 | (uint64_t)key1_hash;
+}
+
+static int hcapi_cfa_key_hw_op_put(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_get(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+
+	memcpy(key_obj->data,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_add(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Is entry free?
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If this entry is already valid then report failure
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT))
+		return -1;
+
+	memcpy((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->data,
+	       key_obj->size);
+
+	return rc;
+}
+
+static int hcapi_cfa_key_hw_op_del(struct hcapi_cfa_hwop *op,
+				   struct hcapi_cfa_key_data *key_obj)
+{
+	int rc = 0;
+	struct cfa_p4_eem_64b_entry table_entry;
+
+	/*
+	 * Read entry
+	 */
+	memcpy(&table_entry,
+	       (uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       key_obj->size);
+
+	/*
+	 * If this is not a valid entry then report failure.
+	 */
+	if (table_entry.hdr.word1 & (1 << CFA_P4_EEM_ENTRY_VALID_SHIFT)) {
+		/*
+		 * If a key has been provided then verify the key matches
+		 * before deleting the entry.
+		 */
+		if (key_obj->data != NULL) {
+			if (memcmp(&table_entry,
+				   key_obj->data,
+				   key_obj->size) != 0)
+				return -1;
+		}
+	} else {
+		return -1;
+	}
+
+
+	/*
+	 * Delete entry
+	 */
+	memset((uint8_t *)(uintptr_t)op->hw.base_addr +
+	       key_obj->offset,
+	       0,
+	       key_obj->size);
+
+	return rc;
+}
+
+
+/** Approximation of hcapi_cfa_key_hw_op()
+ *
+ *
+ */
+int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
+			struct hcapi_cfa_key_tbl *key_tbl,
+			struct hcapi_cfa_key_data *key_obj,
+			struct hcapi_cfa_key_loc *key_loc)
+{
+	int rc = 0;
+
+	if (op == NULL ||
+	    key_tbl == NULL ||
+	    key_obj == NULL ||
+	    key_loc == NULL)
+		return -1;
+
+	op->hw.base_addr =
+		hcapi_get_table_page((struct hcapi_cfa_em_table *)
+				     key_tbl->base0,
+				     key_obj->offset);
+
+	if (op->hw.base_addr == 0)
+		return -1;
+
+	switch (op->opcode) {
+	case HCAPI_CFA_HWOPS_PUT: /**< Write to HW operation */
+		rc = hcapi_cfa_key_hw_op_put(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_GET: /**< Read from HW operation */
+		rc = hcapi_cfa_key_hw_op_get(op, key_obj);
+		break;
+	case HCAPI_CFA_HWOPS_ADD:
+		/* ADD is used for operations that require more than a
+		 * simple write to HW and is paired with the
+		 * HCAPI_CFA_HWOPS_DEL op, which clears what ADD installed.
+		 */
+
+		rc = hcapi_cfa_key_hw_op_add(op, key_obj);
+
+		break;
+	case HCAPI_CFA_HWOPS_DEL:
+		rc = hcapi_cfa_key_hw_op_del(op, key_obj);
+		break;
+	default:
+		rc = -1;
+		break;
+	}
+
+	return rc;
+}
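
As a rough guide to how the helpers in this file fit together, here is an
illustrative sketch (not part of the patch) of an EEM add: hash the key,
derive a Key0 index, then issue an ADD through hcapi_cfa_key_hw_op(). The
bucket mask and offset arithmetic are assumptions made for the example;
note that key_tbl.base0 must point at a struct hcapi_cfa_em_table, since
hcapi_cfa_key_hw_op() casts it back before resolving the page address.

#include <stdint.h>
#include "hcapi_cfa_defs.h"
#include "hcapi_cfa_p4.h"

/* Illustrative only: table setup, mask and offset arithmetic are made up. */
static int ex_eem_add(struct hcapi_cfa_em_table *key0_tbl,
		      struct cfa_p4_eem_64b_entry *entry,
		      uint32_t bucket_mask)
{
	struct hcapi_cfa_hwop op = { .opcode = HCAPI_CFA_HWOPS_ADD };
	struct hcapi_cfa_key_tbl key_tbl = { .base0 = (uint8_t *)key0_tbl };
	struct hcapi_cfa_key_data key_obj = { 0 };
	struct hcapi_cfa_key_loc key_loc = { 0 };
	uint64_t hash;
	uint32_t key0_index;

	/* Upper 32 bits: CRC32 (Key0) hash; lower 32 bits: Lookup3 (Key1). */
	hash = hcapi_cfa_key_hash((uint64_t *)entry->key,
				  CFA_P4_EEM_KEY_MAX_SIZE * 8);
	key0_index = (uint32_t)(hash >> 32) & bucket_mask;

	key_obj.offset = key0_index * CFA_P4_EEM_KEY_RECORD_SIZE;
	key_obj.data = (uint8_t *)entry;
	key_obj.size = CFA_P4_EEM_KEY_RECORD_SIZE;

	return hcapi_cfa_key_hw_op(&op, &key_tbl, &key_obj, &key_loc);
}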
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
new file mode 100644
index 000000000..0661d6363
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -0,0 +1,451 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _HCAPI_CFA_P4_H_
+#define _HCAPI_CFA_P4_H_
+
+#include "cfa_p40_hw.h"
+
+/** CFA phase 4 fixed-format table (layout) ID definition
+ *
+ */
+enum cfa_p4_tbl_id {
+	CFA_P4_TBL_L2CTXT_TCAM = 0,
+	CFA_P4_TBL_L2CTXT_REMAP,
+	CFA_P4_TBL_PROF_TCAM,
+	CFA_P4_TBL_PROF_TCAM_REMAP,
+	CFA_P4_TBL_WC_TCAM,
+	CFA_P4_TBL_WC_TCAM_REC,
+	CFA_P4_TBL_WC_TCAM_REMAP,
+	CFA_P4_TBL_VEB_TCAM,
+	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_MAX
+};
+
+#define CFA_P4_PROF_MAX_KEYS 4
+enum cfa_p4_mac_sel_mode {
+	CFA_P4_MAC_SEL_MODE_FIRST = 0,
+	CFA_P4_MAC_SEL_MODE_LOWEST = 1,
+};
+
+struct cfa_p4_prof_key_cfg {
+	uint8_t mac_sel[CFA_P4_PROF_MAX_KEYS];
+#define CFA_P4_PROF_MAC_SEL_DMAC0 (1 << 0)
+#define CFA_P4_PROF_MAC_SEL_T_MAC0 (1 << 1)
+#define CFA_P4_PROF_MAC_SEL_OUTERMOST_MAC0 (1 << 2)
+#define CFA_P4_PROF_MAC_SEL_DMAC1 (1 << 3)
+#define CFA_P4_PROF_MAC_SEL_T_MAC1 (1 << 4)
+#define CFA_P4_PROF_MAC_OUTERMOST_MAC1 (1 << 5)
+	uint8_t pass_cnt;
+	enum cfa_p4_mac_sel_mode mode;
+};
+
+/**
+ * CFA action layout definition
+ */
+
+#define CFA_P4_ACTION_MAX_LAYOUT_SIZE 184
+
+/**
+ * Action object template structure
+ *
+ * The template structure presents the data fields that must be known
+ * at the beginning of Action Builder (AB) processing, i.e. before the
+ * AB compilation. One such example could be a template that is
+ * flexible in size (Encap Record), where the presence of these fields
+ * allows for determining the template size as well as where the
+ * fields are located in the record.
+ *
+ * The template may also present fields that are not made visible to
+ * the caller by way of the action fields.
+ *
+ * Template fields also allow for additional checking on user visible
+ * fields. One such example could be the encap pointer behavior on a
+ * CFA_P4_ACT_OBJ_TYPE_ACT or CFA_P4_ACT_OBJ_TYPE_ACT_SRAM.
+ */
+struct cfa_p4_action_template {
+	/** Action Object type
+	 *
+	 * Controls the type of the Action Template
+	 */
+	enum {
+		/** Select this type to build an Action Record Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT,
+		/** Select this type to build an Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT,
+		/** Select this type to build a SRAM Action Record
+		 * Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ACT_SRAM,
+		/** Select this type to build a SRAM Action
+		 * Encapsulation Object.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_ENCAP_SRAM,
+		/** Select this type to build a SRAM Action Modify
+		 * Object, with IPv4 capability.
+		 */
+		/* In case of Stingray the term Modify is used for the 'NAT
+		 * action'. Action builder is leveraged to fill in the NAT
+		 * object which then can be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_MODIFY_IPV4_SRAM,
+		/** Select this type to build a SRAM Action Source
+		 * Property Object.
+		 */
+		/* In case of Stingray this is not a 'pure' action record.
+		 * Action builder is leveraged to fill in the Source Property
+		 * object which can then be referenced by the action
+		 * record.
+		 */
+		CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM,
+		/** Select this type to build a SRAM Action Statistics
+		 * Object
+		 */
+		CFA_P4_ACT_OBJ_TYPE_STAT_SRAM,
+	} obj_type;
+
+	/** Action Control
+	 *
+	 * Controls the internals of the Action Template
+	 *
+	 * act is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_ACT)
+	 */
+	/*
+	 * Stat and encap are always inline for EEM as table scope
+	 * allocation does not allow for separate Stats allocation,
+	 * but has the xx_inline flags as to be forward compatible
+	 * with Stingray 2, always treated as TRUE.
+	 */
+	struct {
+		/** Set to CFA_HCAPI_TRUE to enable statistics
+		 */
+		uint8_t stat_enable;
+		/** Set to CFA_HCAPI_TRUE to enable statistics to be inlined
+		 */
+		uint8_t stat_inline;
+
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation
+		 */
+		uint8_t encap_enable;
+		/** Set to CFA_HCAPI_TRUE to enable encapsulation to be inlined
+		 */
+		uint8_t encap_inline;
+	} act;
+
+	/** Modify Setting
+	 *
+	 * Controls the type of the Modify Action the template is
+	 * describing
+	 *
+	 * modify is valid when:
+	 * (obj_type == CFA_P4_ACT_OBJ_TYPE_MODIFY_SRAM)
+	 */
+	enum {
+		/** Set to enable Modify of Source IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_SOURCE_IPV4 = 0,
+		/** Set to enable Modify of Destination IPv4 Address
+		 */
+		CFA_P4_MR_REPLACE_DEST_IPV4
+	} modify;
+
+	/** Encap Control
+	 * Controls the type of encapsulation the template is
+	 * describing
+	 *
+	 * encap is valid when:
+	 * ((obj_type == CFA_P4_ACT_OBJ_TYPE_ACT) &&
+	 *   act.encap_enable) ||
+	 * ((obj_type == CFA_P4_ACT_OBJ_TYPE_SRC_PROP_SRAM)
+	 */
+	struct {
+		/* Direction is required as Stingray Encap on RX is
+		 * limited to l2 and VTAG only.
+		 */
+		/** Receive or Transmit direction
+		 */
+		uint8_t direction;
+		/** Set to CFA_HCAPI_TRUE to enable L2 capability in the
+		 *  template
+		 */
+		uint8_t l2_enable;
+		/** vtag controls the Encap Vector - VTAG Encoding, 4 bits
+		 *
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_0, default, no VLAN
+		 *      Tags applied
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_1, adds capability to
+		 *      set 1 VLAN Tag. Action Template compile adds
+		 *      the following field to the action object
+		 *      ::TF_ER_VLAN1
+		 * <li> CFA_P4_ACT_ENCAP_VTAGS_PUSH_2, adds capability to
+		 *      set 2 VLAN Tags. Action Template compile adds
+		 *      the following fields to the action object
+		 *      ::TF_ER_VLAN1 and ::TF_ER_VLAN2
+		 * </ul>
+		 */
+		enum { CFA_P4_ACT_ENCAP_VTAGS_PUSH_0 = 0,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_1,
+		       CFA_P4_ACT_ENCAP_VTAGS_PUSH_2 } vtag;
+
+		/*
+		 * The remaining fields are NOT supported when
+		 * direction is RX and ((obj_type ==
+		 * CFA_P4_ACT_OBJ_TYPE_ACT) && act.encap_enable).
+		 * ab_compile_layout will perform the checking and
+		 * skip remaining fields.
+		 */
+		/** L3 Encap controls the Encap Vector - L3 Encoding,
+		 *  3 bits. Defines the type of L3 Encapsulation the
+		 *  template is describing.
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_L3_NONE, default, no L3
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV4, enables L3 IPv4
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_IPV6, enables L3 IPv6
+		 *      Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8847, enables L3 MPLS
+		 *      8847 Encapsulation.
+		 * <li> CFA_P4_ACT_ENCAP_L3_MPLS_8848, enables L3 MPLS
+		 *      8848 Encapsulation.
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable any L3 encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_L3_NONE = 0,
+			/** Set to enable L3 IPv4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV4 = 4,
+			/** Set to enable L3 IPv6 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_IPV6 = 5,
+			/** Set to enable L3 MPLS 8847 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8847 = 6,
+			/** Set to enable L3 MPLS 8848 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_L3_MPLS_8848 = 7
+		} l3;
+
+#define CFA_P4_ACT_ENCAP_MAX_MPLS_LABELS 8
+		/** 1-8 labels, valid when
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8847) ||
+		 * (l3 == CFA_P4_ACT_ENCAP_L3_MPLS_8848)
+		 *
+		 * MAX number of MPLS Labels 8.
+		 */
+		uint8_t l3_num_mpls_labels;
+
+		/** Set to CFA_HCAPI_TRUE to enable L4 capability in the
+		 * template.
+		 *
+		 * CFA_HCAPI_TRUE adds ::TF_EN_UDP_SRC_PORT and
+		 * ::TF_EN_UDP_DST_PORT to the template.
+		 */
+		uint8_t l4_enable;
+
+		/** Tunnel Encap controls the Encap Vector - Tunnel
+		 *  Encap, 3 bits. Defines the type of Tunnel
+		 *  encapsulation the template is describing
+		 * <ul>
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NONE, default, no Tunnel
+		 *      Encapsulation processing.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL
+		 * <li> CFA_P4_ACT_ENCAP_TNL_VXLAN. NOTE: Expects
+		 *      l4_enable set to CFA_P4_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NGE. NOTE: Expects l4_enable
+		 *      set to CFA_P4_TRUE;
+		 * <li> CFA_P4_ACT_ENCAP_TNL_NVGRE. NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GRE. NOTE: only valid if
+		 *      l4_enable set to CFA_HCAPI_FALSE.
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4
+		 * <li> CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		 * </ul>
+		 */
+		enum {
+			/** Set to disable Tunnel header encapsulation
+			 * processing, default
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NONE = 0,
+			/** Set to enable Tunnel Generic Full header
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL,
+			/** Set to enable VXLAN header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_VXLAN,
+			/** Set to enable NGE (VXLAN2) header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NGE,
+			/** Set to enable NVGRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_NVGRE,
+			/** Set to enable GRE header encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GRE,
+			/** Set to enable Generic header after Tunnel
+			 * L4 encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4,
+			/** Set to enable Generic header after Tunnel
+			 * encapsulation
+			 */
+			CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL
+		} tnl;
+
+		/** Number of bytes of generic tunnel header,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_FULL) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TL4) ||
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_GENERIC_AFTER_TNL)
+		 */
+		uint8_t tnl_generic_size;
+		/** Number of 32b words of nge options,
+		 * valid when
+		 * (tnl == CFA_P4_ACT_ENCAP_TNL_NGE)
+		 */
+		uint8_t tnl_nge_op_len;
+		/* Currently not planned */
+		/* Custom Header */
+		/*	uint8_t custom_enable; */
+	} encap;
+};
+
+/**
+ * Enumeration of SRAM entry types, used for allocation of
+ * fixed SRAM entities. The memory model for CFA HCAPI
+ * determines if an SRAM entry type is supported.
+ */
+enum cfa_p4_action_sram_entry_type {
+	/* NOTE: Any additions to this enum must be reflected on FW
+	 * side as well.
+	 */
+
+	/** SRAM Action Record */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	/** SRAM Action Encap 8 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
+	/** SRAM Action Encap 16 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
+	/** SRAM Action Encap 64 Bytes */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+	/** SRAM Action Modify IPv4 Source */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
+	/** SRAM Action Modify IPv4 Destination */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+	/** SRAM Action Source Properties SMAC */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
+	/** SRAM Action Source Properties SMAC IPv4 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV4,
+	/** SRAM Action Source Properties SMAC IPv6 */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC_IPV6,
+	/** SRAM Action Statistics 64 Bits */
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_STATS_64,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MAX
+};
+
+/**
+ * SRAM Action Record structure holding either an action index or an
+ * action ptr.
+ */
+union cfa_p4_action_sram_act_record {
+	/** SRAM Action idx specifies the offset of the SRAM
+	 * element within its SRAM Entry Type block. This
+	 * index can be written into e.g. an L2 Context. Use
+	 * this type for all SRAM Action Record types except
+	 * SRAM Full Action records, which use act_ptr instead.
+	 */
+	uint16_t act_idx;
+	/** SRAM Full Action is special in that it needs an
+	 * action record pointer. This pointer can be written
+	 * into i.e. a Wildcard TCAM entry.
+	 */
+	uint32_t act_ptr;
+};
+
+/**
+ * cfa_p4_action_param parameter definition
+ */
+struct cfa_p4_action_param {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	uint8_t dir;
+	/**
+	 * [in] SRAM allocation type
+	 */
+	enum cfa_p4_action_sram_entry_type type;
+	/**
+	 * [in] action record to set. The 'type' specified lists the
+	 *	record definition to use in the passed in record.
+	 */
+	union cfa_p4_action_sram_act_record record;
+	/**
+	 * [in] number of elements in act_data
+	 */
+	uint32_t act_size;
+	/**
+	 * [in] ptr to array of action data
+	 */
+	uint64_t *act_data;
+};
+
+/**
+ * EEM Key entry sizes
+ */
+#define CFA_P4_EEM_KEY_MAX_SIZE 52
+#define CFA_P4_EEM_KEY_RECORD_SIZE 64
+
+/**
+ * cfa_eem_entry_hdr
+ */
+struct cfa_p4_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is made up of two words;
+			  * this is the first word. This field has multiple
+			  * subfields; there is no suitable single name for
+			  * it, so it is just called word1.
+			  */
+#define CFA_P4_EEM_ENTRY_VALID_SHIFT 31
+#define CFA_P4_EEM_ENTRY_VALID_MASK 0x80000000
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_SHIFT 30
+#define CFA_P4_EEM_ENTRY_L1_CACHEABLE_MASK 0x40000000
+#define CFA_P4_EEM_ENTRY_STRENGTH_SHIFT 28
+#define CFA_P4_EEM_ENTRY_STRENGTH_MASK 0x30000000
+#define CFA_P4_EEM_ENTRY_RESERVED_SHIFT 17
+#define CFA_P4_EEM_ENTRY_RESERVED_MASK 0x0FFE0000
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT 8
+#define CFA_P4_EEM_ENTRY_KEY_SIZE_MASK 0x0001FF00
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_SHIFT 3
+#define CFA_P4_EEM_ENTRY_ACT_REC_SIZE_MASK 0x000000F8
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_SHIFT 2
+#define CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK 0x00000004
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_SHIFT 1
+#define CFA_P4_EEM_ENTRY_EXT_FLOW_CTR_MASK 0x00000002
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_SHIFT 0
+#define CFA_P4_EEM_ENTRY_ACT_PTR_MSB_MASK 0x00000001
+};
+
+/**
+ *  cfa_p4_eem_key_entry
+ */
+struct cfa_p4_eem_64b_entry {
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[CFA_P4_EEM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct cfa_p4_eem_entry_hdr hdr;
+};
+
+#endif /* _HCAPI_CFA_P4_H_ */
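
The word1 subfields defined above are meant to be decoded with a
mask-then-shift pattern, as the valid-bit test in hcapi_cfa_p4.c does. Two
hypothetical helpers (not part of the patch) illustrate this:

#include <stdint.h>
#include "hcapi_cfa_p4.h"

/* Extract the valid bit from word1 of an EEM entry header. */
static inline uint32_t
ex_eem_entry_valid(const struct cfa_p4_eem_entry_hdr *hdr)
{
	return (hdr->word1 & CFA_P4_EEM_ENTRY_VALID_MASK) >>
	       CFA_P4_EEM_ENTRY_VALID_SHIFT;
}

/* Extract the key size field from word1 of an EEM entry header. */
static inline uint32_t
ex_eem_entry_key_size(const struct cfa_p4_eem_entry_hdr *hdr)
{
	return (hdr->word1 & CFA_P4_EEM_ENTRY_KEY_SIZE_MASK) >>
	       CFA_P4_EEM_ENTRY_KEY_SIZE_SHIFT;
}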
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 1f7df9d06..33e6ebd66 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,6 +43,8 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_rm_new.c',
 
+	'hcapi/hcapi_cfa_p4.c',
+
 	'tf_ulp/bnxt_ulp.c',
 	'tf_ulp/ulp_mark_mgr.c',
 	'tf_ulp/ulp_flow_db.c',
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index 91cbc6299..38f7fe419 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -189,7 +189,7 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
 		return NULL;
 
-	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
 		return NULL;
 
 	/*
@@ -325,7 +325,7 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key0_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key0_hash,
-					 KEY0_TABLE,
+					 TF_KEY0_TABLE,
 					 dir);
 
 	/*
@@ -334,23 +334,23 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
 	key1_entry = tf_em_entry_exists(tbl_scope_cb,
 					 entry,
 					 key1_hash,
-					 KEY1_TABLE,
+					 TF_KEY1_TABLE,
 					 dir);
 
 	if (key0_entry == -EEXIST) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return -EEXIST;
 	} else if (key1_entry == -EEXIST) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return -EEXIST;
 	} else if (key0_entry == 0) {
-		*table = KEY0_TABLE;
+		*table = TF_KEY0_TABLE;
 		*index = key0_hash;
 		return 0;
 	} else if (key1_entry == 0) {
-		*table = KEY1_TABLE;
+		*table = TF_KEY1_TABLE;
 		*index = key1_hash;
 		return 0;
 	}
@@ -384,7 +384,7 @@ int tf_insert_eem_entry(struct tf_session *session,
 	int		   num_of_entry;
 
 	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[KEY0_TABLE].num_entries);
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
 
 	if (!mask)
 		return -EINVAL;
@@ -392,13 +392,13 @@ int tf_insert_eem_entry(struct tf_session *session,
 	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
 
 	key0_hash = tf_em_lkup_get_crc32_hash(session,
-				      &parms->key[num_of_entry] - 1,
-				      parms->dir);
+					      &parms->key[num_of_entry] - 1,
+					      parms->dir);
 	key0_index = key0_hash & mask;
 
 	key1_hash =
 	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
-					parms->key);
+				       parms->key);
 	key1_index = key1_hash & mask;
 
 	/*
@@ -420,14 +420,14 @@ int tf_insert_eem_entry(struct tf_session *session,
 				      key1_index,
 				      &index,
 				      &table_type) == 0) {
-		if (table_type == KEY0_TABLE) {
+		if (table_type == TF_KEY0_TABLE) {
 			TF_SET_GFID(gfid,
 				    key0_index,
-				    KEY0_TABLE);
+				    TF_KEY0_TABLE);
 		} else {
 			TF_SET_GFID(gfid,
 				    key1_index,
-				    KEY1_TABLE);
+				    TF_KEY1_TABLE);
 		}
 
 		/*
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 4e236d56c..35a7cfab5 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -285,8 +285,8 @@ tf_em_setup_page_table(struct tf_em_table *tbl)
 		tf_em_link_page_table(tp, tp_next, set_pte_last);
 	}
 
-	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
 }
 
 /**
@@ -317,7 +317,7 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 			uint64_t *num_data_pages)
 {
 	uint64_t lvl_data_size = page_size;
-	int lvl = PT_LVL_0;
+	int lvl = TF_PT_LVL_0;
 	uint64_t data_size;
 
 	*num_data_pages = 0;
@@ -326,10 +326,10 @@ tf_em_size_page_tbl_lvl(uint32_t page_size,
 	while (lvl_data_size < data_size) {
 		lvl++;
 
-		if (lvl == PT_LVL_1)
+		if (lvl == TF_PT_LVL_1)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				page_size;
-		else if (lvl == PT_LVL_2)
+		else if (lvl == TF_PT_LVL_2)
 			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
 				MAX_PAGE_PTRS(page_size) * page_size;
 		else
@@ -386,18 +386,18 @@ tf_em_size_page_tbls(int max_lvl,
 		     uint32_t page_size,
 		     uint32_t *page_cnt)
 {
-	if (max_lvl == PT_LVL_0) {
-		page_cnt[PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == PT_LVL_1) {
-		page_cnt[PT_LVL_1] = num_data_pages;
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
-	} else if (max_lvl == PT_LVL_2) {
-		page_cnt[PT_LVL_2] = num_data_pages;
-		page_cnt[PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
-		page_cnt[PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
 	} else {
 		return;
 	}
@@ -434,7 +434,7 @@ tf_em_size_table(struct tf_em_table *tbl)
 	/* Determine number of page table levels and the number
 	 * of data pages needed to process the given eem table.
 	 */
-	if (tbl->type == RECORD_TABLE) {
+	if (tbl->type == TF_RECORD_TABLE) {
 		/*
 		 * For action records just a memory size is provided. Work
 		 * backwards to resolve to number of entries
@@ -480,9 +480,9 @@ tf_em_size_table(struct tf_em_table *tbl)
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
 		    num_data_pages,
-		    page_cnt[PT_LVL_0],
-		    page_cnt[PT_LVL_1],
-		    page_cnt[PT_LVL_2]);
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
 
 	return 0;
 }
@@ -508,7 +508,7 @@ tf_em_ctx_unreg(struct tf *tfp,
 	struct tf_em_table *tbl;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
@@ -544,7 +544,7 @@ tf_em_ctx_reg(struct tf *tfp,
 	int rc = 0;
 	int i;
 
-	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
@@ -719,41 +719,41 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		return -EINVAL;
 	}
 	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->rx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
 		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
 		parms->rx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries =
 		0;
 
 	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
 		parms->tx_max_key_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
 		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
 		parms->tx_max_action_entry_sz_in_bits / 8;
 
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries =
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries =
 		0;
 
 	return 0;
@@ -1572,11 +1572,11 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 
 		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
 		rc = tf_msg_em_cfg(tfp,
-				   em_tables[KEY0_TABLE].num_entries,
-				   em_tables[KEY0_TABLE].ctx_id,
-				   em_tables[KEY1_TABLE].ctx_id,
-				   em_tables[RECORD_TABLE].ctx_id,
-				   em_tables[EFC_TABLE].ctx_id,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
@@ -1600,9 +1600,9 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 * actions related to a single table scope.
 		 */
 		rc = tf_create_tbl_pool_external(dir,
-					    tbl_scope_cb,
-					    em_tables[RECORD_TABLE].num_entries,
-					    em_tables[RECORD_TABLE].entry_size);
+				    tbl_scope_cb,
+				    em_tables[TF_RECORD_TABLE].num_entries,
+				    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
 			PMD_DRV_LOG(ERR,
 				    "%d TBL: Unable to allocate idx pools %s\n",
@@ -1672,7 +1672,7 @@ tf_set_tbl_entry(struct tf *tfp,
 		base_addr = tf_em_get_table_page(tbl_scope_cb,
 						 parms->dir,
 						 offset,
-						 RECORD_TABLE);
+						 TF_RECORD_TABLE);
 		if (base_addr == NULL) {
 			PMD_DRV_LOG(ERR,
 				    "dir:%d, Base address lookup failed\n",
@@ -1972,7 +1972,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
 
-		for (j = KEY0_TABLE; j < MAX_TABLE; j++) {
+		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
 			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
 			printf
 	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index a8bb0edab..ee8a14665 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -14,18 +14,18 @@
 struct tf_session;
 
 enum tf_pg_tbl_lvl {
-	PT_LVL_0,
-	PT_LVL_1,
-	PT_LVL_2,
-	PT_LVL_MAX
+	TF_PT_LVL_0,
+	TF_PT_LVL_1,
+	TF_PT_LVL_2,
+	TF_PT_LVL_MAX
 };
 
 enum tf_em_table_type {
-	KEY0_TABLE,
-	KEY1_TABLE,
-	RECORD_TABLE,
-	EFC_TABLE,
-	MAX_TABLE
+	TF_KEY0_TABLE,
+	TF_KEY1_TABLE,
+	TF_RECORD_TABLE,
+	TF_EFC_TABLE,
+	TF_MAX_TABLE
 };
 
 struct tf_em_page_tbl {
@@ -41,15 +41,15 @@ struct tf_em_table {
 	uint16_t			ctx_id;
 	uint32_t			entry_size;
 	int				num_lvl;
-	uint32_t			page_cnt[PT_LVL_MAX];
+	uint32_t			page_cnt[TF_PT_LVL_MAX];
 	uint64_t			num_data_pages;
 	void				*l0_addr;
 	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
 };
 
 struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[MAX_TABLE];
+	struct tf_em_table		em_tables[TF_MAX_TABLE];
 };
 
 /** table scope control block content */
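
A worked example of the renamed page-table sizing above (illustrative, not
part of the patch; it assumes tf_em_page_tbl_pgcnt() returns the number of
page-table pages needed to hold the given number of page pointers, and that
a 2 MB page holds far more than 16 pointers): with a TF_EM_PAGE_SIZE of
2 MB (the value used in hcapi_cfa_p4.c above), a TF_RECORD_TABLE of 512K
entries at 64 bytes each needs 32 MB of data, i.e. 16 data pages.
tf_em_size_page_tbl_lvl() therefore stops at TF_PT_LVL_1, and
tf_em_size_page_tbls() sets page_cnt[TF_PT_LVL_1] = 16 (the data pages) and
page_cnt[TF_PT_LVL_0] = 1 (the single directory page pointing at them).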
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 16/51] net/bnxt: add core changes for EM and EEM lookups
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (14 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-07  8:08           ` Ferruh Yigit
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
                           ` (36 subsequent siblings)
  52 siblings, 1 reply; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher, Venkat Duvvuru, Shahaji Bhosle

From: Randy Schacher <stuart.schacher@broadcom.com>

- Move External Exact and Exact Match to device module using HCAPI
  to add and delete entries
- Make EM active through the device interface.

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Shahaji Bhosle <shahaji.bhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                 |   3 +-
 drivers/net/bnxt/hcapi/cfa_p40_hw.h       | 781 ++++++++++++++++++++++
 drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 ---
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     |   2 +-
 drivers/net/bnxt/tf_core/Makefile         |   8 +
 drivers/net/bnxt/tf_core/hwrm_tf.h        |  24 +-
 drivers/net/bnxt/tf_core/tf_core.c        | 441 ++++++------
 drivers/net/bnxt/tf_core/tf_core.h        | 141 ++--
 drivers/net/bnxt/tf_core/tf_device.h      |  32 +
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   3 +
 drivers/net/bnxt/tf_core/tf_em.c          | 567 +++++-----------
 drivers/net/bnxt/tf_core/tf_em.h          |  72 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |  23 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |   4 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  25 +-
 drivers/net/bnxt/tf_core/tf_rm.c          | 156 +++--
 drivers/net/bnxt/tf_core/tf_tbl.c         | 437 +++++-------
 drivers/net/bnxt/tf_core/tf_tbl.h         |  49 +-
 18 files changed, 1627 insertions(+), 1233 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
 delete mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 365627499..349b09c36 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -46,9 +46,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
-CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
+include $(SRCDIR)/hcapi/Makefile
 endif
 
 #
diff --git a/drivers/net/bnxt/hcapi/cfa_p40_hw.h b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
new file mode 100644
index 000000000..172706f12
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_hw.h
@@ -0,0 +1,781 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_hw.h
+ *
+ * Description: header for SWE based on Truflow
+ *
+ * Date:  taken from 12/16/19 17:18:12
+ *
+ * Note:  This file was first generated using  tflib_decode.py.
+ *
+ *        Changes have been made due to lack of availability of xml for
+ *        additional tables at this time (EEM Record and union table fields).
+ *        Changes not autogenerated are noted in comments.
+ */
+
+#ifndef _CFA_P40_HW_H_
+#define _CFA_P40_HW_H_
+
+/**
+ * Valid TCAM entry. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Key type (pass). (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS 164
+#define CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS 2
+/**
+ * Tunnel HDR type. (for idx 5 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS 160
+#define CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Number of VLAN tags in tunnel l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS 158
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Number of VLAN tags in l2 header. (for idx 4 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS 156
+#define CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS 2
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS    108
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS  48
+/**
+ * Tunnel Outer VLAN Tag ID. (for idx 3 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS  96
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS 12
+/**
+ * Tunnel Inner VLAN Tag ID. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS  84
+#define CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS 12
+/**
+ * Source Partition. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  80
+#define CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+/**
+ * Source Virtual I/F. (for idx 2 ...)
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  8
+/**
+ * Tunnel/Inner Source/Dest. MAC Address.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS    24
+#define CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS  48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS    12
+#define CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS  12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS    0
+#define CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS  12
+
+enum cfa_p40_prof_l2_ctxt_tcam_flds {
+	CFA_P40_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 1,
+	CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 2,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 3,
+	CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 4,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC1_FLD = 5,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_FLD = 6,
+	CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_FLD = 7,
+	CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_FLD = 8,
+	CFA_P40_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P40_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P40_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 167
+
+/**
+ * Valid entry. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_VALID_BITPOS        79
+#define CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS      1
+/**
+ * Reserved, program to 0. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS     78
+#define CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS   1
+/**
+ * PF Parif Number. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS     74
+#define CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS   4
+/**
+ * Number of VLAN Tags. (for idx 2 ...)
+ */
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS    72
+#define CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS  2
+/**
+ * Dest. MAC Address.
+ */
+#define CFA_P40_ACT_VEB_TCAM_MAC_BITPOS          24
+#define CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS        48
+/**
+ * Outer VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_OVID_BITPOS         12
+#define CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS       12
+/**
+ * Inner VLAN Tag ID.
+ */
+#define CFA_P40_ACT_VEB_TCAM_IVID_BITPOS         0
+#define CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS       12
+
+enum cfa_p40_act_veb_tcam_flds {
+	CFA_P40_ACT_VEB_TCAM_VALID_FLD = 0,
+	CFA_P40_ACT_VEB_TCAM_RESERVED_FLD = 1,
+	CFA_P40_ACT_VEB_TCAM_PARIF_IN_FLD = 2,
+	CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_FLD = 3,
+	CFA_P40_ACT_VEB_TCAM_MAC_FLD = 4,
+	CFA_P40_ACT_VEB_TCAM_OVID_FLD = 5,
+	CFA_P40_ACT_VEB_TCAM_IVID_FLD = 6,
+	CFA_P40_ACT_VEB_TCAM_MAX_FLD
+};
+
+#define CFA_P40_ACT_VEB_TCAM_TOTAL_NUM_BITS 80
+
+/**
+ * Entry is valid.
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS 18
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS 1
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS 2
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS 16
+/**
+ * for resolving TCAM/EM conflicts
+ */
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS 0
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS 2
+
+enum cfa_p40_lkup_tcam_record_mem_flds {
+	CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_FLD = 0,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_FLD = 1,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_FLD = 2,
+	CFA_P40_LKUP_TCAM_RECORD_MEM_MAX_FLD
+};
+
+#define CFA_P40_LKUP_TCAM_RECORD_MEM_TOTAL_NUM_BITS 19
+
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS 62
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_tpid_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_MAX = 0x3UL
+};
+/**
+ * (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS 60
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS 2
+enum cfa_p40_prof_ctxt_remap_mem_pri_anti_spoof_ctl {
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_IGNORE = 0x0UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DROP = 0x1UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_DEFAULT = 0x2UL,
+
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_SPIF = 0x3UL,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_MAX = 0x3UL
+};
+/**
+ * Bypass Source Properties Lookup. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS 59
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS 1
+/**
+ * SP Record Pointer. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS 43
+#define CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS 16
+/**
+ * BD Action pointer passing enable. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS 42
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS 1
+/**
+ * Default VLAN TPID. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS 39
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS 3
+/**
+ * Allowed VLAN TPIDs. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS 33
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS 6
+/**
+ * Default VLAN PRI.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS 30
+#define CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS 3
+/**
+ * Allowed VLAN PRIs.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS 22
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS 8
+/**
+ * Partition.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS 18
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS 4
+/**
+ * Bypass Lookup.
+ */
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS 17
+#define CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS 1
+
+/**
+ * L2 Context Remap Data. Interpretation depends on the mode:
+ *   Action bypass mode (1): {7'd0, prof_vnic[9:0]}; byp_lkup_en should
+ *   also be set.
+ *   Action bypass mode (0), byp_lkup_en(0): {prof_func[6:0], l2_context[9:0]}
+ *   Action bypass mode (0), byp_lkup_en(1): {1'b0, act_rec_ptr[15:0]}
+ */
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS 12
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS 10
+#define CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS 7
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS 10
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS 0
+#define CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS 16
+
+enum cfa_p40_prof_ctxt_remap_mem_flds {
+	CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_FLD = 0,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_FLD = 1,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_FLD = 2,
+	CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_FLD = 3,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_FLD = 4,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_FLD = 5,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_FLD = 6,
+	CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_FLD = 7,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_FLD = 8,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_FLD = 9,
+	CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_FLD = 10,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_FLD = 11,
+	CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_FLD = 12,
+	CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_FLD = 13,
+	CFA_P40_PROF_CTXT_REMAP_MEM_ARP_FLD = 14,
+	CFA_P40_PROF_CTXT_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_CTXT_REMAP_MEM_TOTAL_NUM_BITS 64
+
+/**
+ * Bypass action pointer lookup (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS 37
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS 1
+/**
+ * Exact match search enable (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS 36
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS 1
+/**
+ * Exact match profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS 28
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS 8
+/**
+ * Exact match key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS 5
+/**
+ * Exact match key mask
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS 13
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS 10
+/**
+ * TCAM search enable
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS 12
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS 1
+/**
+ * TCAM profile
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS 4
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS 8
+/**
+ * TCAM key format
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS 4
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS 2
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS 16
+
+enum cfa_p40_prof_profile_tcam_remap_mem_flds {
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_FLD = 9,
+	CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TOTAL_NUM_BITS 38
+
+/**
+ * Valid TCAM entry (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS   80
+#define CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS 1
+/**
+ * Packet type (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS 76
+#define CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS 4
+/**
+ * Pass through CFA (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS 74
+#define CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS 2
+/**
+ * Aggregate error (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS 73
+#define CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS 1
+/**
+ * Profile function (for idx 2 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS 66
+#define CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS 7
+/**
+ * Reserved for future use. Set to 0.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS 57
+#define CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS 9
+/**
+ * non-tunnel(0)/tunneled(1) packet (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS 56
+#define CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS 1
+/**
+ * Tunnel L2 tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS 55
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L2 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS 53
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped tunnel L2 dest_type UC(0)/MC(2)/BC(3) (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS 51
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS 2
+/**
+ * Tunnel L2 1+ VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS 50
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * Tunnel L2 2 VLAN tags present (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS 49
+#define CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS 1
+/**
+ * Tunnel L3 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS 48
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS 1
+/**
+ * Tunnel L3 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS 47
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS 1
+/**
+ * Tunnel L3 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS 43
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L3 header is IPV4 or IPV6. (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS 42
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 src address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS 41
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * Tunnel L3 IPV6 dest address is compressed (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS 40
+#define CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * Tunnel L4 valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS 39
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel L4 error (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS 38
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS 1
+/**
+ * Tunnel L4 header type (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS 34
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel L4 header is UDP or TCP (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS 33
+#define CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS 1
+/**
+ * Tunnel valid (for idx 1 ...)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS 32
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS 1
+/**
+ * Tunnel error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS 31
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS 1
+/**
+ * Tunnel header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS 27
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS 4
+/**
+ * Tunnel header flags
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS 24
+#define CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS 3
+/**
+ * L2 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS 23
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS 1
+/**
+ * L2 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS 22
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS 1
+/**
+ * L2 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS 20
+#define CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS 2
+/**
+ * Remapped L2 dest_type UC(0)/MC(2)/BC(3)
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS 18
+#define CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS 2
+/**
+ * L2 header 1+ VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS 17
+#define CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS 1
+/**
+ * L2 header 2 VLAN tags present
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS 16
+#define CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS 1
+/**
+ * L3 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS 15
+#define CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS 1
+/**
+ * L3 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS 14
+#define CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS 1
+/**
+ * L3 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS 10
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS 4
+/**
+ * L3 header is IPV4 or IPV6.
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS 9
+#define CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS 1
+/**
+ * L3 header IPV6 src address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS 8
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS 1
+/**
+ * L3 header IPV6 dest address is compressed
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS 7
+#define CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS 1
+/**
+ * L4 header valid
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS 6
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS 1
+/**
+ * L4 header error
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS 5
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS 1
+/**
+ * L4 header type
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS 1
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS 4
+/**
+ * L4 header is UDP or TCP
+ */
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS 0
+#define CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS 1
+
+enum cfa_p40_prof_profile_tcam_flds {
+	CFA_P40_PROF_PROFILE_TCAM_VALID_FLD = 0,
+	CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_FLD = 1,
+	CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_FLD = 2,
+	CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_FLD = 3,
+	CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_FLD = 4,
+	CFA_P40_PROF_PROFILE_TCAM_RESERVED_FLD = 5,
+	CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_FLD = 6,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_FLD = 7,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_FLD = 8,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_FLD = 9,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_FLD = 10,
+	CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_FLD = 11,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_FLD = 12,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_FLD = 13,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_FLD = 14,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_FLD = 15,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_FLD = 16,
+	CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_FLD = 17,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_FLD = 18,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_FLD = 19,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_FLD = 20,
+	CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_FLD = 21,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_FLD = 22,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_FLD = 23,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_FLD = 24,
+	CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_FLD = 25,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_FLD = 26,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_FLD = 27,
+	CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_FLD = 28,
+	CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_FLD = 29,
+	CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_FLD = 30,
+	CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_FLD = 31,
+	CFA_P40_PROF_PROFILE_TCAM_L3_VALID_FLD = 32,
+	CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_FLD = 33,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_FLD = 34,
+	CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_FLD = 35,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_FLD = 36,
+	CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_FLD = 37,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_FLD = 38,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_FLD = 39,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_FLD = 40,
+	CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_FLD = 41,
+	CFA_P40_PROF_PROFILE_TCAM_MAX_FLD
+};
+
+#define CFA_P40_PROF_PROFILE_TCAM_TOTAL_NUM_BITS 81
+
+/**
+ * CFA flexible key layout definition
+ */
+enum cfa_p40_key_fld_id {
+	CFA_P40_KEY_FLD_ID_MAX
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+/**
+ * Valid
+ */
+#define CFA_P40_EEM_KEY_TBL_VALID_BITPOS 0
+#define CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS 1
+
+/**
+ * L1 Cacheable
+ */
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS 1
+#define CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS 1
+
+/**
+ * Strength
+ */
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS 2
+#define CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS 2
+
+/**
+ * Key Size
+ */
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS 15
+#define CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS 9
+
+/**
+ * Record Size
+ */
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS 24
+#define CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS 5
+
+/**
+ * Action Record Internal
+ */
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS 29
+#define CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS 1
+
+/**
+ * External Flow Counter
+ */
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS 30
+#define CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS 31
+#define CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS 33
+
+/**
+ * EEM Key omitted - create using keybuilder
+ * Fields here cannot be larger than a uint64_t
+ */
+
+#define CFA_P40_EEM_KEY_TBL_TOTAL_NUM_BITS 64
+
+enum cfa_p40_eem_key_tbl_flds {
+	CFA_P40_EEM_KEY_TBL_VALID_FLD = 0,
+	CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_FLD = 1,
+	CFA_P40_EEM_KEY_TBL_STRENGTH_FLD = 2,
+	CFA_P40_EEM_KEY_TBL_KEY_SZ_FLD = 3,
+	CFA_P40_EEM_KEY_TBL_REC_SZ_FLD = 4,
+	CFA_P40_EEM_KEY_TBL_ACT_REC_INT_FLD = 5,
+	CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_FLD = 6,
+	CFA_P40_EEM_KEY_TBL_AR_PTR_FLD = 7,
+	CFA_P40_EEM_KEY_TBL_MAX_FLD
+};
+
+/**
+ * Mirror Destination 0 Source Property Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_SP_PTR_BITPOS 0
+#define CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS 11
+
+/**
+ * ignore or honor drop
+ */
+#define CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS 13
+#define CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS 1
+
+/**
+ * ingress or egress copy
+ */
+#define CFA_P40_MIRROR_TBL_COPY_BITPOS 14
+#define CFA_P40_MIRROR_TBL_COPY_NUM_BITS 1
+
+/**
+ * Mirror Destination enable.
+ */
+#define CFA_P40_MIRROR_TBL_EN_BITPOS 15
+#define CFA_P40_MIRROR_TBL_EN_NUM_BITS 1
+
+/**
+ * Action Record Pointer
+ */
+#define CFA_P40_MIRROR_TBL_AR_PTR_BITPOS 16
+#define CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS 16
+
+#define CFA_P40_MIRROR_TBL_TOTAL_NUM_BITS 32
+
+enum cfa_p40_mirror_tbl_flds {
+	CFA_P40_MIRROR_TBL_SP_PTR_FLD = 0,
+	CFA_P40_MIRROR_TBL_IGN_DROP_FLD = 1,
+	CFA_P40_MIRROR_TBL_COPY_FLD = 2,
+	CFA_P40_MIRROR_TBL_EN_FLD = 3,
+	CFA_P40_MIRROR_TBL_AR_PTR_FLD = 4,
+	CFA_P40_MIRROR_TBL_MAX_FLD
+};
+
+/**
+ * P45 Specific Updates (SR) - Non-autogenerated
+ */
+/**
+ * Valid TCAM entry.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS   166
+#define CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS 1
+/**
+ * Source Partition.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS  166
+#define CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS 4
+
+/**
+ * Source Virtual I/F.
+ */
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS    72
+#define CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS  12
+
+
+/* The SR layout of the l2 ctxt key is different from the Wh+.  Switch to
+ * cfa_p45_hw.h definition when available.
+ */
+enum cfa_p45_prof_l2_ctxt_tcam_flds {
+	CFA_P45_PROF_L2_CTXT_TCAM_VALID_FLD = 0,
+	CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_FLD = 1,
+	CFA_P45_PROF_L2_CTXT_TCAM_KEY_TYPE_FLD = 2,
+	CFA_P45_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_FLD = 3,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_FLD = 4,
+	CFA_P45_PROF_L2_CTXT_TCAM_L2_NUMTAGS_FLD = 5,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC1_FLD = 6,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_OVID_FLD = 7,
+	CFA_P45_PROF_L2_CTXT_TCAM_T_IVID_FLD = 8,
+	CFA_P45_PROF_L2_CTXT_TCAM_SVIF_FLD = 9,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAC0_FLD = 10,
+	CFA_P45_PROF_L2_CTXT_TCAM_OVID_FLD = 11,
+	CFA_P45_PROF_L2_CTXT_TCAM_IVID_FLD = 12,
+	CFA_P45_PROF_L2_CTXT_TCAM_MAX_FLD
+};
+
+#define CFA_P45_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS 171
+
+#endif /* _CFA_P40_HW_H_ */
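
The *_BITPOS/*_NUM_BITS pairs above give the least-significant bit offset and
the width of each field within its record (the offsets are contiguous, e.g.
IVID 0..11, OVID 12..23, MAC0 24..71), while the *_FLD enums index the
corresponding entries of a field layout table. A minimal sketch of packing a
couple of fields with these definitions, assuming an LSB-first bit order; the
helper below is illustrative only and is not the driver's actual accessor:

#include <stdint.h>
#include <stdio.h>
#include "cfa_p40_hw.h"

/* Write 'val' into bits [bitpos, bitpos + nbits) of a byte array, taking
 * bitpos as the least-significant bit index of the field within the record.
 * The byte/bit ordering expected by the hardware message is an assumption.
 */
static void cfa_put_bits(uint8_t *rec, uint16_t bitpos, uint16_t nbits,
			 uint64_t val)
{
	uint16_t i;

	for (i = 0; i < nbits; i++) {
		uint16_t bit = bitpos + i;

		if (val & (1ULL << i))
			rec[bit / 8] |= (uint8_t)(1u << (bit % 8));
		else
			rec[bit / 8] &= (uint8_t)~(1u << (bit % 8));
	}
}

int main(void)
{
	/* The 167-bit L2 context TCAM key rounds up to 21 bytes. */
	uint8_t key[(CFA_P40_PROF_L2_CTXT_TCAM_TOTAL_NUM_BITS + 7) / 8] = { 0 };

	cfa_put_bits(key, CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS,
		     CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS, 1);
	cfa_put_bits(key, CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
		     CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS, 100);

	printf("key[1]=0x%02x key[20]=0x%02x\n", key[1], key[20]);
	return 0;
}
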
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c b/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
deleted file mode 100644
index 39afd4dbc..000000000
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_common.c
+++ /dev/null
@@ -1,92 +0,0 @@
-/*
- *   Copyright(c) 2019-2020 Broadcom Limited.
- *   All rights reserved.
- */
-
-#include "bitstring.h"
-#include "hcapi_cfa_defs.h"
-#include <errno.h>
-#include "assert.h"
-
-/* HCAPI CFA common PUT APIs */
-int hcapi_cfa_put_field(uint64_t *data_buf,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id, uint64_t val)
-{
-	assert(layout);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		bs_put_msb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	else
-		bs_put_lsb(data_buf,
-			   layout->field_array[field_id].bitpos,
-			   layout->field_array[field_id].bitlen, val);
-	return 0;
-}
-
-int hcapi_cfa_put_fields(uint64_t *obj_data,
-			 const struct hcapi_cfa_layout *layout,
-			 struct hcapi_cfa_data_obj *field_tbl,
-			 uint16_t field_tbl_sz)
-{
-	int i;
-	uint16_t bitpos;
-	uint8_t bitlen;
-	uint16_t field_id;
-
-	assert(layout);
-	assert(field_tbl);
-
-	if (layout->is_msb_order) {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_msb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	} else {
-		for (i = 0; i < field_tbl_sz; i++) {
-			field_id = field_tbl[i].field_id;
-			if (field_id > layout->array_sz)
-				return -EINVAL;
-			bitpos = layout->field_array[field_id].bitpos;
-			bitlen = layout->field_array[field_id].bitlen;
-			bs_put_lsb(obj_data, bitpos, bitlen,
-				   field_tbl[i].val);
-		}
-	}
-	return 0;
-}
-
-/* HCAPI CFA common GET APIs */
-int hcapi_cfa_get_field(uint64_t *obj_data,
-			const struct hcapi_cfa_layout *layout,
-			uint16_t field_id,
-			uint64_t *val)
-{
-	assert(layout);
-	assert(val);
-
-	if (field_id > layout->array_sz)
-		/* Invalid field_id */
-		return -EINVAL;
-
-	if (layout->is_msb_order)
-		*val = bs_get_msb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	else
-		*val = bs_get_lsb(obj_data,
-				  layout->field_array[field_id].bitpos,
-				  layout->field_array[field_id].bitlen);
-	return 0;
-}
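
The helpers removed above packed and extracted fields by walking a
struct hcapi_cfa_layout whose field_array[] holds the per-field bitpos/bitlen
and is indexed by the *_FLD enums (such as the CFA_P40_* ones earlier in this
patch). A rough caller-side sketch of that pattern, for context only;
'mirror_layout' and 'act_rec_ptr' are hypothetical names and the exact struct
definitions come from hcapi_cfa_defs.h:

	struct hcapi_cfa_data_obj fields[] = {
		{ .field_id = CFA_P40_MIRROR_TBL_EN_FLD,     .val = 1 },
		{ .field_id = CFA_P40_MIRROR_TBL_AR_PTR_FLD, .val = act_rec_ptr },
	};
	uint64_t obj[1] = { 0 };	/* the 32-bit mirror record fits in one u64 */
	int rc;

	rc = hcapi_cfa_put_fields(obj, &mirror_layout, fields,
				  sizeof(fields) / sizeof(fields[0]));
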
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index ca0b1c923..42b37da0f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2019-2020 Broadcom
  * All rights reserved.
  */
-
+#include <inttypes.h>
 #include <stdint.h>
 #include <stdlib.h>
 #include <stdbool.h>
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 2c02e29e7..5ed32f12a 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -24,3 +24,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
+
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_device.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
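
The SYMLINK-...-include additions export these tf_core headers into the
build's include path under the legacy make system, so dependent code (for
example the ULP layer) can include them by name rather than by a relative
path; illustratively, assuming such a consumer exists:

	#include "tf_core.h"	/* resolvable once the header is exported above */
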
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index c04d1034a..1e78296c6 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -27,8 +27,8 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_SET = 823,
 	HWRM_TFT_TBL_TYPE_GET = 824,
-	HWRM_TFT_TBL_TYPE_GET_BULK = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET_BULK,
+	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -82,8 +82,8 @@ struct tf_session_sram_resc_flush_input;
 struct tf_tbl_type_set_input;
 struct tf_tbl_type_get_input;
 struct tf_tbl_type_get_output;
-struct tf_tbl_type_get_bulk_input;
-struct tf_tbl_type_get_bulk_output;
+struct tf_tbl_type_bulk_get_input;
+struct tf_tbl_type_bulk_get_output;
 /* Input params for session attach */
 typedef struct tf_session_attach_input {
 	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
@@ -905,8 +905,6 @@ typedef struct tf_tbl_type_get_input {
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
 	/* When set to 1, indicates the get apply to TX */
 #define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Index to get */
@@ -922,17 +920,17 @@ typedef struct tf_tbl_type_get_output {
 } tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
-typedef struct tf_tbl_type_get_bulk_input {
+typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
 	uint16_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_RX	   (0x0)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_DIR_TX	   (0x1)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_TX	   (0x1)
 	/* When set to 1, indicates the clear entry on read */
-#define TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
+#define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_CLEAR_ON_READ	  (0x2)
 	/* Type of the object to set */
 	uint32_t			 type;
 	/* Starting index to get from */
@@ -941,12 +939,12 @@ typedef struct tf_tbl_type_get_bulk_input {
 	uint32_t			 num_entries;
 	/* Host memory where data will be stored */
 	uint64_t			 host_addr;
-} tf_tbl_type_get_bulk_input_t, *ptf_tbl_type_get_bulk_input_t;
+} tf_tbl_type_bulk_get_input_t, *ptf_tbl_type_bulk_get_input_t;
 
 /* Output params for table type get */
-typedef struct tf_tbl_type_get_bulk_output {
+typedef struct tf_tbl_type_bulk_get_output {
 	/* Size of the total data read in bytes */
 	uint16_t			 size;
-} tf_tbl_type_get_bulk_output_t, *ptf_tbl_type_get_bulk_output_t;
+} tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
 #endif /* _HWRM_TF_H_ */
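
A minimal caller-side sketch of populating the renamed bulk-get request,
assuming fw_session_id, num_ctrs and dma_addr are available to the caller,
that the type field takes a table-type value such as TF_TBL_TYPE_ACT_STATS_64,
and omitting the starting-index member whose name is not visible in this hunk:

	tf_tbl_type_bulk_get_input_t req = { 0 };

	req.fw_session_id = fw_session_id;
	req.flags = TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX |
		    TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_CLEAR_ON_READ;
	req.type = TF_TBL_TYPE_ACT_STATS_64;	/* e.g. 64b flow counters */
	/* starting index member set here (name not shown in this hunk) */
	req.num_entries = num_ctrs;
	req.host_addr = dma_addr;		/* DMA-able buffer for the results */
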
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index eac57e7bd..648d0d1bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,33 +19,41 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static inline uint32_t SWAP_WORDS32(uint32_t val32)
+static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
+			       enum tf_device_type device,
+			       uint16_t key_sz_in_bits,
+			       uint16_t *num_slice_per_row)
 {
-	return (((val32 & 0x0000ffff) << 16) |
-		((val32 & 0xffff0000) >> 16));
-}
+	uint16_t key_bytes;
+	uint16_t slice_sz = 0;
+
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
+
+	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
+		if (device == TF_DEVICE_TYPE_WH) {
+			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
+			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		} else {
+			TFP_DRV_LOG(ERR,
+				    "Unsupported device type %d\n",
+				    device);
+			return -ENOTSUP;
+		}
 
-static void tf_seeds_init(struct tf_session *session)
-{
-	int i;
-	uint32_t r;
-
-	/* Initialize the lfsr */
-	rand_init();
-
-	/* RX and TX use the same seed values */
-	session->lkup_lkup3_init_cfg[TF_DIR_RX] =
-		session->lkup_lkup3_init_cfg[TF_DIR_TX] =
-						SWAP_WORDS32(rand32());
-
-	for (i = 0; i < TF_LKUP_SEED_MEM_SIZE / 2; i++) {
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2] = r;
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2] = r;
-		r = SWAP_WORDS32(rand32());
-		session->lkup_em_seed_mem[TF_DIR_RX][i * 2 + 1] = (r & 0x1);
-		session->lkup_em_seed_mem[TF_DIR_TX][i * 2 + 1] = (r & 0x1);
+		if (key_bytes > *num_slice_per_row * slice_sz) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Key size %d is not supported\n",
+				    tf_tcam_tbl_2_str(tcam_tbl_type),
+				    key_bytes);
+			return -ENOTSUP;
+		}
+	} else { /* for other type of tcam */
+		*num_slice_per_row = 1;
 	}
+
+	return 0;
 }
 
 /**
@@ -153,15 +161,18 @@ tf_open_session(struct tf                    *tfp,
 	uint8_t fw_session_id;
 	int dir;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
 	 * firmware open session succeeds.
 	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH)
+	if (parms->device_type != TF_DEVICE_TYPE_WH) {
+		TFP_DRV_LOG(ERR,
+			    "Unsupported device type %d\n",
+			    parms->device_type);
 		return -ENOTSUP;
+	}
 
 	/* Build the beginning of session_id */
 	rc = sscanf(parms->ctrl_chan_name,
@@ -171,7 +182,7 @@ tf_open_session(struct tf                    *tfp,
 		    &slot,
 		    &device);
 	if (rc != 4) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "Failed to scan device ctrl_chan_name\n");
 		return -EINVAL;
 	}
@@ -183,13 +194,13 @@ tf_open_session(struct tf                    *tfp,
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
-			PMD_DRV_LOG(ERR,
-				    "Session is already open, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Session is already open, rc:%s\n",
+				    strerror(-rc));
 		else
-			PMD_DRV_LOG(ERR,
-				    "Open message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Open message send failed, rc:%s\n",
+				    strerror(-rc));
 
 		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
 		return rc;
@@ -202,13 +213,13 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
-	tfp->session = alloc_parms.mem_va;
+	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
 
 	/* Allocate core data for the session */
 	alloc_parms.nitems = 1;
@@ -217,9 +228,9 @@ tf_open_session(struct tf                    *tfp,
 	rc = tfp_calloc(&alloc_parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -240,12 +251,13 @@ tf_open_session(struct tf                    *tfp,
 	session->session_id.internal.device = device;
 	session->session_id.internal.fw_session_id = fw_session_id;
 
+	/* Query for Session Config
+	 */
 	rc = tf_msg_session_qcfg(tfp);
 	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%d\n",
-			    rc);
+		TFP_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
@@ -256,9 +268,9 @@ tf_open_session(struct tf                    *tfp,
 #if (TF_SHADOW == 1)
 		rc = tf_rm_shadow_db_init(tfs);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%d",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Shadow DB Initialization failed, rc:%s\n",
+				    strerror(-rc));
 		/* Add additional processing */
 #endif /* TF_SHADOW */
 	}
@@ -266,13 +278,12 @@ tf_open_session(struct tf                    *tfp,
 	/* Adjust the Session with what firmware allowed us to get */
 	rc = tf_rm_allocate_validate(tfp);
 	if (rc) {
-		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Rm allocate validate failed, rc:%s\n",
+			    strerror(-rc));
 		goto cleanup_close;
 	}
 
-	/* Setup hash seeds */
-	tf_seeds_init(session);
-
 	/* Initialize EM pool */
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		rc = tf_create_em_pool(session,
@@ -290,11 +301,11 @@ tf_open_session(struct tf                    *tfp,
 	/* Return session ID */
 	parms->session_id = session->session_id;
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session created, session_id:%d\n",
 		    parms->session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
@@ -379,8 +390,7 @@ tf_attach_session(struct tf *tfp __rte_unused,
 #if (TF_SHARED == 1)
 	int rc;
 
-	if (tfp == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	/* - Open the shared memory for the attach_chan_name
 	 * - Point to the shared session for this Device instance
@@ -389,12 +399,10 @@ tf_attach_session(struct tf *tfp __rte_unused,
 	 *   than one client of the session.
 	 */
 
-	if (tfp->session) {
-		if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-			rc = tf_msg_session_attach(tfp,
-						   parms->ctrl_chan_name,
-						   parms->session_id);
-		}
+	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_attach(tfp,
+					   parms->ctrl_chan_name,
+					   parms->session_id);
 	}
 #endif /* TF_SHARED */
 	return -1;
@@ -472,8 +480,7 @@ tf_close_session(struct tf *tfp)
 	union tf_session_id session_id;
 	int dir;
 
-	if (tfp == NULL || tfp->session == NULL)
-		return -EINVAL;
+	TF_CHECK_TFP_SESSION(tfp);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -487,9 +494,9 @@ tf_close_session(struct tf *tfp)
 		rc = tf_msg_session_close(tfp);
 		if (rc) {
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "Message send failed, rc:%d\n",
-				    rc);
+			TFP_DRV_LOG(ERR,
+				    "Message send failed, rc:%s\n",
+				    strerror(-rc));
 		}
 
 		/* Update the ref_count */
@@ -509,11 +516,11 @@ tf_close_session(struct tf *tfp)
 		tfp->session = NULL;
 	}
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "Session closed, session_id:%d\n",
 		    session_id.id);
 
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
 		    session_id.internal.domain,
 		    session_id.internal.bus,
@@ -565,27 +572,39 @@ tf_close_session_new(struct tf *tfp)
 int tf_insert_em_entry(struct tf *tfp,
 		       struct tf_insert_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
-					 (tfp->session->core_data),
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL) {
-		/* External EEM */
-		return tf_insert_eem_entry((struct tf_session *)
-					   (tfp->session->core_data),
-					   tbl_scope_cb,
-					   parms);
-	} else if (parms->mem == TF_MEM_INTERNAL) {
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM insert failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
 	return -EINVAL;
@@ -600,27 +619,44 @@ int tf_insert_em_entry(struct tf *tfp,
 int tf_delete_em_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_tbl_scope_cb     *tbl_scope_cb;
+	struct tf_session      *tfs;
+	struct tf_dev_info     *dev;
+	int rc;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	tbl_scope_cb = tbl_scope_cb_find((struct tf_session *)
-					 (tfp->session->core_data),
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tfp, parms);
-	else
-		return tf_delete_em_internal_entry(tfp, parms);
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: EM delete failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
 }
 
-/** allocate identifier resource
- *
- * Returns success or failure code.
- */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms)
 {
@@ -629,14 +665,7 @@ int tf_alloc_identifier(struct tf *tfp,
 	int id;
 	int rc;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -662,30 +691,31 @@ int tf_alloc_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: %s\n",
+		TFP_DRV_LOG(ERR, "%s: %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: identifier pool %s failure\n",
+		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	id = ba_alloc(session_pool);
 
 	if (id == BA_FAIL) {
-		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		return -ENOMEM;
@@ -763,14 +793,7 @@ int tf_free_identifier(struct tf *tfp,
 	int ba_rc;
 	struct tf_session *tfs;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -796,29 +819,31 @@ int tf_free_identifier(struct tf *tfp,
 				rc);
 		break;
 	case TF_IDENT_TYPE_L2_FUNC:
-		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
 		rc = -EOPNOTSUPP;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type));
-		rc = -EINVAL;
+		rc = -EOPNOTSUPP;
 		break;
 	}
 	if (rc) {
-		PMD_DRV_LOG(ERR, "%s: %s Identifier pool access failed\n",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s Identifier pool access failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
+			    tf_ident_2_str(parms->ident_type),
+			    strerror(-rc));
 		return rc;
 	}
 
 	ba_rc = ba_inuse(session_pool, (int)parms->id);
 
 	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_ident_2_str(parms->ident_type),
 			    parms->id);
@@ -893,21 +918,30 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index = 0;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
+	/* TEMP, due to device design. Once TCAM support is modularized, the
+	 * device should be retrieved from the session
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -916,36 +950,16 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority) {
-		for (index = session_pool->size - 1; index >= 0; index--) {
-			if (ba_inuse(session_pool,
-					  index) == BA_ENTRY_FREE) {
-				break;
-			}
-		}
-		if (ba_alloc_index(session_pool,
-				   index) == BA_FAIL) {
-			TFP_DRV_LOG(ERR,
-				    "%s: %s: ba_alloc index %d failed\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-				    index);
-			return -ENOMEM;
-		}
-	} else {
-		index = ba_alloc(session_pool);
-		if (index == BA_FAIL) {
-			TFP_DRV_LOG(ERR, "%s: %s: Out of resource\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-			return -ENOMEM;
-		}
+	index = ba_alloc(session_pool);
+	if (index == BA_FAIL) {
+		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+		return -ENOMEM;
 	}
 
+	index *= num_slice_per_row;
+
 	parms->idx = index;
 	return 0;
 }
@@ -956,26 +970,29 @@ tf_set_tcam_entry(struct tf *tfp,
 {
 	int rc;
 	int id;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row;
 
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. Once TCAM support is modularized, the
+	 * device should be retrieved from the session
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
-	/*
-	 * Each tcam send msg function should check for key sizes range
-	 */
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 parms->key_sz_in_bits,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
 
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
@@ -985,11 +1002,12 @@ tf_set_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 		   "%s: %s: Invalid or not allocated index, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
@@ -1006,21 +1024,8 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	int rc = -EOPNOTSUPP;
-
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "%s, Session info invalid\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	return rc;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+	return -EOPNOTSUPP;
 }
 
 int
@@ -1028,20 +1033,29 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
+	int index;
 	struct tf_session *tfs;
 	struct bitalloc *session_pool;
+	uint16_t num_slice_per_row = 1;
 
-	if (parms == NULL || tfp == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR, "%s: Session error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
+	/* TEMP, due to device design. Once TCAM support is modularized, the
+	 * device should be retrieved from the session
+	 */
+	enum tf_device_type device_type;
+	/* TEMP */
+	device_type = TF_DEVICE_TYPE_WH;
 
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
+				 device_type,
+				 0,
+				 &num_slice_per_row);
+	/* Error logging handled by tf_check_tcam_entry */
+	if (rc)
+		return rc;
+
 	rc = tf_rm_lookup_tcam_type_pool(tfs,
 					 parms->dir,
 					 parms->tcam_tbl_type,
@@ -1050,24 +1064,27 @@ tf_free_tcam_entry(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	rc = ba_inuse(session_pool, (int)parms->idx);
+	index = parms->idx / num_slice_per_row;
+
+	rc = ba_inuse(session_pool, index);
 	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    index);
 		return -EINVAL;
 	}
 
-	ba_free(session_pool, (int)parms->idx);
+	ba_free(session_pool, index);
 
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR, "%s: %s: Entry %d free failed",
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx);
+			    parms->idx,
+			    strerror(-rc));
 	}
 
 	return rc;
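
A worked example of the WC TCAM slice handling introduced above, assuming
TF_BITS2BYTES_WORD_ALIGN() rounds the key size up to whole 32-bit words
(the numbers are illustrative only):

	/*
	 * Whitney+ WC row = CFA_P4_WC_TCAM_SLICES_PER_ROW (2) *
	 *                   CFA_P4_WC_TCAM_SLICE_SIZE (12) = 24 bytes.
	 *
	 * key_sz_in_bits = 163 -> key_bytes = 24 (word aligned) -> accepted;
	 *   num_slice_per_row = 2, so a row allocated at n is returned as
	 *   parms->idx = n * 2, and set/free recover the row with
	 *   parms->idx / num_slice_per_row.
	 *
	 * key_sz_in_bits = 200 -> key_bytes = 28 > 24 -> rejected (-ENOTSUP).
	 */
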
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index f1ef00b30..bb456bba7 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -10,7 +10,7 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <stdio.h>
-
+#include "hcapi/hcapi_cfa.h"
 #include "tf_project.h"
 
 /**
@@ -54,6 +54,7 @@ enum tf_mem {
 #define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
 #define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
 
+
 /*
  * Helper Macros
  */
@@ -132,34 +133,40 @@ struct tf_session_version {
  */
 enum tf_device_type {
 	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
-	TF_DEVICE_TYPE_BRD2,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD3,   /**< TBD       */
-	TF_DEVICE_TYPE_BRD4,   /**< TBD       */
+	TF_DEVICE_TYPE_SR,     /**< Stingray  */
+	TF_DEVICE_TYPE_THOR,   /**< Thor      */
+	TF_DEVICE_TYPE_SR2,    /**< Stingray2 */
 	TF_DEVICE_TYPE_MAX     /**< Maximum   */
 };
 
-/** Identifier resource types
+/**
+ * Identifier resource types
  */
 enum tf_identifier_type {
-	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The L2 Context is returned from the L2 Ctxt TCAM lookup
 	 *  and can be used in WC TCAM or EM keys to virtualize further
 	 *  lookups.
 	 */
 	TF_IDENT_TYPE_L2_CTXT,
-	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	/**
+	 *  The WC profile func is returned from the L2 Ctxt TCAM lookup
 	 *  to enable virtualization of the profile TCAM.
 	 */
 	TF_IDENT_TYPE_PROF_FUNC,
-	/** The WC profile ID is included in the WC lookup key
+	/**
+	 *  The WC profile ID is included in the WC lookup key
 	 *  to enable virtualization of the WC TCAM hardware.
 	 */
 	TF_IDENT_TYPE_WC_PROF,
-	/** The EM profile ID is included in the EM lookup key
+	/**
+	 *  The EM profile ID is included in the EM lookup key
 	 *  to enable virtualization of the EM hardware. (not required for SR2
 	 *  as it has table scope)
 	 */
 	TF_IDENT_TYPE_EM_PROF,
-	/** The L2 func is included in the ILT result and from recycling to
+	/**
+	 *  The L2 func is included in the ILT result and from recycling to
 	 *  enable virtualization of further lookups.
 	 */
 	TF_IDENT_TYPE_L2_FUNC,
@@ -239,7 +246,8 @@ enum tf_tbl_type {
 
 	/* External */
 
-	/** External table type - initially 1 poolsize entries.
+	/**
+	 * External table type - initially 1 poolsize entries.
 	 * All External table types are associated with a table
 	 * scope. Internal types are not.
 	 */
@@ -279,13 +287,17 @@ enum tf_em_tbl_type {
 	TF_EM_TBL_TYPE_MAX
 };
 
-/** TruFlow Session Information
+/**
+ * TruFlow Session Information
  *
  * Structure defining a TruFlow Session, also known as a Management
  * session. This structure is initialized at time of
  * tf_open_session(). It is passed to all of the TruFlow APIs as way
  * to prescribe and isolate resources between different TruFlow ULP
  * Applications.
+ *
+ * Ownership of the elements is split between ULP and TruFlow. Please
+ * see the individual elements.
  */
 struct tf_session_info {
 	/**
@@ -355,7 +367,8 @@ struct tf_session_info {
 	uint32_t              core_data_sz_bytes;
 };
 
-/** TruFlow handle
+/**
+ * TruFlow handle
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
@@ -405,7 +418,8 @@ struct tf_session_resources {
  * tf_open_session parameters definition.
  */
 struct tf_open_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -417,7 +431,8 @@ struct tf_open_session_parms {
 	 * shared memory allocation.
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
-	/** [in] shadow_copy
+	/**
+	 * [in] shadow_copy
 	 *
 	 * Boolean controlling the use and availability of shadow
 	 * copy. Shadow copy will allow the TruFlow to keep track of
@@ -430,7 +445,8 @@ struct tf_open_session_parms {
 	 * control channel.
 	 */
 	bool shadow_copy;
-	/** [in/out] session_id
+	/**
+	 * [in/out] session_id
 	 *
 	 * Session_id is unique per session.
 	 *
@@ -441,7 +457,8 @@ struct tf_open_session_parms {
 	 * The session_id allows a session to be shared between devices.
 	 */
 	union tf_session_id session_id;
-	/** [in] device type
+	/**
+	 * [in] device type
 	 *
 	 * Device type is passed, one of Wh+, SR, Thor, SR2
 	 */
@@ -484,7 +501,8 @@ int tf_open_session_new(struct tf *tfp,
 			struct tf_open_session_parms *parms);
 
 struct tf_attach_session_parms {
-	/** [in] ctrl_chan_name
+	/**
+	 * [in] ctrl_chan_name
 	 *
 	 * String containing name of control channel interface to be
 	 * used for this session to communicate with firmware.
@@ -497,7 +515,8 @@ struct tf_attach_session_parms {
 	 */
 	char ctrl_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] attach_chan_name
+	/**
+	 * [in] attach_chan_name
 	 *
 	 * String containing name of attach channel interface to be
 	 * used for this session.
@@ -510,7 +529,8 @@ struct tf_attach_session_parms {
 	 */
 	char attach_chan_name[TF_SESSION_NAME_MAX];
 
-	/** [in] session_id
+	/**
+	 * [in] session_id
 	 *
 	 * Session_id is unique per session. For Attach the session_id
 	 * should be the session_id that was returned on the first
@@ -565,7 +585,8 @@ int tf_close_session_new(struct tf *tfp);
  *
  * @ref tf_free_identifier
  */
-/** tf_alloc_identifier parameter definition
+/**
+ * tf_alloc_identifier parameter definition
  */
 struct tf_alloc_identifier_parms {
 	/**
@@ -582,7 +603,8 @@ struct tf_alloc_identifier_parms {
 	uint16_t id;
 };
 
-/** tf_free_identifier parameter definition
+/**
+ * tf_free_identifier parameter definition
  */
 struct tf_free_identifier_parms {
 	/**
@@ -599,7 +621,8 @@ struct tf_free_identifier_parms {
 	uint16_t id;
 };
 
-/** allocate identifier resource
+/**
+ * allocate identifier resource
  *
  * TruFlow core will allocate a free id from the per identifier resource type
  * pool reserved for the session during tf_open().  No firmware is involved.
@@ -611,7 +634,8 @@ int tf_alloc_identifier(struct tf *tfp,
 int tf_alloc_identifier_new(struct tf *tfp,
 			    struct tf_alloc_identifier_parms *parms);
 
-/** free identifier resource
+/**
+ * free identifier resource
  *
  * TruFlow core will return an id back to the per identifier resource type pool
  * reserved for the session.  No firmware is involved.  During tf_close, the
@@ -639,7 +663,8 @@ int tf_free_identifier_new(struct tf *tfp,
  */
 
 
-/** tf_alloc_tbl_scope_parms definition
+/**
+ * tf_alloc_tbl_scope_parms definition
  */
 struct tf_alloc_tbl_scope_parms {
 	/**
@@ -662,7 +687,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -684,7 +709,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] Brd4 only receive table access interface id
+	 * [in] SR2 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -709,7 +734,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On Brd4 Firmware will allocate a scope ID.  On other devices, the scope
+ * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
  * divide the hash memory/buckets and records according to the device
  * device constraints based upon calculations using either the number of flows
@@ -719,7 +744,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -750,7 +775,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remains
- * (Brd4 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -758,7 +783,6 @@ int tf_alloc_tbl_scope(struct tf *tfp,
 int tf_free_tbl_scope(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms);
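
As a usage sketch of the scope APIs above (illustration only; the 'tbl_scope_id' member used to tie alloc and free together is an assumption, as it is not visible in this hunk):

#include "tf_core.h"

/* Sketch: size an EEM table scope by flow count, then release it. */
static int example_tbl_scope_cycle(struct tf *tfp)
{
	struct tf_alloc_tbl_scope_parms ap = { 0 };
	struct tf_free_tbl_scope_parms fp = { 0 };
	int rc;

	ap.rx_num_flows_in_k = 64;	/* 64K RX flows */
	ap.tx_num_flows_in_k = 64;	/* 64K TX flows */
	ap.rx_tbl_if_id = 0;		/* SR2 only */
	ap.tx_tbl_if_id = 0;		/* SR2 only */

	rc = tf_alloc_tbl_scope(tfp, &ap);
	if (rc)
		return rc;

	fp.tbl_scope_id = ap.tbl_scope_id;	/* assumed member */
	return tf_free_tbl_scope(tfp, &fp);
}
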
 
-
 /**
  * @page tcam TCAM Access
  *
@@ -771,7 +795,9 @@ int tf_free_tbl_scope(struct tf *tfp,
  * @ref tf_free_tcam_entry
  */
 
-/** tf_alloc_tcam_entry parameter definition
+
+/**
+ * tf_alloc_tcam_entry parameter definition
  */
 struct tf_alloc_tcam_entry_parms {
 	/**
@@ -799,9 +825,7 @@ struct tf_alloc_tcam_entry_parms {
 	 */
 	uint8_t *mask;
 	/**
-	 * [in] Priority of entry requested
-	 * 0: index from top i.e. highest priority first
-	 * !0: index from bottom i.e lowest priority first
+	 * [in] Priority of entry requested (definition TBD)
 	 */
 	uint32_t priority;
 	/**
@@ -819,7 +843,8 @@ struct tf_alloc_tcam_entry_parms {
 	uint16_t idx;
 };
 
-/** allocate TCAM entry
+/**
+ * allocate TCAM entry
  *
  * Allocate a TCAM entry - one of these types:
  *
@@ -844,7 +869,8 @@ struct tf_alloc_tcam_entry_parms {
 int tf_alloc_tcam_entry(struct tf *tfp,
 			struct tf_alloc_tcam_entry_parms *parms);
 
-/** tf_set_tcam_entry parameter definition
+/**
+ * tf_set_tcam_entry parameter definition
  */
 struct	tf_set_tcam_entry_parms {
 	/**
@@ -881,7 +907,8 @@ struct	tf_set_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/** set TCAM entry
+/**
+ * set TCAM entry
  *
  * Program a TCAM table entry for a TruFlow session.
  *
@@ -892,7 +919,8 @@ struct	tf_set_tcam_entry_parms {
 int tf_set_tcam_entry(struct tf	*tfp,
 		      struct tf_set_tcam_entry_parms *parms);
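
As a usage note on the alloc-then-set sequence: the allocation returns an index, which is then programmed with tf_set_tcam_entry(). A hedged sketch follows; member names other than mask/priority/idx (alloc) and result_sz_in_bits (set) are assumptions, since only those appear in this hunk.

#include "tf_core.h"

/* Sketch: allocate a TCAM slot, then program it at the returned index. */
static int example_tcam_program(struct tf *tfp, uint8_t *key,
				uint8_t *mask, uint8_t *result,
				uint16_t result_sz_in_bits)
{
	struct tf_alloc_tcam_entry_parms ap = { 0 };
	struct tf_set_tcam_entry_parms sp = { 0 };
	int rc;

	ap.key = key;			/* assumed member */
	ap.mask = mask;
	ap.priority = 0;

	rc = tf_alloc_tcam_entry(tfp, &ap);
	if (rc)
		return rc;

	sp.idx = ap.idx;		/* program the slot just allocated */
	sp.key = key;			/* assumed member */
	sp.mask = mask;			/* assumed member */
	sp.result = result;		/* assumed member */
	sp.result_sz_in_bits = result_sz_in_bits;
	return tf_set_tcam_entry(tfp, &sp);
}
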
 
-/** tf_get_tcam_entry parameter definition
+/**
+ * tf_get_tcam_entry parameter definition
  */
 struct tf_get_tcam_entry_parms {
 	/**
@@ -929,7 +957,7 @@ struct tf_get_tcam_entry_parms {
 	uint16_t result_sz_in_bits;
 };
 
-/*
+/**
  * get TCAM entry
  *
  * Read a TCAM table entry for a TruFlow session.
@@ -941,7 +969,7 @@ struct tf_get_tcam_entry_parms {
 int tf_get_tcam_entry(struct tf *tfp,
 		      struct tf_get_tcam_entry_parms *parms);
 
-/*
+/**
  * tf_free_tcam_entry parameter definition
  */
 struct tf_free_tcam_entry_parms {
@@ -963,7 +991,9 @@ struct tf_free_tcam_entry_parms {
 	uint16_t ref_cnt;
 };
 
-/*
+/**
+ * free TCAM entry
+ *
  * Free TCAM entry.
  *
  * Firmware checks to ensure the TCAM entries are owned by the TruFlow
@@ -989,6 +1019,7 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_get_tbl_entry
  */
 
+
 /**
  * tf_alloc_tbl_entry parameter definition
  */
@@ -1201,9 +1232,9 @@ int tf_get_tbl_entry(struct tf *tfp,
 		     struct tf_get_tbl_entry_parms *parms);
 
 /**
- * tf_get_bulk_tbl_entry parameter definition
+ * tf_bulk_get_tbl_entry parameter definition
  */
-struct tf_get_bulk_tbl_entry_parms {
+struct tf_bulk_get_tbl_entry_parms {
 	/**
 	 * [in] Receive or transmit direction
 	 */
@@ -1212,11 +1243,6 @@ struct tf_get_bulk_tbl_entry_parms {
 	 * [in] Type of object to get
 	 */
 	enum tf_tbl_type type;
-	/**
-	 * [in] Clear hardware entries on reads only
-	 * supported for TF_TBL_TYPE_ACT_STATS_64
-	 */
-	bool clear_on_read;
 	/**
 	 * [in] Starting index to read from
 	 */
@@ -1250,8 +1276,8 @@ struct tf_get_bulk_tbl_entry_parms {
  * Returns success or failure code. Failure will be returned if the
  * provided data buffer is too small for the data type requested.
  */
-int tf_get_bulk_tbl_entry(struct tf *tfp,
-		     struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_bulk_get_tbl_entry(struct tf *tfp,
+		     struct tf_bulk_get_tbl_entry_parms *parms);
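
The renamed bulk API fetches a contiguous index range in one firmware exchange; with the clear_on_read flag gone, counters are now read without clearing. A hedged usage sketch (the destination buffer member name is an assumption):

#include "tf_core.h"

/* Sketch: bulk-read a run of 64b stats counters in one message.
 * 'physical_mem_addr' is an assumed member; only dir/type/starting_idx/
 * num_entries are visible in this patch.
 */
static int example_bulk_stats_read(struct tf *tfp, uint64_t buf_pa)
{
	struct tf_bulk_get_tbl_entry_parms bp = { 0 };

	bp.dir = TF_DIR_RX;
	bp.type = TF_TBL_TYPE_ACT_STATS_64;
	bp.starting_idx = 0;
	bp.num_entries = 32;
	bp.physical_mem_addr = buf_pa;		/* assumed member */

	return tf_bulk_get_tbl_entry(tfp, &bp);
}
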
 
 /**
  * @page exact_match Exact Match Table
@@ -1280,7 +1306,7 @@ struct tf_insert_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1332,12 +1358,12 @@ struct tf_delete_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
 	 * [in] epoch group IDs of entry to delete
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * Two-element array of epoch IDs. (SR2 only)
 	 */
 	uint16_t *epochs;
 	/**
@@ -1366,7 +1392,7 @@ struct tf_search_em_entry_parms {
 	 */
 	uint32_t tbl_scope_id;
 	/**
-	 * [in] ID of table interface to use (Brd4 only)
+	 * [in] ID of table interface to use (SR2 only)
 	 */
 	uint32_t tbl_if_id;
 	/**
@@ -1387,7 +1413,7 @@ struct tf_search_em_entry_parms {
 	uint16_t em_record_sz_in_bits;
 	/**
 	 * [in] epoch group IDs of entry to lookup
-	 * 2 element array with 2 ids. (Brd4 only)
+	 * Two-element array of epoch IDs. (SR2 only)
 	 */
 	uint16_t *epochs;
 	/**
@@ -1415,7 +1441,7 @@ struct tf_search_em_entry_parms {
  * specified direction and table scope.
  *
  * When inserting an entry into an exact match table, the TruFlow library may
- * need to allocate a dynamic bucket for the entry (Brd4 only).
+ * need to allocate a dynamic bucket for the entry (SR2 only).
  *
  * The insertion of duplicate entries in an EM table is not permitted.	If a
  * TruFlow application can guarantee that it will never insert duplicates, it
@@ -1490,4 +1516,5 @@ int tf_delete_em_entry(struct tf *tfp,
  */
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 6aeb6fedb..1501b20d9 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -366,6 +366,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_tcam)(struct tf *tfp,
 			       struct tf_tcam_get_parms *parms);
+
+	/**
+	 * Insert EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to EM/EEM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_em_entry)(struct tf *tfp,
+				      struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to EM/EEM delete parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_em_entry)(struct tf *tfp,
+				      struct tf_delete_em_entry_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index c235976fe..f4bd95f1c 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl_type.h"
 #include "tf_tcam.h"
+#include "tf_em.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -89,4 +90,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_insert_em_entry = tf_em_insert_entry,
+	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
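
The two new callbacks are reached through the per-device ops table rather than called directly, so callers stay device-agnostic. An illustrative dispatch helper (how the ops pointer is obtained from the session is not shown in this patch and is left as a parameter):

#include <errno.h>
#include "tf_core.h"
#include "tf_device.h"

/* Illustration: route an EM insert through the bound device ops. */
static int example_dev_em_insert(struct tf *tfp,
				 const struct tf_dev_ops *ops,
				 struct tf_insert_em_entry_parms *parms)
{
	if (ops == NULL || ops->tf_dev_insert_em_entry == NULL)
		return -EOPNOTSUPP;	/* device has no EM support bound */

	return ops->tf_dev_insert_em_entry(tfp, parms);
}
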
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
index 38f7fe419..fcbbd7eca 100644
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -17,11 +17,6 @@
 
 #include "bnxt.h"
 
-/* Enable EEM table dump
- */
-#define TF_EEM_DUMP
-
-static struct tf_eem_64b_entry zero_key_entry;
 
 static uint32_t tf_em_get_key_mask(int num_entries)
 {
@@ -36,326 +31,22 @@ static uint32_t tf_em_get_key_mask(int num_entries)
 	return mask;
 }
 
-/* CRC32i support for Key0 hash */
-#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
-#define crc32(x, y) crc32i(~0, x, y)
-
-static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
-0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
-0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
-0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
-0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
-0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
-0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
-0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
-0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
-0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
-0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
-0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
-0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
-0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
-0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
-0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
-0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
-0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
-0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
-0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
-0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
-0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
-0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
-0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
-0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
-0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
-0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
-0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
-0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
-0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
-0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
-0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
-0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
-0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
-0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
-0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
-0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
-0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
-0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
-0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
-0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
-0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
-0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
-0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
-0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
-0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
-0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
-0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
-0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
-0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
-0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
-0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
-0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
-0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
-0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
-0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
-0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
-0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
-0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
-0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
-0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
-0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
-0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
-0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
-0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
-};
-
-static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
-{
-	int l;
-
-	for (l = (len - 1); l >= 0; l--)
-		crc = ucrc32(buf[l], crc);
-
-	return ~crc;
-}
-
-static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
-					  uint8_t *key,
-					  enum tf_dir dir)
-{
-	int i;
-	uint32_t index;
-	uint32_t val1, val2;
-	uint8_t temp[4];
-	uint8_t *kptr = key;
-
-	/* Do byte-wise XOR of the 52-byte HASH key first. */
-	index = *key;
-	kptr--;
-
-	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
-		index = index ^ *kptr;
-		kptr--;
-	}
-
-	/* Get seeds */
-	val1 = session->lkup_em_seed_mem[dir][index * 2];
-	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
-
-	temp[3] = (uint8_t)(val1 >> 24);
-	temp[2] = (uint8_t)(val1 >> 16);
-	temp[1] = (uint8_t)(val1 >> 8);
-	temp[0] = (uint8_t)(val1 & 0xff);
-	val1 = 0;
-
-	/* Start with seed */
-	if (!(val2 & 0x1))
-		val1 = crc32i(~val1, temp, 4);
-
-	val1 = crc32i(~val1,
-		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
-		      TF_HW_EM_KEY_MAX_SIZE);
-
-	/* End with seed */
-	if (val2 & 0x1)
-		val1 = crc32i(~val1, temp, 4);
-
-	return val1;
-}
-
-static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
-					    uint8_t *in_key)
-{
-	uint32_t val1;
-
-	val1 = hashword(((uint32_t *)in_key) + 1,
-			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
-			 lookup3_init_value);
-
-	return val1;
-}
-
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum tf_em_table_type table_type)
-{
-	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
-	void *addr = NULL;
-	struct tf_em_ctx_mem_info *ctx = &tbl_scope_cb->em_ctx_info[dir];
-
-	if (ctx == NULL)
-		return NULL;
-
-	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
-		return NULL;
-
-	if (table_type < TF_KEY0_TABLE || table_type > TF_EFC_TABLE)
-		return NULL;
-
-	/*
-	 * Use the level according to the num_level of page table
-	 */
-	level = ctx->em_tables[table_type].num_lvl - 1;
-
-	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
-
-	return addr;
-}
-
-/** Read Key table entry
- *
- * Entry is read in to entry
- */
-static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
-	return 0;
-}
-
-static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-				 struct tf_eem_64b_entry *entry,
-				 uint32_t entry_size,
-				 uint32_t index,
-				 enum tf_em_table_type table_type,
-				 enum tf_dir dir)
-{
-	void *page;
-	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
-
-	page = tf_em_get_table_page(tbl_scope_cb,
-				    dir,
-				    (index * entry_size),
-				    table_type);
-
-	if (page == NULL)
-		return -EINVAL;
-
-	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
-
-	return 0;
-}
-
-static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_eem_64b_entry *entry,
-			       uint32_t index,
-			       enum tf_em_table_type table_type,
-			       enum tf_dir dir)
-{
-	int rc;
-	struct tf_eem_64b_entry table_entry;
-
-	rc = tf_em_read_entry(tbl_scope_cb,
-			      &table_entry,
-			      TF_EM_KEY_RECORD_SIZE,
-			      index,
-			      table_type,
-			      dir);
-
-	if (rc != 0)
-		return -EINVAL;
-
-	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
-		if (entry != NULL) {
-			if (memcmp(&table_entry,
-				   entry,
-				   TF_EM_KEY_RECORD_SIZE) == 0)
-				return -EEXIST;
-		} else {
-			return -EEXIST;
-		}
-
-		return -EBUSY;
-	}
-
-	return 0;
-}
-
-static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
-				    uint8_t *in_key,
-				    struct tf_eem_64b_entry *key_entry)
+static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+				   uint8_t	       *in_key,
+				   struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
 
-	if (result->word1 & TF_LKUP_RECORD_ACT_REC_INT_MASK)
+	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
 		key_entry->hdr.pointer = result->pointer;
 	else
 		key_entry->hdr.pointer = result->pointer;
 
 	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-}
-
-/* tf_em_select_inject_table
- *
- * Returns:
- * 0 - Key does not exist in either table and can be inserted
- *		at "index" in table "table".
- * EEXIST  - Key does exist in table at "index" in table "table".
- * TF_ERR     - Something went horribly wrong.
- */
-static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
-					  enum tf_dir dir,
-					  struct tf_eem_64b_entry *entry,
-					  uint32_t key0_hash,
-					  uint32_t key1_hash,
-					  uint32_t *index,
-					  enum tf_em_table_type *table)
-{
-	int key0_entry;
-	int key1_entry;
-
-	/*
-	 * Check KEY0 table.
-	 */
-	key0_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key0_hash,
-					 TF_KEY0_TABLE,
-					 dir);
 
-	/*
-	 * Check KEY1 table.
-	 */
-	key1_entry = tf_em_entry_exists(tbl_scope_cb,
-					 entry,
-					 key1_hash,
-					 TF_KEY1_TABLE,
-					 dir);
-
-	if (key0_entry == -EEXIST) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return -EEXIST;
-	} else if (key1_entry == -EEXIST) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return -EEXIST;
-	} else if (key0_entry == 0) {
-		*table = TF_KEY0_TABLE;
-		*index = key0_hash;
-		return 0;
-	} else if (key1_entry == 0) {
-		*table = TF_KEY1_TABLE;
-		*index = key1_hash;
-		return 0;
-	}
-
-	return -EINVAL;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
 }
 
 /** insert EEM entry API
@@ -368,20 +59,24 @@ static int tf_em_select_inject_table(struct tf_tbl_scope_cb *tbl_scope_cb,
  *   0
  *   TF_ERR_EM_DUP  - key is already in table
  */
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms)
+static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			       struct tf_insert_em_entry_parms *parms)
 {
 	uint32_t	   mask;
 	uint32_t	   key0_hash;
 	uint32_t	   key1_hash;
 	uint32_t	   key0_index;
 	uint32_t	   key1_index;
-	struct tf_eem_64b_entry key_entry;
+	struct cfa_p4_eem_64b_entry key_entry;
 	uint32_t	   index;
-	enum tf_em_table_type table_type;
+	enum hcapi_cfa_em_table_type table_type;
 	uint32_t	   gfid;
-	int		   num_of_entry;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
 
 	/* Get mask to use on hash */
 	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
@@ -389,72 +84,84 @@ int tf_insert_eem_entry(struct tf_session *session,
 	if (!mask)
 		return -EINVAL;
 
-	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
 
-	key0_hash = tf_em_lkup_get_crc32_hash(session,
-					      &parms->key[num_of_entry] - 1,
-					      parms->dir);
-	key0_index = key0_hash & mask;
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
 
-	key1_hash =
-	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
-				       parms->key);
+	key0_index = key0_hash & mask;
 	key1_index = key1_hash & mask;
 
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
 	/*
 	 * Use the "result" arg to populate all of the key entry then
 	 * store the byte swapped "raw" entry in a local copy ready
 	 * for insertion in to the table.
 	 */
-	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
 				((uint8_t *)parms->key),
 				&key_entry);
 
 	/*
-	 * Find which table to use
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
 	 */
-	if (tf_em_select_inject_table(tbl_scope_cb,
-				      parms->dir,
-				      &key_entry,
-				      key0_index,
-				      key1_index,
-				      &index,
-				      &table_type) == 0) {
-		if (table_type == TF_KEY0_TABLE) {
-			TF_SET_GFID(gfid,
-				    key0_index,
-				    TF_KEY0_TABLE);
-		} else {
-			TF_SET_GFID(gfid,
-				    key1_index,
-				    TF_KEY1_TABLE);
-		}
-
-		/*
-		 * Inject
-		 */
-		if (tf_em_write_entry(tbl_scope_cb,
-				      &key_entry,
-				      TF_EM_KEY_RECORD_SIZE,
-				      index,
-				      table_type,
-				      parms->dir) == 0) {
-			TF_SET_FLOW_ID(parms->flow_id,
-				       gfid,
-				       TF_GFID_TABLE_EXTERNAL,
-				       parms->dir);
-			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-						     0,
-						     0,
-						     0,
-						     index,
-						     0,
-						     table_type);
-			return 0;
-		}
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
 	}
 
-	return -EINVAL;
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
 }
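
The rewritten insert above reduces to: one 64-bit hcapi_cfa_key_hash() result split into the KEY0 (high 32 bits) and KEY1 (low 32 bits) indices, then an ADD hw-op tried on KEY0 and, only if that fails, on KEY1. The skeleton of that fallback, with the per-table offset bookkeeping elided, looks like this (illustration only):

#include "hcapi/hcapi_cfa_defs.h"

/* Condensed view of the KEY0-then-KEY1 insert above (illustration only).
 * The real path recomputes key_obj->offset for the KEY1 index before the
 * second attempt; that detail is omitted here.
 */
static int example_two_table_add(struct hcapi_cfa_key_tbl *key0_tbl,
				 struct hcapi_cfa_key_tbl *key1_tbl,
				 struct hcapi_cfa_key_data *key_obj,
				 struct hcapi_cfa_key_loc *key_loc)
{
	struct hcapi_cfa_hwop op;
	int rc;

	op.opcode = HCAPI_CFA_HWOPS_ADD;

	rc = hcapi_cfa_key_hw_op(&op, key0_tbl, key_obj, key_loc);
	if (rc == 0)
		return 0;	/* landed in KEY0 */

	return hcapi_cfa_key_hw_op(&op, key1_tbl, key_obj, key_loc);
}
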
 
 /**
@@ -463,8 +170,8 @@ int tf_insert_eem_entry(struct tf_session *session,
  *  returns:
  *     0 - Success
  */
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms)
+static int tf_insert_em_internal_entry(struct tf                       *tfp,
+				       struct tf_insert_em_entry_parms *parms)
 {
 	int       rc;
 	uint32_t  gfid;
@@ -494,7 +201,7 @@ int tf_insert_em_internal_entry(struct tf *tfp,
 	if (rc != 0)
 		return -1;
 
-	TFP_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO,
 		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
 		   index * TF_SESSION_EM_ENTRY_SIZE,
 		   rptr_index,
@@ -527,8 +234,8 @@ int tf_insert_em_internal_entry(struct tf *tfp,
  * 0
  * -EINVAL
  */
-int tf_delete_em_internal_entry(struct tf *tfp,
-				struct tf_delete_em_entry_parms *parms)
+static int tf_delete_em_internal_entry(struct tf                       *tfp,
+				       struct tf_delete_em_entry_parms *parms)
 {
 	int rc;
 	struct tf_session *session =
@@ -558,46 +265,96 @@ int tf_delete_em_internal_entry(struct tf *tfp,
  *   0
  *   TF_NO_EM_MATCH - entry not found
  */
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms)
+static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_delete_em_entry_parms *parms)
 {
-	struct tf_session	   *session;
-	struct tf_tbl_scope_cb	   *tbl_scope_cb;
-	enum tf_em_table_type hash_type;
+	enum hcapi_cfa_em_table_type hash_type;
 	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
 
-	if (parms == NULL)
+	if (parms->flow_handle == 0)
 		return -EINVAL;
 
-	session = (struct tf_session *)tfp->session->core_data;
-	if (session == NULL)
-		return -EINVAL;
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
 
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		return -EINVAL;
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
+							  TF_KEY0_TABLE :
+							  TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc)
+		return rc;
 
-	if (parms->flow_handle == 0)
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
+	}
 
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+	/* Process the EM entry per Table Scope type */
+	if (parms->mem == TF_MEM_EXTERNAL)
+		/* External EEM */
+		return tf_insert_eem_entry
+			(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		/* Internal EM */
+		return tf_insert_em_internal_entry(tfp, parms);
 
-	if (tf_em_entry_exists(tbl_scope_cb,
-			       NULL,
-			       index,
-			       hash_type,
-			       parms->dir) == -EEXIST) {
-		tf_em_write_entry(tbl_scope_cb,
-				  &zero_key_entry,
-				  TF_EM_KEY_RECORD_SIZE,
-				  index,
-				  hash_type,
-				  parms->dir);
+	return -EINVAL;
+}
 
-		return 0;
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
 	}
+	if (parms->mem == TF_MEM_EXTERNAL)
+		return tf_delete_eem_entry(tbl_scope_cb, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		return tf_delete_em_internal_entry(tfp, parms);
 
 	return -EINVAL;
 }
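
With the two wrappers above, a caller fills a single parms block and tf_em_insert_entry()/tf_em_delete_entry() pick the internal-EM or external-EEM path from parms->mem. A hedged usage sketch (only members referenced in this file are set; the pointer member types are assumptions):

#include "tf_em.h"

/* Sketch: insert one external EM entry through the new wrapper. */
static int example_em_insert(struct tf *tfp, uint32_t tbl_scope_id,
			     uint8_t *key, uint8_t *record)
{
	struct tf_insert_em_entry_parms ip = { 0 };

	ip.dir = TF_DIR_RX;
	ip.mem = TF_MEM_EXTERNAL;	/* EEM; TF_MEM_INTERNAL for EM */
	ip.tbl_scope_id = tbl_scope_id;
	ip.key = key;			/* TF_HW_EM_KEY_MAX_SIZE + 4 bytes */
	ip.em_record = record;

	/* On success the wrapper fills ip.flow_id and ip.flow_handle. */
	return tf_em_insert_entry(tfp, &ip);
}
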
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index c1805df73..2262ae7cc 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,13 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+#define SUPPORT_CFA_HW_P4 1
+#define SUPPORT_CFA_HW_P58 0
+#define SUPPORT_CFA_HW_P59 0
+#define SUPPORT_CFA_HW_ALL 0
+
+#include "hcapi/hcapi_cfa_defs.h"
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -26,56 +33,15 @@
 #define TF_EM_INTERNAL_INDEX_MASK 0xFFFC
 #define TF_EM_INTERNAL_ENTRY_MASK  0x3
 
-/** EEM Entry header
- *
- */
-struct tf_eem_entry_hdr {
-	uint32_t pointer;
-	uint32_t word1;  /*
-			  * The header is made up of two words,
-			  * this is the first word. This field has multiple
-			  * subfields, there is no suitable single name for
-			  * it so just going with word1.
-			  */
-#define TF_LKUP_RECORD_VALID_SHIFT 31
-#define TF_LKUP_RECORD_VALID_MASK 0x80000000
-#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
-#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
-#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
-#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
-#define TF_LKUP_RECORD_RESERVED_SHIFT 17
-#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
-#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
-#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
-#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
-#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
-#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
-#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
-#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
-#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
-#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
-};
-
-/** EEM Entry
- *  Each EEM entry is 512-bit (64-bytes)
- */
-struct tf_eem_64b_entry {
-	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
-	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
-};
-
 /** EM Entry
  *  Each EM entry is 512-bit (64-bytes) but ordered differently to
  *  EEM.
  */
 struct tf_em_64b_entry {
 	/** Header is 8 bytes long */
-	struct tf_eem_entry_hdr hdr;
+	struct cfa_p4_eem_entry_hdr hdr;
 	/** Key is 448 bits - 56 bytes */
-	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
 /**
@@ -127,22 +93,14 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
 					  uint32_t tbl_scope_id);
 
-int tf_insert_eem_entry(struct tf_session *session,
-			struct tf_tbl_scope_cb *tbl_scope_cb,
-			struct tf_insert_em_entry_parms *parms);
-
-int tf_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *parms);
-
-int tf_delete_eem_entry(struct tf *tfp,
-			struct tf_delete_em_entry_parms *parms);
-
-int tf_delete_em_internal_entry(struct tf                       *tfp,
-				struct tf_delete_em_entry_parms *parms);
-
 void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   enum tf_dir dir,
 			   uint32_t offset,
-			   enum tf_em_table_type table_type);
+			   enum hcapi_cfa_em_table_type table_type);
+
+int tf_em_insert_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
 
+int tf_em_delete_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index e08a96f23..60274eb35 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -183,6 +183,10 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 	tfp_free(buf->va_addr);
 }
 
+/**
+ * NEW HWRM direct messages
+ */
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -1259,8 +1263,9 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
 	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength = (em_result->hdr.word1 & TF_LKUP_RECORD_STRENGTH_MASK) >>
-		TF_LKUP_RECORD_STRENGTH_SHIFT;
+	req.strength =
+		(em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
 	req.em_key_bitlen = em_parms->key_sz_in_bits;
 	req.action_ptr = em_result->hdr.pointer;
 	req.em_record_idx = *rptr_index;
@@ -1436,22 +1441,20 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 }
 
 int
-tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *params)
+tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *params)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_bulk_input req = { 0 };
-	struct tf_tbl_type_get_bulk_output resp = { 0 };
+	struct tf_tbl_type_bulk_get_input req = { 0 };
+	struct tf_tbl_type_bulk_get_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	int data_size = 0;
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16((params->dir) |
-		((params->clear_on_read) ?
-		 TF_TBL_TYPE_GET_BULK_INPUT_FLAGS_CLEAR_ON_READ : 0x0));
+	req.flags = tfp_cpu_to_le_16(params->dir);
 	req.type = tfp_cpu_to_le_32(params->type);
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
@@ -1462,7 +1465,7 @@ tf_msg_get_bulk_tbl_entry(struct tf *tfp,
 	MSG_PREP(parms,
 		 TF_KONG_MB,
 		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET_BULK,
+		 HWRM_TFT_TBL_TYPE_BULK_GET,
 		 req,
 		 resp);
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 06f52ef00..1dad2b9fb 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -338,7 +338,7 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  * Returns:
  *  0 on Success else internal Truflow error
  */
-int tf_msg_get_bulk_tbl_entry(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms);
+int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 9b7f5a069..b7b445102 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -23,29 +23,27 @@
 					    * IDs
 					    */
 #define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        256      /*  Number slices per row in WC
-					    * TCAM. A slices is a WC TCAM entry.
-					    */
+#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
 #define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
 #define TF_NUM_METER             1024      /* < Number of meter instances */
 #define TF_NUM_MIRROR               2      /* < Number of mirror instances */
 #define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
 
-/* Wh+/Brd2 specific HW resources */
+/* Wh+/SR specific HW resources */
 #define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
 					    * entries
 					    */
 
-/* Brd2/Brd4 specific HW resources */
+/* SR/SR2 specific HW resources */
 #define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
 
 
-/* Brd3, Brd4 common HW resources */
+/* Thor, SR2 common HW resources */
 #define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
 					    * templates
 					    */
 
-/* Brd4 specific HW resources */
+/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
 #define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
 #define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
@@ -149,10 +147,11 @@
 #define TF_RSVD_METER_INST_END_IDX_TX             0
 
 /* Mirror */
-#define TF_RSVD_MIRROR_RX                         1
+/* Not yet fully supported in the infra */
+#define TF_RSVD_MIRROR_RX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
 #define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         1
+#define TF_RSVD_MIRROR_TX                         0
 #define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
 #define TF_RSVD_MIRROR_END_IDX_TX                 0
 
@@ -501,13 +500,13 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_METER_INST,
 	TF_RESC_TYPE_HW_MIRROR,
 	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/Brd2 specific HW resources */
+	/* Wh+/SR specific HW resources */
 	TF_RESC_TYPE_HW_SP_TCAM,
-	/* Brd2/Brd4 specific HW resources */
+	/* SR/SR2 specific HW resources */
 	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Brd3, Brd4 common HW resources */
+	/* Thor, SR2 common HW resources */
 	TF_RESC_TYPE_HW_FKB,
-	/* Brd4 specific HW resources */
+	/* SR2 specific HW resources */
 	TF_RESC_TYPE_HW_TBL_SCOPE,
 	TF_RESC_TYPE_HW_EPOCH0,
 	TF_RESC_TYPE_HW_EPOCH1,
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 2264704d2..b6fe2f1ad 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -14,6 +14,7 @@
 #include "tf_resources.h"
 #include "tf_msg.h"
 #include "bnxt.h"
+#include "tfp.h"
 
 /**
  * Internal macro to perform HW resource allocation check between what
@@ -329,13 +330,13 @@ tf_rm_print_hw_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors HW\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_hw_2_str(i),
 				    hw_query->hw_query[i].max,
 				    tf_rm_rsvd_hw_value(dir, i));
@@ -359,13 +360,13 @@ tf_rm_print_sram_qcaps_error(enum tf_dir dir,
 {
 	int i;
 
-	PMD_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	PMD_DRV_LOG(ERR, "  Elements:\n");
+	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
+	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	TFP_DRV_LOG(ERR, "  Elements:\n");
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (*error_flag & 1 << i)
-			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
 				    tf_hcapi_sram_2_str(i),
 				    sram_query->sram_query[i].max,
 				    tf_rm_rsvd_sram_value(dir, i));
@@ -1700,7 +1701,7 @@ tf_rm_hw_alloc_validate(enum tf_dir dir,
 
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed id:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1727,7 +1728,7 @@ tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
 
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				"%s, Alloc failed idx:%d expect:%d got:%d\n",
 				tf_dir_2_str(dir),
 				i,
@@ -1820,19 +1821,22 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, HW QCAPS validation failed,"
+			"error_flag:0x%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
 		goto cleanup;
 	}
@@ -1845,9 +1849,10 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1857,15 +1862,17 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, HW Resource validation failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
 
@@ -1903,19 +1910,22 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+		TFP_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed,"
+			"error_flag:%x, rc:%s\n",
 			tf_dir_2_str(dir),
-			error_flag);
+			error_flag,
+			strerror(-rc));
 		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
 		goto cleanup;
 	}
@@ -1931,9 +1941,10 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 					    sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1943,15 +1954,18 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
 	if (rc) {
 		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed\n",
-			    tf_dir_2_str(dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed,"
+			    " rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
 		goto cleanup;
 	}
 
 	return 0;
 
  cleanup:
+
 	return -1;
 }
 
@@ -2177,7 +2191,7 @@ tf_rm_hw_to_flush(struct tf_session *tfs,
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
 		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
 	} else {
-		PMD_DRV_LOG(ERR, "%s: TBL_SCOPE free_cnt:%d, entries:%d\n",
+		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
 			    tf_dir_2_str(dir),
 			    free_cnt,
 			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
@@ -2538,8 +2552,8 @@ tf_rm_log_hw_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
 		if (hw_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_hw_2_str(i));
 	}
@@ -2564,8 +2578,8 @@ tf_rm_log_sram_flush(enum tf_dir dir,
 	 */
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
 		if (sram_entries[i].stride != 0)
-			PMD_DRV_LOG(ERR,
-				    "%s: %s was not cleaned up\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, %s was not cleaned up\n",
 				    tf_dir_2_str(dir),
 				    tf_hcapi_sram_2_str(i));
 	}
@@ -2777,9 +2791,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering HW resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering HW resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
@@ -2789,9 +2804,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, HW flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2805,9 +2821,10 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = -ENOTEMPTY;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, lingering SRAM resources, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
@@ -2818,9 +2835,10 @@ tf_rm_close(struct tf *tfp)
 			if (rc) {
 				rc_close = rc;
 				/* Log error */
-				PMD_DRV_LOG(ERR,
-					    "%s, HW flush failed\n",
-					    tf_dir_2_str(i));
+				TFP_DRV_LOG(ERR,
+					    "%s, HW flush failed, rc:%s\n",
+					    tf_dir_2_str(i),
+					    strerror(-rc));
 			}
 		}
 
@@ -2828,18 +2846,20 @@ tf_rm_close(struct tf *tfp)
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, HW free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, HW free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 
 		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
 		if (rc) {
 			rc_close = rc;
 			/* Log error */
-			PMD_DRV_LOG(ERR,
-				    "%s, SRAM free failed\n",
-				    tf_dir_2_str(i));
+			TFP_DRV_LOG(ERR,
+				    "%s, SRAM free failed, rc:%s\n",
+				    tf_dir_2_str(i),
+				    strerror(-rc));
 		}
 	}
 
@@ -2890,14 +2910,14 @@ tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Tcam type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "%s:, Tcam type lookup failed, type:%d\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, Tcam type lookup failed, type:%d\n",
 			    tf_dir_2_str(dir),
 			    type);
 		return rc;
@@ -3057,15 +3077,15 @@ tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
 	}
 
 	if (rc == -EOPNOTSUPP) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type not supported, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type not supported, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	} else if (rc == -1) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Table type lookup failed, type:%d\n",
-			    dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Table type lookup failed, type:%d\n",
+			    tf_dir_2_str(dir),
 			    type);
 		return rc;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 35a7cfab5..a68335304 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -23,6 +23,7 @@
 #include "bnxt.h"
 #include "tf_resources.h"
 #include "tf_rm.h"
+#include "stack.h"
 #include "tf_common.h"
 
 #define PTU_PTE_VALID          0x1UL
@@ -53,14 +54,14 @@
  *   Pointer to the page table to free
  */
 static void
-tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
 {
 	uint32_t i;
 
 	for (i = 0; i < tp->pg_count; i++) {
 		if (!tp->pg_va_tbl[i]) {
-			PMD_DRV_LOG(WARNING,
-				    "No map for page %d table %016" PRIu64 "\n",
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
 				    i,
 				    (uint64_t)(uintptr_t)tp);
 			continue;
@@ -84,15 +85,14 @@ tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
  *   Pointer to the EM table to free
  */
 static void
-tf_em_free_page_table(struct tf_em_table *tbl)
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int i;
 
 	for (i = 0; i < tbl->num_lvl; i++) {
 		tp = &tbl->pg_tbl[i];
-
-		PMD_DRV_LOG(INFO,
+		TFP_DRV_LOG(INFO,
 			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
 			   TF_EM_PAGE_SIZE,
 			    i,
@@ -124,7 +124,7 @@ tf_em_free_page_table(struct tf_em_table *tbl)
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
 		   uint32_t pg_count,
 		   uint32_t pg_size)
 {
@@ -183,9 +183,9 @@ tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
  *   -ENOMEM - Out of memory
  */
 static int
-tf_em_alloc_page_table(struct tf_em_table *tbl)
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp;
 	int rc = 0;
 	int i;
 	uint32_t j;
@@ -197,14 +197,15 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
 					tbl->page_cnt[i],
 					TF_EM_PAGE_SIZE);
 		if (rc) {
-			PMD_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d\n",
-				i);
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
 			goto cleanup;
 		}
 
 		for (j = 0; j < tp->pg_count; j++) {
-			PMD_DRV_LOG(INFO,
+			TFP_DRV_LOG(INFO,
 				"EEM: Allocated page table: size %u lvl %d cnt"
 				" %u VA:%p PA:%p\n",
 				TF_EM_PAGE_SIZE,
@@ -234,8 +235,8 @@ tf_em_alloc_page_table(struct tf_em_table *tbl)
  *   Flag controlling if the page table is last
  */
 static void
-tf_em_link_page_table(struct tf_em_page_tbl *tp,
-		      struct tf_em_page_tbl *tp_next,
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
 		      bool set_pte_last)
 {
 	uint64_t *pg_pa = tp_next->pg_pa_tbl;
@@ -270,10 +271,10 @@ tf_em_link_page_table(struct tf_em_page_tbl *tp,
  *   Pointer to EM page table
  */
 static void
-tf_em_setup_page_table(struct tf_em_table *tbl)
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 {
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
 	bool set_pte_last = 0;
 	int i;
 
@@ -415,7 +416,7 @@ tf_em_size_page_tbls(int max_lvl,
  *   - ENOMEM - Out of memory
  */
 static int
-tf_em_size_table(struct tf_em_table *tbl)
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
 {
 	uint64_t num_data_pages;
 	uint32_t *page_cnt;
@@ -456,11 +457,10 @@ tf_em_size_table(struct tf_em_table *tbl)
 					  tbl->num_entries,
 					  &num_data_pages);
 	if (max_lvl < 0) {
-		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		PMD_DRV_LOG(WARNING,
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
 			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type,
-			    (uint64_t)num_entries * tbl->entry_size,
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
 			    TF_EM_PAGE_SIZE);
 		return -ENOMEM;
 	}
@@ -474,8 +474,8 @@ tf_em_size_table(struct tf_em_table *tbl)
 	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
 				page_cnt);
 
-	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	PMD_DRV_LOG(INFO,
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
 		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
 		    max_lvl + 1,
 		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
@@ -504,8 +504,9 @@ tf_em_ctx_unreg(struct tf *tfp,
 		struct tf_tbl_scope_cb *tbl_scope_cb,
 		int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 
 	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
@@ -539,8 +540,9 @@ tf_em_ctx_reg(struct tf *tfp,
 	      struct tf_tbl_scope_cb *tbl_scope_cb,
 	      int dir)
 {
-	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
 	int rc = 0;
 	int i;
 
@@ -601,7 +603,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 					TF_MEGABYTE) / (key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
 				    "%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
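
To put numbers on the sizing above: assuming key_b is the 64 B key record from tf_em.h and action_b a 16 B action record (both assumptions here), a 128 MB request yields 128 * 2^20 / 80 = 1,677,721 entries, and the doubling loop (whose condition falls outside this hunk) then appears to round that up to the next power of two before the TF_EM_MIN/MAX_ENTRIES checks. A small illustrative helper, assuming the TF_* sizing constants from the driver headers:

#include <errno.h>
#include <stdint.h>

/* Illustration only: derive a power-of-two flow count from a memory
 * budget, mirroring what the validation above appears to do.  Record
 * sizes are caller-supplied assumptions.
 */
static int example_size_flows(uint32_t mem_size_in_mb, uint32_t key_b,
			      uint32_t action_b, uint32_t *out_cnt)
{
	uint32_t num_entries;
	uint32_t cnt = TF_EM_MIN_ENTRIES;

	num_entries = (mem_size_in_mb * TF_MEGABYTE) / (key_b + action_b);
	if (num_entries < TF_EM_MIN_ENTRIES)
		return -EINVAL;

	while (cnt < num_entries && cnt <= TF_EM_MAX_ENTRIES)
		cnt *= 2;

	if (cnt > TF_EM_MAX_ENTRIES)
		return -EINVAL;

	*out_cnt = cnt;
	return 0;
}
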
@@ -613,7 +615,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
 				    "%u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -625,7 +627,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx flows "
 				    "requested:%u max:%u\n",
 				    parms->rx_num_flows_in_k * TF_KILOBYTE,
@@ -642,7 +644,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Rx requested: %u\n",
 				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -658,7 +660,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			(key_b + action_b);
 
 		if (num_entries < TF_EM_MIN_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Insufficient memory requested:%uMB\n",
 				    parms->rx_mem_size_in_mb);
 			return -EINVAL;
@@ -670,7 +672,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -682,7 +684,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 		    TF_EM_MIN_ENTRIES ||
 		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
 		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx flows "
 				    "requested:%u max:%u\n",
 				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
@@ -696,7 +698,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 			cnt *= 2;
 
 		if (cnt > TF_EM_MAX_ENTRIES) {
-			PMD_DRV_LOG(ERR,
+			TFP_DRV_LOG(ERR,
 				    "EEM: Invalid number of Tx requested: %u\n",
 		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
 			return -EINVAL;
@@ -705,7 +707,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 	if (parms->rx_num_flows_in_k != 0 &&
 	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Rx key size required: %u\n",
 			    (parms->rx_max_key_sz_in_bits));
 		return -EINVAL;
@@ -713,7 +715,7 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 	if (parms->tx_num_flows_in_k != 0 &&
 	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		PMD_DRV_LOG(ERR,
+		TFP_DRV_LOG(ERR,
 			    "EEM: Tx key size required: %u\n",
 			    (parms->tx_max_key_sz_in_bits));
 		return -EINVAL;
@@ -795,11 +797,10 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 
 	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
 	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -817,9 +818,9 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -833,11 +834,11 @@ tf_set_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Set failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -891,9 +892,9 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 	/* Verify that the entry has been previously allocated */
 	id = ba_inuse(session_pool, index);
 	if (id != 1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -EINVAL;
@@ -907,11 +908,11 @@ tf_get_tbl_entry_internal(struct tf *tfp,
 				  parms->data,
 				  parms->idx);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Get failed, type:%d, rc:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    rc);
+			    strerror(-rc));
 	}
 
 	return rc;
@@ -932,8 +933,8 @@ tf_get_tbl_entry_internal(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 static int
-tf_get_bulk_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc;
 	int id;
@@ -975,7 +976,7 @@ tf_get_bulk_tbl_entry_internal(struct tf *tfp,
 	}
 
 	/* Get the entry */
-	rc = tf_msg_get_bulk_tbl_entry(tfp, parms);
+	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1006,10 +1007,9 @@ static int
 tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
 			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Alloc with search not supported\n",
-		    parms->dir);
-
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Alloc with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1032,9 +1032,9 @@ static int
 tf_free_tbl_entry_shadow(struct tf_session *tfs,
 			 struct tf_free_tbl_entry_parms *parms)
 {
-	PMD_DRV_LOG(ERR,
-		    "dir:%d, Entry Free with search not supported\n",
-		    parms->dir);
+	TFP_DRV_LOG(ERR,
+		    "%s, Entry Free with search not supported\n",
+		    tf_dir_2_str(parms->dir));
 
 	return -EOPNOTSUPP;
 }
@@ -1074,8 +1074,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	parms.alignment = 0;
 
 	if (tfp_calloc(&parms) != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
-			    dir, strerror(-ENOMEM));
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
 		return -ENOMEM;
 	}
 
@@ -1084,8 +1084,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	rc = stack_init(num_entries, parms.mem_va, pool);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 
@@ -1101,13 +1101,13 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 	for (i = 0; i < num_entries; i++) {
 		rc = stack_push(pool, j);
 		if (rc != 0) {
-			PMD_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
 				    tf_dir_2_str(dir), strerror(-rc));
 			goto cleanup;
 		}
 
 		if (j < 0) {
-			PMD_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
+			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
 				    dir, j);
 			goto cleanup;
 		}
@@ -1116,8 +1116,8 @@ tf_create_tbl_pool_external(enum tf_dir dir,
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
-			    dir, strerror(-rc));
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
 		goto cleanup;
 	}
 	return 0;
@@ -1168,18 +1168,7 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1188,9 +1177,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-					"%s, table scope not allocated\n",
-					tf_dir_2_str(parms->dir));
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1200,9 +1189,9 @@ tf_alloc_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_pop(pool, &index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type);
 		return rc;
 	}
@@ -1233,18 +1222,7 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	struct bitalloc *session_pool;
 	struct tf_session *tfs;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1254,11 +1232,10 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_MIRROR_CONFIG &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1276,9 +1253,9 @@ tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
 	if (id == -1) {
 		free_cnt = ba_free_count(session_pool);
 
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Allocation failed, type:%d, free:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d, free:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   free_cnt);
 		return -ENOMEM;
@@ -1323,18 +1300,7 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct stack *pool;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1343,9 +1309,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
@@ -1355,9 +1321,9 @@ tf_free_tbl_entry_pool_external(struct tf *tfp,
 	rc = stack_push(pool, index);
 
 	if (rc != 0) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 	}
@@ -1386,18 +1352,7 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	struct tf_session *tfs;
 	uint32_t index;
 
-	/* Check parameters */
-	if (tfp == NULL || parms == NULL) {
-		PMD_DRV_LOG(ERR, "Invalid parameters\n");
-		return -EINVAL;
-	}
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
@@ -1408,9 +1363,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
 	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
 	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Type not supported, type:%d\n",
-			    parms->dir,
+		TFP_DRV_LOG(ERR,
+			    "%s, Type not supported, type:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type);
 		return -EOPNOTSUPP;
 	}
@@ -1439,9 +1394,9 @@ tf_free_tbl_entry_pool_internal(struct tf *tfp,
 	/* Check if element was indeed allocated */
 	id = ba_inuse_free(session_pool, index);
 	if (id == -1) {
-		PMD_DRV_LOG(ERR,
-		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   parms->dir,
+		TFP_DRV_LOG(ERR,
+		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
 		   parms->type,
 		   index);
 		return -ENOMEM;
@@ -1485,8 +1440,10 @@ tf_free_eem_tbl_scope_cb(struct tf *tfp,
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 parms->tbl_scope_id);
 
-	if (tbl_scope_cb == NULL)
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
 		return -EINVAL;
+	}
 
 	/* Free Table control block */
 	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
@@ -1516,23 +1473,17 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 	int rc;
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_table *em_tables;
+	struct hcapi_cfa_em_table *em_tables;
 	int index;
 	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 
-	/* check parameters */
-	if (parms == NULL || tfp->session == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
-
 	session = (struct tf_session *)tfp->session->core_data;
 
 	/* Get Table Scope control block from the session pool */
 	index = ba_alloc(session->tbl_scope_pool_rx);
 	if (index == -1) {
-		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
 			    "Control Block\n");
 		return -ENOMEM;
 	}
@@ -1547,8 +1498,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				     dir,
 				     &tbl_scope_cb->em_caps[dir]);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"EEM: Unable to query for EEM capability\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 	}
@@ -1565,8 +1518,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 		 */
 		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup;
 		}
 
@@ -1580,8 +1535,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				   parms->hw_flow_cache_flush_timer,
 				   dir);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"TBL: Unable to configure EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1590,8 +1547,10 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
 
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware\n");
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
 			goto cleanup_full;
 		}
 
@@ -1604,9 +1563,9 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 				    em_tables[TF_RECORD_TABLE].num_entries,
 				    em_tables[TF_RECORD_TABLE].entry_size);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "%d TBL: Unable to allocate idx pools %s\n",
-				    dir,
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup_full;
 		}
@@ -1634,13 +1593,12 @@ tf_set_tbl_entry(struct tf *tfp,
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct tf_session *session;
 
-	if (tfp == NULL || parms == NULL || parms->data == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
 		return -EINVAL;
 	}
 
@@ -1654,9 +1612,9 @@ tf_set_tbl_entry(struct tf *tfp,
 		tbl_scope_id = parms->tbl_scope_id;
 
 		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Table scope not allocated\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Table scope not allocated\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1665,18 +1623,21 @@ tf_set_tbl_entry(struct tf *tfp,
 		 */
 		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
 
-		if (tbl_scope_cb == NULL)
-			return -EINVAL;
+		if (tbl_scope_cb == NULL) {
+			TFP_DRV_LOG(ERR,
+				    "%s, table scope error\n",
+				    tf_dir_2_str(parms->dir));
+				return -EINVAL;
+		}
 
 		/* External table, implicitly the Action table */
-		base_addr = tf_em_get_table_page(tbl_scope_cb,
-						 parms->dir,
-						 offset,
-						 TF_RECORD_TABLE);
+		base_addr = (void *)(uintptr_t)
+		hcapi_get_table_page(&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE], offset);
+
 		if (base_addr == NULL) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Base address lookup failed\n",
-				    parms->dir);
+			TFP_DRV_LOG(ERR,
+				    "%s, Base address lookup failed\n",
+				    tf_dir_2_str(parms->dir));
 			return -EINVAL;
 		}
 
@@ -1688,11 +1649,11 @@ tf_set_tbl_entry(struct tf *tfp,
 		/* Internal table type processing */
 		rc = tf_set_tbl_entry_internal(tfp, parms);
 		if (rc) {
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Set failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Set failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 		}
 	}
 
@@ -1706,31 +1667,24 @@ tf_get_tbl_entry(struct tf *tfp,
 {
 	int rc = 0;
 
-	if (tfp == NULL || parms == NULL)
-		return -EINVAL;
-
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 
 	if (parms->type == TF_TBL_TYPE_EXT) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, External table type not supported\n",
-			    parms->dir);
+		/* Not supported, yet */
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported\n",
+			    tf_dir_2_str(parms->dir));
 
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
 		rc = tf_get_tbl_entry_internal(tfp, parms);
 		if (rc)
-			PMD_DRV_LOG(ERR,
-				    "dir:%d, Get failed, type:%d, rc:%d\n",
-				    parms->dir,
+			TFP_DRV_LOG(ERR,
+				    "%s, Get failed, type:%d, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    rc);
+				    strerror(-rc));
 	}
 
 	return rc;
@@ -1738,8 +1692,8 @@ tf_get_tbl_entry(struct tf *tfp,
 
 /* API defined in tf_core.h */
 int
-tf_get_bulk_tbl_entry(struct tf *tfp,
-		 struct tf_get_bulk_tbl_entry_parms *parms)
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
 {
 	int rc = 0;
 
@@ -1754,7 +1708,7 @@ tf_get_bulk_tbl_entry(struct tf *tfp,
 		rc = -EOPNOTSUPP;
 	} else {
 		/* Internal table type processing */
-		rc = tf_get_bulk_tbl_entry_internal(tfp, parms);
+		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
 		if (rc)
 			TFP_DRV_LOG(ERR,
 				    "%s, Bulk get failed, type:%d, rc:%s\n",
@@ -1773,11 +1727,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	rc = tf_alloc_eem_tbl_scope(tfp, parms);
 
@@ -1791,11 +1741,7 @@ tf_free_tbl_scope(struct tf *tfp,
 {
 	int rc;
 
-	/* check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
 
 	/* free table scope and all associated resources */
 	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
@@ -1813,11 +1759,7 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
 	/*
 	 * No shadow copy support for external tables, allocate and return
 	 */
@@ -1827,13 +1769,6 @@ tf_alloc_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1849,9 +1784,9 @@ tf_alloc_tbl_entry(struct tf *tfp,
 
 	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 
 	return rc;
 }
@@ -1866,11 +1801,8 @@ tf_free_tbl_entry(struct tf *tfp,
 	struct tf_session *tfs;
 #endif /* TF_SHADOW */
 
-	/* Check parameters */
-	if (parms == NULL || tfp == NULL) {
-		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS_SESSION(tfp, parms);
+
 	/*
 	 * No shadow of external tables so just free the entry
 	 */
@@ -1880,13 +1812,6 @@ tf_free_tbl_entry(struct tf *tfp,
 	}
 
 #if (TF_SHADOW == 1)
-	if (tfp->session == NULL || tfp->session->core_data == NULL) {
-		PMD_DRV_LOG(ERR,
-			    "dir:%d, Session info invalid\n",
-			    parms->dir);
-		return -EINVAL;
-	}
-
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
 	/* Search the Shadow DB for requested element. If not found go
@@ -1903,16 +1828,16 @@ tf_free_tbl_entry(struct tf *tfp,
 	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
 
 	if (rc)
-		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
-			    parms->dir,
-			    rc);
+		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 	return rc;
 }
 
 
 static void
-tf_dump_link_page_table(struct tf_em_page_tbl *tp,
-			struct tf_em_page_tbl *tp_next)
+tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+			struct hcapi_cfa_em_page_tbl *tp_next)
 {
 	uint64_t *pg_va;
 	uint32_t i;
@@ -1951,9 +1876,9 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 {
 	struct tf_session      *session;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_em_page_tbl *tp;
-	struct tf_em_page_tbl *tp_next;
-	struct tf_em_table *tbl;
+	struct hcapi_cfa_em_page_tbl *tp;
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_table *tbl;
 	int i;
 	int j;
 	int dir;
@@ -1967,7 +1892,7 @@ void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
 	tbl_scope_cb = tbl_scope_cb_find(session,
 					 tbl_scope_id);
 	if (tbl_scope_cb == NULL)
-		TFP_DRV_LOG(ERR, "No table scope\n");
+		PMD_DRV_LOG(ERR, "No table scope\n");
 
 	for (dir = 0; dir < TF_DIR_MAX; dir++) {
 		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
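
Taken together, the tf_tbl.c hunks above converge on a single error-path
shape: the repeated tfp/parms/session NULL checks collapse into the
TF_CHECK_PARMS_SESSION() macro, the direction is logged through
tf_dir_2_str() instead of as a raw integer, and return codes are logged
through strerror(-rc). A minimal sketch of that shape, using the driver's
own helpers but a hypothetical wrapper function:

/* Sketch only: tf_example_set() is hypothetical; TFP_DRV_LOG(),
 * tf_dir_2_str(), TF_CHECK_PARMS_SESSION() and
 * tf_set_tbl_entry_internal() are the driver helpers used above.
 */
static int
tf_example_set(struct tf *tfp, struct tf_set_tbl_entry_parms *parms)
{
	int rc;

	/* One macro replaces the open-coded tfp/parms/session checks */
	TF_CHECK_PARMS_SESSION(tfp, parms);

	rc = tf_set_tbl_entry_internal(tfp, parms);
	if (rc)
		TFP_DRV_LOG(ERR,
			    "%s, Set failed, type:%d, rc:%s\n",
			    tf_dir_2_str(parms->dir),
			    parms->type,
			    strerror(-rc));

	return rc;
}
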
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index ee8a14665..b17557345 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -13,45 +13,6 @@
 
 struct tf_session;
 
-enum tf_pg_tbl_lvl {
-	TF_PT_LVL_0,
-	TF_PT_LVL_1,
-	TF_PT_LVL_2,
-	TF_PT_LVL_MAX
-};
-
-enum tf_em_table_type {
-	TF_KEY0_TABLE,
-	TF_KEY1_TABLE,
-	TF_RECORD_TABLE,
-	TF_EFC_TABLE,
-	TF_MAX_TABLE
-};
-
-struct tf_em_page_tbl {
-	uint32_t	pg_count;
-	uint32_t	pg_size;
-	void		**pg_va_tbl;
-	uint64_t	*pg_pa_tbl;
-};
-
-struct tf_em_table {
-	int				type;
-	uint32_t			num_entries;
-	uint16_t			ctx_id;
-	uint32_t			entry_size;
-	int				num_lvl;
-	uint32_t			page_cnt[TF_PT_LVL_MAX];
-	uint64_t			num_data_pages;
-	void				*l0_addr;
-	uint64_t			l0_dma_addr;
-	struct tf_em_page_tbl pg_tbl[TF_PT_LVL_MAX];
-};
-
-struct tf_em_ctx_mem_info {
-	struct tf_em_table		em_tables[TF_MAX_TABLE];
-};
-
 /** table scope control block content */
 struct tf_em_caps {
 	uint32_t flags;
@@ -74,18 +35,14 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps          em_caps[TF_DIR_MAX];
 	struct stack               ext_act_pool[TF_DIR_MAX];
 	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
 };
 
-/**
- * Hardware Page sizes supported for EEM:
- *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- *
- * Round-down other page sizes to the lower hardware page
- * size supported.
+/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ * Round-down other page sizes to the lower hardware page size supported.
  */
 #define TF_EM_PAGE_SIZE_4K 12
 #define TF_EM_PAGE_SIZE_8K 13
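
The TF_EM_PAGE_SIZE_* values kept here read as log2 page sizes (12 for 4K,
13 for 8K). A small sketch of that interpretation; the helper below is
hypothetical and assumes the shift convention holds for the remaining
defines not shown in this hunk:

/* Sketch: convert a log2 EEM page-size define to bytes. */
static inline uint32_t
tf_em_page_size_bytes(uint32_t log2_page_sz)
{
	/* e.g. TF_EM_PAGE_SIZE_4K (12) -> 4096, TF_EM_PAGE_SIZE_8K (13) -> 8192 */
	return (uint32_t)1 << log2_page_sz;
}
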
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 17/51] net/bnxt: implement support for TCAM access
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (15 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 18/51] net/bnxt: multiple device implementation Ajit Khaparde
                           ` (35 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

Implement the TCAM alloc, free, bind, and unbind functions.
Update tf_core, tf_msg, and the related modules to dispatch TCAM
operations through the new device ops.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c      | 258 ++++++++++-----------
 drivers/net/bnxt/tf_core/tf_device.h    |  14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |  25 ++-
 drivers/net/bnxt/tf_core/tf_msg.c       |  31 +--
 drivers/net/bnxt/tf_core/tf_msg.h       |   4 +-
 drivers/net/bnxt/tf_core/tf_tcam.c      | 285 +++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tcam.h      |  66 ++++--
 7 files changed, 480 insertions(+), 203 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 648d0d1bd..29522c66e 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,43 +19,6 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-static int tf_check_tcam_entry(enum tf_tcam_tbl_type tcam_tbl_type,
-			       enum tf_device_type device,
-			       uint16_t key_sz_in_bits,
-			       uint16_t *num_slice_per_row)
-{
-	uint16_t key_bytes;
-	uint16_t slice_sz = 0;
-
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
-#define CFA_P4_WC_TCAM_SLICE_SIZE     12
-
-	if (tcam_tbl_type == TF_TCAM_TBL_TYPE_WC_TCAM) {
-		key_bytes = TF_BITS2BYTES_WORD_ALIGN(key_sz_in_bits);
-		if (device == TF_DEVICE_TYPE_WH) {
-			slice_sz = CFA_P4_WC_TCAM_SLICE_SIZE;
-			*num_slice_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
-		} else {
-			TFP_DRV_LOG(ERR,
-				    "Unsupported device type %d\n",
-				    device);
-			return -ENOTSUP;
-		}
-
-		if (key_bytes > *num_slice_per_row * slice_sz) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Key size %d is not supported\n",
-				    tf_tcam_tbl_2_str(tcam_tbl_type),
-				    key_bytes);
-			return -ENOTSUP;
-		}
-	} else { /* for other type of tcam */
-		*num_slice_per_row = 1;
-	}
-
-	return 0;
-}
-
 /**
  * Create EM Tbl pool of memory indexes.
  *
@@ -918,49 +881,56 @@ tf_alloc_tcam_entry(struct tf *tfp,
 		    struct tf_alloc_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_alloc_parms aparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_alloc_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = ba_alloc(session_pool);
-	if (index == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
+	aparms.dir = parms->dir;
+	aparms.type = parms->tcam_tbl_type;
+	aparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	aparms.priority = parms->priority;
+	rc = dev->ops->tf_dev_alloc_tcam(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
-		return -ENOMEM;
+			    strerror(-rc));
+		return rc;
 	}
 
-	index *= num_slice_per_row;
+	parms->idx = aparms.idx;
 
-	parms->idx = index;
 	return 0;
 }
 
@@ -969,55 +939,60 @@ tf_set_tcam_entry(struct tf *tfp,
 		  struct tf_set_tcam_entry_parms *parms)
 {
 	int rc;
-	int id;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_set_parms sparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 parms->key_sz_in_bits,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	if (dev->ops->tf_dev_set_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	/* Verify that the entry has been previously allocated */
-	index = parms->idx / num_slice_per_row;
+	sparms.dir = parms->dir;
+	sparms.type = parms->tcam_tbl_type;
+	sparms.idx = parms->idx;
+	sparms.key = parms->key;
+	sparms.mask = parms->mask;
+	sparms.key_size = TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	sparms.result = parms->result;
+	sparms.result_size = TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	rc = dev->ops->tf_dev_set_tcam(tfp, &sparms);
+	if (rc) {
 		TFP_DRV_LOG(ERR,
-		   "%s: %s: Invalid or not allocated index, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-		   parms->idx);
-		return -EINVAL;
+			    "%s: TCAM set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
 	}
 
-	rc = tf_msg_tcam_entry_set(tfp, parms);
-
-	return rc;
+	return 0;
 }
 
 int
@@ -1033,59 +1008,52 @@ tf_free_tcam_entry(struct tf *tfp,
 		   struct tf_free_tcam_entry_parms *parms)
 {
 	int rc;
-	int index;
 	struct tf_session *tfs;
-	struct bitalloc *session_pool;
-	uint16_t num_slice_per_row = 1;
-
-	/* TEMP, due to device design. When tcam is modularized device
-	 * should be retrieved from the session
-	 */
-	enum tf_device_type device_type;
-	/* TEMP */
-	device_type = TF_DEVICE_TYPE_WH;
+	struct tf_dev_info *dev;
+	struct tf_tcam_free_parms fparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	tfs = (struct tf_session *)(tfp->session->core_data);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	rc = tf_check_tcam_entry(parms->tcam_tbl_type,
-				 device_type,
-				 0,
-				 &num_slice_per_row);
-	/* Error logging handled by tf_check_tcam_entry */
-	if (rc)
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	rc = tf_rm_lookup_tcam_type_pool(tfs,
-					 parms->dir,
-					 parms->tcam_tbl_type,
-					 &session_pool);
-	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
-	if (rc)
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
 		return rc;
+	}
 
-	index = parms->idx / num_slice_per_row;
-
-	rc = ba_inuse(session_pool, index);
-	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+	if (dev->ops->tf_dev_free_tcam == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    index);
-		return -EINVAL;
+			    strerror(-rc));
+		return rc;
 	}
 
-	ba_free(session_pool, index);
-
-	rc = tf_msg_tcam_entry_free(tfp, parms);
+	fparms.dir = parms->dir;
+	fparms.type = parms->tcam_tbl_type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
-			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
-			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	return 0;
 }
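
Every converted entry point in tf_core.c now follows the same sequence:
look up the session, look up the device, verify the device op exists, fill
the TCAM-module parms and dispatch. A minimal caller-side sketch of the
reworked alloc/set pair; the key width, buffers and helper function are
illustrative, everything else comes from the API above:

/* Sketch: allocate a WC TCAM row and program it through the new
 * device-ops based tf_alloc_tcam_entry()/tf_set_tcam_entry().
 * Assumes an open session on tfp and valid key/mask/result buffers.
 */
static int
example_wc_tcam_add(struct tf *tfp, uint8_t *key, uint8_t *mask,
		    uint8_t *result)
{
	struct tf_alloc_tcam_entry_parms ap = { 0 };
	struct tf_set_tcam_entry_parms sp = { 0 };
	int rc;

	ap.dir = TF_DIR_RX;
	ap.tcam_tbl_type = TF_TCAM_TBL_TYPE_WC_TCAM;
	ap.key_sz_in_bits = 160;        /* illustrative width */
	ap.priority = 0;
	rc = tf_alloc_tcam_entry(tfp, &ap);   /* -> dev->ops->tf_dev_alloc_tcam */
	if (rc)
		return rc;

	sp.dir = ap.dir;
	sp.tcam_tbl_type = ap.tcam_tbl_type;
	sp.idx = ap.idx;
	sp.key = key;
	sp.mask = mask;
	sp.key_sz_in_bits = ap.key_sz_in_bits;
	sp.result = result;
	sp.result_sz_in_bits = 64;      /* illustrative width */
	return tf_set_tcam_entry(tfp, &sp);   /* -> dev->ops->tf_dev_set_tcam */
}
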
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 1501b20d9..32d9a5442 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -116,8 +116,11 @@ struct tf_dev_ops {
 	 * [in] tfp
 	 *   Pointer to TF handle
 	 *
-	 * [out] slice_size
-	 *   Pointer to slice size the device supports
+	 * [in] type
+	 *   TCAM table type
+	 *
+	 * [in] key_sz
+	 *   Key size
 	 *
 	 * [out] num_slices_per_row
 	 *   Pointer to number of slices per row the device supports
@@ -126,9 +129,10 @@ struct tf_dev_ops {
 	 *   - (0) if successful.
 	 *   - (-EINVAL) on failure.
 	 */
-	int (*tf_dev_get_wc_tcam_slices)(struct tf *tfp,
-					 uint16_t *slice_size,
-					 uint16_t *num_slices_per_row);
+	int (*tf_dev_get_tcam_slice_info)(struct tf *tfp,
+					  enum tf_tcam_tbl_type type,
+					  uint16_t key_sz,
+					  uint16_t *num_slices_per_row);
 
 	/**
 	 * Allocation of an identifier element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index f4bd95f1c..77fb693dd 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -56,18 +56,21 @@ tf_dev_p4_get_max_types(struct tf *tfp __rte_unused,
  *   - (-EINVAL) on failure.
  */
 static int
-tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
-			     uint16_t *slice_size,
-			     uint16_t *num_slices_per_row)
+tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
+			      enum tf_tcam_tbl_type type,
+			      uint16_t key_sz,
+			      uint16_t *num_slices_per_row)
 {
-#define CFA_P4_WC_TCAM_SLICE_SIZE       12
-#define CFA_P4_WC_TCAM_SLICES_PER_ROW    2
+#define CFA_P4_WC_TCAM_SLICES_PER_ROW 2
+#define CFA_P4_WC_TCAM_SLICE_SIZE     12
 
-	if (slice_size == NULL || num_slices_per_row == NULL)
-		return -EINVAL;
-
-	*slice_size = CFA_P4_WC_TCAM_SLICE_SIZE;
-	*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+	if (type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
+		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
+			return -ENOTSUP;
+	} else { /* for other type of tcam */
+		*num_slices_per_row = 1;
+	}
 
 	return 0;
 }
@@ -77,7 +80,7 @@ tf_dev_p4_get_wc_tcam_slices(struct tf *tfp __rte_unused,
  */
 const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
-	.tf_dev_get_wc_tcam_slices = tf_dev_p4_get_wc_tcam_slices,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
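
For the Whitney+ (P4) wildcard TCAM, the bound enforced by
tf_dev_p4_get_tcam_slice_info() above works out to 2 slices/row *
12 bytes/slice = 24 bytes. A worked check, assuming
TF_BITS2BYTES_WORD_ALIGN() rounds a bit count up to whole 32-bit words
expressed in bytes (the key widths are illustrative):

/* 160-bit key -> 20 bytes <= 24 bytes: accepted, 2 slices per row. */
/* 256-bit key -> 32 bytes >  24 bytes: rejected with -ENOTSUP.     */
/* Any non-WC TCAM type: *num_slices_per_row is reported as 1.      */
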
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 60274eb35..b50e1d48c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -9,6 +9,7 @@
 #include <string.h>
 
 #include "tf_msg_common.h"
+#include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
 #include "tf_session.h"
@@ -1480,27 +1481,19 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
-#define TF_BYTES_PER_SLICE(tfp) 12
-#define NUM_SLICES(tfp, bytes) \
-	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
-
 int
 tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_set_tcam_entry_parms *parms)
+		      struct tf_tcam_set_parms *parms)
 {
 	int rc;
 	struct tfp_send_msg_parms mparms = { 0 };
 	struct hwrm_tf_tcam_set_input req = { 0 };
 	struct hwrm_tf_tcam_set_output resp = { 0 };
-	uint16_t key_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
-	uint16_t result_bytes =
-		TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
 	if (rc != 0)
 		return rc;
 
@@ -1508,11 +1501,11 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	if (parms->dir == TF_DIR_TX)
 		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	req.key_size = key_bytes;
-	req.mask_offset = key_bytes;
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
 	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * key_bytes;
-	req.result_size = result_bytes;
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
 	data_size = 2 * req.key_size + req.result_size;
 
 	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
@@ -1530,9 +1523,9 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 			   sizeof(buf.pa_addr));
 	}
 
-	tfp_memcpy(&data[0], parms->key, key_bytes);
-	tfp_memcpy(&data[key_bytes], parms->mask, key_bytes);
-	tfp_memcpy(&data[req.result_offset], parms->result, result_bytes);
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
 
 	mparms.tf_type = HWRM_TF_TCAM_SET;
 	mparms.req_data = (uint32_t *)&req;
@@ -1554,7 +1547,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 int
 tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_free_tcam_entry_parms *in_parms)
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
 	struct hwrm_tf_tcam_free_input req =  { 0 };
@@ -1562,7 +1555,7 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 
 	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->tcam_tbl_type, &req.type);
+	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
 	if (rc != 0)
 		return rc;
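
The tf_msg_tcam_entry_set() hunk above keeps the same request layout but
now takes the byte sizes directly from tf_tcam_set_parms: key at offset 0,
mask at key_size, result at 2 * key_size. A worked layout with an
illustrative 20-byte key and 8-byte result:

/* Buffer built by tf_msg_tcam_entry_set() (sizes illustrative):
 *
 *   bytes  0..19  key      req.key_size      = 20
 *   bytes 20..39  mask     req.mask_offset   = key_size     = 20
 *   bytes 40..47  result   req.result_offset = 2 * key_size = 40
 *
 *   data_size = 2 * 20 + 8 = 48 bytes; payloads within
 *   TF_PCI_BUF_SIZE_MAX stay in the inline request, larger ones go
 *   through the tf_msg_dma_buf path.
 */
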
 
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1dad2b9fb..a3e0f7bba 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -247,7 +247,7 @@ int tf_msg_em_op(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_set(struct tf *tfp,
-			  struct tf_set_tcam_entry_parms *parms);
+			  struct tf_tcam_set_parms *parms);
 
 /**
  * Sends tcam entry 'free' to the Firmware.
@@ -262,7 +262,7 @@ int tf_msg_tcam_entry_set(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_tcam_entry_free(struct tf *tfp,
-			   struct tf_free_tcam_entry_parms *parms);
+			   struct tf_tcam_free_parms *parms);
 
 /**
  * Sends Set message of a Table Type element to the firmware.
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 3ad99dd0d..b9dba5323 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -3,16 +3,24 @@
  * All rights reserved.
  */
 
+#include <string.h>
 #include <rte_common.h>
 
 #include "tf_tcam.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_rm_new.h"
+#include "tf_device.h"
+#include "tfp.h"
+#include "tf_session.h"
+#include "tf_msg.h"
 
 struct tf;
 
 /**
  * TCAM DBs.
  */
-/* static void *tcam_db[TF_DIR_MAX]; */
+static void *tcam_db[TF_DIR_MAX];
 
 /**
  * TCAM Shadow DBs
@@ -22,7 +30,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -33,19 +41,131 @@ int
 tf_tcam_bind(struct tf *tfp __rte_unused,
 	     struct tf_tcam_cfg_parms *parms __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "TCAM already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
+		db_cfg.rm_db = tcam_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: TCAM DB creation failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+	}
+
+	init = 1;
+
 	return 0;
 }
 
 int
 tf_tcam_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail silently if nothing has been initialized, so that
+	 * cleanup of a failed create can still proceed.
+	 */
+	if (!init)
+		return -EINVAL;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tcam_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tcam_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
-tf_tcam_alloc(struct tf *tfp __rte_unused,
-	      struct tf_tcam_alloc_parms *parms __rte_unused)
+tf_tcam_alloc(struct tf *tfp,
+	      struct tf_tcam_alloc_parms *parms)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Allocate requested element */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = (uint32_t *)&parms->idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: TCAM allocation failed, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	parms->idx *= num_slice_per_row;
+
 	return 0;
 }
 
@@ -53,6 +173,92 @@ int
 tf_tcam_free(struct tf *tfp __rte_unused,
 	     struct tf_tcam_free_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  0,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tcam_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx / num_slice_per_row;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	rc = tf_msg_tcam_entry_free(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
@@ -67,6 +273,77 @@ int
 tf_tcam_set(struct tf *tfp __rte_unused,
 	    struct tf_tcam_set_parms *parms __rte_unused)
 {
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	uint16_t num_slice_per_row = 1;
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No TCAM DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc)
+		return rc;
+
+	if (dev->ops->tf_dev_get_tcam_slice_info == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Need to retrieve row size etc */
+	rc = dev->ops->tf_dev_get_tcam_slice_info(tfp,
+						  parms->type,
+						  parms->key_size,
+						  &num_slice_per_row);
+	if (rc)
+		return rc;
+
+	/* Check if element is in use */
+	aparms.rm_db = tcam_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx / num_slice_per_row;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry is not allocated, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
+	rc = tf_msg_tcam_entry_set(tfp, parms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR, "%s: %s: Entry %d set failed with err %s",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->type),
+			    parms->idx,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
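
Note how the new tf_tcam.c keeps the RM pool in units of rows while the
public index stays in slices: tf_tcam_alloc() multiplies the allocated row
by num_slice_per_row, and tf_tcam_set()/tf_tcam_free() divide the caller's
index back down. A worked example, assuming the P4 wildcard TCAM with two
slices per row:

/*   tf_tcam_alloc():  RM returns row 5   -> parms->idx = 5 * 2 = 10
 *   tf_tcam_set():    parms->idx = 10    -> RM row      = 10 / 2 = 5
 *   tf_tcam_free():   parms->idx = 10    -> RM row      = 10 / 2 = 5
 *
 * Non-wildcard TCAM types report num_slice_per_row = 1, so their
 * indices pass through unchanged.
 */
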
 
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 68c25eb1b..67c3bcb49 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -50,10 +50,18 @@ struct tf_tcam_alloc_parms {
 	 * [in] Type of the allocation
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint16_t idx;
 };
 
 /**
@@ -71,7 +79,7 @@ struct tf_tcam_free_parms {
 	/**
 	 * [in] Index to free
 	 */
-	uint32_t idx;
+	uint16_t idx;
 	/**
 	 * [out] Reference count after free, only valid if session has been
 	 * created with shadow_copy.
@@ -90,7 +98,7 @@ struct tf_tcam_alloc_search_parms {
 	/**
 	 * [in] TCAM table type
 	 */
-	enum tf_tcam_tbl_type tcam_tbl_type;
+	enum tf_tcam_tbl_type type;
 	/**
 	 * [in] Enable search for matching entry
 	 */
@@ -100,9 +108,9 @@ struct tf_tcam_alloc_search_parms {
 	 */
 	uint8_t *key;
 	/**
-	 * [in] key size in bits (if search)
+	 * [in] key size (if search)
 	 */
-	uint16_t key_sz_in_bits;
+	uint16_t key_size;
 	/**
 	 * [in] Mask data to match on (if search)
 	 */
@@ -139,17 +147,29 @@ struct tf_tcam_set_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [in] Entry data
+	 * [in] Entry index to write to
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [in] Entry size
+	 * [in] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to write to
+	 * [in] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [in] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [in] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [in] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
@@ -165,17 +185,29 @@ struct tf_tcam_get_parms {
 	 */
 	enum tf_tcam_tbl_type type;
 	/**
-	 * [out] Entry data
+	 * [in] Entry index to read
 	 */
-	uint8_t *data;
+	uint32_t idx;
 	/**
-	 * [out] Entry size
+	 * [out] array containing key
 	 */
-	uint16_t data_sz_in_bytes;
+	uint8_t *key;
 	/**
-	 * [in] Entry index to read
+	 * [out] array containing mask fields
 	 */
-	uint32_t idx;
+	uint8_t *mask;
+	/**
+	 * [out] key size
+	 */
+	uint16_t key_size;
+	/**
+	 * [out] array containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [out] result size
+	 */
+	uint16_t result_size;
 };
 
 /**
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 18/51] net/bnxt: multiple device implementation
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (16 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
                           ` (34 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

Implement the Identifier, Table Type, and Resource Manager modules.
Integrate the Resource Manager with HCAPI.
Update the open/close session handling.
Move to direct messages for the qcaps and resv exchanges.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c       | 751 ++++++++---------------
 drivers/net/bnxt/tf_core/tf_core.h       |  97 ++-
 drivers/net/bnxt/tf_core/tf_device.c     |  10 +-
 drivers/net/bnxt/tf_core/tf_device.h     |   1 +
 drivers/net/bnxt/tf_core/tf_device_p4.c  |  26 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |  29 +-
 drivers/net/bnxt/tf_core/tf_identifier.h |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  45 +-
 drivers/net/bnxt/tf_core/tf_msg.h        |   1 +
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 225 +++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  11 +-
 drivers/net/bnxt/tf_core/tf_session.c    |  28 +-
 drivers/net/bnxt/tf_core/tf_session.h    |   2 +-
 drivers/net/bnxt/tf_core/tf_tbl.c        | 611 +-----------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c   | 252 +++++++-
 drivers/net/bnxt/tf_core/tf_tbl_type.h   |   2 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |  12 +-
 drivers/net/bnxt/tf_core/tf_util.h       |  45 +-
 drivers/net/bnxt/tf_core/tfp.c           |   4 +-
 19 files changed, 880 insertions(+), 1276 deletions(-)
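
The diff below drops the old inline open/attach/close bodies and promotes
the *_new variants that delegate to the session module. A minimal caller
sketch against the consolidated API; the control-channel name here is
illustrative and, as in the code below, is parsed as a PCI address
("%x:%x:%x.%d", i.e. domain:bus:slot.device) to seed the session id:

/* Sketch only: assumes tf_core.h (and stdio.h for snprintf) and that
 * ctrl_chan_name is a TF_SESSION_NAME_MAX character array, as used
 * elsewhere in the driver.
 */
static int
example_open_close(void)
{
	struct tf tfp = { 0 };
	struct tf_open_session_parms oparms = { 0 };
	int rc;

	snprintf(oparms.ctrl_chan_name, TF_SESSION_NAME_MAX, "0000:03:00.0");
	oparms.device_type = TF_DEVICE_TYPE_WH;  /* only Whitney+ is accepted */
	oparms.shadow_copy = 0;

	rc = tf_open_session(&tfp, &oparms);
	if (rc)
		return rc;

	return tf_close_session(&tfp);
}
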

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 29522c66e..3e23d0513 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -19,284 +19,15 @@
 #include "tf_common.h"
 #include "hwrm_tf.h"
 
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- * [in] num_entries
- *   number of entries to write
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i, j;
-	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
-			    strerror(-ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Fill pool with indexes
-	 */
-	j = num_entries - 1;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-		j--;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
-
-/**
- * Create EM Tbl pool of memory indexes.
- *
- * [in] session
- *   Pointer to session
- * [in] dir
- *   direction
- *
- * Return:
- */
-static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
-{
-	struct stack *pool = &session->em_pool[dir];
-	uint32_t *ptr;
-
-	ptr = stack_items(pool);
-
-	tfp_free(ptr);
-}
-
 int
-tf_open_session(struct tf                    *tfp,
+tf_open_session(struct tf *tfp,
 		struct tf_open_session_parms *parms)
-{
-	int rc;
-	struct tf_session *session;
-	struct tfp_calloc_parms alloc_parms;
-	unsigned int domain, bus, slot, device;
-	uint8_t fw_session_id;
-	int dir;
-
-	TF_CHECK_PARMS(tfp, parms);
-
-	/* Filter out any non-supported device types on the Core
-	 * side. It is assumed that the Firmware will be supported if
-	 * firmware open session succeeds.
-	 */
-	if (parms->device_type != TF_DEVICE_TYPE_WH) {
-		TFP_DRV_LOG(ERR,
-			    "Unsupported device type %d\n",
-			    parms->device_type);
-		return -ENOTSUP;
-	}
-
-	/* Build the beginning of session_id */
-	rc = sscanf(parms->ctrl_chan_name,
-		    "%x:%x:%x.%d",
-		    &domain,
-		    &bus,
-		    &slot,
-		    &device);
-	if (rc != 4) {
-		TFP_DRV_LOG(ERR,
-			    "Failed to scan device ctrl_chan_name\n");
-		return -EINVAL;
-	}
-
-	/* open FW session and get a new session_id */
-	rc = tf_msg_session_open(tfp,
-				 parms->ctrl_chan_name,
-				 &fw_session_id);
-	if (rc) {
-		/* Log error */
-		if (rc == -EEXIST)
-			TFP_DRV_LOG(ERR,
-				    "Session is already open, rc:%s\n",
-				    strerror(-rc));
-		else
-			TFP_DRV_LOG(ERR,
-				    "Open message send failed, rc:%s\n",
-				    strerror(-rc));
-
-		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
-		return rc;
-	}
-
-	/* Allocate session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session_info);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session info, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
-
-	/* Allocate core data for the session */
-	alloc_parms.nitems = 1;
-	alloc_parms.size = sizeof(struct tf_session);
-	alloc_parms.alignment = 0;
-	rc = tfp_calloc(&alloc_parms);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "Failed to allocate session data, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	tfp->session->core_data = alloc_parms.mem_va;
-
-	session = (struct tf_session *)tfp->session->core_data;
-	tfp_memcpy(session->ctrl_chan_name,
-		   parms->ctrl_chan_name,
-		   TF_SESSION_NAME_MAX);
-
-	/* Initialize Session */
-	session->dev = NULL;
-	tf_rm_init(tfp);
-
-	/* Construct the Session ID */
-	session->session_id.internal.domain = domain;
-	session->session_id.internal.bus = bus;
-	session->session_id.internal.device = device;
-	session->session_id.internal.fw_session_id = fw_session_id;
-
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Shadow DB configuration */
-	if (parms->shadow_copy) {
-		/* Ignore shadow_copy setting */
-		session->shadow_copy = 0;/* parms->shadow_copy; */
-#if (TF_SHADOW == 1)
-		rc = tf_rm_shadow_db_init(tfs);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "Shadow DB Initialization failed\n, rc:%s",
-				    strerror(-rc));
-		/* Add additional processing */
-#endif /* TF_SHADOW */
-	}
-
-	/* Adjust the Session with what firmware allowed us to get */
-	rc = tf_rm_allocate_validate(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Rm allocate validate failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
-	/* Initialize EM pool */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_create_em_pool(session,
-				       (enum tf_dir)dir,
-				       TF_SESSION_EM_POOL_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EM Pool initialization failed\n");
-			goto cleanup_close;
-		}
-	}
-
-	session->ref_count++;
-
-	/* Return session ID */
-	parms->session_id = session->session_id;
-
-	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    parms->session_id.internal.domain,
-		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
-
-	return 0;
-
- cleanup:
-	tfp_free(tfp->session->core_data);
-	tfp_free(tfp->session);
-	tfp->session = NULL;
-	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
-}
-
-int
-tf_open_session_new(struct tf *tfp,
-		    struct tf_open_session_parms *parms)
 {
 	int rc;
 	unsigned int domain, bus, slot, device;
 	struct tf_session_open_session_parms oparms;
 
-	TF_CHECK_PARMS(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Filter out any non-supported device types on the Core
 	 * side. It is assumed that the Firmware will be supported if
@@ -347,33 +78,8 @@ tf_open_session_new(struct tf *tfp,
 }
 
 int
-tf_attach_session(struct tf *tfp __rte_unused,
-		  struct tf_attach_session_parms *parms __rte_unused)
-{
-#if (TF_SHARED == 1)
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/* - Open the shared memory for the attach_chan_name
-	 * - Point to the shared session for this Device instance
-	 * - Check that session is valid
-	 * - Attach to the firmware so it can record there is more
-	 *   than one client of the session.
-	 */
-
-	if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_attach(tfp,
-					   parms->ctrl_chan_name,
-					   parms->session_id);
-	}
-#endif /* TF_SHARED */
-	return -1;
-}
-
-int
-tf_attach_session_new(struct tf *tfp,
-		      struct tf_attach_session_parms *parms)
+tf_attach_session(struct tf *tfp,
+		  struct tf_attach_session_parms *parms)
 {
 	int rc;
 	unsigned int domain, bus, slot, device;
@@ -436,65 +142,6 @@ tf_attach_session_new(struct tf *tfp,
 
 int
 tf_close_session(struct tf *tfp)
-{
-	int rc;
-	int rc_close = 0;
-	struct tf_session *tfs;
-	union tf_session_id session_id;
-	int dir;
-
-	TF_CHECK_TFP_SESSION(tfp);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Cleanup if we're last user of the session */
-	if (tfs->ref_count == 1) {
-		/* Cleanup any outstanding resources */
-		rc_close = tf_rm_close(tfp);
-	}
-
-	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Message send failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		/* Update the ref_count */
-		tfs->ref_count--;
-	}
-
-	session_id = tfs->session_id;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Free EM pool */
-		for (dir = 0; dir < TF_DIR_MAX; dir++)
-			tf_free_em_pool(tfs, (enum tf_dir)dir);
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
-
-	TFP_DRV_LOG(INFO,
-		    "Session closed, session_id:%d\n",
-		    session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
-		    session_id.internal.domain,
-		    session_id.internal.bus,
-		    session_id.internal.device,
-		    session_id.internal.fw_session_id);
-
-	return rc_close;
-}
-
-int
-tf_close_session_new(struct tf *tfp)
 {
 	int rc;
 	struct tf_session_close_session_parms cparms = { 0 };
@@ -620,76 +267,9 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
-int tf_alloc_identifier(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	int id;
-	int rc;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	if (rc) {
-		TFP_DRV_LOG(ERR, "%s: identifier pool %s failure, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	id = ba_alloc(session_pool);
-
-	if (id == BA_FAIL) {
-		TFP_DRV_LOG(ERR, "%s: %s: No resource available\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		return -ENOMEM;
-	}
-	parms->id = id;
-	return 0;
-}
-
 int
-tf_alloc_identifier_new(struct tf *tfp,
-			struct tf_alloc_identifier_parms *parms)
+tf_alloc_identifier(struct tf *tfp,
+		    struct tf_alloc_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -732,7 +312,7 @@ tf_alloc_identifier_new(struct tf *tfp,
 	}
 
 	aparms.dir = parms->dir;
-	aparms.ident_type = parms->ident_type;
+	aparms.type = parms->ident_type;
 	aparms.id = &id;
 	rc = dev->ops->tf_dev_alloc_ident(tfp, &aparms);
 	if (rc) {
@@ -748,79 +328,9 @@ tf_alloc_identifier_new(struct tf *tfp,
 	return 0;
 }
 
-int tf_free_identifier(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
-{
-	struct bitalloc *session_pool;
-	int rc;
-	int ba_rc;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	switch (parms->ident_type) {
-	case TF_IDENT_TYPE_L2_CTXT:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_L2_CTXT_REMAP_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_PROF_FUNC:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_PROF_FUNC_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_EM_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_EM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_WC_PROF:
-		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
-				TF_WC_TCAM_PROF_ID_POOL_NAME,
-				rc);
-		break;
-	case TF_IDENT_TYPE_L2_FUNC:
-		TFP_DRV_LOG(ERR, "%s: unsupported %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		TFP_DRV_LOG(ERR, "%s: invalid %s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type));
-		rc = -EOPNOTSUPP;
-		break;
-	}
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: %s Identifier pool access failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    strerror(-rc));
-		return rc;
-	}
-
-	ba_rc = ba_inuse(session_pool, (int)parms->id);
-
-	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d already free",
-			    tf_dir_2_str(parms->dir),
-			    tf_ident_2_str(parms->ident_type),
-			    parms->id);
-		return -EINVAL;
-	}
-
-	ba_free(session_pool, (int)parms->id);
-
-	return 0;
-}
-
 int
-tf_free_identifier_new(struct tf *tfp,
-		       struct tf_free_identifier_parms *parms)
+tf_free_identifier(struct tf *tfp,
+		   struct tf_free_identifier_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
@@ -862,12 +372,12 @@ tf_free_identifier_new(struct tf *tfp,
 	}
 
 	fparms.dir = parms->dir;
-	fparms.ident_type = parms->ident_type;
+	fparms.type = parms->ident_type;
 	fparms.id = parms->id;
 	rc = dev->ops->tf_dev_free_ident(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Identifier allocation failed, rc:%s\n",
+			    "%s: Identifier free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
@@ -1057,3 +567,242 @@ tf_free_tcam_entry(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_alloc_parms aparms;
+	uint32_t idx;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&aparms, 0, sizeof(struct tf_tbl_alloc_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	aparms.dir = parms->dir;
+	aparms.type = parms->type;
+	aparms.idx = &idx;
+	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table allocation failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_free_parms fparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&fparms, 0, sizeof(struct tf_tbl_free_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	fparms.dir = parms->dir;
+	fparms.type = parms->type;
+	fparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table free failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_set_parms sparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&sparms, 0, sizeof(struct tf_tbl_set_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.data = parms->data;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_parms gparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&gparms, 0, sizeof(struct tf_tbl_get_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.data = parms->data;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.idx = parms->idx;
+	rc = dev->ops->tf_dev_get_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
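
A minimal caller-side sketch of the reworked table entry path (allocate,
write, read back, free). It assumes tf_core.h is included and a session
has already been opened on tfp; the TF_DIR_RX direction, the
ACT_STATS_64 type and the 8 byte record are illustrative only, with the
data pointer cast following the parms definitions used above.

/* Sketch only: exercises tf_alloc/set/get/free_tbl_entry in order */
static int tbl_entry_example(struct tf *tfp)
{
	struct tf_alloc_tbl_entry_parms ap = { 0 };
	struct tf_set_tbl_entry_parms sp = { 0 };
	struct tf_get_tbl_entry_parms gp = { 0 };
	struct tf_free_tbl_entry_parms fp = { 0 };
	uint64_t stats = 0;
	int rc;

	ap.dir = TF_DIR_RX;
	ap.type = TF_TBL_TYPE_ACT_STATS_64;
	rc = tf_alloc_tbl_entry(tfp, &ap);
	if (rc)
		return rc;

	sp.dir = TF_DIR_RX;
	sp.type = TF_TBL_TYPE_ACT_STATS_64;
	sp.data = (uint8_t *)&stats;
	sp.data_sz_in_bytes = sizeof(stats);
	sp.idx = ap.idx;
	rc = tf_set_tbl_entry(tfp, &sp);

	gp.dir = TF_DIR_RX;
	gp.type = TF_TBL_TYPE_ACT_STATS_64;
	gp.data = (uint8_t *)&stats;
	gp.data_sz_in_bytes = sizeof(stats);
	gp.idx = ap.idx;
	if (!rc)
		rc = tf_get_tbl_entry(tfp, &gp);

	/* Hand the index back even if the set/get failed */
	fp.dir = TF_DIR_RX;
	fp.type = TF_TBL_TYPE_ACT_STATS_64;
	fp.idx = ap.idx;
	tf_free_tbl_entry(tfp, &fp);

	return rc;
}
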
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index bb456bba7..a7a7bd38a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -384,34 +384,87 @@ struct tf {
 	struct tf_session_info *session;
 };
 
+/**
+ * Identifier resource definition
+ */
+struct tf_identifier_resources {
+	/**
+	 * Array of TF Identifiers where each entry is expected to be
+	 * set to the requested resource number of that specific type.
+	 * The index used is tf_identifier_type.
+	 */
+	uint16_t cnt[TF_IDENT_TYPE_MAX];
+};
+
+/**
+ * Table type resource definition
+ */
+struct tf_tbl_resources {
+	/**
+	 * Array of TF Table types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tbl_type.
+	 */
+	uint16_t cnt[TF_TBL_TYPE_MAX];
+};
+
+/**
+ * TCAM type resource definition
+ */
+struct tf_tcam_resources {
+	/**
+	 * Array of TF TCAM types where each entry is expected to be
+	 * set to the requested resource number of that specific
+	 * type. The index used is tf_tcam_tbl_type.
+	 */
+	uint16_t cnt[TF_TCAM_TBL_TYPE_MAX];
+};
+
+/**
+ * EM type resource definition
+ */
+struct tf_em_resources {
+	/**
+	 * Array of TF EM table types where each entry is expected to
+	 * be set to the requested resource number of that specific
+	 * type. The index used is tf_em_tbl_type.
+	 */
+	uint16_t cnt[TF_EM_TBL_TYPE_MAX];
+};
+
 /**
  * tf_session_resources parameter definition.
  */
 struct tf_session_resources {
-	/** [in] Requested Identifier Resources
+	/**
+	 * [in] Requested Identifier Resources
 	 *
-	 * The number of identifier resources requested for the session.
-	 * The index used is tf_identifier_type.
+	 * Number of identifier resources requested for the
+	 * session.
 	 */
-	uint16_t identifier_cnt[TF_IDENT_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested Index Table resource counts
+	struct tf_identifier_resources ident_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested Index Table resource counts
 	 *
-	 * The number of index table resources requested for the session.
-	 * The index used is tf_tbl_type.
+	 * The number of index table resources requested for the
+	 * session.
 	 */
-	uint16_t tbl_cnt[TF_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested TCAM Table resource counts
+	struct tf_tbl_resources tbl_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested TCAM Table resource counts
 	 *
-	 * The number of TCAM table resources requested for the session.
-	 * The index used is tf_tcam_tbl_type.
+	 * The number of TCAM table resources requested for the
+	 * session.
 	 */
-	uint16_t tcam_tbl_cnt[TF_TCAM_TBL_TYPE_MAX][TF_DIR_MAX];
-	/** [in] Requested EM resource counts
+
+	struct tf_tcam_resources tcam_cnt[TF_DIR_MAX];
+	/**
+	 * [in] Requested EM resource counts
 	 *
-	 * The number of internal EM table resources requested for the session
-	 * The index used is tf_em_tbl_type.
+	 * The number of internal EM table resources requested for the
+	 * session.
 	 */
-	uint16_t em_tbl_cnt[TF_EM_TBL_TYPE_MAX][TF_DIR_MAX];
+	struct tf_em_resources em_cnt[TF_DIR_MAX];
 };
 
 /**
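
As a usage sketch of the new per-direction resource containers: the
caller now fills in counts per type, per direction, before opening the
session. The counts, the TF_DIR_RX/TF_DEVICE_TYPE_WH values and the
control channel name below are illustrative assumptions only.

	struct tf_open_session_parms oparms = { 0 };
	int rc;

	snprintf(oparms.ctrl_chan_name, TF_SESSION_NAME_MAX, "%s",
		 "0000:0a:00.0");
	oparms.device_type = TF_DEVICE_TYPE_WH;

	/* e.g. 16 L2 context identifiers and 8 full action records
	 * on the RX direction.
	 */
	oparms.resources.ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
	oparms.resources.tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 8;

	rc = tf_open_session(tfp, &oparms);
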
@@ -497,9 +550,6 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
-int tf_open_session_new(struct tf *tfp,
-			struct tf_open_session_parms *parms);
-
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -565,8 +615,6 @@ struct tf_attach_session_parms {
  */
 int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
-int tf_attach_session_new(struct tf *tfp,
-			  struct tf_attach_session_parms *parms);
 
 /**
  * Closes an existing session. Cleans up all hardware and firmware
@@ -576,7 +624,6 @@ int tf_attach_session_new(struct tf *tfp,
  * Returns success or failure code.
  */
 int tf_close_session(struct tf *tfp);
-int tf_close_session_new(struct tf *tfp);
 
 /**
  * @page  ident Identity Management
@@ -631,8 +678,6 @@ struct tf_free_identifier_parms {
  */
 int tf_alloc_identifier(struct tf *tfp,
 			struct tf_alloc_identifier_parms *parms);
-int tf_alloc_identifier_new(struct tf *tfp,
-			    struct tf_alloc_identifier_parms *parms);
 
 /**
  * free identifier resource
@@ -645,8 +690,6 @@ int tf_alloc_identifier_new(struct tf *tfp,
  */
 int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
-int tf_free_identifier_new(struct tf *tfp,
-			   struct tf_free_identifier_parms *parms);
 
 /**
  * @page dram_table DRAM Table Scope Interface
@@ -1277,7 +1320,7 @@ struct tf_bulk_get_tbl_entry_parms {
  * provided data buffer is too small for the data type requested.
  */
 int tf_bulk_get_tbl_entry(struct tf *tfp,
-		     struct tf_bulk_get_tbl_entry_parms *parms);
+			  struct tf_bulk_get_tbl_entry_parms *parms);
 
 /**
  * @page exact_match Exact Match Table
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 4c46cadc6..b474e8c25 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -43,6 +43,10 @@ dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 
+	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Initial function initialization */
+	dev_handle->ops = &tf_dev_ops_p4_init;
+
 	/* Initialize the modules */
 
 	ident_cfg.num_elements = TF_IDENT_TYPE_MAX;
@@ -78,7 +82,7 @@ dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
-	dev_handle->type = TF_DEVICE_TYPE_WH;
+	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 32d9a5442..c31bf2357 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -407,6 +407,7 @@ struct tf_dev_ops {
 /**
  * Supported device operation structures
  */
+extern const struct tf_dev_ops tf_dev_ops_p4_init;
 extern const struct tf_dev_ops tf_dev_ops_p4;
 
 #endif /* _TF_DEVICE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 77fb693dd..9e332c594 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -75,6 +75,26 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 	return 0;
 }
 
+/**
+ * Truflow P4 device ops used prior to the modules being bound
+ */
+const struct tf_dev_ops tf_dev_ops_p4_init = {
+	.tf_dev_get_max_types = tf_dev_p4_get_max_types,
+	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
+	.tf_dev_alloc_ident = NULL,
+	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_tbl = NULL,
+	.tf_dev_alloc_search_tbl = NULL,
+	.tf_dev_set_tbl = NULL,
+	.tf_dev_get_tbl = NULL,
+	.tf_dev_alloc_tcam = NULL,
+	.tf_dev_free_tcam = NULL,
+	.tf_dev_alloc_search_tcam = NULL,
+	.tf_dev_set_tcam = NULL,
+	.tf_dev_get_tcam = NULL,
+};
+
 /**
  * Truflow P4 device specific functions
  */
@@ -85,14 +105,14 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
-	.tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
+	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
-	.tf_dev_alloc_search_tcam = tf_tcam_alloc_search,
+	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
-	.tf_dev_get_tcam = tf_tcam_get,
+	.tf_dev_get_tcam = NULL,
 	.tf_dev_insert_em_entry = tf_em_insert_entry,
 	.tf_dev_delete_em_entry = tf_em_delete_entry,
 };
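
With the tf_dev_ops_p4_init table in place, a device handle that is
only part-way through dev_bind_p4 exposes nothing but the capability
queries; every module entry point in tf_core therefore guards the
dispatch before calling through the ops, along the lines of the
pattern already used above (repeated here only as a reminder sketch):

	if (dev->ops->tf_dev_alloc_tbl == NULL)
		return -EOPNOTSUPP; /* still on the init ops, module not bound */

	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
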
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index e89f9768b..ee07a6aea 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -45,19 +45,22 @@ tf_ident_bind(struct tf *tfp,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->identifier_cnt[i];
-		db_cfg.rm_db = ident_db[i];
+		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
+		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
 				    "%s: Identifier DB creation failed\n",
 				    tf_dir_2_str(i));
+
 			return rc;
 		}
 	}
 
 	init = 1;
 
+	printf("Identifier - initialized\n");
+
 	return 0;
 }
 
@@ -73,8 +76,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 	/* Bail if nothing has been initialized done silent as to
 	 * allow for creation cleanup.
 	 */
-	if (!init)
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Identifier DBs created\n");
 		return -EINVAL;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
@@ -96,6 +102,7 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 	       struct tf_ident_alloc_parms *parms)
 {
 	int rc;
+	uint32_t id;
 	struct tf_rm_allocate_parms aparms = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -109,17 +116,19 @@ tf_ident_alloc(struct tf *tfp __rte_unused,
 
 	/* Allocate requested element */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
-	aparms.index = (uint32_t *)&parms->id;
+	aparms.db_index = parms->type;
+	aparms.index = &id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Failed allocate, type:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type);
+			    parms->type);
 		return rc;
 	}
 
+	*parms->id = id;
+
 	return 0;
 }
 
@@ -143,7 +152,7 @@ tf_ident_free(struct tf *tfp __rte_unused,
 
 	/* Check if element is in use */
 	aparms.rm_db = ident_db[parms->dir];
-	aparms.db_index = parms->ident_type;
+	aparms.db_index = parms->type;
 	aparms.index = parms->id;
 	aparms.allocated = &allocated;
 	rc = tf_rm_is_allocated(&aparms);
@@ -154,21 +163,21 @@ tf_ident_free(struct tf *tfp __rte_unused,
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
 
 	/* Free requested element */
 	fparms.rm_db = ident_db[parms->dir];
-	fparms.db_index = parms->ident_type;
+	fparms.db_index = parms->type;
 	fparms.index = parms->id;
 	rc = tf_rm_free(&fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Free failed, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
-			    parms->ident_type,
+			    parms->type,
 			    parms->id);
 		return rc;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.h b/drivers/net/bnxt/tf_core/tf_identifier.h
index 1c5319b5e..6e36c525f 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.h
+++ b/drivers/net/bnxt/tf_core/tf_identifier.h
@@ -43,7 +43,7 @@ struct tf_ident_alloc_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [out] Identifier allocated
 	 */
@@ -61,7 +61,7 @@ struct tf_ident_free_parms {
 	/**
 	 * [in] Identifier type
 	 */
-	enum tf_identifier_type ident_type;
+	enum tf_identifier_type type;
 	/**
 	 * [in] ID to free
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index b50e1d48c..a2e3840f0 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -12,6 +12,7 @@
 #include "tf_device.h"
 #include "tf_msg.h"
 #include "tf_util.h"
+#include "tf_common.h"
 #include "tf_session.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -935,13 +936,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	struct tf_rm_resc_req_entry *data;
 	int dma_size;
 
-	if (size == 0 || query == NULL || resv_strategy == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Resource QCAPS parameter error, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-EINVAL));
-		return -EINVAL;
-	}
+	TF_CHECK_PARMS3(tfp, query, resv_strategy);
 
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
@@ -962,7 +957,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.qcaps_size = size;
-	req.qcaps_addr = qcaps_buf.pa_addr;
+	req.qcaps_addr = tfp_cpu_to_le_64(qcaps_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_QCAPS;
 	parms.req_data = (uint32_t *)&req;
@@ -980,18 +975,29 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: QCAPS message error, rc:%s\n",
+			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
+
+	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_cpu_to_le_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
+
+		printf("type: %d(0x%x) %d %d\n",
+		       query[i].type,
+		       query[i].type,
+		       query[i].min,
+		       query[i].max);
+
 	}
 
 	*resv_strategy = resp.flags &
@@ -1021,6 +1027,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	struct tf_rm_resc_entry *resv_data;
 	int dma_size;
 
+	TF_CHECK_PARMS3(tfp, request, resv);
+
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1053,8 +1061,8 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		req_data[i].max = tfp_cpu_to_le_16(request[i].max);
 	}
 
-	req.req_addr = req_buf.pa_addr;
-	req.resp_addr = resv_buf.pa_addr;
+	req.req_addr = tfp_cpu_to_le_64(req_buf.pa_addr);
+	req.resc_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
 
 	parms.tf_type = HWRM_TF_SESSION_RESC_ALLOC;
 	parms.req_data = (uint32_t *)&req;
@@ -1072,18 +1080,28 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	 */
 	if (resp.size != size) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Alloc message error, rc:%s\n",
+			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
 		return -EINVAL;
 	}
 
+	printf("\nRESV\n");
+	printf("size: %d\n", resp.size);
+
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
 		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
 		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
 		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+
+		printf("%d type: %d(0x%x) %d %d\n",
+		       i,
+		       resv[i].type,
+		       resv[i].type,
+		       resv[i].start,
+		       resv[i].stride);
 	}
 
 	tf_msg_free_dma_buf(&req_buf);
@@ -1460,7 +1478,8 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
 	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
 
-	data_size = (params->num_entries * params->entry_sz_in_bytes);
+	data_size = params->num_entries * params->entry_sz_in_bytes;
+
 	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
 
 	MSG_PREP(parms,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index a3e0f7bba..fb635f6dc 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_rm_new.h"
+#include "tf_tcam.h"
 
 struct tf;
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 7cadb231f..6abf79aa1 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -10,6 +10,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_rm_new.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
 #include "tf_device.h"
@@ -65,6 +66,46 @@ struct tf_rm_new_db {
 	struct tf_rm_element *db;
 };
 
+/**
+ * Count the number of HCAPI reservations requested.
+ *
+ * Scans the DB configuration and counts the HCAPI elements that carry
+ * a non-zero reservation request, so that the resource request sent to
+ * firmware can be sized to just those entries.
+ *
+ * [in] cfg
+ *   Pointer to the DB configuration
+ *
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
+ *
+ * [in] count
+ *   Number of DB configuration elements
+ *
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
+ *
+ * Returns:
+ *     Void - The computed count is returned through the valid_count
+ *     output parameter.
+ */
+static void
+tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
+{
+	int i;
+	uint16_t cnt = 0;
+
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
+	}
+
+	*valid_count = cnt;
+}
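
A small worked example of the counting above, with placeholder
hcapi_type values: of three configured elements one is private and one
has no requested allocation, so only a single entry needs to be carried
in the firmware reservation request.

	struct tf_rm_element_cfg cfg[] = {
		{ .cfg_type = TF_RM_ELEM_CFG_HCAPI,   .hcapi_type = 10 },
		{ .cfg_type = TF_RM_ELEM_CFG_HCAPI,   .hcapi_type = 11 },
		{ .cfg_type = TF_RM_ELEM_CFG_PRIVATE, .hcapi_type = 0 },
	};
	uint16_t alloc_cnt[] = { 4, 0, 7 };
	uint16_t hcapi_items;

	tf_rm_count_hcapi_reservations(cfg, alloc_cnt, 3, &hcapi_items);
	/* hcapi_items == 1; only cfg[0] is included in the request */
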
 
 /**
  * Resource Manager Adjust of base index definitions.
@@ -132,6 +173,7 @@ tf_rm_create_db(struct tf *tfp,
 {
 	int rc;
 	int i;
+	int j;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	uint16_t max_types;
@@ -143,6 +185,9 @@ tf_rm_create_db(struct tf *tfp,
 	struct tf_rm_new_db *rm_db;
 	struct tf_rm_element *db;
 	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -177,10 +222,19 @@ tf_rm_create_db(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	/* Process capabilities against db requirements */
+	/* Process capabilities against DB requirements. A DB can hold
+	 * elements that are not HCAPI, so those are left out of the
+	 * request message to keep it small, while the DB itself holds
+	 * all elements to give a fast lookup. Elements with no
+	 * requested allocation are also excluded from the request.
+	 */
+	tf_rm_count_hcapi_reservations(parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
 
 	/* Alloc request, alignment already set */
-	cparms.nitems = parms->num_elements;
+	cparms.nitems = (size_t)hcapi_items;
 	cparms.size = sizeof(struct tf_rm_resc_req_entry);
 	rc = tfp_calloc(&cparms);
 	if (rc)
@@ -195,15 +249,24 @@ tf_rm_create_db(struct tf *tfp,
 	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
 
 	/* Build the request */
-	for (i = 0; i < parms->num_elements; i++) {
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
 		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			req[i].type = parms->cfg[i].hcapi_type;
-			/* Check that we can get the full amount allocated */
-			if (parms->alloc_num[i] <=
+			/* Only perform a reservation for entries
+			 * that have been requested
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
 			    query[parms->cfg[i].hcapi_type].max) {
-				req[i].min = parms->alloc_num[i];
-				req[i].max = parms->alloc_num[i];
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
 			} else {
 				TFP_DRV_LOG(ERR,
 					    "%s: Resource failure, type:%d\n",
@@ -211,19 +274,16 @@ tf_rm_create_db(struct tf *tfp,
 					    parms->cfg[i].hcapi_type);
 				TFP_DRV_LOG(ERR,
 					"req:%d, avail:%d\n",
-					parms->alloc_num[i],
+					parms->alloc_cnt[i],
 					query[parms->cfg[i].hcapi_type].max);
 				return -EINVAL;
 			}
-		} else {
-			/* Skip the element */
-			req[i].type = CFA_RESOURCE_TYPE_INVALID;
 		}
 	}
 
 	rc = tf_msg_session_resc_alloc(tfp,
 				       parms->dir,
-				       parms->num_elements,
+				       hcapi_items,
 				       req,
 				       resv);
 	if (rc)
@@ -246,42 +306,74 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
 	db = rm_db->db;
-	for (i = 0; i < parms->num_elements; i++) {
-		/* If allocation failed for a single entry the DB
-		 * creation is considered a failure.
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
+
+		/* Skip any non HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
+
+		/* If the element didn't request an allocation there is
+		 * no need to create a pool or verify the reservation.
+		 */
-		if (parms->alloc_num[i] != resv[i].stride) {
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element had requested an allocation and that
+		 * allocation was a success (full amount) then
+		 * allocate the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
 				    "%s: Alloc failed, type:%d\n",
 				    tf_dir_2_str(parms->dir),
-				    i);
+				    db[i].cfg_type);
 			TFP_DRV_LOG(ERR,
 				    "req:%d, alloc:%d\n",
-				    parms->alloc_num[i],
-				    resv[i].stride);
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
 			goto fail;
 		}
-
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-		db[i].alloc.entry.start = resv[i].start;
-		db[i].alloc.entry.stride = resv[i].stride;
-
-		/* Create pool */
-		pool_size = (BITALLOC_SIZEOF(resv[i].stride) /
-			     sizeof(struct bitalloc));
-		/* Alloc request, alignment already set */
-		cparms.nitems = pool_size;
-		cparms.size = sizeof(struct bitalloc);
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-		db[i].pool = (struct bitalloc *)cparms.mem_va;
 	}
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
-	parms->rm_db = (void *)rm_db;
+	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
@@ -307,13 +399,15 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 	int i;
 	struct tf_rm_new_db *rm_db;
 
+	TF_CHECK_PARMS1(parms);
+
 	/* Traverse the DB and clear each pool.
 	 * NOTE:
 	 *   Firmware is not cleared. It will be cleared on close only.
 	 */
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db->pool);
+		tfp_free((void *)rm_db->db[i].pool);
 
 	tfp_free((void *)parms->rm_db);
 
@@ -325,11 +419,11 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc = 0;
 	int id;
+	uint32_t index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -339,6 +433,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		TFP_DRV_LOG(ERR,
@@ -353,15 +458,17 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 				TF_RM_ADJUST_ADD_BASE,
 				parms->db_index,
 				id,
-				parms->index);
+				&index);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc adjust of base index failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -1;
+		return -EINVAL;
 	}
 
+	*parms->index = index;
+
 	return rc;
 }
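
The adjust helper itself is outside this hunk, but the intent of the
ADD_BASE/RM_BASE steps can be sketched from the start value recorded at
DB creation time; assuming a firmware reservation starting at 100, the
math is simply:

	uint32_t base = rm_db->db[parms->db_index].alloc.entry.start;

	/* TF_RM_ADJUST_ADD_BASE: 0 based pool id -> HCAPI index */
	uint32_t hcapi_idx = (uint32_t)id + base;	/* 3 -> 103 */

	/* TF_RM_ADJUST_RM_BASE: HCAPI index -> 0 based pool id */
	uint32_t pool_id = hcapi_idx - base;		/* 103 -> 3 */
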
 
@@ -373,8 +480,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -384,6 +490,17 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -409,8 +526,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -420,6 +536,17 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Adjust for any non zero start value */
 	rc = tf_rm_adjust_index(rm_db->db,
 				TF_RM_ADJUST_RM_BASE,
@@ -442,8 +569,7 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
@@ -465,8 +591,7 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (parms == NULL || parms->rm_db == NULL)
-		return -EINVAL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index 6d8234ddc..ebf38c411 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -135,13 +135,16 @@ struct tf_rm_create_db_parms {
 	 */
 	struct tf_rm_element_cfg *cfg;
 	/**
-	 * Allocation number array. Array size is num_elements.
+	 * Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
 	 */
-	uint16_t *alloc_num;
+	uint16_t *alloc_cnt;
 	/**
 	 * [out] RM DB Handle
 	 */
-	void *rm_db;
+	void **rm_db;
 };
 
 /**
@@ -382,7 +385,7 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
  * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI type.
+ * requested HCAPI RM type.
  *
  * [in] parms
  *   Pointer to get hcapi parameters
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 1917f8100..3a602618c 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -95,21 +95,11 @@ tf_session_open_session(struct tf *tfp,
 		      parms->open_cfg->device_type,
 		      session->shadow_copy,
 		      &parms->open_cfg->resources,
-		      session->dev);
+		      &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
 
-	/* Query for Session Config
-	 */
-	rc = tf_msg_session_qcfg(tfp);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "Query config message send failed, rc:%s\n",
-			    strerror(-rc));
-		goto cleanup_close;
-	}
-
 	session->ref_count++;
 
 	return 0;
@@ -119,10 +109,6 @@ tf_session_open_session(struct tf *tfp,
 	tfp_free(tfp->session);
 	tfp->session = NULL;
 	return rc;
-
- cleanup_close:
-	tf_close_session(tfp);
-	return -EINVAL;
 }
 
 int
@@ -231,17 +217,7 @@ int
 tf_session_get_device(struct tf_session *tfs,
 		      struct tf_dev_info **tfd)
 {
-	int rc;
-
-	if (tfs->dev == NULL) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR,
-			    "Device not created, rc:%s\n",
-			    strerror(-rc));
-		return rc;
-	}
-
-	*tfd = tfs->dev;
+	*tfd = &tfs->dev;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 92792518b..705bb0955 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -97,7 +97,7 @@ struct tf_session {
 	uint8_t ref_count;
 
 	/** Device handle */
-	struct tf_dev_info *dev;
+	struct tf_dev_info dev;
 
 	/** Session HW and SRAM resources */
 	struct tf_rm_db resc;
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index a68335304..e594f0248 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -761,163 +761,6 @@ tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
 	return 0;
 }
 
-/**
- * Internal function to set a Table Entry. Supports all internal Table Types
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_set_tbl_entry_internal(struct tf *tfp,
-			  struct tf_set_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_get_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -EINVAL;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  parms->type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return rc;
-}
-
 /**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
@@ -1145,266 +988,6 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
-/**
- * Allocate External Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_external(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	uint32_t index;
-	struct tf_session *tfs;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	/* Allocate an element
-	 */
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type);
-		return rc;
-	}
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Allocate Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
-				 struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-	int id;
-	int free_cnt;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	id = ba_alloc(session_pool);
-	if (id == -1) {
-		free_cnt = ba_free_count(session_pool);
-
-		TFP_DRV_LOG(ERR,
-		   "%s, Allocation failed, type:%d, free:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   free_cnt);
-		return -ENOMEM;
-	}
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_ADD_BASE,
-			    id,
-			    &index);
-	parms->idx = index;
-	return rc;
-}
-
-/**
- * Free External Tbl entry to the session pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry freed
- *
- * - Failure, entry not successfully freed for these reasons
- *  -ENOMEM
- *  -EOPNOTSUPP
- *  -EINVAL
- */
-static int
-tf_free_tbl_entry_pool_external(struct tf *tfp,
-				struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_session *tfs;
-	uint32_t index;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct stack *pool;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Get the pool info from the table scope
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tfs, parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
-
-	index = parms->idx;
-
-	rc = stack_push(pool, index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "%s, consistency error, stack full, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-	}
-	return rc;
-}
-
-/**
- * Free Internal Tbl entry from the Session Pool.
- *
- * [in] tfp
- *   Pointer to Truflow Handle
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOMEM - Failure, entry not allocated, out of resources
- */
-static int
-tf_free_tbl_entry_pool_internal(struct tf *tfp,
-		       struct tf_free_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	int id;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs;
-	uint32_t index;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
-	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
-	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B &&
-	    parms->type != TF_TBL_TYPE_ACT_STATS_64) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Type not supported, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return -EOPNOTSUPP;
-	}
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
-	if (rc)
-		return rc;
-
-	index = parms->idx;
-
-	/* Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
-			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->idx,
-			    &index);
-
-	/* Check if element was indeed allocated */
-	id = ba_inuse_free(session_pool, index);
-	if (id == -1) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Element not previously alloc'ed, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   index);
-		return -ENOMEM;
-	}
-
-	return rc;
-}
-
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
 tbl_scope_cb_find(struct tf_session *session,
@@ -1584,113 +1167,7 @@ tf_alloc_eem_tbl_scope(struct tf *tfp,
 	return -EINVAL;
 }
 
-/* API defined in tf_core.h */
-int
-tf_set_tbl_entry(struct tf *tfp,
-		 struct tf_set_tbl_entry_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		void *base_addr;
-		uint32_t offset = parms->idx;
-		uint32_t tbl_scope_id;
-
-		session = (struct tf_session *)(tfp->session->core_data);
-
-		tbl_scope_id = parms->tbl_scope_id;
-
-		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-			TFP_DRV_LOG(ERR,
-				    "%s, Table scope not allocated\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		/* Get the table scope control block associated with the
-		 * external pool
-		 */
-		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
-
-		if (tbl_scope_cb == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, table scope error\n",
-				    tf_dir_2_str(parms->dir));
-				return -EINVAL;
-		}
-
-		/* External table, implicitly the Action table */
-		base_addr = (void *)(uintptr_t)
-		hcapi_get_table_page(&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE], offset);
-
-		if (base_addr == NULL) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Base address lookup failed\n",
-				    tf_dir_2_str(parms->dir));
-			return -EINVAL;
-		}
-
-		offset %= TF_EM_PAGE_SIZE;
-		rte_memcpy((char *)base_addr + offset,
-			   parms->data,
-			   parms->data_sz_in_bytes);
-	} else {
-		/* Internal table type processing */
-		rc = tf_set_tbl_entry_internal(tfp, parms);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s, Set failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-		}
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_get_tbl_entry(struct tf *tfp,
-		 struct tf_get_tbl_entry_parms *parms)
-{
-	int rc = 0;
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
-		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
-			    tf_dir_2_str(parms->dir));
-
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_get_tbl_entry_internal(tfp, parms);
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s, Get failed, type:%d, rc:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    parms->type,
-				    strerror(-rc));
-	}
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
+/* API defined in tf_core.h */
 int
 tf_bulk_get_tbl_entry(struct tf *tfp,
 		 struct tf_bulk_get_tbl_entry_parms *parms)
@@ -1749,92 +1226,6 @@ tf_free_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_entry(struct tf *tfp,
-		   struct tf_alloc_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-	/*
-	 * No shadow copy support for external tables, allocate and return
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
-		/* Entry found and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_entry(struct tf *tfp,
-		  struct tf_free_tbl_entry_parms *parms)
-{
-	int rc;
-#if (TF_SHADOW == 1)
-	struct tf_session *tfs;
-#endif /* TF_SHADOW */
-
-	TF_CHECK_PARMS_SESSION(tfp, parms);
-
-	/*
-	 * No shadow of external tables so just free the entry
-	 */
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		rc = tf_free_tbl_entry_pool_external(tfp, parms);
-		return rc;
-	}
-
-#if (TF_SHADOW == 1)
-	tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Search the Shadow DB for requested element. If not found go
-	 * allocate one from the Session Pool
-	 */
-	if (parms->search_enable && tfs->shadow_copy) {
-		rc = tf_free_tbl_entry_shadow(tfs, parms);
-		/* Entry free'ed and parms populated with return data */
-		if (rc == 0)
-			return rc;
-	}
-#endif /* TF_SHADOW */
-
-	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
-
-	if (rc)
-		TFP_DRV_LOG(ERR, "%s, Alloc failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-	return rc;
-}
-
-
 static void
 tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
 			struct hcapi_cfa_em_page_tbl *tp_next)
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index b79706f97..51f8f0740 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -6,13 +6,18 @@
 #include <rte_common.h>
 
 #include "tf_tbl_type.h"
+#include "tf_common.h"
+#include "tf_rm_new.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
 
 struct tf;
 
 /**
  * Table DBs.
  */
-/* static void *tbl_db[TF_DIR_MAX]; */
+static void *tbl_db[TF_DIR_MAX];
 
 /**
  * Table Shadow DBs
@@ -22,7 +27,7 @@ struct tf;
 /**
  * Init flag, set on bind and cleared on unbind
  */
-/* static uint8_t init; */
+static uint8_t init;
 
 /**
  * Shadow init flag, set on bind and cleared on unbind
@@ -30,29 +35,164 @@ struct tf;
 /* static uint8_t shadow_init; */
 
 int
-tf_tbl_bind(struct tf *tfp __rte_unused,
-	    struct tf_tbl_cfg_parms *parms __rte_unused)
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
 {
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.num_elements = parms->num_elements;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
 	return 0;
 }
 
 int
 tf_tbl_unbind(struct tf *tfp __rte_unused)
 {
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; done so that cleanup
+	 * after a failed creation is possible.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No Table DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
 	return 0;
 }
 
 int
 tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms __rte_unused)
+	     struct tf_tbl_alloc_parms *parms)
 {
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
 	return 0;
 }
 
 int
 tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms __rte_unused)
+	    struct tf_tbl_free_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
+
 	return 0;
 }
 
@@ -64,15 +204,107 @@ tf_tbl_alloc_search(struct tf *tfp __rte_unused,
 }
 
 int
-tf_tbl_set(struct tf *tfp __rte_unused,
-	   struct tf_tbl_set_parms *parms __rte_unused)
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
 
 int
-tf_tbl_get(struct tf *tfp __rte_unused,
-	   struct tf_tbl_get_parms *parms __rte_unused)
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
+	int rc;
+	struct tf_rm_is_allocated_parms aparms;
+	int allocated = 0;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
index 11f2aa333..3474489a6 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.h
@@ -55,7 +55,7 @@ struct tf_tbl_alloc_parms {
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
-	uint32_t idx;
+	uint32_t *idx;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b9dba5323..e0fac31f2 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -38,8 +38,8 @@ static uint8_t init;
 /* static uint8_t shadow_init; */
 
 int
-tf_tcam_bind(struct tf *tfp __rte_unused,
-	     struct tf_tcam_cfg_parms *parms __rte_unused)
+tf_tcam_bind(struct tf *tfp,
+	     struct tf_tcam_cfg_parms *parms)
 {
 	int rc;
 	int i;
@@ -59,8 +59,8 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 		db_cfg.dir = i;
 		db_cfg.num_elements = parms->num_elements;
 		db_cfg.cfg = parms->cfg;
-		db_cfg.alloc_num = parms->resources->tcam_tbl_cnt[i];
-		db_cfg.rm_db = tcam_db[i];
+		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
+		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
@@ -72,11 +72,13 @@ tf_tcam_bind(struct tf *tfp __rte_unused,
 
 	init = 1;
 
+	printf("TCAM - initialized\n");
+
 	return 0;
 }
 
 int
-tf_tcam_unbind(struct tf *tfp __rte_unused)
+tf_tcam_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index 4099629ea..ad8edaf30 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -10,32 +10,57 @@
 
 /**
  * Helper function converting direction to text string
+ *
+ * [in] dir
+ *   Receive or transmit direction identifier
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the direction
  */
-const char
-*tf_dir_2_str(enum tf_dir dir);
+const char *tf_dir_2_str(enum tf_dir dir);
 
 /**
  * Helper function converting identifier to text string
+ *
+ * [in] id_type
+ *   Identifier type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the identifier
  */
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type);
+const char *tf_ident_2_str(enum tf_identifier_type id_type);
 
 /**
  * Helper function converting tcam type to text string
+ *
+ * [in] tcam_type
+ *   TCAM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the tcam
  */
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+const char *tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
 
 /**
  * Helper function converting tbl type to text string
+ *
+ * [in] tbl_type
+ *   Table type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the table type
  */
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
+const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
 
 /**
  * Helper function converting em tbl type to text string
+ *
+ * [in] em_type
+ *   EM type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the EM type
  */
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
+const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
 #endif /* _TF_UTIL_H_ */
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 3bce3ade1..69d1c9a1f 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -102,13 +102,13 @@ tfp_calloc(struct tfp_calloc_parms *parms)
 				    (parms->nitems * parms->size),
 				    parms->alignment);
 	if (parms->mem_va == NULL) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_va\n");
 		return -ENOMEM;
 	}
 
 	parms->mem_pa = (void *)((uintptr_t)rte_mem_virt2iova(parms->mem_va));
 	if (parms->mem_pa == (void *)((uintptr_t)RTE_BAD_IOVA)) {
-		PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
+		TFP_DRV_LOG(ERR, "Allocate failed mem_pa\n");
 		return -ENOMEM;
 	}
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 19/51] net/bnxt: update identifier with remap support
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (17 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 18/51] net/bnxt: multiple device implementation Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
                           ` (33 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add the Identifier L2 CTXT Remap resource to the P4 device and update
  cfa_resource_types.h to provide the new type.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 110 ++++++++++--------
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   2 +-
 2 files changed, 60 insertions(+), 52 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 11e8892f4..058d8cc88 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -20,46 +20,48 @@
 
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P59_L2_CTXT_TCAM    0x0UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P59_L2_CTXT_REMAP   0x1UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x1UL
+#define CFA_RESOURCE_TYPE_P59_PROF_FUNC       0x2UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x2UL
+#define CFA_RESOURCE_TYPE_P59_PROF_TCAM       0x3UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x3UL
+#define CFA_RESOURCE_TYPE_P59_EM_PROF_ID      0x4UL
 /* Wildcard TCAM Profile Id */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x4UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM_PROF_ID 0x5UL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x5UL
+#define CFA_RESOURCE_TYPE_P59_WC_TCAM         0x6UL
 /* Meter Profile */
-#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x6UL
+#define CFA_RESOURCE_TYPE_P59_METER_PROF      0x7UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_METER           0x7UL
+#define CFA_RESOURCE_TYPE_P59_METER           0x8UL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P59_MIRROR          0x8UL
+#define CFA_RESOURCE_TYPE_P59_MIRROR          0x9UL
 /* Source Properties TCAM */
-#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0x9UL
+#define CFA_RESOURCE_TYPE_P59_SP_TCAM         0xaUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xaUL
+#define CFA_RESOURCE_TYPE_P59_EM_FKB          0xbUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xbUL
+#define CFA_RESOURCE_TYPE_P59_WC_FKB          0xcUL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xcUL
+#define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
-#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xdUL
+#define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
 /* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xeUL
+#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0xfUL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x10UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x11UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
 /* Link Aggrigation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x13UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x14UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
@@ -105,30 +107,32 @@
 #define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x15UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x17UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x18UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1aUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1bUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1cUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1dUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1eUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
 #define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
@@ -176,26 +180,28 @@
 #define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x1fUL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
 #define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
 
 
@@ -243,24 +249,26 @@
 #define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
 /* L2 Context TCAM */
 #define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+/* L2 Context REMAP */
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1aUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
 #define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
 
 
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 5cd02b298..235d81f96 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,7 +12,7 @@
 #include "tf_rm_new.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_PRIVATE, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 20/51] net/bnxt: update RM with residual checker
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (18 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
                           ` (32 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add a residual checker to the TF Host RM along with new RM APIs. On
  close it scans the DB and checks for any remaining elements. If any
  are found they are logged and a FW message is sent so that FW can
  scrub those specific resource types. A minimal sketch of the scan
  follows below.
- Update the module bind to be aware of the module type, for each of
  the modules.
- Add additional type-to-string utility functions.
- Fix the device naming to comply with TF conventions.
- Update the device unbind order to ensure TCAMs get flushed first.
- Update the close functionality so that the session is closed after
  the device is unbound.
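
For reference, a minimal sketch of the residual scan idea (illustrative
only: it relies on tf_rm_get_inuse_count() and
struct tf_rm_get_inuse_count_parms exactly as added by this patch and on
the usual driver includes; the wrapper function and its name are
hypothetical, the shipping version is tf_rm_check_residuals() below):

/* Illustrative sketch, not driver code: count entries the caller forgot
 * to free so they can be logged and flushed by FW on close.
 */
static int
rm_db_count_residuals(void *rm_db, uint16_t num_entries,
		      uint16_t *residuals, bool *found)
{
	struct tf_rm_get_inuse_count_parms iparms;
	uint16_t count;
	int i, rc;

	iparms.rm_db = rm_db;
	iparms.count = &count;
	*found = false;

	for (i = 0; i < num_entries; i++) {
		iparms.db_index = i;
		rc = tf_rm_get_inuse_count(&iparms);
		if (rc == -ENOTSUP)
			continue;	/* element not RM controlled */
		if (rc)
			return rc;
		residuals[i] = count;
		if (count)
			*found = true;	/* will be logged and flushed */
	}

	return 0;
}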

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device.c     |  53 +++--
 drivers/net/bnxt/tf_core/tf_device.h     |  25 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   1 -
 drivers/net/bnxt/tf_core/tf_identifier.c |  10 +-
 drivers/net/bnxt/tf_core/tf_msg.c        |  67 +++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |   7 +
 drivers/net/bnxt/tf_core/tf_rm_new.c     | 287 +++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_rm_new.h     |  45 +++-
 drivers/net/bnxt/tf_core/tf_session.c    |  58 +++--
 drivers/net/bnxt/tf_core/tf_tbl_type.c   |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.c       |   5 +-
 drivers/net/bnxt/tf_core/tf_tcam.h       |   4 +
 drivers/net/bnxt/tf_core/tf_util.c       |  55 ++++-
 drivers/net/bnxt/tf_core/tf_util.h       |  32 +++
 14 files changed, 561 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index b474e8c25..441d0c678 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -10,7 +10,7 @@
 struct tf;
 
 /* Forward declarations */
-static int dev_unbind_p4(struct tf *tfp);
+static int tf_dev_unbind_p4(struct tf *tfp);
 
 /**
  * Device specific bind function, WH+
@@ -32,10 +32,10 @@ static int dev_unbind_p4(struct tf *tfp);
  *   - (-EINVAL) on parameter or internal failure.
  */
 static int
-dev_bind_p4(struct tf *tfp,
-	    bool shadow_copy,
-	    struct tf_session_resources *resources,
-	    struct tf_dev_info *dev_handle)
+tf_dev_bind_p4(struct tf *tfp,
+	       bool shadow_copy,
+	       struct tf_session_resources *resources,
+	       struct tf_dev_info *dev_handle)
 {
 	int rc;
 	int frc;
@@ -93,7 +93,7 @@ dev_bind_p4(struct tf *tfp,
 
  fail:
 	/* Cleanup of already created modules */
-	frc = dev_unbind_p4(tfp);
+	frc = tf_dev_unbind_p4(tfp);
 	if (frc)
 		return frc;
 
@@ -111,7 +111,7 @@ dev_bind_p4(struct tf *tfp,
  *   - (-EINVAL) on failure.
  */
 static int
-dev_unbind_p4(struct tf *tfp)
+tf_dev_unbind_p4(struct tf *tfp)
 {
 	int rc = 0;
 	bool fail = false;
@@ -119,25 +119,28 @@ dev_unbind_p4(struct tf *tfp)
 	/* Unbind all the support modules. As this is only done on
 	 * close we only report errors as everything has to be cleaned
 	 * up regardless.
+	 *
+	 * In case of residuals, TCAMs are cleaned up first so as to
+	 * invalidate the pipeline in a clean manner.
 	 */
-	rc = tf_ident_unbind(tfp);
+	rc = tf_tcam_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Identifier\n");
+			    "Device unbind failed, TCAM\n");
 		fail = true;
 	}
 
-	rc = tf_tbl_unbind(tfp);
+	rc = tf_ident_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, Table Type\n");
+			    "Device unbind failed, Identifier\n");
 		fail = true;
 	}
 
-	rc = tf_tcam_unbind(tfp);
+	rc = tf_tbl_unbind(tfp);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "Device unbind failed, TCAM\n");
+			    "Device unbind failed, Table Type\n");
 		fail = true;
 	}
 
@@ -148,18 +151,18 @@ dev_unbind_p4(struct tf *tfp)
 }
 
 int
-dev_bind(struct tf *tfp __rte_unused,
-	 enum tf_device_type type,
-	 bool shadow_copy,
-	 struct tf_session_resources *resources,
-	 struct tf_dev_info *dev_handle)
+tf_dev_bind(struct tf *tfp __rte_unused,
+	    enum tf_device_type type,
+	    bool shadow_copy,
+	    struct tf_session_resources *resources,
+	    struct tf_dev_info *dev_handle)
 {
 	switch (type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_bind_p4(tfp,
-				   shadow_copy,
-				   resources,
-				   dev_handle);
+		return tf_dev_bind_p4(tfp,
+				      shadow_copy,
+				      resources,
+				      dev_handle);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
@@ -168,12 +171,12 @@ dev_bind(struct tf *tfp __rte_unused,
 }
 
 int
-dev_unbind(struct tf *tfp,
-	   struct tf_dev_info *dev_handle)
+tf_dev_unbind(struct tf *tfp,
+	      struct tf_dev_info *dev_handle)
 {
 	switch (dev_handle->type) {
 	case TF_DEVICE_TYPE_WH:
-		return dev_unbind_p4(tfp);
+		return tf_dev_unbind_p4(tfp);
 	default:
 		TFP_DRV_LOG(ERR,
 			    "No such device\n");
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c31bf2357..c8feac55d 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -14,6 +14,17 @@
 struct tf;
 struct tf_session;
 
+/**
+ * Device module types, used for logging purposes.
+ */
+enum tf_device_module_type {
+	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	TF_DEVICE_MODULE_TYPE_TABLE,
+	TF_DEVICE_MODULE_TYPE_TCAM,
+	TF_DEVICE_MODULE_TYPE_EM,
+	TF_DEVICE_MODULE_TYPE_MAX
+};
+
 /**
  * The Device module provides a general device template. A supported
  * device type should implement one or more of the listed function
@@ -60,11 +71,11 @@ struct tf_dev_info {
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_bind(struct tf *tfp,
-	     enum tf_device_type type,
-	     bool shadow_copy,
-	     struct tf_session_resources *resources,
-	     struct tf_dev_info *dev_handle);
+int tf_dev_bind(struct tf *tfp,
+		enum tf_device_type type,
+		bool shadow_copy,
+		struct tf_session_resources *resources,
+		struct tf_dev_info *dev_handle);
 
 /**
  * Device release handles cleanup of the device specific information.
@@ -80,8 +91,8 @@ int dev_bind(struct tf *tfp,
  *   - (-EINVAL) parameter failure.
  *   - (-ENODEV) no such device supported.
  */
-int dev_unbind(struct tf *tfp,
-	       struct tf_dev_info *dev_handle);
+int tf_dev_unbind(struct tf *tfp,
+		  struct tf_dev_info *dev_handle);
 
 /**
  * Truflow device specific function hooks structure
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 235d81f96..411e21637 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -77,5 +77,4 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
-
 #endif /* _TF_DEVICE_P4_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index ee07a6aea..b197bb271 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -39,12 +39,12 @@ tf_ident_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_IDENTIFIER;
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->ident_cnt[i].cnt;
 		db_cfg.rm_db = &ident_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
@@ -86,8 +86,10 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 		fparms.dir = i;
 		fparms.rm_db = ident_db[i];
 		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "rm free failed on unbind\n");
+		}
 
 		ident_db[i] = NULL;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index a2e3840f0..c015b0ce2 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1110,6 +1110,69 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	return rc;
 }
 
+int
+tf_msg_session_resc_flush(struct tf *tfp,
+			  enum tf_dir dir,
+			  uint16_t size,
+			  struct tf_rm_resc_entry *resv)
+{
+	int rc;
+	int i;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_session_resc_flush_input req = { 0 };
+	struct hwrm_tf_session_resc_flush_output resp = { 0 };
+	uint8_t fw_session_id;
+	struct tf_msg_dma_buf resv_buf = { 0 };
+	struct tf_rm_resc_entry *resv_data;
+	int dma_size;
+
+	TF_CHECK_PARMS2(tfp, resv);
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Prepare DMA buffers */
+	dma_size = size * sizeof(struct tf_rm_resc_entry);
+	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
+	if (rc)
+		return rc;
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.flush_size = size;
+
+	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
+	for (i = 0; i < size; i++) {
+		resv_data[i].type = tfp_cpu_to_le_32(resv[i].type);
+		resv_data[i].start = tfp_cpu_to_le_16(resv[i].start);
+		resv_data[i].stride = tfp_cpu_to_le_16(resv[i].stride);
+	}
+
+	req.flush_addr = tfp_cpu_to_le_64(resv_buf.pa_addr);
+
+	parms.tf_type = HWRM_TF_SESSION_RESC_FLUSH;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp, &parms);
+	if (rc)
+		return rc;
+
+	tf_msg_free_dma_buf(&resv_buf);
+
+	return rc;
+}
+
 /**
  * Sends EM mem register request to Firmware
  */
@@ -1512,9 +1575,7 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	uint8_t *data = NULL;
 	int data_size = 0;
 
-	rc = tf_tcam_tbl_2_hwrm(parms->type, &req.type);
-	if (rc != 0)
-		return rc;
+	req.type = parms->type;
 
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index fb635f6dc..1ff1044e8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -181,6 +181,13 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 			      struct tf_rm_resc_req_entry *request,
 			      struct tf_rm_resc_entry *resv);
 
+/**
+ * Sends session resource flush request to TF Firmware
+ */
+int tf_msg_session_resc_flush(struct tf *tfp,
+			      enum tf_dir dir,
+			      uint16_t size,
+			      struct tf_rm_resc_entry *resv);
 /**
  * Sends EM internal insert request to Firmware
  */
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 6abf79aa1..02b4b5c8f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -60,6 +60,11 @@ struct tf_rm_new_db {
 	 */
 	enum tf_dir dir;
 
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+
 	/**
 	 * The DB consists of an array of elements
 	 */
@@ -167,6 +172,178 @@ tf_rm_adjust_index(struct tf_rm_element *db,
 	return rc;
 }
 
+/**
+ * Logs an array of found residual entries to the console.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] type
+ *   Type of Device Module
+ *
+ * [in] count
+ *   Number of entries in the residual array
+ *
+ * [in] residuals
+ *   Pointer to an array of residual entries. The array is indexed the
+ *   same as the DB on which this function is used. Each entry holds the
+ *   residual count for that DB entry.
+ */
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
+{
+	int i;
+
+	/* Walk the residual array and log any types that weren't
+	 * cleaned up to the console.
+	 */
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
+	}
+}
+
+/**
+ * Performs a check of the passed in DB for any lingering elements. If
+ * a resource type was found to not have been cleaned up by the caller
+ * then its residual values are recorded, logged and passed back in an
+ * allocate reservation array that the caller can pass to the FW for
+ * cleanup.
+ *
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
+ *
+ * [in/out] resv
+ *   Pointer to the reservation array pointer. The reservation array is
+ *   allocated after the residual scan and holds any found residual
+ *   entries. Thus it can be smaller than the DB that the check was
+ *   performed on. Array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating if residual was present in the
+ *   DB
+ *
+ * Returns:
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
+ */
+static int
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
+{
+	int rc;
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc)
+		return rc;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
+	}
+
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
+		if (rc)
+			return rc;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
+		}
+		*resv_size = found;
+	}
+
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
+
 int
 tf_rm_create_db(struct tf *tfp,
 		struct tf_rm_create_db_parms *parms)
@@ -373,6 +550,7 @@ tf_rm_create_db(struct tf *tfp,
 
 	rm_db->num_entries = i;
 	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
 	tfp_free((void *)req);
@@ -392,20 +570,69 @@ tf_rm_create_db(struct tf *tfp,
 }
 
 int
-tf_rm_free_db(struct tf *tfp __rte_unused,
+tf_rm_free_db(struct tf *tfp,
 	      struct tf_rm_free_db_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
 	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
 
-	TF_CHECK_PARMS1(parms);
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	/* Traverse the DB and clear each pool.
-	 * NOTE:
-	 *   Firmware is not cleared. It will be cleared on close only.
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will cleanup each of
+	 * its support modules, i.e. Identifier, thus we're ending up
+	 * here to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in the 'cleanup checking' the DB is checked for any
+	 * remaining elements and logged if found to be the case.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previous allocated elements to the RM and then close the
+	 * HCAPI RM registration. That then saves several 'free' msgs
+	 * from being required.
 	 */
+
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
+	 */
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to cleanup so we can only
+		 * log that FW failed.
+		 */
+		if (rc)
+			TFP_DRV_LOG(ERR,
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
+	}
+
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -417,7 +644,7 @@ tf_rm_free_db(struct tf *tfp __rte_unused,
 int
 tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	int id;
 	uint32_t index;
 	struct tf_rm_new_db *rm_db;
@@ -446,11 +673,12 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 
 	id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
+		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
 			    "%s: Allocation failed, rc:%s\n",
 			    tf_dir_2_str(rm_db->dir),
 			    strerror(-rc));
-		return -ENOMEM;
+		return rc;
 	}
 
 	/* Adjust for any non zero start value */
@@ -475,7 +703,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 int
 tf_rm_free(struct tf_rm_free_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -521,7 +749,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 int
 tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = 0;
+	int rc;
 	uint32_t adj_index;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
@@ -565,7 +793,6 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 int
 tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -579,15 +806,16 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
 		return -ENOTSUP;
 
-	parms->info = &rm_db->db[parms->db_index].alloc;
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
-	return rc;
+	return 0;
 }
 
 int
 tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
 	struct tf_rm_new_db *rm_db;
 	enum tf_rm_elem_cfg_type cfg_type;
 
@@ -603,5 +831,36 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
 
+	return 0;
+}
+
+int
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
+{
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail silently (no logging), if the pool is not valid there
+	 * was no elements allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
+	}
+
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
 	return rc;
+
 }
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index ebf38c411..a40296ed2 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -8,6 +8,7 @@
 
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
 
@@ -57,9 +58,9 @@ struct tf_rm_new_entry {
 enum tf_rm_elem_cfg_type {
 	/** No configuration */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled' */
+	/** HCAPI 'controlled', uses a Pool for internal storage */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled' */
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
 	TF_RM_ELEM_CFG_PRIVATE,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
@@ -123,7 +124,11 @@ struct tf_rm_alloc_info {
  */
 struct tf_rm_create_db_parms {
 	/**
-	 * [in] Receive or transmit direction
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
 	 */
 	enum tf_dir dir;
 	/**
@@ -263,6 +268,25 @@ struct tf_rm_get_hcapi_parms {
 	uint16_t *hcapi_type;
 };
 
+/**
+ * Get InUse count parameters for single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
 /**
  * @page rm Resource Manager
  *
@@ -279,6 +303,8 @@ struct tf_rm_get_hcapi_parms {
  * @ref tf_rm_get_info
  *
  * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
 
 /**
@@ -396,4 +422,17 @@ int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
  */
 int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
+/**
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type inuse count.
+ *
+ * [in] parms
+ *   Pointer to get inuse parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
+
 #endif /* TF_RM_NEW_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 3a602618c..b08d06306 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -91,11 +91,11 @@ tf_session_open_session(struct tf *tfp,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
-	rc = dev_bind(tfp,
-		      parms->open_cfg->device_type,
-		      session->shadow_copy,
-		      &parms->open_cfg->resources,
-		      &session->dev);
+	rc = tf_dev_bind(tfp,
+			 parms->open_cfg->device_type,
+			 session->shadow_copy,
+			 &parms->open_cfg->resources,
+			 &session->dev);
 	/* Logging handled by dev_bind */
 	if (rc)
 		return rc;
@@ -151,6 +151,8 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	tfs->ref_count--;
+
 	/* Record the session we're closing so the caller knows the
 	 * details.
 	 */
@@ -164,6 +166,32 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
+	if (tfs->ref_count > 0) {
+		/* In case we're attached only the session client gets
+		 * closed.
+		 */
+		rc = tf_msg_session_close(tfp);
+		if (rc) {
+			/* Log error */
+			TFP_DRV_LOG(ERR,
+				    "FW Session close failed, rc:%s\n",
+				    strerror(-rc));
+		}
+
+		return 0;
+	}
+
+	/* Final cleanup as we're last user of the session */
+
+	/* Unbind the device */
+	rc = tf_dev_unbind(tfp, tfd);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
 	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
@@ -173,23 +201,9 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	tfs->ref_count--;
-
-	/* Final cleanup as we're last user of the session */
-	if (tfs->ref_count == 0) {
-		/* Unbind the device */
-		rc = dev_unbind(tfp, tfd);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "Device unbind failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		tfp_free(tfp->session->core_data);
-		tfp_free(tfp->session);
-		tfp->session = NULL;
-	}
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index 51f8f0740..bdf7d2089 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -51,11 +51,12 @@ tf_tbl_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
 		db_cfg.rm_db = &tbl_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index e0fac31f2..2f4441de8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -54,11 +54,12 @@ tf_tcam_bind(struct tf *tfp,
 	}
 
 	db_cfg.num_elements = parms->num_elements;
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
 		db_cfg.alloc_cnt = parms->resources->tcam_cnt[i].cnt;
 		db_cfg.rm_db = &tcam_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 67c3bcb49..5090dfd9f 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -146,6 +146,10 @@ struct tf_tcam_set_parms {
 	 * [in] Type of object to set
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
 	/**
 	 * [in] Entry index to write to
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index a9010543d..16c43eb67 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -7,8 +7,8 @@
 
 #include "tf_util.h"
 
-const char
-*tf_dir_2_str(enum tf_dir dir)
+const char *
+tf_dir_2_str(enum tf_dir dir)
 {
 	switch (dir) {
 	case TF_DIR_RX:
@@ -20,8 +20,8 @@ const char
 	}
 }
 
-const char
-*tf_ident_2_str(enum tf_identifier_type id_type)
+const char *
+tf_ident_2_str(enum tf_identifier_type id_type)
 {
 	switch (id_type) {
 	case TF_IDENT_TYPE_L2_CTXT:
@@ -39,8 +39,8 @@ const char
 	}
 }
 
-const char
-*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+const char *
+tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
 {
 	switch (tcam_type) {
 	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
@@ -60,8 +60,8 @@ const char
 	}
 }
 
-const char
-*tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
+const char *
+tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 {
 	switch (tbl_type) {
 	case TF_TBL_TYPE_FULL_ACT_RECORD:
@@ -131,8 +131,8 @@ const char
 	}
 }
 
-const char
-*tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
+const char *
+tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type)
 {
 	switch (em_type) {
 	case TF_EM_TBL_TYPE_EM_RECORD:
@@ -143,3 +143,38 @@ const char
 		return "Invalid EM type";
 	}
 }
+
+const char *
+tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
+				    uint16_t mod_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return tf_ident_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tcam_tbl_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return tf_em_tbl_type_2_str(mod_type);
+	default:
+		return "Invalid Device Module type";
+	}
+}
+
+const char *
+tf_device_module_type_2_str(enum tf_device_module_type dm_type)
+{
+	switch (dm_type) {
+	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
+		return "Identifier";
+	case TF_DEVICE_MODULE_TYPE_TABLE:
+		return "Table";
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return "TCAM";
+	case TF_DEVICE_MODULE_TYPE_EM:
+		return "EM";
+	default:
+		return "Invalid Device Module type";
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/tf_util.h b/drivers/net/bnxt/tf_core/tf_util.h
index ad8edaf30..c97e2a66a 100644
--- a/drivers/net/bnxt/tf_core/tf_util.h
+++ b/drivers/net/bnxt/tf_core/tf_util.h
@@ -7,6 +7,7 @@
 #define _TF_UTIL_H_
 
 #include "tf_core.h"
+#include "tf_device.h"
 
 /**
  * Helper function converting direction to text string
@@ -63,4 +64,35 @@ const char *tf_tbl_type_2_str(enum tf_tbl_type tbl_type);
  */
 const char *tf_em_tbl_type_2_str(enum tf_em_tbl_type em_type);
 
+/**
+ * Helper function converting device module type and module type to
+ * text string.
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * [in] mod_type
+ *   Module specific type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the module subtype
+ */
+const char *tf_device_module_type_subtype_2_str
+					(enum tf_device_module_type dm_type,
+					 uint16_t mod_type);
+
+/**
+ * Helper function converting device module type to text string
+ *
+ * [in] dm_type
+ *   Device Module type
+ *
+ * Returns:
+ *   Pointer to a char string holding the string for the device module type
+ */
+const char *tf_device_module_type_2_str(enum tf_device_module_type dm_type);
+
 #endif /* _TF_UTIL_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 21/51] net/bnxt: support two level priority for TCAMs
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (19 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 22/51] net/bnxt: use table scope for EM and TCAM lookup Ajit Khaparde
                           ` (31 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

Allow TCAM indexes to be allocated from the top or the bottom of the
table. If the priority is 0, allocate from the lowest TCAM indexes,
i.e. from the top. For any other value, allocate from the highest TCAM
indexes, i.e. from the bottom. A minimal caller sketch follows below.
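
A minimal sketch of the selection logic (ba_alloc(), ba_alloc_reverse()
and BA_FAIL come from bitalloc.h as changed by this patch; the wrapper
function and its name are illustrative only, not driver code):

/* Illustrative wrapper: priority 0 takes the lowest free index (top of
 * the TCAM), any other priority takes the highest free index (bottom).
 */
static int
tcam_index_alloc(struct bitalloc *pool, uint32_t priority)
{
	int id;

	if (priority)
		id = ba_alloc_reverse(pool);	/* highest free index */
	else
		id = ba_alloc(pool);		/* lowest free index */

	if (id == BA_FAIL)
		return -ENOMEM;			/* pool exhausted */

	return id;
}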

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/bitalloc.c  | 107 +++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/bitalloc.h  |   5 ++
 drivers/net/bnxt/tf_core/tf_rm_new.c |   9 ++-
 drivers/net/bnxt/tf_core/tf_rm_new.h |   8 ++
 drivers/net/bnxt/tf_core/tf_tcam.c   |   1 +
 5 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
index fb4df9a19..918cabf19 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.c
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -7,6 +7,40 @@
 
 #define BITALLOC_MAX_LEVELS 6
 
+
+/* Finds the last bit set plus 1, equivalent to gcc __builtin_fls */
+static int
+ba_fls(bitalloc_word_t v)
+{
+	int c = 32;
+
+	if (!v)
+		return 0;
+
+	if (!(v & 0xFFFF0000u)) {
+		v <<= 16;
+		c -= 16;
+	}
+	if (!(v & 0xFF000000u)) {
+		v <<= 8;
+		c -= 8;
+	}
+	if (!(v & 0xF0000000u)) {
+		v <<= 4;
+		c -= 4;
+	}
+	if (!(v & 0xC0000000u)) {
+		v <<= 2;
+		c -= 2;
+	}
+	if (!(v & 0x80000000u)) {
+		v <<= 1;
+		c -= 1;
+	}
+
+	return c;
+}
+
 /* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */
 static int
 ba_ffs(bitalloc_word_t v)
@@ -120,6 +154,79 @@ ba_alloc(struct bitalloc *pool)
 	return ba_alloc_helper(pool, 0, 1, 32, 0, &clear);
 }
 
+/**
+ * Helper function to allocate an entry from the highest available index
+ *
+ * Searches the pool from the highest index down for a free entry.
+ *
+ * [in] pool
+ *   Pointer to the resource pool
+ *
+ * [in] offset
+ *   Offset of the storage in the pool
+ *
+ * [in] words
+ *   Number of words in this level
+ *
+ * [in] size
+ *   Number of entries in this level
+ *
+ * [in] index
+ *   Index of words that has the entry
+ *
+ * [in] clear
+ *   Indicates if a bit needs to be cleared because the entry was allocated
+ *
+ * Returns:
+ *     Index of the allocated entry - Success
+ *    -1 - Failure
+ */
+static int
+ba_alloc_reverse_helper(struct bitalloc *pool,
+			int offset,
+			int words,
+			unsigned int size,
+			int index,
+			int *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int loc = ba_fls(storage[index]);
+	int r;
+
+	if (loc == 0)
+		return -1;
+
+	loc--;
+
+	if (pool->size > size) {
+		r = ba_alloc_reverse_helper(pool,
+					    offset + words + 1,
+					    storage[words],
+					    size * 32,
+					    index * 32 + loc,
+					    clear);
+	} else {
+		r = index * 32 + loc;
+		*clear = 1;
+		pool->free_count--;
+	}
+
+	if (*clear) {
+		storage[index] &= ~(1 << loc);
+		*clear = (storage[index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc_reverse(struct bitalloc *pool)
+{
+	int clear = 0;
+
+	return ba_alloc_reverse_helper(pool, 0, 1, 32, 0, &clear);
+}
+
 static int
 ba_alloc_index_helper(struct bitalloc *pool,
 		      int              offset,
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
index 563c8531a..2825bb37e 100644
--- a/drivers/net/bnxt/tf_core/bitalloc.h
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -72,6 +72,11 @@ int ba_init(struct bitalloc *pool, int size);
 int ba_alloc(struct bitalloc *pool);
 int ba_alloc_index(struct bitalloc *pool, int index);
 
+/**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc_reverse(struct bitalloc *pool);
+
 /**
  * Query a particular index in a pool to check if its in use.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index 02b4b5c8f..de8f11955 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -671,7 +671,14 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 		return rc;
 	}
 
-	id = ba_alloc(rm_db->db[parms->db_index].pool);
+	/*
+	 * priority 0:  allocate from the top of the TCAM, i.e. the lowest index
+	 * priority !0: allocate from the bottom of the TCAM, i.e. the highest index
+	 */
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
 	if (id == BA_FAIL) {
 		rc = -ENOMEM;
 		TFP_DRV_LOG(ERR,
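
The effect of the priority flag is easiest to see on a single-word free map.
The sketch below is an illustration only (the real pool in bitalloc.c is
multi-level); it allocates from opposite ends the way ba_alloc() and
ba_alloc_reverse() do:

#include <assert.h>
#include <stdint.h>

/* Toy one-word pool: a set bit means the index is free, as in bitalloc.c. */
static uint32_t pool = 0xFFFFFFFFu;

static int alloc_lowest(void)   /* ~ ba_alloc(): priority 0 */
{
	for (int i = 0; i < 32; i++)
		if (pool & (1u << i)) { pool &= ~(1u << i); return i; }
	return -1;
}

static int alloc_highest(void)  /* ~ ba_alloc_reverse(): priority != 0 */
{
	for (int i = 31; i >= 0; i--)
		if (pool & (1u << i)) { pool &= ~(1u << i); return i; }
	return -1;
}

int main(void)
{
	assert(alloc_lowest() == 0);    /* top of the TCAM    */
	assert(alloc_highest() == 31);  /* bottom of the TCAM */
	assert(alloc_lowest() == 1);
	assert(alloc_highest() == 30);
	return 0;
}
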
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
index a40296ed2..5cb68892a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.h
@@ -185,6 +185,14 @@ struct tf_rm_allocate_parms {
 	 * i.e. Full Action Record offsets.
 	 */
 	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the priority of the entry
+	 * priority 0:  allocate from the top of the TCAM (from index 0,
+	 *              i.e. the lowest available index)
+	 * priority !0: allocate from the bottom of the TCAM (from the
+	 *              highest available index)
+	 */
+	uint32_t priority;
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 2f4441de8..260fb15a6 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -157,6 +157,7 @@ tf_tcam_alloc(struct tf *tfp,
 	/* Allocate requested element */
 	aparms.rm_db = tcam_db[parms->dir];
 	aparms.db_index = parms->type;
+	aparms.priority = parms->priority;
 	aparms.index = (uint32_t *)&parms->idx;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 22/51] net/bnxt: use table scope for EM and TCAM lookup
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (20 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 23/51] net/bnxt: update table get to use new design Ajit Khaparde
                           ` (30 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Add support for table scope within the EM module
- Add support for host and system memory
- Update TCAM set/free
- Replace the TF device type with the HCAPI RM type
- Update TCAM set and free to use the HCAPI RM type

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build                  |    5 +-
 drivers/net/bnxt/tf_core/Makefile             |    5 +-
 drivers/net/bnxt/tf_core/cfa_resource_types.h |    8 +-
 drivers/net/bnxt/tf_core/hwrm_tf.h            |  864 +-----------
 drivers/net/bnxt/tf_core/tf_core.c            |  100 +-
 drivers/net/bnxt/tf_core/tf_device.c          |   50 +-
 drivers/net/bnxt/tf_core/tf_device.h          |   86 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c       |   14 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h       |   20 +-
 drivers/net/bnxt/tf_core/tf_em.c              |  360 -----
 drivers/net/bnxt/tf_core/tf_em.h              |  310 +++-
 drivers/net/bnxt/tf_core/tf_em_common.c       |  281 ++++
 drivers/net/bnxt/tf_core/tf_em_common.h       |  107 ++
 drivers/net/bnxt/tf_core/tf_em_host.c         | 1146 +++++++++++++++
 drivers/net/bnxt/tf_core/tf_em_internal.c     |  312 +++++
 drivers/net/bnxt/tf_core/tf_em_system.c       |  118 ++
 drivers/net/bnxt/tf_core/tf_msg.c             | 1248 ++++-------------
 drivers/net/bnxt/tf_core/tf_msg.h             |  233 +--
 drivers/net/bnxt/tf_core/tf_rm.c              |   89 +-
 drivers/net/bnxt/tf_core/tf_rm_new.c          |   40 +-
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1134 ---------------
 drivers/net/bnxt/tf_core/tf_tbl_type.c        |   39 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |   25 +-
 drivers/net/bnxt/tf_core/tf_tcam.h            |    4 +
 drivers/net/bnxt/tf_core/tf_util.c            |    4 +-
 25 files changed, 3030 insertions(+), 3572 deletions(-)
 delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 33e6ebd66..35038dc8b 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -28,7 +28,10 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_msg.c',
 	'tf_core/rand.c',
 	'tf_core/stack.c',
-	'tf_core/tf_em.c',
+	'tf_core/tf_em_common.c',
+	'tf_core/tf_em_host.c',
+	'tf_core/tf_em_internal.c',
+	'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 5ed32f12a..f186741e4 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -12,8 +12,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 058d8cc88..6e79facec 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -202,7 +202,9 @@
 #define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
 /* VEB TCAM */
 #define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_VEB_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
+#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
@@ -269,7 +271,9 @@
 #define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
 /* Source Property TCAM */
 #define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_SP_TCAM
+/* Table Scope */
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
+#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 1e78296c6..26836e488 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
+ * Copyright(c) 2019 Broadcom
  * All rights reserved.
  */
 #ifndef _HWRM_TF_H_
@@ -13,20 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_SESSION_ATTACH = 712,
-	HWRM_TFT_SESSION_HW_RESC_QCAPS = 721,
-	HWRM_TFT_SESSION_HW_RESC_ALLOC = 722,
-	HWRM_TFT_SESSION_HW_RESC_FREE = 723,
-	HWRM_TFT_SESSION_HW_RESC_FLUSH = 724,
-	HWRM_TFT_SESSION_SRAM_RESC_QCAPS = 725,
-	HWRM_TFT_SESSION_SRAM_RESC_ALLOC = 726,
-	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
-	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
-	HWRM_TFT_TBL_SCOPE_CFG = 731,
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
-	HWRM_TFT_TBL_TYPE_SET = 823,
-	HWRM_TFT_TBL_TYPE_GET = 824,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
 } tf_subtype_t;
@@ -66,858 +54,8 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
-struct tf_session_attach_input;
-struct tf_session_hw_resc_qcaps_input;
-struct tf_session_hw_resc_qcaps_output;
-struct tf_session_hw_resc_alloc_input;
-struct tf_session_hw_resc_alloc_output;
-struct tf_session_hw_resc_free_input;
-struct tf_session_hw_resc_flush_input;
-struct tf_session_sram_resc_qcaps_input;
-struct tf_session_sram_resc_qcaps_output;
-struct tf_session_sram_resc_alloc_input;
-struct tf_session_sram_resc_alloc_output;
-struct tf_session_sram_resc_free_input;
-struct tf_session_sram_resc_flush_input;
-struct tf_tbl_type_set_input;
-struct tf_tbl_type_get_input;
-struct tf_tbl_type_get_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
-/* Input params for session attach */
-typedef struct tf_session_attach_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* Session Name */
-	char				 session_name[TF_SESSION_NAME_MAX];
-} tf_session_attach_input_t, *ptf_session_attach_input_t;
-
-/* Input params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_hw_resc_qcaps_input_t, *ptf_session_hw_resc_qcaps_input_t;
-
-/* Output params for session resource HW qcaps */
-typedef struct tf_session_hw_resc_qcaps_output {
-	/* Control Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Unused */
-	uint8_t			  unused[4];
-	/* Minimum guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_min;
-	/* Maximum non-guaranteed number of L2 Ctx */
-	uint16_t			 l2_ctx_tcam_entries_max;
-	/* Minimum guaranteed number of profile functions */
-	uint16_t			 prof_func_min;
-	/* Maximum non-guaranteed number of profile functions */
-	uint16_t			 prof_func_max;
-	/* Minimum guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_min;
-	/* Maximum non-guaranteed number of profile TCAM entries */
-	uint16_t			 prof_tcam_entries_max;
-	/* Minimum guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_min;
-	/* Maximum non-guaranteed number of EM profile ID */
-	uint16_t			 em_prof_id_max;
-	/* Minimum guaranteed number of EM records entries */
-	uint16_t			 em_record_entries_min;
-	/* Maximum non-guaranteed number of EM record entries */
-	uint16_t			 em_record_entries_max;
-	/* Minimum guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_min;
-	/* Maximum non-guaranteed number of WC TCAM profile ID */
-	uint16_t			 wc_tcam_prof_id_max;
-	/* Minimum guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_min;
-	/* Maximum non-guaranteed number of WC TCAM entries */
-	uint16_t			 wc_tcam_entries_max;
-	/* Minimum guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_min;
-	/* Maximum non-guaranteed number of meter profiles */
-	uint16_t			 meter_profiles_max;
-	/* Minimum guaranteed number of meter instances */
-	uint16_t			 meter_inst_min;
-	/* Maximum non-guaranteed number of meter instances */
-	uint16_t			 meter_inst_max;
-	/* Minimum guaranteed number of mirrors */
-	uint16_t			 mirrors_min;
-	/* Maximum non-guaranteed number of mirrors */
-	uint16_t			 mirrors_max;
-	/* Minimum guaranteed number of UPAR */
-	uint16_t			 upar_min;
-	/* Maximum non-guaranteed number of UPAR */
-	uint16_t			 upar_max;
-	/* Minimum guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_min;
-	/* Maximum non-guaranteed number of SP TCAM entries */
-	uint16_t			 sp_tcam_entries_max;
-	/* Minimum guaranteed number of L2 Functions */
-	uint16_t			 l2_func_min;
-	/* Maximum non-guaranteed number of L2 Functions */
-	uint16_t			 l2_func_max;
-	/* Minimum guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_min;
-	/* Maximum non-guaranteed number of flexible key templates */
-	uint16_t			 flex_key_templ_max;
-	/* Minimum guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_min;
-	/* Maximum non-guaranteed number of table Scopes */
-	uint16_t			 tbl_scope_max;
-	/* Minimum guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_min;
-	/* Maximum non-guaranteed number of epoch0 entries */
-	uint16_t			 epoch0_entries_max;
-	/* Minimum guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_min;
-	/* Maximum non-guaranteed number of epoch1 entries */
-	uint16_t			 epoch1_entries_max;
-	/* Minimum guaranteed number of metadata */
-	uint16_t			 metadata_min;
-	/* Maximum non-guaranteed number of metadata */
-	uint16_t			 metadata_max;
-	/* Minimum guaranteed number of CT states */
-	uint16_t			 ct_state_min;
-	/* Maximum non-guaranteed number of CT states */
-	uint16_t			 ct_state_max;
-	/* Minimum guaranteed number of range profiles */
-	uint16_t			 range_prof_min;
-	/* Maximum non-guaranteed number range profiles */
-	uint16_t			 range_prof_max;
-	/* Minimum guaranteed number of range entries */
-	uint16_t			 range_entries_min;
-	/* Maximum non-guaranteed number of range entries */
-	uint16_t			 range_entries_max;
-	/* Minimum guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_min;
-	/* Maximum non-guaranteed number of LAG table entries */
-	uint16_t			 lag_tbl_entries_max;
-} tf_session_hw_resc_qcaps_output_t, *ptf_session_hw_resc_qcaps_output_t;
-
-/* Input params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of L2 CTX TCAM entries to be allocated */
-	uint16_t			 num_l2_ctx_tcam_entries;
-	/* Number of profile functions to be allocated */
-	uint16_t			 num_prof_func_entries;
-	/* Number of profile TCAM entries to be allocated */
-	uint16_t			 num_prof_tcam_entries;
-	/* Number of EM profile ids to be allocated */
-	uint16_t			 num_em_prof_id;
-	/* Number of EM records entries to be allocated */
-	uint16_t			 num_em_record_entries;
-	/* Number of WC profiles ids to be allocated */
-	uint16_t			 num_wc_tcam_prof_id;
-	/* Number of WC TCAM entries to be allocated */
-	uint16_t			 num_wc_tcam_entries;
-	/* Number of meter profiles to be allocated */
-	uint16_t			 num_meter_profiles;
-	/* Number of meter instances to be allocated */
-	uint16_t			 num_meter_inst;
-	/* Number of mirrors to be allocated */
-	uint16_t			 num_mirrors;
-	/* Number of UPAR to be allocated */
-	uint16_t			 num_upar;
-	/* Number of SP TCAM entries to be allocated */
-	uint16_t			 num_sp_tcam_entries;
-	/* Number of L2 functions to be allocated */
-	uint16_t			 num_l2_func;
-	/* Number of flexible key templates to be allocated */
-	uint16_t			 num_flex_key_templ;
-	/* Number of table scopes to be allocated */
-	uint16_t			 num_tbl_scope;
-	/* Number of epoch0 entries to be allocated */
-	uint16_t			 num_epoch0_entries;
-	/* Number of epoch1 entries to be allocated */
-	uint16_t			 num_epoch1_entries;
-	/* Number of metadata to be allocated */
-	uint16_t			 num_metadata;
-	/* Number of CT states to be allocated */
-	uint16_t			 num_ct_state;
-	/* Number of range profiles to be allocated */
-	uint16_t			 num_range_prof;
-	/* Number of range Entries to be allocated */
-	uint16_t			 num_range_entries;
-	/* Number of LAG table entries to be allocated */
-	uint16_t			 num_lag_tbl_entries;
-} tf_session_hw_resc_alloc_input_t, *ptf_session_hw_resc_alloc_input_t;
-
-/* Output params for session resource HW alloc */
-typedef struct tf_session_hw_resc_alloc_output {
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_alloc_output_t, *ptf_session_hw_resc_alloc_output_t;
-
-/* Input params for session resource HW free */
-typedef struct tf_session_hw_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_free_input_t, *ptf_session_hw_resc_free_input_t;
-
-/* Input params for session resource HW flush */
-typedef struct tf_session_hw_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of L2 CTX TCAM entries allocated to the session */
-	uint16_t			 l2_ctx_tcam_entries_start;
-	/* Number of L2 CTX TCAM entries allocated */
-	uint16_t			 l2_ctx_tcam_entries_stride;
-	/* Starting index of profile functions allocated to the session */
-	uint16_t			 prof_func_start;
-	/* Number of profile functions allocated */
-	uint16_t			 prof_func_stride;
-	/* Starting index of profile TCAM entries allocated to the session */
-	uint16_t			 prof_tcam_entries_start;
-	/* Number of profile TCAM entries allocated */
-	uint16_t			 prof_tcam_entries_stride;
-	/* Starting index of EM profile ids allocated to the session */
-	uint16_t			 em_prof_id_start;
-	/* Number of EM profile ids allocated */
-	uint16_t			 em_prof_id_stride;
-	/* Starting index of EM record entries allocated to the session */
-	uint16_t			 em_record_entries_start;
-	/* Number of EM record entries allocated */
-	uint16_t			 em_record_entries_stride;
-	/* Starting index of WC TCAM profiles ids allocated to the session */
-	uint16_t			 wc_tcam_prof_id_start;
-	/* Number of WC TCAM profile ids allocated */
-	uint16_t			 wc_tcam_prof_id_stride;
-	/* Starting index of WC TCAM entries allocated to the session */
-	uint16_t			 wc_tcam_entries_start;
-	/* Number of WC TCAM allocated */
-	uint16_t			 wc_tcam_entries_stride;
-	/* Starting index of meter profiles allocated to the session */
-	uint16_t			 meter_profiles_start;
-	/* Number of meter profiles allocated */
-	uint16_t			 meter_profiles_stride;
-	/* Starting index of meter instance allocated to the session */
-	uint16_t			 meter_inst_start;
-	/* Number of meter instance allocated */
-	uint16_t			 meter_inst_stride;
-	/* Starting index of mirrors allocated to the session */
-	uint16_t			 mirrors_start;
-	/* Number of mirrors allocated */
-	uint16_t			 mirrors_stride;
-	/* Starting index of UPAR allocated to the session */
-	uint16_t			 upar_start;
-	/* Number of UPAR allocated */
-	uint16_t			 upar_stride;
-	/* Starting index of SP TCAM entries allocated to the session */
-	uint16_t			 sp_tcam_entries_start;
-	/* Number of SP TCAM entries allocated */
-	uint16_t			 sp_tcam_entries_stride;
-	/* Starting index of L2 functions allocated to the session */
-	uint16_t			 l2_func_start;
-	/* Number of L2 functions allocated */
-	uint16_t			 l2_func_stride;
-	/* Starting index of flexible key templates allocated to the session */
-	uint16_t			 flex_key_templ_start;
-	/* Number of flexible key templates allocated */
-	uint16_t			 flex_key_templ_stride;
-	/* Starting index of table scopes allocated to the session */
-	uint16_t			 tbl_scope_start;
-	/* Number of table scopes allocated */
-	uint16_t			 tbl_scope_stride;
-	/* Starting index of epoch0 entries allocated to the session */
-	uint16_t			 epoch0_entries_start;
-	/* Number of epoch0 entries allocated */
-	uint16_t			 epoch0_entries_stride;
-	/* Starting index of epoch1 entries allocated to the session */
-	uint16_t			 epoch1_entries_start;
-	/* Number of epoch1 entries allocated */
-	uint16_t			 epoch1_entries_stride;
-	/* Starting index of metadata allocated to the session */
-	uint16_t			 metadata_start;
-	/* Number of metadata allocated */
-	uint16_t			 metadata_stride;
-	/* Starting index of CT states allocated to the session */
-	uint16_t			 ct_state_start;
-	/* Number of CT states allocated */
-	uint16_t			 ct_state_stride;
-	/* Starting index of range profiles allocated to the session */
-	uint16_t			 range_prof_start;
-	/* Number range profiles allocated */
-	uint16_t			 range_prof_stride;
-	/* Starting index of range enntries allocated to the session */
-	uint16_t			 range_entries_start;
-	/* Number of range entries allocated */
-	uint16_t			 range_entries_stride;
-	/* Starting index of LAG table entries allocated to the session */
-	uint16_t			 lag_tbl_entries_start;
-	/* Number of LAG table entries allocated */
-	uint16_t			 lag_tbl_entries_stride;
-} tf_session_hw_resc_flush_input_t, *ptf_session_hw_resc_flush_input_t;
-
-/* Input params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
-} tf_session_sram_resc_qcaps_input_t, *ptf_session_sram_resc_qcaps_input_t;
-
-/* Output params for session resource SRAM qcaps */
-typedef struct tf_session_sram_resc_qcaps_output {
-	/* Flags */
-	uint32_t			 flags;
-	/* When set to 0, indicates Static partitioning */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
-	/* When set to 1, indicates Strategy 1 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
-	/* When set to 1, indicates Strategy 2 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
-	/* When set to 1, indicates Strategy 3 */
-#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
-	/* Minimum guaranteed number of Full Action */
-	uint16_t			 full_action_min;
-	/* Maximum non-guaranteed number of Full Action */
-	uint16_t			 full_action_max;
-	/* Minimum guaranteed number of MCG */
-	uint16_t			 mcg_min;
-	/* Maximum non-guaranteed number of MCG */
-	uint16_t			 mcg_max;
-	/* Minimum guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_min;
-	/* Maximum non-guaranteed number of Encap 8B */
-	uint16_t			 encap_8b_max;
-	/* Minimum guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_min;
-	/* Maximum non-guaranteed number of Encap 16B */
-	uint16_t			 encap_16b_max;
-	/* Minimum guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_min;
-	/* Maximum non-guaranteed number of Encap 64B */
-	uint16_t			 encap_64b_max;
-	/* Minimum guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_min;
-	/* Maximum non-guaranteed number of SP SMAC */
-	uint16_t			 sp_smac_max;
-	/* Minimum guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv4 */
-	uint16_t			 sp_smac_ipv4_max;
-	/* Minimum guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_min;
-	/* Maximum non-guaranteed number of SP SMAC IPv6 */
-	uint16_t			 sp_smac_ipv6_max;
-	/* Minimum guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_min;
-	/* Maximum non-guaranteed number of Counter 64B */
-	uint16_t			 counter_64b_max;
-	/* Minimum guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_min;
-	/* Maximum non-guaranteed number of NAT SPORT */
-	uint16_t			 nat_sport_max;
-	/* Minimum guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_min;
-	/* Maximum non-guaranteed number of NAT DPORT */
-	uint16_t			 nat_dport_max;
-	/* Minimum guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_min;
-	/* Maximum non-guaranteed number of NAT S_IPV4 */
-	uint16_t			 nat_s_ipv4_max;
-	/* Minimum guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_min;
-	/* Maximum non-guaranteed number of NAT D_IPV4 */
-	uint16_t			 nat_d_ipv4_max;
-} tf_session_sram_resc_qcaps_output_t, *ptf_session_sram_resc_qcaps_output_t;
-
-/* Input params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_input {
-	/* FW Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Number of full action SRAM entries to be allocated */
-	uint16_t			 num_full_action;
-	/* Number of multicast groups to be allocated */
-	uint16_t			 num_mcg;
-	/* Number of Encap 8B entries to be allocated */
-	uint16_t			 num_encap_8b;
-	/* Number of Encap 16B entries to be allocated */
-	uint16_t			 num_encap_16b;
-	/* Number of Encap 64B entries to be allocated */
-	uint16_t			 num_encap_64b;
-	/* Number of SP SMAC entries to be allocated */
-	uint16_t			 num_sp_smac;
-	/* Number of SP SMAC IPv4 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv4;
-	/* Number of SP SMAC IPv6 entries to be allocated */
-	uint16_t			 num_sp_smac_ipv6;
-	/* Number of Counter 64B entries to be allocated */
-	uint16_t			 num_counter_64b;
-	/* Number of NAT source ports to be allocated */
-	uint16_t			 num_nat_sport;
-	/* Number of NAT destination ports to be allocated */
-	uint16_t			 num_nat_dport;
-	/* Number of NAT source iPV4 addresses to be allocated */
-	uint16_t			 num_nat_s_ipv4;
-	/* Number of NAT destination IPV4 addresses to be allocated */
-	uint16_t			 num_nat_d_ipv4;
-} tf_session_sram_resc_alloc_input_t, *ptf_session_sram_resc_alloc_input_t;
-
-/* Output params for session resource SRAM alloc */
-typedef struct tf_session_sram_resc_alloc_output {
-	/* Unused */
-	uint8_t			  unused[2];
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_alloc_output_t, *ptf_session_sram_resc_alloc_output_t;
-
-/* Input params for session resource SRAM free */
-typedef struct tf_session_sram_resc_free_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the query apply to RX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the query apply to TX */
-#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_free_input_t, *ptf_session_sram_resc_free_input_t;
-
-/* Input params for session resource SRAM flush */
-typedef struct tf_session_sram_resc_flush_input {
-	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the flush apply to RX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
-	/* When set to 1, indicates the flush apply to TX */
-#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
-	/* Starting index of full action SRAM entries allocated to the session */
-	uint16_t			 full_action_start;
-	/* Number of full action SRAM entries allocated */
-	uint16_t			 full_action_stride;
-	/* Starting index of multicast groups allocated to this session */
-	uint16_t			 mcg_start;
-	/* Number of multicast groups allocated */
-	uint16_t			 mcg_stride;
-	/* Starting index of encap 8B entries allocated to the session */
-	uint16_t			 encap_8b_start;
-	/* Number of encap 8B entries allocated */
-	uint16_t			 encap_8b_stride;
-	/* Starting index of encap 16B entries allocated to the session */
-	uint16_t			 encap_16b_start;
-	/* Number of encap 16B entries allocated */
-	uint16_t			 encap_16b_stride;
-	/* Starting index of encap 64B entries allocated to the session */
-	uint16_t			 encap_64b_start;
-	/* Number of encap 64B entries allocated */
-	uint16_t			 encap_64b_stride;
-	/* Starting index of SP SMAC entries allocated to the session */
-	uint16_t			 sp_smac_start;
-	/* Number of SP SMAC entries allocated */
-	uint16_t			 sp_smac_stride;
-	/* Starting index of SP SMAC IPv4 entries allocated to the session */
-	uint16_t			 sp_smac_ipv4_start;
-	/* Number of SP SMAC IPv4 entries allocated */
-	uint16_t			 sp_smac_ipv4_stride;
-	/* Starting index of SP SMAC IPv6 entries allocated to the session */
-	uint16_t			 sp_smac_ipv6_start;
-	/* Number of SP SMAC IPv6 entries allocated */
-	uint16_t			 sp_smac_ipv6_stride;
-	/* Starting index of Counter 64B entries allocated to the session */
-	uint16_t			 counter_64b_start;
-	/* Number of Counter 64B entries allocated */
-	uint16_t			 counter_64b_stride;
-	/* Starting index of NAT source ports allocated to the session */
-	uint16_t			 nat_sport_start;
-	/* Number of NAT source ports allocated */
-	uint16_t			 nat_sport_stride;
-	/* Starting index of NAT destination ports allocated to the session */
-	uint16_t			 nat_dport_start;
-	/* Number of NAT destination ports allocated */
-	uint16_t			 nat_dport_stride;
-	/* Starting index of NAT source IPV4 addresses allocated to the session */
-	uint16_t			 nat_s_ipv4_start;
-	/* Number of NAT source IPV4 addresses allocated */
-	uint16_t			 nat_s_ipv4_stride;
-	/*
-	 * Starting index of NAT destination IPV4 addresses allocated to the
-	 * session
-	 */
-	uint16_t			 nat_d_ipv4_start;
-	/* Number of NAT destination IPV4 addresses allocated */
-	uint16_t			 nat_d_ipv4_stride;
-} tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
-
-/* Input params for table type set */
-typedef struct tf_tbl_type_set_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Size of the data to set in bytes */
-	uint16_t			 size;
-	/* Data to set */
-	uint8_t			  data[TF_BULK_SEND];
-	/* Index to set */
-	uint32_t			 index;
-} tf_tbl_type_set_input_t, *ptf_tbl_type_set_input_t;
-
-/* Input params for table type get */
-typedef struct tf_tbl_type_get_input {
-	/* Session Id */
-	uint32_t			 fw_session_id;
-	/* flags */
-	uint16_t			 flags;
-	/* When set to 0, indicates the get apply to RX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
-	/* When set to 1, indicates the get apply to TX */
-#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
-	/* Type of the object to set */
-	uint32_t			 type;
-	/* Index to get */
-	uint32_t			 index;
-} tf_tbl_type_get_input_t, *ptf_tbl_type_get_input_t;
-
-/* Output params for table type get */
-typedef struct tf_tbl_type_get_output {
-	/* Size of the data read in bytes */
-	uint16_t			 size;
-	/* Data read */
-	uint8_t			  data[TF_BULK_RECV];
-} tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 3e23d0513..8b3e15c8a 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -208,7 +208,15 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_insert_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL &&
+		dev->ops->tf_dev_insert_ext_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL &&
+		dev->ops->tf_dev_insert_int_em_entry != NULL)
+		rc = dev->ops->tf_dev_insert_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM insert failed, rc:%s\n",
@@ -217,7 +225,7 @@ int tf_insert_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	return -EINVAL;
+	return 0;
 }
 
 /** Delete EM hash entry API
@@ -255,7 +263,13 @@ int tf_delete_em_entry(struct tf *tfp,
 		return rc;
 	}
 
-	rc = dev->ops->tf_dev_delete_em_entry(tfp, parms);
+	if (parms->mem == TF_MEM_EXTERNAL)
+		rc = dev->ops->tf_dev_delete_ext_em_entry(tfp, parms);
+	else if (parms->mem == TF_MEM_INTERNAL)
+		rc = dev->ops->tf_dev_delete_int_em_entry(tfp, parms);
+	else
+		return -EINVAL;
+
 	if (rc) {
 		TFP_DRV_LOG(ERR,
 			    "%s: EM delete failed, rc:%s\n",
@@ -806,3 +820,83 @@ tf_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_alloc_tbl_scope != NULL) {
+		rc = dev->ops->tf_dev_alloc_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Alloc table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	int rc;
+
+	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup device, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_free_tbl_scope) {
+		rc = dev->ops->tf_dev_free_tbl_scope(tfp, parms);
+	} else {
+		TFP_DRV_LOG(ERR,
+			    "Free table scope not supported by device\n");
+		return -EINVAL;
+	}
+
+	return rc;
+}
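
The insert/delete paths above now dispatch through per-device hooks and report
-EINVAL when a hook is absent (as it is on the init ops table). The following
is a self-contained model of that pattern, using illustrative names rather
than the driver's types:

#include <assert.h>
#include <errno.h>
#include <stddef.h>

enum mem_type { MEM_INTERNAL, MEM_EXTERNAL };

struct dev_ops {
	int (*insert_int)(int key);
	int (*insert_ext)(int key);
};

static int insert_entry(const struct dev_ops *ops, enum mem_type mem, int key)
{
	if (mem == MEM_EXTERNAL && ops->insert_ext != NULL)
		return ops->insert_ext(key);
	else if (mem == MEM_INTERNAL && ops->insert_int != NULL)
		return ops->insert_int(key);
	return -EINVAL;	/* hook not provided by this device */
}

static int fake_int(int key) { (void)key; return 0; }

int main(void)
{
	struct dev_ops wh_like = { .insert_int = fake_int, .insert_ext = NULL };

	assert(insert_entry(&wh_like, MEM_INTERNAL, 1) == 0);
	assert(insert_entry(&wh_like, MEM_EXTERNAL, 1) == -EINVAL);
	return 0;
}
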
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 441d0c678..20b0c5948 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -6,6 +6,7 @@
 #include "tf_device.h"
 #include "tf_device_p4.h"
 #include "tfp.h"
+#include "tf_em.h"
 
 struct tf;
 
@@ -42,10 +43,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_ident_cfg_parms ident_cfg;
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
-
-	dev_handle->type = TF_DEVICE_TYPE_WH;
-	/* Initial function initialization */
-	dev_handle->ops = &tf_dev_ops_p4_init;
+	struct tf_em_cfg_parms em_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -86,6 +84,36 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * EEM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_ext_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
+
+	rc = tf_em_ext_common_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EEM initialization failure\n");
+		goto fail;
+	}
+
+	/*
+	 * EM
+	 */
+	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
+	em_cfg.cfg = tf_em_int_p4;
+	em_cfg.resources = resources;
+	em_cfg.mem_type = 0; /* Not used by EM */
+
+	rc = tf_em_int_bind(tfp, &em_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EM initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -144,6 +172,20 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_em_ext_common_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EEM\n");
+		fail = true;
+	}
+
+	rc = tf_em_int_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, EM\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index c8feac55d..2712d1039 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -15,12 +15,24 @@ struct tf;
 struct tf_session;
 
 /**
- *
+ * Device module types
  */
 enum tf_device_module_type {
+	/**
+	 * Identifier module
+	 */
 	TF_DEVICE_MODULE_TYPE_IDENTIFIER,
+	/**
+	 * Table type module
+	 */
 	TF_DEVICE_MODULE_TYPE_TABLE,
+	/**
+	 * TCAM module
+	 */
 	TF_DEVICE_MODULE_TYPE_TCAM,
+	/**
+	 * EM module
+	 */
 	TF_DEVICE_MODULE_TYPE_EM,
 	TF_DEVICE_MODULE_TYPE_MAX
 };
@@ -395,8 +407,8 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_insert_em_entry)(struct tf *tfp,
-				      struct tf_insert_em_entry_parms *parms);
+	int (*tf_dev_insert_int_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
 
 	/**
 	 * Delete EM hash entry API
@@ -411,8 +423,72 @@ struct tf_dev_ops {
 	 *    0       - Success
 	 *    -EINVAL - Error
 	 */
-	int (*tf_dev_delete_em_entry)(struct tf *tfp,
-				      struct tf_delete_em_entry_parms *parms);
+	int (*tf_dev_delete_int_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Insert EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM insert parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_insert_ext_em_entry)(struct tf *tfp,
+					  struct tf_insert_em_entry_parms *parms);
+
+	/**
+	 * Delete EEM hash entry API
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to E/EM delete parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_delete_ext_em_entry)(struct tf *tfp,
+					  struct tf_delete_em_entry_parms *parms);
+
+	/**
+	 * Allocate EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope alloc parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_alloc_tbl_scope)(struct tf *tfp,
+				      struct tf_alloc_tbl_scope_parms *parms);
+
+	/**
+	 * Free EEM table scope
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table scope free parameters
+	 *
+	 *  Returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
+				     struct tf_free_tbl_scope_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9e332c594..127c655a6 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -93,6 +93,12 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = NULL,
 	.tf_dev_get_tcam = NULL,
+	.tf_dev_insert_int_em_entry = NULL,
+	.tf_dev_delete_int_em_entry = NULL,
+	.tf_dev_insert_ext_em_entry = NULL,
+	.tf_dev_delete_ext_em_entry = NULL,
+	.tf_dev_alloc_tbl_scope = NULL,
+	.tf_dev_free_tbl_scope = NULL,
 };
 
 /**
@@ -113,6 +119,10 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tcam = NULL,
 	.tf_dev_set_tcam = tf_tcam_set,
 	.tf_dev_get_tcam = NULL,
-	.tf_dev_insert_em_entry = tf_em_insert_entry,
-	.tf_dev_delete_em_entry = tf_em_delete_entry,
+	.tf_dev_insert_int_em_entry = tf_em_insert_int_entry,
+	.tf_dev_delete_int_em_entry = tf_em_delete_int_entry,
+	.tf_dev_insert_ext_em_entry = tf_em_insert_ext_entry,
+	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
+	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
+	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 411e21637..da6dd65a3 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -36,13 +36,12 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_ENCAP_32B */
+	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV4 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	/* CFA_RESOURCE_TYPE_P4_SRAM_SP_SMAC_IPV6 */
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
@@ -77,4 +76,17 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EXT */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
+
+struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
+	/* CFA_RESOURCE_TYPE_P4_EM_REC */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+};
+
+struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_REC },
+	/* CFA_RESOURCE_TYPE_P4_TBL_SCOPE */
+	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
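
tf_em_ext_p4 and tf_em_int_p4 split the same two slots in a complementary
way: each table marks the slot owned by the other module as NULL/INVALID.
The code below is a reduced, self-contained model of that table-driven
convention; all names and numeric ids are placeholders, not the driver's:

#include <stdio.h>

#define CFG_NULL  0              /* slot not managed by this module     */
#define CFG_HCAPI 1              /* slot backed by an HCAPI resource id */

struct elem_cfg {
	int cfg_type;
	int hcapi_type;          /* placeholder resource id             */
};

/* EM_REC lives in the internal table, TBL_SCOPE in the external one. */
static const struct elem_cfg em_ext[2] = { { CFG_NULL, -1 }, { CFG_HCAPI, 2 } };
static const struct elem_cfg em_int[2] = { { CFG_HCAPI, 1 }, { CFG_NULL, -1 } };

static void bind(const char *name, const struct elem_cfg *cfg, int n)
{
	for (int i = 0; i < n; i++) {
		if (cfg[i].cfg_type == CFG_NULL)
			continue;        /* treat NULL slots as not configured */
		printf("%s: slot %d -> hcapi id %d\n", name, i, cfg[i].hcapi_type);
	}
}

int main(void)
{
	bind("em_ext", em_ext, 2);
	bind("em_int", em_int, 2);
	return 0;
}
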
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
deleted file mode 100644
index fcbbd7eca..000000000
--- a/drivers/net/bnxt/tf_core/tf_em.c
+++ /dev/null
@@ -1,360 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-#include <rte_common.h>
-#include <rte_errno.h>
-#include <rte_log.h>
-
-#include "tf_core.h"
-#include "tf_em.h"
-#include "tf_msg.h"
-#include "tfp.h"
-#include "lookup3.h"
-#include "tf_ext_flow_handle.h"
-
-#include "bnxt.h"
-
-
-static uint32_t tf_em_get_key_mask(int num_entries)
-{
-	uint32_t mask = num_entries - 1;
-
-	if (num_entries & 0x7FFF)
-		return 0;
-
-	if (num_entries > (128 * 1024 * 1024))
-		return 0;
-
-	return mask;
-}
-
-static void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
-				   uint8_t	       *in_key,
-				   struct cfa_p4_eem_64b_entry *key_entry)
-{
-	key_entry->hdr.word1 = result->word1;
-
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
-
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
-#endif
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int tf_insert_eem_entry(struct tf_tbl_scope_cb	   *tbl_scope_cb,
-			       struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t	   mask;
-	uint32_t	   key0_hash;
-	uint32_t	   key1_hash;
-	uint32_t	   key0_index;
-	uint32_t	   key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t	   index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t	   gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/**
- * Insert EM internal entry API
- *
- *  returns:
- *     0 - Success
- */
-static int tf_insert_em_internal_entry(struct tf                       *tfp,
-				       struct tf_insert_em_entry_parms *parms)
-{
-	int       rc;
-	uint32_t  gfid;
-	uint16_t  rptr_index = 0;
-	uint8_t   rptr_entry = 0;
-	uint8_t   num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-	uint32_t index;
-
-	rc = stack_pop(pool, &index);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
-		return rc;
-	}
-
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
-	rc = tf_msg_insert_em_internal_entry(tfp,
-					     parms,
-					     &rptr_index,
-					     &rptr_entry,
-					     &num_of_entries);
-	if (rc != 0)
-		return -1;
-
-	PMD_DRV_LOG(ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
-		   rptr_index,
-		   rptr_entry,
-		   num_of_entries);
-
-	TF_SET_GFID(gfid,
-		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
-		     rptr_entry),
-		    0); /* N/A for internal table */
-
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_INTERNAL,
-		       parms->dir);
-
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     num_of_entries,
-				     0,
-				     0,
-				     rptr_index,
-				     rptr_entry,
-				     0);
-	return 0;
-}
-
-/** Delete EM internal entry API
- *
- * returns:
- * 0
- * -EINVAL
- */
-static int tf_delete_em_internal_entry(struct tf                       *tfp,
-				       struct tf_delete_em_entry_parms *parms)
-{
-	int rc;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
-
-	rc = tf_msg_delete_em_entry(tfp, parms);
-
-	/* Return resource to pool */
-	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
-
-	return rc;
-}
-
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-			       struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
-							  TF_KEY0_TABLE :
-							  TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	/* Process the EM entry per Table Scope type */
-	if (parms->mem == TF_MEM_EXTERNAL)
-		/* External EEM */
-		return tf_insert_eem_entry
-			(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		/* Internal EM */
-		return tf_insert_em_internal_entry(tfp,	parms);
-
-	return -EINVAL;
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-	if (parms->mem == TF_MEM_EXTERNAL)
-		return tf_delete_eem_entry(tbl_scope_cb, parms);
-	else if (parms->mem == TF_MEM_INTERNAL)
-		return tf_delete_em_internal_entry(tfp, parms);
-
-	return -EINVAL;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 2262ae7cc..cf799c200 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,6 +9,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
+#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
@@ -19,6 +20,9 @@
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
+#define TF_EM_MAX_MASK 0x7FFF
+#define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
+
 /*
  * Used to build GFID:
  *
@@ -44,6 +48,47 @@ struct tf_em_64b_entry {
 	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct cfa_p4_eem_entry_hdr)];
 };
 
+/**
+ * EEM Memory Type
+ */
+enum tf_mem_type {
+	TF_EEM_MEM_TYPE_INVALID,
+	TF_EEM_MEM_TYPE_HOST,
+	TF_EEM_MEM_TYPE_SYSTEM
+};
+
+/**
+ * tf_em_cfg_parms definition
+ */
+struct tf_em_cfg_parms {
+	/**
+	 * [in] Num entries in resource config
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Resource config
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+	/**
+	 * [in] Memory type.
+	 */
+	enum tf_mem_type mem_type;
+};
+
+/**
+ * @page table Table
+ *
+ * @ref tf_alloc_eem_tbl_scope
+ *
+ * @ref tf_free_eem_tbl_scope_cb
+ *
+ * @ref tbl_scope_cb_find
+ */
+
 /**
  * Allocates EEM Table scope
  *
@@ -78,29 +123,258 @@ int tf_free_eem_tbl_scope_cb(struct tf *tfp,
 			     struct tf_free_tbl_scope_parms *parms);
 
 /**
- * Function to search for table scope control block structure
- * with specified table scope ID.
+ * Insert record in to internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_int_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from internal EM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_int_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
+
+/**
+ * Insert record in to external EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external EEM table
  *
- * [in] session
- *   Session to use for the search of the table scope control block
- * [in] tbl_scope_id
- *   Table scope ID to search for
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
  *
  * Returns:
- *  Pointer to the found table scope control block struct or NULL if
- *  table scope control block struct not found
+ *   0       - Success
+ *   -EINVAL - Parameter error
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+int tf_em_delete_ext_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms);
 
-void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   enum tf_dir dir,
-			   uint32_t offset,
-			   enum hcapi_cfa_em_table_type table_type);
+/**
+ * Insert record in to external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_insert_ext_sys_entry(struct tf *tfp,
+			       struct tf_insert_em_entry_parms *parms);
+
+/**
+ * Delete record from external system EEM table
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_delete_ext_sys_entry(struct tf *tfp,
+			       struct tf_delete_em_entry_parms *parms);
 
-int tf_em_insert_entry(struct tf *tfp,
-		       struct tf_insert_em_entry_parms *parms);
+/**
+ * Bind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_bind(struct tf *tfp,
+		   struct tf_em_cfg_parms *parms);
 
-int tf_em_delete_entry(struct tf *tfp,
-		       struct tf_delete_em_entry_parms *parms);
+/**
+ * Unbind internal EM device interface
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_int_unbind(struct tf *tfp);
+
+/**
+ * Common bind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_bind(struct tf *tfp,
+			  struct tf_em_cfg_parms *parms);
+
+/**
+ * Common unbind for EEM device interface. Used for both host and
+ * system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_unbind(struct tf *tfp);
+
+/**
+ * Alloc for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_alloc(struct tf *tfp,
+			 struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using host memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_host_free(struct tf *tfp,
+			struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Alloc for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_alloc(struct tf *tfp,
+			 struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free for external EEM using system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_system_free(struct tf *tfp,
+			struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common free for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_free(struct tf *tfp,
+			  struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Common alloc for external EEM using host or system memory
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_ext_common_alloc(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
 #endif /* _TF_EM_H_ */
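
Taken together, the header now exposes a bind/unbind pair per EM flavour. A minimal usage sketch, illustrative only: the real call sites live in the device bind code, and the resource counts are assumed to come from session open:

static int
em_bind_example(struct tf *tfp, struct tf_session_resources *res)
{
	struct tf_em_cfg_parms parms = { 0 };
	int rc;

	parms.num_elements = TF_EM_TBL_TYPE_MAX;
	parms.resources = res;

	/* Internal EM records */
	parms.cfg = tf_em_int_p4;
	rc = tf_em_int_bind(tfp, &parms);
	if (rc)
		return rc;

	/* External EEM, host-backed table scopes */
	parms.cfg = tf_em_ext_p4;
	parms.mem_type = TF_EEM_MEM_TYPE_HOST;
	return tf_em_ext_common_bind(tfp, &parms);
}
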
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
new file mode 100644
index 000000000..ba6aa7ac1
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -0,0 +1,281 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_device.h"
+#include "tf_ext_flow_handle.h"
+#include "cfa_resource_types.h"
+
+#include "bnxt.h"
+
+
+/**
+ * EM DBs.
+ */
+void *eem_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Host or system
+ */
+static enum tf_mem_type mem_type;
+
+/* API defined in tf_em.h */
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+	struct tf_rm_is_allocated_parms parms;
+	int allocated;
+
+	/* Check that id is valid */
+	parms.rm_db = eem_db[TF_DIR_RX];
+	parms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.allocated = &allocated;
+
+	i = tf_rm_is_allocated(&parms);
+
+	if (i < 0 || !allocated)
+		return NULL;
+
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+int
+tf_create_tbl_pool_external(enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i;
+	int32_t j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
+			    tf_dir_2_str(dir), strerror(ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the allocated memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
+
+	/* Fill pool with indexes in reverse
+	 */
+	j = (num_entries - 1) * entry_sz_bytes;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+				    tf_dir_2_str(dir), strerror(-rc));
+			goto cleanup;
+		}
+
+		if (j < 0) {
+			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
+				    dir, j);
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
+			    tf_dir_2_str(dir), strerror(-rc));
+		goto cleanup;
+	}
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ */
+void
+tf_destroy_tbl_pool_external(enum tf_dir dir,
+			     struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_act_pool_mem =
+		tbl_scope_cb->ext_act_pool_mem[dir];
+
+	tfp_free(ext_act_pool_mem);
+}
+
+uint32_t
+tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
+	if (num_entries & TF_EM_MAX_MASK)
+		return 0;
+
+	if (num_entries > TF_EM_MAX_ENTRY)
+		return 0;
+
+	return mask;
+}
+
+void
+tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+		       uint8_t *in_key,
+		       struct cfa_p4_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
+		key_entry->hdr.pointer = result->pointer;
+	else
+		key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+#endif
+}
+
+int
+tf_em_ext_common_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Identifier already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+		db_cfg.rm_db = &eem_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	mem_type = parms->mem_type;
+	init = 1;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail silently if nothing has been initialized, to allow
+	 * cleanup during creation to proceed.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = eem_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		eem_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_em_ext_common_alloc(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_alloc(tfp, parms);
+	else
+		return tf_em_ext_system_alloc(tfp, parms);
+}
+
+int
+tf_em_ext_common_free(struct tf *tfp,
+		      struct tf_free_tbl_scope_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_em_ext_host_free(tfp, parms);
+	else
+		return tf_em_ext_system_free(tfp, parms);
+}
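
A quick sanity check on the key-mask helper above (illustrative asserts, assuming <assert.h>): the table size must be a multiple of 32K entries and at most 128M, and callers are expected to pass power-of-two sizes so that num_entries - 1 forms a contiguous hash mask.

assert(tf_em_get_key_mask(64 * 1024) == 0xFFFF);
assert(tf_em_get_key_mask(1024 * 1024) == 0xFFFFF);
assert(tf_em_get_key_mask(48 * 1024) == 0);           /* not a 32K multiple */
assert(tf_em_get_key_mask(256 * 1024 * 1024) == 0);   /* above the 128M cap */
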
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
new file mode 100644
index 000000000..45699a7c3
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_COMMON_H_
+#define _TF_EM_COMMON_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *   table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+/**
+ * Create and initialize a stack to use for action entries
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_cb
+ *   Pointer to the table scope control block
+ * [in] num_entries
+ *   Number of EEM entries
+ * [in] entry_sz_bytes
+ *   Size of the entry
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ *   -EINVAL - Failure
+ */
+int tf_create_tbl_pool_external(enum tf_dir dir,
+				struct tf_tbl_scope_cb *tbl_scope_cb,
+				uint32_t num_entries,
+				uint32_t entry_sz_bytes);
+
+/**
+ * Delete and cleanup action record allocation stack
+ *
+ * [in] dir
+ *   Direction
+ * [in] tbl_scope_cb
+ *   Pointer to the table scope control block
+ *
+ */
+void tf_destroy_tbl_pool_external(enum tf_dir dir,
+				  struct tf_tbl_scope_cb *tbl_scope_cb);
+
+/**
+ * Get hash mask for current EEM table size
+ *
+ * [in] num_entries
+ *   Number of EEM entries
+ *
+ * Returns:
+ *   Hash mask - Success
+ *   0         - num_entries is not a supported table size
+ */
+uint32_t tf_em_get_key_mask(int num_entries);
+
+/**
+ * Populate key_entry
+ *
+ * [in] result
+ *   Entry data
+ * [in] in_key
+ *   Key data
+ * [out] key_entry
+ *   Completed key record
+ */
+void tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
+			    uint8_t	       *in_key,
+			    struct cfa_p4_eem_64b_entry *key_entry);
+
+/**
+ * Find base page address for offset into specified table type
+ *
+ * [in] tbl_scope_cb
+ *   Table scope
+ * [in] dir
+ *   Direction
+ * [in] offset
+ *   Offset into the table
+ * [in] table_type
+ *   Table type
+ *
+ * Returns:
+ *   NULL                              - Failure
+ *   Void pointer to page base address - Success
+ */
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum hcapi_cfa_em_table_type table_type);
+
+#endif /* _TF_EM_COMMON_H_ */
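
The external action pool declared above is simply a stack of byte offsets into the table-scope record table, filled in reverse so the lowest offsets are handed out first. A hypothetical wrapper pair (names are illustrative, not part of the patch) showing how it would be consumed:

static int
ext_act_rec_alloc(struct tf_tbl_scope_cb *tbl_scope_cb,
		  enum tf_dir dir, uint32_t *offset)
{
	/* Pops the next free byte offset in the record table */
	return stack_pop(&tbl_scope_cb->ext_act_pool[dir], offset);
}

static void
ext_act_rec_free(struct tf_tbl_scope_cb *tbl_scope_cb,
		 enum tf_dir dir, uint32_t offset)
{
	/* Returns the byte offset to the pool */
	stack_push(&tbl_scope_cb->ext_act_pool[dir], offset);
}
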
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
new file mode 100644
index 000000000..8be39afdd
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -0,0 +1,1146 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+#define PTU_PTE_VALID          0x1UL
+#define PTU_PTE_LAST           0x2UL
+#define PTU_PTE_NEXT_TO_LAST   0x4UL
+
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
+
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
+/**
+ * EM DBs.
+ */
+extern void *eem_db[TF_DIR_MAX];
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			TFP_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
+				    i,
+				    (uint64_t)(uintptr_t)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+		TFP_DRV_LOG(INFO,
+			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			   TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocation of page tables
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			TFP_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d, rc:%s\n",
+				i,
+				strerror(-rc));
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			TFP_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(void *)(uintptr_t)tp->pg_va_tbl[j],
+				(void *)(uintptr_t)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
+		      struct hcapi_cfa_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
+
+/**
+ * Setup a EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
+{
+	struct hcapi_cfa_em_page_tbl *tp_next;
+	struct hcapi_cfa_em_page_tbl *tp;
+	bool set_pte_last = 0;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = 1;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   -EINVAL - Parameter error
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given EEM table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int rc;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+				    "%u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					(TF_KILOBYTE * TF_KILOBYTE)) /
+			(key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->rx_num_flows_in_k != 0 &&
+	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if (parms->tx_num_flows_in_k != 0 &&
+	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	return 0;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
+
+#ifdef TF_EEM_DEBUG
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
+#endif
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 = (uint8_t *)
+		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset =
+			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 = (uint8_t *)
+	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
+							  TF_KEY0_TABLE :
+							  TF_KEY1_TABLE)];
+	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (!rc)
+		return rc;
+
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb =
+	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
+			  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb =
+	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
+			  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_host_alloc(struct tf *tfp,
+		     struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *em_tables;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* Get Table Scope control block from the session pool */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
+	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[TF_KEY0_TABLE].num_entries,
+				   em_tables[TF_KEY0_TABLE].ctx_id,
+				   em_tables[TF_KEY1_TABLE].ctx_id,
+				   em_tables[TF_RECORD_TABLE].ctx_id,
+				   em_tables[TF_EFC_TABLE].ctx_id,
+				   parms->hw_flow_cache_flush_timer,
+				   dir);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "TBL: Unable to configure EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(dir,
+					    tbl_scope_cb,
+					    em_tables[TF_RECORD_TABLE].num_entries,
+					    em_tables[TF_RECORD_TABLE].entry_size);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_host_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_host_free(struct tf *tfp,
+		    struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Table scope error\n");
+		return -EINVAL;
+	}
+
+	/* Free Table control block */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	/* free table scope locks */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
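
As a worked example of the page-table sizing above (assuming 4KB EM pages and the 64B key record size):

/*
 * 1M entries * 64B = 64MB of key data.
 *
 *   MAX_PAGE_PTRS(4KB) = 4096 / 8 = 512 pointers per page
 *   level 1 span = 512 * 4KB = 2MB      (too small)
 *   level 2 span = 512 * 2MB = 1GB      (fits)  -> num_lvl = 3
 *   num_data_pages = 64MB / 4KB = 16384
 *
 * tf_em_size_page_tbls() then sizes each level:
 *   page_cnt[TF_PT_LVL_2] = 16384
 *   page_cnt[TF_PT_LVL_1] = roundup(16384 / 512) = 32
 *   page_cnt[TF_PT_LVL_0] = roundup(32 / 512)    = 1
 */
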
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
new file mode 100644
index 000000000..9be91ad5d
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -0,0 +1,312 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/**
+ * EM DBs.
+ */
+static void *em_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Create EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] num_entries
+ *   number of entries to write
+ *
+ * Return:
+ *  0       - Success, pool created
+ *  -ENOMEM - Failure, out of memory
+ *  -EINVAL - Failure, pool initialization failed
+ */
+static int
+tf_create_em_pool(struct tf_session *session,
+		  enum tf_dir dir,
+		  uint32_t num_entries)
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &session->em_pool[dir];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	rc = tfp_calloc(&parms);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc) {
+			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+		j--;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Free EM Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ *
+ * Return:
+ */
+static void
+tf_free_em_pool(struct tf_session *session,
+		enum tf_dir dir)
+{
+	struct stack *pool = &session->em_pool[dir];
+	uint32_t *ptr;
+
+	ptr = stack_items(pool);
+
+	if (ptr != NULL)
+		tfp_free(ptr);
+}
+
+/**
+ * Insert EM internal entry API
+ *
+ *  returns:
+ *     0 - Success
+ */
+int
+tf_em_insert_int_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	int rc;
+	uint32_t gfid;
+	uint16_t rptr_index = 0;
+	uint8_t rptr_entry = 0;
+	uint8_t num_of_entries = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+	uint32_t index;
+
+	rc = stack_pop(pool, &index);
+
+	if (rc) {
+		PMD_DRV_LOG
+		  (ERR,
+		   "dir:%d, EM entry index allocation failed\n",
+		   parms->dir);
+		return rc;
+	}
+
+	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rc = tf_msg_insert_em_internal_entry(tfp,
+					     parms,
+					     &rptr_index,
+					     &rptr_entry,
+					     &num_of_entries);
+	if (rc)
+		return -1;
+
+	PMD_DRV_LOG
+		  (ERR,
+		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   rptr_index,
+		   rptr_entry,
+		   num_of_entries);
+
+	TF_SET_GFID(gfid,
+		    ((rptr_index << TF_EM_INTERNAL_INDEX_SHIFT) |
+		     rptr_entry),
+		    0); /* N/A for internal table */
+
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_INTERNAL,
+		       parms->dir);
+
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     (uint32_t)num_of_entries,
+				     0,
+				     0,
+				     rptr_index,
+				     rptr_entry,
+				     0);
+	return 0;
+}
+
+
+/** Delete EM internal entry API
+ *
+ * returns:
+ * 0
+ * -EINVAL
+ */
+int
+tf_em_delete_int_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *session =
+		(struct tf_session *)(tfp->session->core_data);
+	struct stack *pool = &session->em_pool[parms->dir];
+
+	rc = tf_msg_delete_em_entry(tfp, parms);
+
+	/* Return resource to pool */
+	if (rc == 0)
+		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+
+	return rc;
+}
+
+int
+tf_em_int_bind(struct tf *tfp,
+	       struct tf_em_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "EM Int DB already initialized\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		tf_create_em_pool(session,
+				  i,
+				  TF_SESSION_EM_POOL_SIZE);
+	}
+
+	/*
+	 * TODO: It is not clear that this block is still needed;
+	 * leaving it in place until resolved.
+	 */
+	if (parms->num_elements) {
+		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+		db_cfg.num_elements = parms->num_elements;
+		db_cfg.cfg = parms->cfg;
+
+		for (i = 0; i < TF_DIR_MAX; i++) {
+			db_cfg.dir = i;
+			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+			db_cfg.rm_db = &em_db[i];
+			rc = tf_rm_create_db(tfp, &db_cfg);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: EM DB creation failed\n",
+					    tf_dir_2_str(i));
+
+				return rc;
+			}
+		}
+	}
+
+	init = 1;
+	return 0;
+}
+
+int
+tf_em_int_unbind(struct tf *tfp)
+{
+	int rc;
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+	struct tf_session *session;
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized; this allows cleanup
+	 * after a failed creation to proceed.
+	 */
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "No EM DBs created\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		tf_free_em_pool(session, i);
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = em_db[i];
+		if (em_db[i] != NULL) {
+			rc = tf_rm_free_db(tfp, &fparms);
+			if (rc)
+				return rc;
+		}
+
+		em_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
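A hedged sketch of the expected caller sequence for the bind/unbind pair above; the tf_em_cfg_parms initialisation is illustrative and would normally be filled from the device and resource-manager configuration.

	struct tf_em_cfg_parms em_cfg = { 0 };
	int rc;

	/* em_cfg.num_elements, em_cfg.cfg and em_cfg.resources set by caller */
	rc = tf_em_int_bind(tfp, &em_cfg);  /* creates per-direction pools/DBs */
	if (rc)
		return rc;

	/* ... EM internal insert/delete traffic ... */

	rc = tf_em_int_unbind(tfp);         /* frees the pools and RM DBs */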
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
new file mode 100644
index 000000000..ee18a0c70
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_em_common.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_insert_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * delete callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
+		    struct tf_delete_em_entry_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_sys_entry(struct tf *tfp,
+			   struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry
+		(tbl_scope_cb, parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_sys_entry(struct tf *tfp,
+			   struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find
+		((struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
+}
+
+int
+tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
+		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_em_ext_system_free(struct tf *tfp __rte_unused,
+		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c015b0ce2..d8b80bc84 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,82 +18,6 @@
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
-/**
- * Endian converts min and max values from the HW response to the query
- */
-#define TF_HW_RESP_TO_QUERY(query, index, response, element) do {            \
-	(query)->hw_query[index].min =                                       \
-		tfp_le_to_cpu_16(response. element ## _min);                 \
-	(query)->hw_query[index].max =                                       \
-		tfp_le_to_cpu_16(response. element ## _max);                 \
-} while (0)
-
-/**
- * Endian converts the number of entries from the alloc to the request
- */
-#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element)                   \
-	(request. num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do {            \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(hw_entry[index].start);                     \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(hw_entry[index].stride);                    \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do {         \
-	hw_entry[index].start =                                              \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	hw_entry[index].stride =                                             \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
-/**
- * Endian converts min and max values from the SRAM response to the
- * query
- */
-#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do {          \
-	(query)->sram_query[index].min =                                     \
-		tfp_le_to_cpu_16(response.element ## _min);                  \
-	(query)->sram_query[index].max =                                     \
-		tfp_le_to_cpu_16(response.element ## _max);                  \
-} while (0)
-
-/**
- * Endian converts the number of entries from the action (alloc) to
- * the request
- */
-#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element)                \
-	(request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index]))
-
-/**
- * Endian converts the start and stride value from the free to the request
- */
-#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do {        \
-	request.element ## _start =                                          \
-		tfp_cpu_to_le_16(sram_entry[index].start);                   \
-	request.element ## _stride =                                         \
-		tfp_cpu_to_le_16(sram_entry[index].stride);                  \
-} while (0)
-
-/**
- * Endian converts the start and stride from the HW response to the
- * alloc
- */
-#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do {     \
-	sram_entry[index].start =                                            \
-		tfp_le_to_cpu_16(response.element ## _start);                \
-	sram_entry[index].stride =                                           \
-		tfp_le_to_cpu_16(response.element ## _stride);               \
-} while (0)
-
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -107,39 +31,6 @@ struct tf_msg_dma_buf {
 	uint64_t pa_addr;
 };
 
-static int
-tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
-		   uint32_t *hwrm_type)
-{
-	int rc = 0;
-
-	switch (tcam_type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		*hwrm_type = TF_DEV_DATA_TYPE_TF_WC_ENTRY;
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-		rc = -EOPNOTSUPP;
-		break;
-	default:
-		rc = -EOPNOTSUPP;
-		break;
-	}
-
-	return rc;
-}
-
 /**
  * Allocates a DMA buffer that can be used for message transfer.
  *
@@ -185,13 +76,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 	tfp_free(buf->va_addr);
 }
 
-/**
- * NEW HWRM direct messages
- */
+/* HWRM Direct messages */
 
-/**
- * Sends session open request to TF Firmware
- */
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
@@ -222,9 +108,6 @@ tf_msg_session_open(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends session attach request to TF Firmware
- */
 int
 tf_msg_session_attach(struct tf *tfp __rte_unused,
 		      char *ctrl_chan_name __rte_unused,
@@ -233,9 +116,6 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 	return -1;
 }
 
-/**
- * Sends session close request to TF Firmware
- */
 int
 tf_msg_session_close(struct tf *tfp)
 {
@@ -261,14 +141,11 @@ tf_msg_session_close(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session query config request to TF Firmware
- */
 int
 tf_msg_session_qcfg(struct tf *tfp)
 {
 	int rc;
-	struct hwrm_tf_session_qcfg_input  req = { 0 };
+	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
@@ -289,636 +166,6 @@ tf_msg_session_qcfg(struct tf *tfp)
 	return rc;
 }
 
-/**
- * Sends session HW resource query capability request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_query *query)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_qcaps_input req = { 0 };
-	struct tf_session_hw_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(query, 0, sizeof(*query));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST,
-			    resp, meter_inst);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_alloc(struct tf *tfp __rte_unused,
-			     enum tf_dir dir,
-			     struct tf_rm_hw_alloc *hw_alloc __rte_unused,
-			     struct tf_rm_entry *hw_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_alloc_input req = { 0 };
-	struct tf_session_hw_resc_alloc_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			   l2_ctx_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			   prof_func_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			   prof_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			   em_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req,
-			   em_record_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			   wc_tcam_prof_id);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req,
-			   wc_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req,
-			   meter_profiles);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req,
-			   meter_inst);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req,
-			   mirrors);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req,
-			   upar);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req,
-			   sp_tcam_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req,
-			   l2_func);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req,
-			   flex_key_templ);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			   tbl_scope);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req,
-			   epoch0_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req,
-			   epoch1_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req,
-			   metadata);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req,
-			   ct_state);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			   range_prof);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			   range_entries);
-	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			   lag_tbl_entries);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_HW_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
-			    l2_ctx_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp,
-			    prof_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp,
-			    prof_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
-			    em_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp,
-			    em_record_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
-			    wc_tcam_prof_id);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp,
-			    wc_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp,
-			    meter_profiles);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp,
-			    meter_inst);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp,
-			    mirrors);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp,
-			    upar);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp,
-			    sp_tcam_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp,
-			    l2_func);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp,
-			    flex_key_templ);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
-			    tbl_scope);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp,
-			    epoch0_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp,
-			    epoch1_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp,
-			    metadata);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp,
-			    ct_state);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp,
-			    range_prof);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
-			    range_entries);
-	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
-			    lag_tbl_entries);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_free(struct tf *tfp,
-			    enum tf_dir dir,
-			    struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(hw_entry, 0, sizeof(*hw_entry));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_HW_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int
-tf_msg_session_hw_resc_flush(struct tf *tfp,
-			     enum tf_dir dir,
-			     struct tf_rm_entry *hw_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_hw_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
-			  l2_ctx_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
-			  prof_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
-			  prof_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
-			  em_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
-			  em_record_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
-			  wc_tcam_prof_id);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
-			  wc_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
-			  meter_profiles);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
-			  meter_inst);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
-			  mirrors);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
-			  upar);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
-			  sp_tcam_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
-			  l2_func);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
-			  flex_key_templ);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
-			  tbl_scope);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
-			  epoch0_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
-			  epoch1_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
-			  metadata);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
-			  ct_state);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
-			  range_prof);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
-			  range_entries);
-	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
-			  lag_tbl_entries);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_HW_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_qcaps(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_query *query __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_qcaps_input req = { 0 };
-	struct tf_session_sram_resc_qcaps_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_FULL_ACTION, resp,
-			      full_action);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, resp,
-			      sp_smac_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, resp,
-			      sp_smac_ipv6);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_alloc(struct tf *tfp __rte_unused,
-			       enum tf_dir dir,
-			       struct tf_rm_sram_alloc *sram_alloc __rte_unused,
-			       struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_alloc_input req = { 0 };
-	struct tf_session_sram_resc_alloc_output resp;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	memset(&resp, 0, sizeof(resp));
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			     full_action);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_MCG, req,
-			     mcg);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			     encap_8b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			     encap_16b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			     encap_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			     sp_smac);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			     req, sp_smac_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			     req, sp_smac_ipv6);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_COUNTER_64B,
-			     req, counter_64b);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			     nat_sport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			     nat_dport);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			     nat_s_ipv4);
-	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			     nat_d_ipv4);
-
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_SESSION_SRAM_RESC_ALLOC,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	/* Process the response */
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION,
-			      resp, full_action);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_MCG, resp,
-			      mcg);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
-			      encap_8b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
-			      encap_16b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
-			      encap_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
-			      sp_smac);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			      resp, sp_smac_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			      resp, sp_smac_ipv6);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
-			      counter_64b);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
-			      nat_sport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
-			      nat_dport);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
-			      nat_s_ipv4);
-	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
-			      nat_d_ipv4);
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_free(struct tf *tfp __rte_unused,
-			      enum tf_dir dir,
-			      struct tf_rm_entry *sram_entry __rte_unused)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SESSION_SRAM_RESC_FREE,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int
-tf_msg_session_sram_resc_flush(struct tf *tfp,
-			       enum tf_dir dir,
-			       struct tf_rm_entry *sram_entry)
-{
-	int rc;
-	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session_sram_resc_free_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(dir);
-
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
-			    full_action);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
-			    mcg);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
-			    encap_8b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
-			    encap_16b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
-			    encap_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
-			    sp_smac);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
-			    sp_smac_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
-			    sp_smac_ipv6);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
-			    counter_64b);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
-			    nat_sport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
-			    nat_dport);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
-			    nat_s_ipv4);
-	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
-			    nat_d_ipv4);
-
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 TF_TYPE_TRUFLOW,
-			 HWRM_TFT_SESSION_SRAM_RESC_FLUSH,
-			 req);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
-	if (rc)
-		return rc;
-
-	return tfp_le_to_cpu_32(parms.tf_resp_code);
-}
-
 int
 tf_msg_session_resc_qcaps(struct tf *tfp,
 			  enum tf_dir dir,
@@ -973,7 +220,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -981,14 +228,14 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
 	printf("\nQCAPS\n");
 	for (i = 0; i < size; i++) {
-		query[i].type = tfp_cpu_to_le_32(data[i].type);
+		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
@@ -1078,7 +325,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	/* Process the response
 	 * Should always get expected number of entries
 	 */
-	if (resp.size != size) {
+	if (tfp_le_to_cpu_32(resp.size) != size) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
@@ -1087,14 +334,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 	}
 
 	printf("\nRESV\n");
-	printf("size: %d\n", resp.size);
+	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
 	for (i = 0; i < size; i++) {
-		resv[i].type = tfp_cpu_to_le_32(resv_data[i].type);
-		resv[i].start = tfp_cpu_to_le_16(resv_data[i].start);
-		resv[i].stride = tfp_cpu_to_le_16(resv_data[i].stride);
+		resv[i].type = tfp_le_to_cpu_32(resv_data[i].type);
+		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
+		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
@@ -1173,24 +420,112 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	return rc;
 }
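The hunks above correct the conversion direction for response fields. For reference, a minimal sketch of the convention assumed throughout this file, reusing the surrounding variable names: request fields are converted CPU to little-endian on the way out, response fields little-endian to CPU on the way back.

	req.flags     = tfp_cpu_to_le_16(dir);           /* request:  CPU -> LE */
	query[i].type = tfp_le_to_cpu_32(data[i].type);  /* response: LE -> CPU */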
 
-/**
- * Sends EM mem register request to Firmware
- */
-int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id)
+int
+tf_msg_insert_em_internal_entry(struct tf *tfp,
+				struct tf_insert_em_entry_parms *em_parms,
+				uint16_t *rptr_index,
+				uint8_t *rptr_entry,
+				uint8_t *num_of_entries)
 {
 	int rc;
-	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
-	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_insert_input req = { 0 };
+	struct hwrm_tf_em_insert_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_em_64b_entry *em_result =
+		(struct tf_em_64b_entry *)em_parms->em_record;
+	uint32_t flags;
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	tfp_memcpy(req.em_key,
+		   em_parms->key,
+		   ((em_parms->key_sz_in_bits + 7) / 8));
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.strength = (em_result->hdr.word1 &
+			CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
+			CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
+	req.em_key_bitlen = em_parms->key_sz_in_bits;
+	req.action_ptr = em_result->hdr.pointer;
+	req.em_record_idx = *rptr_index;
+
+	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*rptr_entry = resp.rptr_entry;
+	*rptr_index = resp.rptr_index;
+	*num_of_entries = resp.num_of_entries;
+
+	return 0;
+}
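A small hedged sketch of the key-length rounding used by the tfp_memcpy() above; the helper name is made up for illustration.

static inline uint16_t tf_em_key_bytes(uint16_t key_sz_in_bits)
{
	/* Round the bit length up to whole bytes, e.g. 173 bits -> 22 bytes. */
	return (key_sz_in_bits + 7) / 8;
}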
+
+int
+tf_msg_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *em_parms)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct hwrm_tf_em_delete_input req = { 0 };
+	struct hwrm_tf_em_delete_output resp = { 0 };
+	uint32_t flags;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	flags = (em_parms->dir == TF_DIR_TX ?
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_16(flags);
+	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+
+	parms.tf_type = HWRM_TF_EM_DELETE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+
+	return 0;
+}
+
+int
+tf_msg_em_mem_rgtr(struct tf *tfp,
+		   int page_lvl,
+		   int page_size,
+		   uint64_t dma_addr,
+		   uint16_t *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
 
-	req.page_level = page_lvl;
-	req.page_size = page_size;
-	req.page_dir = tfp_cpu_to_le_64(dma_addr);
-
 	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
@@ -1208,11 +543,9 @@ int tf_msg_em_mem_rgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM mem unregister request to Firmware
- */
-int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t  *ctx_id)
+int
+tf_msg_em_mem_unrgtr(struct tf *tfp,
+		     uint16_t *ctx_id)
 {
 	int rc;
 	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
@@ -1233,12 +566,10 @@ int tf_msg_em_mem_unrgtr(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM qcaps request to Firmware
- */
-int tf_msg_em_qcaps(struct tf *tfp,
-		    int dir,
-		    struct tf_em_caps *em_caps)
+int
+tf_msg_em_qcaps(struct tf *tfp,
+		int dir,
+		struct tf_em_caps *em_caps)
 {
 	int rc;
 	struct hwrm_tf_ext_em_qcaps_input  req = {0};
@@ -1273,17 +604,15 @@ int tf_msg_em_qcaps(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM config request to Firmware
- */
-int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t   num_entries,
-		  uint16_t   key0_ctx_id,
-		  uint16_t   key1_ctx_id,
-		  uint16_t   record_ctx_id,
-		  uint16_t   efc_ctx_id,
-		  uint8_t    flush_interval,
-		  int        dir)
+int
+tf_msg_em_cfg(struct tf *tfp,
+	      uint32_t num_entries,
+	      uint16_t key0_ctx_id,
+	      uint16_t key1_ctx_id,
+	      uint16_t record_ctx_id,
+	      uint16_t efc_ctx_id,
+	      uint8_t flush_interval,
+	      int dir)
 {
 	int rc;
 	struct hwrm_tf_ext_em_cfg_input  req = {0};
@@ -1317,42 +646,23 @@ int tf_msg_em_cfg(struct tf *tfp,
 	return rc;
 }
 
-/**
- * Sends EM internal insert request to Firmware
- */
-int tf_msg_insert_em_internal_entry(struct tf *tfp,
-				struct tf_insert_em_entry_parms *em_parms,
-				uint16_t *rptr_index,
-				uint8_t *rptr_entry,
-				uint8_t *num_of_entries)
+int
+tf_msg_em_op(struct tf *tfp,
+	     int dir,
+	     uint16_t op)
 {
-	int                         rc;
-	struct tfp_send_msg_parms        parms = { 0 };
-	struct hwrm_tf_em_insert_input   req = { 0 };
-	struct hwrm_tf_em_insert_output  resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_em_64b_entry *em_result =
-		(struct tf_em_64b_entry *)em_parms->em_record;
+	int rc;
+	struct hwrm_tf_ext_em_op_input req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
 	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	tfp_memcpy(req.em_key,
-		   em_parms->key,
-		   ((em_parms->key_sz_in_bits + 7) / 8));
-
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.strength =
-		(em_result->hdr.word1 & CFA_P4_EEM_ENTRY_STRENGTH_MASK) >>
-		CFA_P4_EEM_ENTRY_STRENGTH_SHIFT;
-	req.em_key_bitlen = em_parms->key_sz_in_bits;
-	req.action_ptr = em_result->hdr.pointer;
-	req.em_record_idx = *rptr_index;
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
 
-	parms.tf_type = HWRM_TF_EM_INSERT;
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1361,75 +671,86 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &parms);
-	if (rc)
-		return rc;
-
-	*rptr_entry = resp.rptr_entry;
-	*rptr_index = resp.rptr_index;
-	*num_of_entries = resp.num_of_entries;
-
-	return 0;
+	return rc;
 }
 
-/**
- * Sends EM delete insert request to Firmware
- */
-int tf_msg_delete_em_entry(struct tf *tfp,
-			   struct tf_delete_em_entry_parms *em_parms)
+int
+tf_msg_tcam_entry_set(struct tf *tfp,
+		      struct tf_tcam_set_parms *parms)
 {
-	int                             rc;
-	struct tfp_send_msg_parms       parms = { 0 };
-	struct hwrm_tf_em_delete_input  req = { 0 };
-	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	int rc;
+	struct tfp_send_msg_parms mparms = { 0 };
+	struct hwrm_tf_tcam_set_input req = { 0 };
+	struct hwrm_tf_tcam_set_output resp = { 0 };
+	struct tf_msg_dma_buf buf = { 0 };
+	uint8_t *data = NULL;
+	int data_size = 0;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.type = parms->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(parms->idx);
+	if (parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
 
-	flags = (em_parms->dir == TF_DIR_TX ?
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_16(flags);
-	req.flow_handle = tfp_cpu_to_le_64(em_parms->flow_handle);
+	req.key_size = parms->key_size;
+	req.mask_offset = parms->key_size;
+	/* Result follows after key and mask, thus multiply by 2 */
+	req.result_offset = 2 * parms->key_size;
+	req.result_size = parms->result_size;
+	data_size = 2 * req.key_size + req.result_size;
 
-	parms.tf_type = HWRM_TF_EM_DELETE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
+	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
+		/* use pci buffer */
+		data = &req.dev_data[0];
+	} else {
+		/* use dma buffer */
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+		rc = tf_msg_alloc_dma_buf(&buf, data_size);
+		if (rc)
+			goto cleanup;
+		data = buf.va_addr;
+		tfp_memcpy(&req.dev_data[0],
+			   &buf.pa_addr,
+			   sizeof(buf.pa_addr));
+	}
+
+	tfp_memcpy(&data[0], parms->key, parms->key_size);
+	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
+	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
+
+	mparms.tf_type = HWRM_TF_TCAM_SET;
+	mparms.req_data = (uint32_t *)&req;
+	mparms.req_size = sizeof(req);
+	mparms.resp_data = (uint32_t *)&resp;
+	mparms.resp_size = sizeof(resp);
+	mparms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp,
-				 &parms);
+				 &mparms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
-	em_parms->index = tfp_le_to_cpu_16(resp.em_index);
+cleanup:
+	tf_msg_free_dma_buf(&buf);
 
-	return 0;
+	return rc;
 }
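To make the buffer layout in tf_msg_tcam_entry_set() concrete, a worked example with illustrative sizes:

	/*
	 * key_size = 16, result_size = 8:
	 *   key    at data[0]  .. data[15]   (mask_offset   = 16)
	 *   mask   at data[16] .. data[31]   (result_offset = 32)
	 *   result at data[32] .. data[39]   (data_size     = 40)
	 * Assuming TF_PCI_BUF_SIZE_MAX >= 40, the embedded dev_data
	 * buffer is used and no DMA buffer is allocated.
	 */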
 
-/**
- * Sends EM operation request to Firmware
- */
-int tf_msg_em_op(struct tf *tfp,
-		 int dir,
-		 uint16_t op)
+int
+tf_msg_tcam_entry_free(struct tf *tfp,
+		       struct tf_tcam_free_parms *in_parms)
 {
 	int rc;
-	struct hwrm_tf_ext_em_op_input req = {0};
-	struct hwrm_tf_ext_em_op_output resp = {0};
-	uint32_t flags;
+	struct hwrm_tf_tcam_free_input req =  { 0 };
+	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
 
-	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
-		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
-	req.flags = tfp_cpu_to_le_32(flags);
-	req.op = tfp_cpu_to_le_16(op);
+	req.type = in_parms->hcapi_type;
+	req.count = 1;
+	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
+	if (in_parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
 
-	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.tf_type = HWRM_TF_TCAM_FREE;
 	parms.req_data = (uint32_t *)&req;
 	parms.req_size = sizeof(req);
 	parms.resp_data = (uint32_t *)&resp;
@@ -1444,21 +765,32 @@ int tf_msg_em_op(struct tf *tfp,
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_set_input req = { 0 };
+	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_set_input req = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
 	req.index = tfp_cpu_to_le_32(index);
 
@@ -1466,13 +798,15 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 		   data,
 		   size);
 
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_TBL_TYPE_SET,
-			 req);
+	parms.tf_type = HWRM_TF_TBL_TYPE_SET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1482,32 +816,43 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 int
 tf_msg_get_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
-		     enum tf_tbl_type type,
+		     uint16_t hcapi_type,
 		     uint16_t size,
 		     uint8_t *data,
 		     uint32_t index)
 {
 	int rc;
+	struct hwrm_tf_tbl_type_get_input req = { 0 };
+	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_tbl_type_get_input req = { 0 };
-	struct tf_tbl_type_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
-	req.type = tfp_cpu_to_le_32(type);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_TBL_TYPE_GET,
-		 req,
-		 resp);
+	parms.tf_type = HWRM_TF_TBL_TYPE_GET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
 	if (rc)
 		return rc;
 
@@ -1522,6 +867,8 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/* HWRM Tunneled messages */
+
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  struct tf_bulk_get_tbl_entry_parms *params)
@@ -1562,96 +909,3 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
-
-int
-tf_msg_tcam_entry_set(struct tf *tfp,
-		      struct tf_tcam_set_parms *parms)
-{
-	int rc;
-	struct tfp_send_msg_parms mparms = { 0 };
-	struct hwrm_tf_tcam_set_input req = { 0 };
-	struct hwrm_tf_tcam_set_output resp = { 0 };
-	struct tf_msg_dma_buf buf = { 0 };
-	uint8_t *data = NULL;
-	int data_size = 0;
-
-	req.type = parms->type;
-
-	req.idx = tfp_cpu_to_le_16(parms->idx);
-	if (parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
-
-	req.key_size = parms->key_size;
-	req.mask_offset = parms->key_size;
-	/* Result follows after key and mask, thus multiply by 2 */
-	req.result_offset = 2 * parms->key_size;
-	req.result_size = parms->result_size;
-	data_size = 2 * req.key_size + req.result_size;
-
-	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
-		/* use pci buffer */
-		data = &req.dev_data[0];
-	} else {
-		/* use dma buffer */
-		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
-		rc = tf_msg_alloc_dma_buf(&buf, data_size);
-		if (rc)
-			goto cleanup;
-		data = buf.va_addr;
-		tfp_memcpy(&req.dev_data[0],
-			   &buf.pa_addr,
-			   sizeof(buf.pa_addr));
-	}
-
-	tfp_memcpy(&data[0], parms->key, parms->key_size);
-	tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
-	tfp_memcpy(&data[req.result_offset], parms->result, parms->result_size);
-
-	mparms.tf_type = HWRM_TF_TCAM_SET;
-	mparms.req_data = (uint32_t *)&req;
-	mparms.req_size = sizeof(req);
-	mparms.resp_data = (uint32_t *)&resp;
-	mparms.resp_size = sizeof(resp);
-	mparms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &mparms);
-	if (rc)
-		goto cleanup;
-
-cleanup:
-	tf_msg_free_dma_buf(&buf);
-
-	return rc;
-}
-
-int
-tf_msg_tcam_entry_free(struct tf *tfp,
-		       struct tf_tcam_free_parms *in_parms)
-{
-	int rc;
-	struct hwrm_tf_tcam_free_input req =  { 0 };
-	struct hwrm_tf_tcam_free_output resp = { 0 };
-	struct tfp_send_msg_parms parms = { 0 };
-
-	/* Populate the request */
-	rc = tf_tcam_tbl_2_hwrm(in_parms->type, &req.type);
-	if (rc != 0)
-		return rc;
-
-	req.count = 1;
-	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
-	if (in_parms->dir == TF_DIR_TX)
-		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
-
-	parms.tf_type = HWRM_TF_TCAM_FREE;
-	parms.req_data = (uint32_t *)&req;
-	parms.req_size = sizeof(req);
-	parms.resp_data = (uint32_t *)&resp;
-	parms.resp_size = sizeof(resp);
-	parms.mailbox = TF_KONG_MB;
-
-	rc = tfp_send_msg_direct(tfp,
-				 &parms);
-	return rc;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 1ff1044e8..8e276d4c0 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -16,6 +16,8 @@
 
 struct tf;
 
+/* HWRM Direct messages */
+
 /**
  * Sends session open request to Firmware
  *
@@ -29,7 +31,7 @@ struct tf;
  *   Pointer to the fw_session_id that is allocated on firmware side
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
@@ -46,7 +48,7 @@ int tf_msg_session_open(struct tf *tfp,
  *   time of session open
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_attach(struct tf *tfp,
 			  char *ctrl_channel_name,
@@ -59,73 +61,21 @@ int tf_msg_session_attach(struct tf *tfp,
  *   Pointer to session handle
  *
  * Returns:
- *
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_close(struct tf *tfp);
 
 /**
  * Sends session query config request to TF Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_qcfg(struct tf *tfp);
 
-/**
- * Sends session HW resource query capability request to TF Firmware
- */
-int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_query *hw_query);
-
-/**
- * Sends session HW resource allocation request to TF Firmware
- */
-int tf_msg_session_hw_resc_alloc(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_hw_alloc *hw_alloc,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource free request to TF Firmware
- */
-int tf_msg_session_hw_resc_free(struct tf *tfp,
-				enum tf_dir dir,
-				struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session HW resource flush request to TF Firmware
- */
-int tf_msg_session_hw_resc_flush(struct tf *tfp,
-				 enum tf_dir dir,
-				 struct tf_rm_entry *hw_entry);
-
-/**
- * Sends session SRAM resource query capability request to TF Firmware
- */
-int tf_msg_session_sram_resc_qcaps(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_query *sram_query);
-
-/**
- * Sends session SRAM resource allocation request to TF Firmware
- */
-int tf_msg_session_sram_resc_alloc(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_sram_alloc *sram_alloc,
-				   struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource free request to TF Firmware
- */
-int tf_msg_session_sram_resc_free(struct tf *tfp,
-				  enum tf_dir dir,
-				  struct tf_rm_entry *sram_entry);
-
-/**
- * Sends session SRAM resource flush request to TF Firmware
- */
-int tf_msg_session_sram_resc_flush(struct tf *tfp,
-				   enum tf_dir dir,
-				   struct tf_rm_entry *sram_entry);
-
 /**
  * Sends session HW resource query capability request to TF Firmware
  *
@@ -183,6 +133,21 @@ int tf_msg_session_resc_alloc(struct tf *tfp,
 
 /**
  * Sends session resource flush request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] size
+ *   Number of elements in the resv array
+ *
+ * [in] resv
+ *   Pointer to an array of reserved elements that need to be flushed
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_session_resc_flush(struct tf *tfp,
 			      enum tf_dir dir,
@@ -190,6 +155,24 @@ int tf_msg_session_resc_flush(struct tf *tfp,
 			      struct tf_rm_resc_entry *resv);
 /**
  * Sends EM internal insert request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to em insert parameter list
+ *
+ * [in/out] rptr_index
+ *   Record ptr index; updated with the index used by the insert
+ *
+ * [out] rptr_entry
+ *   Record ptr entry
+ *
+ * [out] num_of_entries
+ *   Number of entries consumed by the insert
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    struct tf_insert_em_entry_parms *params,
@@ -198,26 +181,75 @@ int tf_msg_insert_em_internal_entry(struct tf *tfp,
 				    uint8_t *num_of_entries);
 /**
  * Sends EM internal delete request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] em_parms
+ *   Pointer to em delete parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_delete_em_entry(struct tf *tfp,
 			   struct tf_delete_em_entry_parms *em_parms);
+
 /**
  * Sends EM mem register request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] page_lvl
+ *   Page level
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] dma_addr
+ *   DMA Address for the memory page
+ *
+ * [out] ctx_id
+ *   Pointer to the returned context id
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_rgtr(struct tf *tfp,
-		       int           page_lvl,
-		       int           page_size,
-		       uint64_t      dma_addr,
-		       uint16_t     *ctx_id);
+		       int page_lvl,
+		       int page_size,
+		       uint64_t dma_addr,
+		       uint16_t *ctx_id);
 
 /**
  * Sends EM mem unregister request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctx_id
+ *   Context id
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_mem_unrgtr(struct tf *tfp,
-			 uint16_t     *ctx_id);
+			 uint16_t *ctx_id);
 
 /**
  * Sends EM qcaps request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [out] em_caps
+ *   Pointer to the returned EM capabilities
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_qcaps(struct tf *tfp,
 		    int dir,
@@ -225,22 +257,63 @@ int tf_msg_em_qcaps(struct tf *tfp,
 
 /**
  * Sends EM config request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] num_entries
+ *   EM Table, key 0, number of entries to configure
+ *
+ * [in] key0_ctx_id
+ *   EM Table, Key 0 context id
+ *
+ * [in] key1_ctx_id
+ *   EM Table, Key 1 context id
+ *
+ * [in] record_ctx_id
+ *   EM Table, Record context id
+ *
+ * [in] efc_ctx_id
+ *   EM Table, EFC Table context id
+ *
+ * [in] flush_interval
+ *   Flush pending HW cached flows every 1/10th of value set in
+ *   seconds, both idle and active flows are flushed from the HW
+ *   cache. If set to 0, this feature will be disabled.
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_cfg(struct tf *tfp,
-		  uint32_t      num_entries,
-		  uint16_t      key0_ctx_id,
-		  uint16_t      key1_ctx_id,
-		  uint16_t      record_ctx_id,
-		  uint16_t      efc_ctx_id,
-		  uint8_t       flush_interval,
-		  int           dir);
+		  uint32_t num_entries,
+		  uint16_t key0_ctx_id,
+		  uint16_t key1_ctx_id,
+		  uint16_t record_ctx_id,
+		  uint16_t efc_ctx_id,
+		  uint8_t flush_interval,
+		  int dir);
 
 /**
  * Sends EM operation request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or Transmit direction
+ *
+ * [in] op
+ *   CFA Operator
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
  */
 int tf_msg_em_op(struct tf *tfp,
-		 int        dir,
-		 uint16_t   op);
+		 int dir,
+		 uint16_t op);
 
 /**
  * Sends tcam entry 'set' to the Firmware.
@@ -281,7 +354,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to set
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to set
  *
  * [in] size
@@ -298,7 +371,7 @@ int tf_msg_tcam_entry_free(struct tf *tfp,
  */
 int tf_msg_set_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
@@ -312,7 +385,7 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  * [in] dir
  *   Direction location of the element to get
  *
- * [in] type
+ * [in] hcapi_type
  *   Type of the object to get
  *
  * [in] size
@@ -329,11 +402,13 @@ int tf_msg_set_tbl_entry(struct tf *tfp,
  */
 int tf_msg_get_tbl_entry(struct tf *tfp,
 			 enum tf_dir dir,
-			 enum tf_tbl_type type,
+			 uint16_t hcapi_type,
 			 uint16_t size,
 			 uint8_t *data,
 			 uint32_t index);
 
+/* HWRM Tunneled messages */
+
 /**
  * Sends bulk get message of a Table Type element to the firmware.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index b6fe2f1ad..e0a84e64d 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -1818,16 +1818,8 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_entries = tfs->resc.tx.hw_entry;
 
 	/* Query for Session HW Resources */
-	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
 
+	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
 	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1846,16 +1838,6 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
 
 	/* Allocate Session HW Resources */
-	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform HW allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -1906,17 +1888,7 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	else
 		sram_entries = tfs->resc.tx.sram_entry;
 
-	/* Query for Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
+	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
 	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
 	if (rc) {
 		/* Log error */
@@ -1934,20 +1906,6 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
 		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
 
-	/* Allocate Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_alloc(tfp,
-					    dir,
-					    &sram_alloc,
-					    sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
 	/* Perform SRAM allocation validation as its possible the
 	 * resource availability changed between qcaps and alloc
 	 */
@@ -2798,17 +2756,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_hw_flush(i, hw_flush_entries);
-			rc = tf_msg_session_hw_resc_flush(tfp,
-							  i,
-							  hw_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
 		}
 
 		/* Check for any not previously freed SRAM resources
@@ -2828,38 +2775,6 @@ tf_rm_close(struct tf *tfp)
 
 			/* Log the entries to be flushed */
 			tf_rm_log_sram_flush(i, sram_flush_entries);
-
-			rc = tf_msg_session_sram_resc_flush(tfp,
-							    i,
-							    sram_flush_entries);
-			if (rc) {
-				rc_close = rc;
-				/* Log error */
-				TFP_DRV_LOG(ERR,
-					    "%s, HW flush failed, rc:%s\n",
-					    tf_dir_2_str(i),
-					    strerror(-rc));
-			}
-		}
-
-		rc = tf_msg_session_hw_resc_free(tfp, i, hw_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, HW free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
-		}
-
-		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
-		if (rc) {
-			rc_close = rc;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, SRAM free failed, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
 		}
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
index de8f11955..2d9be654a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ b/drivers/net/bnxt/tf_core/tf_rm_new.c
@@ -95,7 +95,9 @@ struct tf_rm_new_db {
  *   - EOPNOTSUPP - Operation not supported
  */
 static void
-tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
 			       uint16_t *reservations,
 			       uint16_t count,
 			       uint16_t *valid_count)
@@ -107,6 +109,26 @@ tf_rm_count_hcapi_reservations(struct tf_rm_element_cfg *cfg,
 		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
 		    reservations[i] > 0)
 			cnt++;
+
+		/* Only log msg if a type is attempted reserved and
+		 * not supported. We ignore EM module as its using a
+		 * split configuration array thus it would fail for
+		 * this type of check.
+		 */
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
+			TFP_DRV_LOG(ERR,
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
+				tf_dir_2_str(dir),
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+		}
 	}
 
 	*valid_count = cnt;
@@ -405,7 +427,9 @@ tf_rm_create_db(struct tf *tfp,
 	 * the DB holds them all as to give a fast lookup. We can also
 	 * remove entries where there are no request for elements.
 	 */
-	tf_rm_count_hcapi_reservations(parms->cfg,
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
 				       parms->alloc_cnt,
 				       parms->num_elements,
 				       &hcapi_items);
@@ -507,6 +531,11 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
 			/* Create pool */
 			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
 				     sizeof(struct bitalloc));
@@ -548,11 +577,16 @@ tf_rm_create_db(struct tf *tfp,
 		}
 	}
 
-	rm_db->num_entries = i;
+	rm_db->num_entries = parms->num_elements;
 	rm_db->dir = parms->dir;
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index e594f0248..d7f5de4c4 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -26,741 +26,6 @@
 #include "stack.h"
 #include "tf_common.h"
 
-#define PTU_PTE_VALID          0x1UL
-#define PTU_PTE_LAST           0x2UL
-#define PTU_PTE_NEXT_TO_LAST   0x4UL
-
-/* Number of pointers per page_size */
-#define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
-
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define	TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
-/**
- * Function to free a page table
- *
- * [in] tp
- *   Pointer to the page table to free
- */
-static void
-tf_em_free_pg_tbl(struct hcapi_cfa_em_page_tbl *tp)
-{
-	uint32_t i;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		if (!tp->pg_va_tbl[i]) {
-			TFP_DRV_LOG(WARNING,
-				    "No mapping for page: %d table: %016" PRIu64 "\n",
-				    i,
-				    (uint64_t)(uintptr_t)tp);
-			continue;
-		}
-
-		tfp_free(tp->pg_va_tbl[i]);
-		tp->pg_va_tbl[i] = NULL;
-	}
-
-	tp->pg_count = 0;
-	tfp_free(tp->pg_va_tbl);
-	tp->pg_va_tbl = NULL;
-	tfp_free(tp->pg_pa_tbl);
-	tp->pg_pa_tbl = NULL;
-}
-
-/**
- * Function to free an EM table
- *
- * [in] tbl
- *   Pointer to the EM table to free
- */
-static void
-tf_em_free_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-		TFP_DRV_LOG(INFO,
-			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
-			   TF_EM_PAGE_SIZE,
-			    i,
-			    tp->pg_count);
-
-		tf_em_free_pg_tbl(tp);
-	}
-
-	tbl->l0_addr = NULL;
-	tbl->l0_dma_addr = 0;
-	tbl->num_lvl = 0;
-	tbl->num_data_pages = 0;
-}
-
-/**
- * Allocation of page tables
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] pg_count
- *   Page count to allocate
- *
- * [in] pg_size
- *   Size of each page
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_pg_tbl(struct hcapi_cfa_em_page_tbl *tp,
-		   uint32_t pg_count,
-		   uint32_t pg_size)
-{
-	uint32_t i;
-	struct tfp_calloc_parms parms;
-
-	parms.nitems = pg_count;
-	parms.size = sizeof(void *);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0)
-		return -ENOMEM;
-
-	tp->pg_va_tbl = parms.mem_va;
-
-	if (tfp_calloc(&parms) != 0) {
-		tfp_free(tp->pg_va_tbl);
-		return -ENOMEM;
-	}
-
-	tp->pg_pa_tbl = parms.mem_va;
-
-	tp->pg_count = 0;
-	tp->pg_size = pg_size;
-
-	for (i = 0; i < pg_count; i++) {
-		parms.nitems = 1;
-		parms.size = pg_size;
-		parms.alignment = TF_EM_PAGE_ALIGNMENT;
-
-		if (tfp_calloc(&parms) != 0)
-			goto cleanup;
-
-		tp->pg_pa_tbl[i] = (uintptr_t)parms.mem_pa;
-		tp->pg_va_tbl[i] = parms.mem_va;
-
-		memset(tp->pg_va_tbl[i], 0, pg_size);
-		tp->pg_count++;
-	}
-
-	return 0;
-
-cleanup:
-	tf_em_free_pg_tbl(tp);
-	return -ENOMEM;
-}
-
-/**
- * Allocates EM page tables
- *
- * [in] tbl
- *   Table to allocate pages for
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of memory
- */
-static int
-tf_em_alloc_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp;
-	int rc = 0;
-	int i;
-	uint32_t j;
-
-	for (i = 0; i < tbl->num_lvl; i++) {
-		tp = &tbl->pg_tbl[i];
-
-		rc = tf_em_alloc_pg_tbl(tp,
-					tbl->page_cnt[i],
-					TF_EM_PAGE_SIZE);
-		if (rc) {
-			TFP_DRV_LOG(WARNING,
-				"Failed to allocate page table: lvl: %d, rc:%s\n",
-				i,
-				strerror(-rc));
-			goto cleanup;
-		}
-
-		for (j = 0; j < tp->pg_count; j++) {
-			TFP_DRV_LOG(INFO,
-				"EEM: Allocated page table: size %u lvl %d cnt"
-				" %u VA:%p PA:%p\n",
-				TF_EM_PAGE_SIZE,
-				i,
-				tp->pg_count,
-				(uint32_t *)tp->pg_va_tbl[j],
-				(uint32_t *)(uintptr_t)tp->pg_pa_tbl[j]);
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_free_page_table(tbl);
-	return rc;
-}
-
-/**
- * Links EM page tables
- *
- * [in] tp
- *   Pointer to page table
- *
- * [in] tp_next
- *   Pointer to the next page table
- *
- * [in] set_pte_last
- *   Flag controlling if the page table is last
- */
-static void
-tf_em_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-		      struct hcapi_cfa_em_page_tbl *tp_next,
-		      bool set_pte_last)
-{
-	uint64_t *pg_pa = tp_next->pg_pa_tbl;
-	uint64_t *pg_va;
-	uint64_t valid;
-	uint32_t k = 0;
-	uint32_t i;
-	uint32_t j;
-
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			if (k == tp_next->pg_count - 2 && set_pte_last)
-				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
-			else if (k == tp_next->pg_count - 1 && set_pte_last)
-				valid = PTU_PTE_LAST | PTU_PTE_VALID;
-			else
-				valid = PTU_PTE_VALID;
-
-			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
-			if (++k >= tp_next->pg_count)
-				return;
-		}
-	}
-}
-
-/**
- * Setup a EM page table
- *
- * [in] tbl
- *   Pointer to EM page table
- */
-static void
-tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
-{
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_page_tbl *tp;
-	bool set_pte_last = 0;
-	int i;
-
-	for (i = 0; i < tbl->num_lvl - 1; i++) {
-		tp = &tbl->pg_tbl[i];
-		tp_next = &tbl->pg_tbl[i + 1];
-		if (i == tbl->num_lvl - 2)
-			set_pte_last = 1;
-		tf_em_link_page_table(tp, tp_next, set_pte_last);
-	}
-
-	tbl->l0_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_va_tbl[0];
-	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
-}
-
-/**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
-/**
- * Unregisters EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- */
-static void
-tf_em_ctx_unreg(struct tf *tfp,
-		struct tf_tbl_scope_cb *tbl_scope_cb,
-		int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp =
-		&tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
-			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
-			tf_em_free_page_table(tbl);
-		}
-	}
-}
-
-/**
- * Registers EM Ctx in Firmware
- *
- * [in] tfp
- *   Pointer to a TruFlow handle
- *
- * [in] tbl_scope_cb
- *   Pointer to a table scope control block
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * Returns:
- *   0       - Success
- *   -ENOMEM - Out of Memory
- */
-static int
-tf_em_ctx_reg(struct tf *tfp,
-	      struct tf_tbl_scope_cb *tbl_scope_cb,
-	      int dir)
-{
-	struct hcapi_cfa_em_ctx_mem_info *ctxp =
-		&tbl_scope_cb->em_ctx_info[dir];
-	struct hcapi_cfa_em_table *tbl;
-	int rc = 0;
-	int i;
-
-	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
-		tbl = &ctxp->em_tables[i];
-
-		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
-
-			if (rc)
-				goto cleanup;
-
-			rc = tf_em_alloc_page_table(tbl);
-			if (rc)
-				goto cleanup;
-
-			tf_em_setup_page_table(tbl);
-			rc = tf_msg_em_mem_rgtr(tfp,
-						tbl->num_lvl - 1,
-						TF_EM_PAGE_SIZE_ENUM,
-						tbl->l0_dma_addr,
-						&tbl->ctx_id);
-			if (rc)
-				goto cleanup;
-		}
-	}
-	return rc;
-
-cleanup:
-	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	return rc;
-}
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->rx_num_flows_in_k != 0 &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if (parms->tx_num_flows_in_k != 0 &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries =
-		0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries =
-		0;
-
-	return 0;
-}
-
 /**
  * Internal function to get a Table Entry. Supports all Table Types
  * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
@@ -883,289 +148,6 @@ tf_free_tbl_entry_shadow(struct tf_session *tfs,
 }
 #endif /* TF_SHADOW */
 
-/**
- * Create External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- * [in] num_entries
- *   number of entries to write
- * [in] entry_sz_bytes
- *   size of each entry
- *
- * Return:
- *  0       - Success, entry allocated - no search support
- *  -ENOMEM -EINVAL -EOPNOTSUPP
- *          - Failure, entry not allocated, out of resources
- */
-static int
-tf_create_tbl_pool_external(enum tf_dir dir,
-			    struct tf_tbl_scope_cb *tbl_scope_cb,
-			    uint32_t num_entries,
-			    uint32_t entry_sz_bytes)
-{
-	struct tfp_calloc_parms parms;
-	uint32_t i;
-	int32_t j;
-	int rc = 0;
-	struct stack *pool = &tbl_scope_cb->ext_act_pool[dir];
-
-	parms.nitems = num_entries;
-	parms.size = sizeof(uint32_t);
-	parms.alignment = 0;
-
-	if (tfp_calloc(&parms) != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: external pool failure %s\n",
-			    tf_dir_2_str(dir), strerror(ENOMEM));
-		return -ENOMEM;
-	}
-
-	/* Create empty stack
-	 */
-	rc = stack_init(num_entries, parms.mem_va, pool);
-
-	if (rc != 0) {
-		TFP_DRV_LOG(ERR, "%s: TBL: stack init failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-
-	/* Save the  malloced memory address so that it can
-	 * be freed when the table scope is freed.
-	 */
-	tbl_scope_cb->ext_act_pool_mem[dir] = (uint32_t *)parms.mem_va;
-
-	/* Fill pool with indexes in reverse
-	 */
-	j = (num_entries - 1) * entry_sz_bytes;
-
-	for (i = 0; i < num_entries; i++) {
-		rc = stack_push(pool, j);
-		if (rc != 0) {
-			TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-				    tf_dir_2_str(dir), strerror(-rc));
-			goto cleanup;
-		}
-
-		if (j < 0) {
-			TFP_DRV_LOG(ERR, "%d TBL: invalid offset (%d)\n",
-				    dir, j);
-			goto cleanup;
-		}
-		j -= entry_sz_bytes;
-	}
-
-	if (!stack_is_full(pool)) {
-		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "%s TBL: stack failure %s\n",
-			    tf_dir_2_str(dir), strerror(-rc));
-		goto cleanup;
-	}
-	return 0;
-cleanup:
-	tfp_free((void *)parms.mem_va);
-	return rc;
-}
-
-/**
- * Destroy External Tbl pool of memory indexes.
- *
- * [in] dir
- *   direction
- * [in] tbl_scope_cb
- *   pointer to the table scope
- *
- */
-static void
-tf_destroy_tbl_pool_external(enum tf_dir dir,
-			     struct tf_tbl_scope_cb *tbl_scope_cb)
-{
-	uint32_t *ext_act_pool_mem =
-		tbl_scope_cb->ext_act_pool_mem[dir];
-
-	tfp_free(ext_act_pool_mem);
-}
-
-/* API defined in tf_em.h */
-struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
-{
-	int i;
-
-	/* Check that id is valid */
-	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
-	if (i < 0)
-		return NULL;
-
-	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
-	}
-
-	return NULL;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			 struct tf_free_tbl_scope_parms *parms)
-{
-	int rc = 0;
-	enum tf_dir  dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
-
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Table scope error\n");
-		return -EINVAL;
-	}
-
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-
-	/* free table scope locks */
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/* Free associated external pools
-		 */
-		tf_destroy_tbl_pool_external(dir,
-					     tbl_scope_cb);
-		tf_msg_em_op(tfp,
-			     dir,
-			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
-
-		/* free table scope and all associated resources */
-		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
-	}
-
-	return rc;
-}
-
-/* API defined in tf_em.h */
-int
-tf_alloc_eem_tbl_scope(struct tf *tfp,
-		       struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-	enum tf_dir dir;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_table *em_tables;
-	int index;
-	struct tf_session *session;
-	struct tf_free_tbl_scope_parms free_parms;
-
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* Get Table Scope control block from the session pool */
-	index = ba_alloc(session->tbl_scope_pool_rx);
-	if (index == -1) {
-		TFP_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
-			    "Control Block\n");
-		return -ENOMEM;
-	}
-
-	tbl_scope_cb = &session->tbl_scopes[index];
-	tbl_scope_cb->index = index;
-	tbl_scope_cb->tbl_scope_id = index;
-	parms->tbl_scope_id = index;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		rc = tf_msg_em_qcaps(tfp,
-				     dir,
-				     &tbl_scope_cb->em_caps[dir]);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to query for EEM capability,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-	}
-
-	/*
-	 * Validate and setup table sizes
-	 */
-	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
-		goto cleanup;
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		/*
-		 * Allocate tables and signal configuration to FW
-		 */
-		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to register for EEM ctx,"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup;
-		}
-
-		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
-		rc = tf_msg_em_cfg(tfp,
-				   em_tables[TF_KEY0_TABLE].num_entries,
-				   em_tables[TF_KEY0_TABLE].ctx_id,
-				   em_tables[TF_KEY1_TABLE].ctx_id,
-				   em_tables[TF_RECORD_TABLE].ctx_id,
-				   em_tables[TF_EFC_TABLE].ctx_id,
-				   parms->hw_flow_cache_flush_timer,
-				   dir);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "TBL: Unable to configure EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		rc = tf_msg_em_op(tfp,
-				  dir,
-				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
-
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Unable to enable EEM in firmware"
-				    " rc:%s\n",
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-
-		/* Allocate the pool of offsets of the external memory.
-		 * Initially, this is a single fixed size pool for all external
-		 * actions related to a single table scope.
-		 */
-		rc = tf_create_tbl_pool_external(dir,
-				    tbl_scope_cb,
-				    em_tables[TF_RECORD_TABLE].num_entries,
-				    em_tables[TF_RECORD_TABLE].entry_size);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s TBL: Unable to allocate idx pools %s\n",
-				    tf_dir_2_str(dir),
-				    strerror(-rc));
-			goto cleanup_full;
-		}
-	}
-
-	return 0;
-
-cleanup_full:
-	free_parms.tbl_scope_id = index;
-	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
-	return -EINVAL;
-
-cleanup:
-	/* Free Table control block */
-	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
-	return -EINVAL;
-}
 
  /* API defined in tf_core.h */
 int
@@ -1196,119 +178,3 @@ tf_bulk_get_tbl_entry(struct tf *tfp,
 
 	return rc;
 }
-
-/* API defined in tf_core.h */
-int
-tf_alloc_tbl_scope(struct tf *tfp,
-		   struct tf_alloc_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	rc = tf_alloc_eem_tbl_scope(tfp, parms);
-
-	return rc;
-}
-
-/* API defined in tf_core.h */
-int
-tf_free_tbl_scope(struct tf *tfp,
-		  struct tf_free_tbl_scope_parms *parms)
-{
-	int rc;
-
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
-
-	/* free table scope and all associated resources */
-	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
-
-	return rc;
-}
-
-static void
-tf_dump_link_page_table(struct hcapi_cfa_em_page_tbl *tp,
-			struct hcapi_cfa_em_page_tbl *tp_next)
-{
-	uint64_t *pg_va;
-	uint32_t i;
-	uint32_t j;
-	uint32_t k = 0;
-
-	printf("pg_count:%d pg_size:0x%x\n",
-	       tp->pg_count,
-	       tp->pg_size);
-	for (i = 0; i < tp->pg_count; i++) {
-		pg_va = tp->pg_va_tbl[i];
-		printf("\t%p\n", (void *)pg_va);
-		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
-			printf("\t\t%p\n", (void *)(uintptr_t)pg_va[j]);
-			if (((pg_va[j] & 0x7) ==
-			     tfp_cpu_to_le_64(PTU_PTE_LAST |
-					      PTU_PTE_VALID)))
-				return;
-
-			if (!(pg_va[j] & tfp_cpu_to_le_64(PTU_PTE_VALID))) {
-				printf("** Invalid entry **\n");
-				return;
-			}
-
-			if (++k >= tp_next->pg_count) {
-				printf("** Shouldn't get here **\n");
-				return;
-			}
-		}
-	}
-}
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id);
-
-void tf_dump_dma(struct tf *tfp, uint32_t tbl_scope_id)
-{
-	struct tf_session      *session;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct hcapi_cfa_em_page_tbl *tp;
-	struct hcapi_cfa_em_page_tbl *tp_next;
-	struct hcapi_cfa_em_table *tbl;
-	int i;
-	int j;
-	int dir;
-
-	printf("called %s\n", __func__);
-
-	/* find session struct */
-	session = (struct tf_session *)tfp->session->core_data;
-
-	/* find control block for table scope */
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 tbl_scope_id);
-	if (tbl_scope_cb == NULL)
-		PMD_DRV_LOG(ERR, "No table scope\n");
-
-	for (dir = 0; dir < TF_DIR_MAX; dir++) {
-		printf("Direction %s:\n", (dir == TF_DIR_RX ? "Rx" : "Tx"));
-
-		for (j = TF_KEY0_TABLE; j < TF_MAX_TABLE; j++) {
-			tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[j];
-			printf
-	("Table: j:%d type:%d num_entries:%d entry_size:0x%x num_lvl:%d ",
-			       j,
-			       tbl->type,
-			       tbl->num_entries,
-			       tbl->entry_size,
-			       tbl->num_lvl);
-			if (tbl->pg_tbl[0].pg_va_tbl &&
-			    tbl->pg_tbl[0].pg_pa_tbl)
-				printf("%p %p\n",
-			       tbl->pg_tbl[0].pg_va_tbl[0],
-			       (void *)(uintptr_t)tbl->pg_tbl[0].pg_pa_tbl[0]);
-			for (i = 0; i < tbl->num_lvl - 1; i++) {
-				printf("Level:%d\n", i);
-				tp = &tbl->pg_tbl[i];
-				tp_next = &tbl->pg_tbl[i + 1];
-				tf_dump_link_page_table(tp, tp_next);
-			}
-			printf("\n");
-		}
-	}
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
index bdf7d2089..2f5af6060 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl_type.c
@@ -209,8 +209,10 @@ tf_tbl_set(struct tf *tfp,
 	   struct tf_tbl_set_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
 	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -240,9 +242,22 @@ tf_tbl_set(struct tf *tfp,
 	}
 
 	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	rc = tf_msg_set_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
@@ -262,8 +277,10 @@ tf_tbl_get(struct tf *tfp,
 	   struct tf_tbl_get_parms *parms)
 {
 	int rc;
-	struct tf_rm_is_allocated_parms aparms;
+	uint16_t hcapi_type;
 	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
 	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
@@ -292,10 +309,24 @@ tf_tbl_get(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Get the entry */
 	rc = tf_msg_get_tbl_entry(tfp,
 				  parms->dir,
-				  parms->type,
+				  hcapi_type,
 				  parms->data_sz_in_bytes,
 				  parms->data,
 				  parms->idx);
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 260fb15a6..a1761ad56 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -53,7 +53,6 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	db_cfg.num_elements = parms->num_elements;
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -174,14 +173,15 @@ tf_tcam_alloc(struct tf *tfp,
 }
 
 int
-tf_tcam_free(struct tf *tfp __rte_unused,
-	     struct tf_tcam_free_parms *parms __rte_unused)
+tf_tcam_free(struct tf *tfp,
+	     struct tf_tcam_free_parms *parms)
 {
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -253,6 +253,15 @@ tf_tcam_free(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
@@ -281,6 +290,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
 	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 	uint16_t num_slice_per_row = 1;
 	int allocated = 0;
 
@@ -338,6 +348,15 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 		return rc;
 	}
 
+	/* Convert TF type to HCAPI RM type */
+	hparms.rm_db = tcam_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 5090dfd9f..ee5bacc09 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -76,6 +76,10 @@ struct tf_tcam_free_parms {
 	 * [in] Type of the allocation type
 	 */
 	enum tf_tcam_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
 	/**
 	 * [in] Index to free
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 16c43eb67..5472a9aac 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -152,9 +152,9 @@ tf_device_module_type_subtype_2_str(enum tf_device_module_type dm_type,
 	case TF_DEVICE_MODULE_TYPE_IDENTIFIER:
 		return tf_ident_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_TABLE:
-		return tf_tcam_tbl_2_str(mod_type);
-	case TF_DEVICE_MODULE_TYPE_TCAM:
 		return tf_tbl_type_2_str(mod_type);
+	case TF_DEVICE_MODULE_TYPE_TCAM:
+		return tf_tcam_tbl_2_str(mod_type);
 	case TF_DEVICE_MODULE_TYPE_EM:
 		return tf_em_tbl_type_2_str(mod_type);
 	default:
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v5 23/51] net/bnxt: update table get to use new design
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (21 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 22/51] net/bnxt: use table scope for EM and TCAM lookup Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
                           ` (29 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Move the bulk table get implementation to the new Tbl Module design
  (see the sketch after this list).
- Update the messages for bulk table get.
- Retrieve the specified table element using the bulk mechanism.
- Remove deprecated resource definitions.
- Update the device type configuration for P4.
- Update the RM DB HCAPI count check and fix the EM internal and host
  code so that EM DBs are created correctly.
- Downgrade unbind log messages from error to info level in the
  different modules.
- Move RTE RSVD out of tf_resources.h.
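
Editor's sketch: the snippet below is a minimal, self-contained illustration
of the lookup-then-message flow this patch moves tf_tbl_get()/tf_tbl_set()
onto (resolve the TF table type to an HCAPI RM type via the RM DB, then send
the firmware message with that type). All demo_-prefixed names and the stub
values are stand-ins invented for this example; only the control flow mirrors
the driver code in the diff, and this is not the real TruFlow API.

#include <stdio.h>
#include <stdint.h>

/* Stand-in for the per-direction RM DB: db_index -> HCAPI type. */
struct demo_rm_db {
	uint16_t hcapi_type[4];
};

/* Stand-in for tf_rm_get_hcapi_type(): convert TF type to HCAPI RM type. */
static int demo_rm_get_hcapi_type(const struct demo_rm_db *db,
				  uint16_t db_index, uint16_t *hcapi_type)
{
	if (db_index >= 4)
		return -1;              /* unknown TF table type */
	*hcapi_type = db->hcapi_type[db_index];
	return 0;
}

/* Stand-in for tf_msg_get_tbl_entry(): the driver builds an HWRM message. */
static int demo_msg_get_tbl_entry(uint16_t hcapi_type, uint32_t index)
{
	printf("GET hcapi_type=%u index=%u\n",
	       (unsigned int)hcapi_type, (unsigned int)index);
	return 0;
}

int main(void)
{
	struct demo_rm_db db = { .hcapi_type = { 10, 11, 12, 13 } };
	uint16_t hcapi_type;

	/* Step 1: resolve the TF-level table type through the RM DB. */
	if (demo_rm_get_hcapi_type(&db, 2, &hcapi_type))
		return 1;

	/* Step 2: issue the firmware message using the HCAPI type. */
	return demo_msg_get_tbl_entry(hcapi_type, 7);
}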

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h      |  250 ++
 drivers/net/bnxt/hcapi/hcapi_cfa.h        |    2 +
 drivers/net/bnxt/meson.build              |    3 +-
 drivers/net/bnxt/tf_core/Makefile         |    2 -
 drivers/net/bnxt/tf_core/tf_common.h      |   55 +-
 drivers/net/bnxt/tf_core/tf_core.c        |   86 +-
 drivers/net/bnxt/tf_core/tf_device.h      |   24 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c   |    4 +-
 drivers/net/bnxt/tf_core/tf_device_p4.h   |    5 +-
 drivers/net/bnxt/tf_core/tf_em.h          |   88 +-
 drivers/net/bnxt/tf_core/tf_em_common.c   |   29 +-
 drivers/net/bnxt/tf_core/tf_em_internal.c |   59 +-
 drivers/net/bnxt/tf_core/tf_identifier.c  |   14 +-
 drivers/net/bnxt/tf_core/tf_msg.c         |   31 +-
 drivers/net/bnxt/tf_core/tf_msg.h         |    8 +-
 drivers/net/bnxt/tf_core/tf_resources.h   |  529 ---
 drivers/net/bnxt/tf_core/tf_rm.c          | 3695 ++++-----------------
 drivers/net/bnxt/tf_core/tf_rm.h          |  539 +--
 drivers/net/bnxt/tf_core/tf_rm_new.c      |  907 -----
 drivers/net/bnxt/tf_core/tf_rm_new.h      |  446 ---
 drivers/net/bnxt/tf_core/tf_session.h     |  214 +-
 drivers/net/bnxt/tf_core/tf_tbl.c         |  478 ++-
 drivers/net/bnxt/tf_core/tf_tbl.h         |  436 ++-
 drivers/net/bnxt/tf_core/tf_tbl_type.c    |  342 --
 drivers/net/bnxt/tf_core/tf_tbl_type.h    |  318 --
 drivers/net/bnxt/tf_core/tf_tcam.c        |   15 +-
 26 files changed, 2337 insertions(+), 6242 deletions(-)
 create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c
 delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
new file mode 100644
index 000000000..c30e4f49c
--- /dev/null
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+/*
+ * Name:  cfa_p40_tbl.h
+ *
+ * Description: header for SWE based on Truflow
+ *
+ * Date:  12/16/19 17:18:12
+ *
+ * Note:  This file was originally generated by tflib_decode.py.
+ *        Remainder is hand coded due to lack of availability of xml for
+ *        addtional tables at this time (EEM Record and union fields)
+ *
+ **/
+#ifndef _CFA_P40_TBL_H_
+#define _CFA_P40_TBL_H_
+
+#include "cfa_p40_hw.h"
+
+#include "hcapi_cfa_defs.h"
+
+const struct hcapi_cfa_field cfa_p40_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P40_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_act_veb_tcam_layout[] = {
+	{CFA_P40_ACT_VEB_TCAM_VALID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_VALID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_RESERVED_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_PARIF_IN_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_PARIF_IN_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_NUM_VTAGS_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_MAC_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_MAC_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_OVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_OVID_NUM_BITS},
+	{CFA_P40_ACT_VEB_TCAM_IVID_BITPOS,
+	 CFA_P40_ACT_VEB_TCAM_IVID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_lkup_tcam_record_mem_layout[] = {
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_VALID_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_ACT_REC_PTR_NUM_BITS},
+	{CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_BITPOS,
+	 CFA_P40_LKUP_TCAM_RECORD_MEM_STRENGTH_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_ctxt_remap_mem_layout[] = {
+	{CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_TPID_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PRI_ANTI_SPOOF_CTL_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_SP_LKUP_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_SP_REC_PTR_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BD_ACT_EN_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_TPID_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_DEFAULT_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ALLOWED_PRI_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PARIF_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_BYP_LKUP_EN_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_VNIC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_L2_CTXT_NUM_BITS},
+	{CFA_P40_PROF_CTXT_REMAP_MEM_ARP_BITPOS,
+	 CFA_P40_PROF_CTXT_REMAP_MEM_ARP_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_PL_BYP_LKUP_EN_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_EM_KEY_MASK_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_SEARCH_ENB_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+};
+
+const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
+	{CFA_P40_PROF_PROFILE_TCAM_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PKT_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RECYCLE_CNT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_AGG_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_PROF_FUNC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_RESERVED_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_RESERVED_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_HREC_NEXT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TL4_HDR_IS_UDP_TCP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_ERR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_TUN_HDR_FLAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_UC_MC_BC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_VTAG_PRESENT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L2_TWO_VTAGS_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_HDR_ISIP_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_SRC_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L3_IPV6_CMP_DEST_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_VALID_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_ERROR_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_L4_HDR_IS_UDP_TCP_NUM_BITS},
+};
+
+/**************************************************************************/
+/**
+ * Non-autogenerated fields
+ */
+
+const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
+	{CFA_P40_EEM_KEY_TBL_VALID_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_VALID_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_L1_CACHEABLE_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_STRENGTH_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_STRENGTH_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_KEY_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_KEY_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_REC_SZ_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_REC_SZ_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_ACT_REC_INT_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_ACT_REC_INT_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_EXT_FLOW_CTR_NUM_BITS},
+
+	{CFA_P40_EEM_KEY_TBL_AR_PTR_BITPOS,
+	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
+
+};
+#endif /* _CFA_P40_TBL_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index f60af4e56..7a67493bd 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -243,6 +243,8 @@ int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,
 			       struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
+int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
+			     struct hcapi_cfa_data *mirror);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 35038dc8b..7f3ec6204 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -41,10 +41,9 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_identifier.c',
 	'tf_core/tf_shadow_tbl.c',
 	'tf_core/tf_shadow_tcam.c',
-	'tf_core/tf_tbl_type.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm_new.c',
+	'tf_core/tf_rm.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index f186741e4..9ba60e1c2 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -23,10 +23,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
diff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h
index ec3bca835..b982203db 100644
--- a/drivers/net/bnxt/tf_core/tf_common.h
+++ b/drivers/net/bnxt/tf_core/tf_common.h
@@ -6,52 +6,11 @@
 #ifndef _TF_COMMON_H_
 #define _TF_COMMON_H_
 
-/* Helper to check the parms */
-#define TF_CHECK_PARMS_SESSION(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "%s: session error\n", \
-				    tf_dir_2_str((parms)->dir)); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_PARMS(tfp, parms) do {	\
-		if ((parms) == NULL || (tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
-#define TF_CHECK_TFP_SESSION(tfp) do { \
-		if ((tfp) == NULL) { \
-			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n"); \
-			return -EINVAL; \
-		} \
-		if ((tfp)->session == NULL || \
-		    (tfp)->session->core_data == NULL) { \
-			TFP_DRV_LOG(ERR, "Session error\n"); \
-			return -EINVAL; \
-		} \
-	} while (0)
-
+/* Helpers to perform parameter checks */
 
+/**
+ * Checks 1 parameter against NULL.
+ */
 #define TF_CHECK_PARMS1(parms) do {					\
 		if ((parms) == NULL) {					\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -59,6 +18,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 2 parameters against NULL.
+ */
 #define TF_CHECK_PARMS2(parms1, parms2) do {				\
 		if ((parms1) == NULL || (parms2) == NULL) {		\
 			TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");	\
@@ -66,6 +28,9 @@
 		}							\
 	} while (0)
 
+/**
+ * Checks 3 parameters against NULL.
+ */
 #define TF_CHECK_PARMS3(parms1, parms2, parms3) do {			\
 		if ((parms1) == NULL ||					\
 		    (parms2) == NULL ||					\
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8b3e15c8a..8727900c4 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -186,7 +186,7 @@ int tf_insert_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -241,7 +241,7 @@ int tf_delete_em_entry(struct tf *tfp,
 	struct tf_dev_info     *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -523,7 +523,7 @@ int
 tf_get_tcam_entry(struct tf *tfp __rte_unused,
 		  struct tf_get_tcam_entry_parms *parms __rte_unused)
 {
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 	return -EOPNOTSUPP;
 }
 
@@ -821,7 +821,80 @@ tf_get_tbl_entry(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
+int
+tf_bulk_get_tbl_entry(struct tf *tfp,
+		 struct tf_bulk_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_tbl_get_bulk_parms bparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Can't do static initialization due to UT enum check */
+	memset(&bparms, 0, sizeof(struct tf_tbl_get_bulk_parms));
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		/* Not supported, yet */
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s, External table type not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+
+		return rc;
+	}
+
+	/* Internal table type processing */
+
+	if (dev->ops->tf_dev_get_bulk_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	bparms.dir = parms->dir;
+	bparms.type = parms->type;
+	bparms.starting_idx = parms->starting_idx;
+	bparms.num_entries = parms->num_entries;
+	bparms.entry_sz_in_bytes = parms->entry_sz_in_bytes;
+	bparms.physical_mem_addr = parms->physical_mem_addr;
+	rc = dev->ops->tf_dev_get_bulk_tbl(tfp, &bparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Table get bulk failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_tbl_scope(struct tf *tfp,
 		   struct tf_alloc_tbl_scope_parms *parms)
@@ -830,7 +903,7 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
@@ -861,7 +934,6 @@ tf_alloc_tbl_scope(struct tf *tfp,
 	return rc;
 }
 
-/* API defined in tf_core.h */
 int
 tf_free_tbl_scope(struct tf *tfp,
 		  struct tf_free_tbl_scope_parms *parms)
@@ -870,7 +942,7 @@ tf_free_tbl_scope(struct tf *tfp,
 	struct tf_dev_info *dev;
 	int rc;
 
-	TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
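As a usage illustration of the tf_bulk_get_tbl_entry() API added above, a caller-side sketch follows; the table type, index range, entry size and DMA buffer shown are assumptions made for the example and are not taken from this patch.

struct tf_bulk_get_tbl_entry_parms bulk = { 0 };
uint64_t stats_dma_addr; /* physical address of a DMA-able buffer (assumed) */
int rc;

/* ... allocate a DMA-able buffer and record its physical address ... */

bulk.dir = TF_DIR_RX;
bulk.type = TF_TBL_TYPE_ACT_STATS_64;  /* internal table type (assumed) */
bulk.starting_idx = 0;                 /* first entry to read (assumed) */
bulk.num_entries = 64;                 /* number of entries (assumed) */
bulk.entry_sz_in_bytes = 8;            /* size of one entry (assumed) */
bulk.physical_mem_addr = stats_dma_addr;

rc = tf_bulk_get_tbl_entry(tfp, &bulk);
if (rc)
	TFP_DRV_LOG(ERR, "bulk table get failed, rc:%d\n", rc);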
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 2712d1039..93f3627d4 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -8,7 +8,7 @@
 
 #include "tf_core.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -293,7 +293,27 @@ struct tf_dev_ops {
 	 *   - (-EINVAL) on failure.
 	 */
 	int (*tf_dev_get_tbl)(struct tf *tfp,
-			       struct tf_tbl_get_parms *parms);
+			      struct tf_tbl_get_parms *parms);
+
+	/**
+	 * Retrieves the specified table type element using the 'bulk'
+	 * mechanism.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get bulk parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_bulk_tbl)(struct tf *tfp,
+				   struct tf_tbl_get_bulk_parms *parms);
 
 	/**
 	 * Allocation of a tcam element.
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 127c655a6..e3526672f 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -8,7 +8,7 @@
 
 #include "tf_device.h"
 #include "tf_identifier.h"
-#include "tf_tbl_type.h"
+#include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
 
@@ -88,6 +88,7 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
+	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
 	.tf_dev_free_tcam = NULL,
 	.tf_dev_alloc_search_tcam = NULL,
@@ -114,6 +115,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
 	.tf_dev_get_tbl = tf_tbl_get,
+	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
 	.tf_dev_free_tcam = tf_tcam_free,
 	.tf_dev_alloc_search_tcam = NULL,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index da6dd65a3..473e4eae5 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -9,7 +9,7 @@
 #include <cfa_resource_types.h>
 
 #include "tf_core.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -41,8 +41,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */
-	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
+	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
 	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index cf799c200..6bfcbd59e 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -23,6 +23,56 @@
 #define TF_EM_MAX_MASK 0x7FFF
 #define TF_EM_MAX_ENTRY (128 * 1024 * 1024)
 
+/**
+ * Hardware Page sizes supported for EEM:
+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ *
+ * Other page sizes are rounded down to the nearest supported
+ * hardware page size.
+ */
+#define TF_EM_PAGE_SIZE_4K 12
+#define TF_EM_PAGE_SIZE_8K 13
+#define TF_EM_PAGE_SIZE_64K 16
+#define TF_EM_PAGE_SIZE_256K 18
+#define TF_EM_PAGE_SIZE_1M 20
+#define TF_EM_PAGE_SIZE_2M 21
+#define TF_EM_PAGE_SIZE_4M 22
+#define TF_EM_PAGE_SIZE_1G 30
+
+/* Set page size */
+#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
+
+#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#else
+#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+
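/* Worked example (illustrative note, not part of the patch): with the
 * default BNXT_TF_PAGE_SIZE of TF_EM_PAGE_SIZE_2M the chain above
 * resolves to
 *   TF_EM_PAGE_SHIFT     = 21
 *   TF_EM_PAGE_SIZE      = 1 << 21 = 2097152 bytes (2 MiB)
 *   TF_EM_PAGE_ALIGNMENT = 2 MiB
 * and TF_EM_PAGE_SIZE_ENUM selects the matching
 * HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M value sent to firmware.
 */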
 /*
  * Used to build GFID:
  *
@@ -80,13 +130,43 @@ struct tf_em_cfg_parms {
 };
 
 /**
- * @page table Table
+ * @page em EM
  *
  * @ref tf_alloc_eem_tbl_scope
  *
  * @ref tf_free_eem_tbl_scope_cb
  *
- * @ref tbl_scope_cb_find
+ * @ref tf_em_insert_int_entry
+ *
+ * @ref tf_em_delete_int_entry
+ *
+ * @ref tf_em_insert_ext_entry
+ *
+ * @ref tf_em_delete_ext_entry
+ *
+ * @ref tf_em_insert_ext_sys_entry
+ *
+ * @ref tf_em_delete_ext_sys_entry
+ *
+ * @ref tf_em_int_bind
+ *
+ * @ref tf_em_int_unbind
+ *
+ * @ref tf_em_ext_common_bind
+ *
+ * @ref tf_em_ext_common_unbind
+ *
+ * @ref tf_em_ext_host_alloc
+ *
+ * @ref tf_em_ext_host_free
+ *
+ * @ref tf_em_ext_system_alloc
+ *
+ * @ref tf_em_ext_system_free
+ *
+ * @ref tf_em_ext_common_free
+ *
+ * @ref tf_em_ext_common_alloc
  */
 
 /**
@@ -328,7 +408,7 @@ int tf_em_ext_host_free(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+			   struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using system memory
@@ -344,7 +424,7 @@ int tf_em_ext_system_alloc(struct tf *tfp,
  *   -EINVAL - Parameter error
  */
 int tf_em_ext_system_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
+			  struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index ba6aa7ac1..d0d80daeb 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -194,12 +194,13 @@ tf_em_ext_common_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Ext DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -210,19 +211,29 @@ tf_em_ext_common_bind(struct tf *tfp,
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		db_cfg.dir = i;
 		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* If any EEM support was requested, build an EM Ext
+		 * DB holding Table Scopes.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_TBL_SCOPE] == 0)
+			continue;
+
 		db_cfg.rm_db = &eem_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
 			TFP_DRV_LOG(ERR,
-				    "%s: EM DB creation failed\n",
+				    "%s: EM Ext DB creation failed\n",
 				    tf_dir_2_str(i));
 
 			return rc;
 		}
+		db_exists = 1;
 	}
 
-	mem_type = parms->mem_type;
-	init = 1;
+	if (db_exists) {
+		mem_type = parms->mem_type;
+		init = 1;
+	}
 
 	return 0;
 }
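To illustrate the conditional bind above: an EM Ext DB is created for a direction only when the session open reserved at least one table scope. A sketch of the triggering reservation, assuming the tf_session_resources layout implied by the code (the structure definition itself is not part of this excerpt):

struct tf_session_resources res = { 0 };

/* Reserve one table scope per direction so tf_em_ext_common_bind()
 * builds the EM Ext DBs; with all counts left at zero the bind loop
 * skips DB creation, 'init' stays 0, and a later unbind is a no-op.
 */
res.em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
res.em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;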
@@ -236,13 +247,11 @@ tf_em_ext_common_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Ext DBs created\n");
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 9be91ad5d..1c514747d 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -225,12 +225,13 @@ tf_em_int_bind(struct tf *tfp,
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 	struct tf_session *session;
+	uint8_t db_exists = 0;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "EM Int DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -242,31 +243,35 @@ tf_em_int_bind(struct tf *tfp,
 				  TF_SESSION_EM_POOL_SIZE);
 	}
 
-	/*
-	 * I'm not sure that this code is needed.
-	 * leaving for now until resolved
-	 */
-	if (parms->num_elements) {
-		db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
-		db_cfg.num_elements = parms->num_elements;
-		db_cfg.cfg = parms->cfg;
-
-		for (i = 0; i < TF_DIR_MAX; i++) {
-			db_cfg.dir = i;
-			db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
-			db_cfg.rm_db = &em_db[i];
-			rc = tf_rm_create_db(tfp, &db_cfg);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: EM DB creation failed\n",
-					    tf_dir_2_str(i));
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
 
-				return rc;
-			}
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;
+
+		/* If any internal EM records were requested, build
+		 * an EM Int DB to manage them.
+		 */
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
+			continue;
+
+		db_cfg.rm_db = &em_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM Int DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
 		}
+		db_exists = 1;
 	}
 
-	init = 1;
+	if (db_exists)
+		init = 1;
+
 	return 0;
 }
 
@@ -280,13 +285,11 @@ tf_em_int_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No EM DBs created\n");
-		return -EINVAL;
+		TFP_DRV_LOG(INFO,
+			    "No EM Int DBs created\n");
+		return 0;
 	}
 
 	session = (struct tf_session *)tfp->session->core_data;
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index b197bb271..211371081 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -7,7 +7,7 @@
 
 #include "tf_identifier.h"
 #include "tf_common.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_util.h"
 #include "tfp.h"
 
@@ -35,7 +35,7 @@ tf_ident_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "Identifier already initialized\n");
+			    "Identifier DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -65,7 +65,7 @@ tf_ident_bind(struct tf *tfp,
 }
 
 int
-tf_ident_unbind(struct tf *tfp __rte_unused)
+tf_ident_unbind(struct tf *tfp)
 {
 	int rc;
 	int i;
@@ -73,13 +73,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
+	/* Bail if nothing has been initialized */
 	if (!init) {
-		TFP_DRV_LOG(ERR,
+		TFP_DRV_LOG(INFO,
 			    "No Identifier DBs created\n");
-		return -EINVAL;
+		return 0;
 	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index d8b80bc84..02d8a4971 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -871,26 +871,41 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *params)
+			  enum tf_dir dir,
+			  uint16_t hcapi_type,
+			  uint32_t starting_idx,
+			  uint16_t num_entries,
+			  uint16_t entry_sz_in_bytes,
+			  uint64_t physical_mem_addr)
 {
 	int rc;
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_session *tfs;
 	int data_size = 0;
 
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* Populate the request */
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
-	req.flags = tfp_cpu_to_le_16(params->dir);
-	req.type = tfp_cpu_to_le_32(params->type);
-	req.start_index = tfp_cpu_to_le_32(params->starting_idx);
-	req.num_entries = tfp_cpu_to_le_32(params->num_entries);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(hcapi_type);
+	req.start_index = tfp_cpu_to_le_32(starting_idx);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
 
-	data_size = params->num_entries * params->entry_sz_in_bytes;
+	data_size = num_entries * entry_sz_in_bytes;
 
-	req.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);
+	req.host_addr = tfp_cpu_to_le_64(physical_mem_addr);
 
 	MSG_PREP(parms,
 		 TF_KONG_MB,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 8e276d4c0..7432873d7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -11,7 +11,6 @@
 
 #include "tf_tbl.h"
 #include "tf_rm.h"
-#include "tf_rm_new.h"
 #include "tf_tcam.h"
 
 struct tf;
@@ -422,6 +421,11 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
  *  0 on Success else internal Truflow error
  */
 int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms);
+			      enum tf_dir dir,
+			      uint16_t hcapi_type,
+			      uint32_t starting_idx,
+			      uint16_t num_entries,
+			      uint16_t entry_sz_in_bytes,
+			      uint64_t physical_mem_addr);
 
 #endif  /* _TF_MSG_H_ */
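Since the message API now takes discrete arguments instead of the parms structure, a table-module caller is expected to unpack its bulk parameters before issuing the message. An illustrative mapping is shown below, assuming a caller that holds a struct tf_tbl_get_bulk_parms pointer and an hcapi_type resolved from the RM DB (neither caller is shown in this excerpt):

rc = tf_msg_bulk_get_tbl_entry(tfp,
			       bparms->dir,
			       hcapi_type, /* resolved via the RM DB (assumed) */
			       bparms->starting_idx,
			       bparms->num_entries,
			       bparms->entry_sz_in_bytes,
			       bparms->physical_mem_addr);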
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index b7b445102..4688514fc 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,535 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
-/* Common HW resources for all chip variants */
-#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
-					    * entries
-					    */
-#define TF_NUM_PROF_FUNC          128      /* < Number prof_func ID */
-#define TF_NUM_PROF_TCAM         1024      /* < Number entries in profile
-					    * TCAM
-					    */
-#define TF_NUM_EM_PROF_ID          64      /* < Number software EM Profile
-					    * IDs
-					    */
-#define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
-#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */
-#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
-#define TF_NUM_METER             1024      /* < Number of meter instances */
-#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
-#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
-
-/* Wh+/SR specific HW resources */
-#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
-					    * entries
-					    */
-
-/* SR/SR2 specific HW resources */
-#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
-
-
-/* Thor, SR2 common HW resources */
-#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
-					    * templates
-					    */
-
-/* SR2 specific HW resources */
 #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
-#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
-#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
-#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
-#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
-					    * States
-					    */
-#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
-#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
-#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
-
-/*
- * Common for the Reserved Resource defines below:
- *
- * - HW Resources
- *   For resources where a priority level plays a role, i.e. l2 ctx
- *   tcam entries, both a number of resources and a begin/end pair is
- *   required. The begin/end is used to assure TFLIB gets the correct
- *   priority setting for that resource.
- *
- *   For EM records there is no priority required thus a number of
- *   resources is sufficient.
- *
- *   Example, TCAM:
- *     64 L2 CTXT TCAM entries would in a max 1024 pool be entry
- *     0-63 as HW presents 0 as the highest priority entry.
- *
- * - SRAM Resources
- *   Handled as regular resources as there is no priority required.
- *
- * Common for these resources is that they are handled per direction,
- * rx/tx.
- */
-
-/* HW Resources */
-
-/* L2 CTX */
-#define TF_RSVD_L2_CTXT_TCAM_RX                   64
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_RX - 1)
-#define TF_RSVD_L2_CTXT_TCAM_TX                   960
-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TX - 1)
-
-/* Profiler */
-#define TF_RSVD_PROF_FUNC_RX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
-#define TF_RSVD_PROF_FUNC_TX                      64
-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
-#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
-
-#define TF_RSVD_PROF_TCAM_RX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
-#define TF_RSVD_PROF_TCAM_TX                      64
-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
-#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
-
-/* EM Profiles IDs */
-#define TF_RSVD_EM_PROF_ID_RX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ then SR */
-#define TF_RSVD_EM_PROF_ID_TX                     64
-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
-#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ then SR */
-
-/* EM Records */
-#define TF_RSVD_EM_REC_RX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
-#define TF_RSVD_EM_REC_TX                         16000
-#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
-
-/* Wildcard */
-#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
-#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
-
-#define TF_RSVD_WC_TCAM_RX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_WC_TCAM_END_IDX_RX                63
-#define TF_RSVD_WC_TCAM_TX                        64
-#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_WC_TCAM_END_IDX_TX                63
-
-#define TF_RSVD_METER_PROF_RX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_PROF_END_IDX_RX             0
-#define TF_RSVD_METER_PROF_TX                     0
-#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_PROF_END_IDX_TX             0
-
-#define TF_RSVD_METER_INST_RX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
-#define TF_RSVD_METER_INST_END_IDX_RX             0
-#define TF_RSVD_METER_INST_TX                     0
-#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
-#define TF_RSVD_METER_INST_END_IDX_TX             0
-
-/* Mirror */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_MIRROR_RX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
-#define TF_RSVD_MIRROR_END_IDX_RX                 0
-#define TF_RSVD_MIRROR_TX                         0
-#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
-#define TF_RSVD_MIRROR_END_IDX_TX                 0
-
-/* UPAR */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_UPAR_RX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
-#define TF_RSVD_UPAR_END_IDX_RX                   0
-#define TF_RSVD_UPAR_TX                           0
-#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
-#define TF_RSVD_UPAR_END_IDX_TX                   0
-
-/* Source Properties */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SP_TCAM_RX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
-#define TF_RSVD_SP_TCAM_END_IDX_RX                0
-#define TF_RSVD_SP_TCAM_TX                        0
-#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
-#define TF_RSVD_SP_TCAM_END_IDX_TX                0
-
-/* L2 Func */
-#define TF_RSVD_L2_FUNC_RX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
-#define TF_RSVD_L2_FUNC_END_IDX_RX                0
-#define TF_RSVD_L2_FUNC_TX                        0
-#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
-#define TF_RSVD_L2_FUNC_END_IDX_TX                0
-
-/* FKB */
-#define TF_RSVD_FKB_RX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
-#define TF_RSVD_FKB_END_IDX_RX                    0
-#define TF_RSVD_FKB_TX                            0
-#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
-#define TF_RSVD_FKB_END_IDX_TX                    0
-
-/* TBL Scope */
-#define TF_RSVD_TBL_SCOPE_RX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
-#define TF_RSVD_TBL_SCOPE_TX                      1
-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
-#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
-
-/* EPOCH0 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH0_RX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH0_END_IDX_RX                 0
-#define TF_RSVD_EPOCH0_TX                         0
-#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH0_END_IDX_TX                 0
-
-/* EPOCH1 */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_EPOCH1_RX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
-#define TF_RSVD_EPOCH1_END_IDX_RX                 0
-#define TF_RSVD_EPOCH1_TX                         0
-#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
-#define TF_RSVD_EPOCH1_END_IDX_TX                 0
-
-/* METADATA */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_METADATA_RX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
-#define TF_RSVD_METADATA_END_IDX_RX               0
-#define TF_RSVD_METADATA_TX                       0
-#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
-#define TF_RSVD_METADATA_END_IDX_TX               0
-
-/* CT_STATE */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_CT_STATE_RX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
-#define TF_RSVD_CT_STATE_END_IDX_RX               0
-#define TF_RSVD_CT_STATE_TX                       0
-#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
-#define TF_RSVD_CT_STATE_END_IDX_TX               0
-
-/* RANGE_PROF */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_PROF_RX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
-#define TF_RSVD_RANGE_PROF_TX                     0
-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
-#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
-
-/* RANGE_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_RANGE_ENTRY_RX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
-#define TF_RSVD_RANGE_ENTRY_TX                    0
-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
-#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
-
-/* LAG_ENTRY */
-/* Not yet supported fully in the infra */
-#define TF_RSVD_LAG_ENTRY_RX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
-#define TF_RSVD_LAG_ENTRY_TX                      0
-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
-#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
-
-
-/* SRAM - Resources
- * Limited to the types that CFA provides.
- */
-#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
-
-/* Not yet supported fully in the infra */
-#define TF_RSVD_SRAM_MCG_RX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
-/* Multicast Group on TX is not supported */
-#define TF_RSVD_SRAM_MCG_TX                       0
-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
-
-/* First encap of 8B RX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
-/* First encap of 8B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
-
-#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
-/* First encap of 16B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
-
-/* Encap of 64B on RX is not supported */
-#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
-/* First encap of 64B TX is reserved by CFA */
-#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_SP_SMAC_RX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
-#define TF_RSVD_SRAM_SP_SMAC_TX                   0
-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
-
-/* SRAM SP IPV4 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
-
-/* SRAM SP IPV6 on RX is not supported */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
-/* Not yet supported fully in infra */
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
-
-#define TF_RSVD_SRAM_COUNTER_64B_RX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
-#define TF_RSVD_SRAM_COUNTER_64B_TX               160
-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
-
-#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
-#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
-
-#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
-
-#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
-#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
-
-/* HW Resource Pool names */
-
-#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
-#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
-#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
-
-#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
-#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
-#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
-
-#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
-#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
-#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
-
-#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
-#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
-#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
-
-#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
-#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
-
-#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
-#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
-#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
-
-#define TF_METER_PROF_POOL_NAME           meter_prof_pool
-#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
-#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
-
-#define TF_METER_INST_POOL_NAME           meter_inst_pool
-#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
-#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
-
-#define TF_MIRROR_POOL_NAME               mirror_pool
-#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
-#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
-
-#define TF_UPAR_POOL_NAME                 upar_pool
-#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
-#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
-
-#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
-#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
-#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
-
-#define TF_FKB_POOL_NAME                  fkb_pool
-#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
-#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
-
-#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
-#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
-#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
-
-#define TF_L2_FUNC_POOL_NAME              l2_func_pool
-#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
-#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
-
-#define TF_EPOCH0_POOL_NAME               epoch0_pool
-#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
-#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
-
-#define TF_EPOCH1_POOL_NAME               epoch1_pool
-#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
-#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
-
-#define TF_METADATA_POOL_NAME             metadata_pool
-#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
-#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
-
-#define TF_CT_STATE_POOL_NAME             ct_state_pool
-#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
-#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
-
-#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
-#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
-#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
-
-#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
-#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
-#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
-
-#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
-#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
-#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
-
-/* SRAM Resource Pool names */
-#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
-#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
-#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
-
-#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
-#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
-#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
-
-#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
-#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
-#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
-
-#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
-#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
-#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
-
-#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
-#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
-#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
-
-#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
-#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
-#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
-
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
-
-#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
-#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
-#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
-
-#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
-#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
-#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
-
-#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
-#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
-#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
-
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
-
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
-
-/* Sw Resource Pool Names */
-
-#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
-#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
-#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
-
-
-/** HW Resource types
- */
-enum tf_resource_type_hw {
-	/* Common HW resources for all chip variants */
-	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-	TF_RESC_TYPE_HW_PROF_FUNC,
-	TF_RESC_TYPE_HW_PROF_TCAM,
-	TF_RESC_TYPE_HW_EM_PROF_ID,
-	TF_RESC_TYPE_HW_EM_REC,
-	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-	TF_RESC_TYPE_HW_WC_TCAM,
-	TF_RESC_TYPE_HW_METER_PROF,
-	TF_RESC_TYPE_HW_METER_INST,
-	TF_RESC_TYPE_HW_MIRROR,
-	TF_RESC_TYPE_HW_UPAR,
-	/* Wh+/SR specific HW resources */
-	TF_RESC_TYPE_HW_SP_TCAM,
-	/* SR/SR2 specific HW resources */
-	TF_RESC_TYPE_HW_L2_FUNC,
-	/* Thor, SR2 common HW resources */
-	TF_RESC_TYPE_HW_FKB,
-	/* SR2 specific HW resources */
-	TF_RESC_TYPE_HW_TBL_SCOPE,
-	TF_RESC_TYPE_HW_EPOCH0,
-	TF_RESC_TYPE_HW_EPOCH1,
-	TF_RESC_TYPE_HW_METADATA,
-	TF_RESC_TYPE_HW_CT_STATE,
-	TF_RESC_TYPE_HW_RANGE_PROF,
-	TF_RESC_TYPE_HW_RANGE_ENTRY,
-	TF_RESC_TYPE_HW_LAG_ENTRY,
-	TF_RESC_TYPE_HW_MAX
-};
-
-/** HW Resource types
- */
-enum tf_resource_type_sram {
-	TF_RESC_TYPE_SRAM_FULL_ACTION,
-	TF_RESC_TYPE_SRAM_MCG,
-	TF_RESC_TYPE_SRAM_ENCAP_8B,
-	TF_RESC_TYPE_SRAM_ENCAP_16B,
-	TF_RESC_TYPE_SRAM_ENCAP_64B,
-	TF_RESC_TYPE_SRAM_SP_SMAC,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-	TF_RESC_TYPE_SRAM_COUNTER_64B,
-	TF_RESC_TYPE_SRAM_NAT_SPORT,
-	TF_RESC_TYPE_SRAM_NAT_DPORT,
-	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-	TF_RESC_TYPE_SRAM_MAX
-};
 
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0a84e64d..e0469b653 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -7,3171 +7,916 @@
 
 #include <rte_common.h>
 
+#include <cfa_resource_types.h>
+
 #include "tf_rm.h"
-#include "tf_core.h"
+#include "tf_common.h"
 #include "tf_util.h"
 #include "tf_session.h"
-#include "tf_resources.h"
-#include "tf_msg.h"
-#include "bnxt.h"
+#include "tf_device.h"
 #include "tfp.h"
+#include "tf_msg.h"
 
 /**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
- *   enum tf_dir               dir         - Direction to process
- *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
- *                                           in the hw query structure
- *   define                    def_value   - Define value to check against
- *   uint32_t                 *eflag       - Result of the check
- */
-#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
-	if ((dir) == TF_DIR_RX) {					      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	} else {							      \
-		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
-			*(eflag) |= 1 << (hcapi_type);			      \
-	}								      \
-} while (0)
-
-/**
- * Internal macro to perform HW resource allocation check between what
- * firmware reports vs what was statically requested.
- *
- * Parameters:
- *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
- *   enum tf_dir                dir         - Direction to process
- *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
- *                                            in the hw query structure
- *   define                     def_value   - Define value to check against
- *   uint32_t                  *eflag       - Result of the check
- */
-#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
-	if ((dir) == TF_DIR_RX) {					       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	} else {							       \
-		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
-			*(eflag) |= 1 << (hcapi_type);			       \
-	}								       \
-} while (0)
-
-/**
- * Internal macro to convert a reserved resource define name to be
- * direction specific.
- *
- * Parameters:
- *   enum tf_dir    dir         - Direction to process
- *   string         type        - Type name to append RX or TX to
- *   string         dtype       - Direction specific type
- *
- *
+ * Generic RM Element data type that an RM DB is built upon.
  */
-#define TF_RESC_RSVD(dir, type, dtype) do {	\
-		if ((dir) == TF_DIR_RX)		\
-			(dtype) = type ## _RX;	\
-		else				\
-			(dtype) = type ## _TX;	\
-	} while (0)
-
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
-{
-	switch (hw_type) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		return "L2 ctxt tcam";
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		return "Profile Func";
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		return "Profile tcam";
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		return "EM profile id";
-	case TF_RESC_TYPE_HW_EM_REC:
-		return "EM record";
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		return "WC tcam profile id";
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		return "WC tcam";
-	case TF_RESC_TYPE_HW_METER_PROF:
-		return "Meter profile";
-	case TF_RESC_TYPE_HW_METER_INST:
-		return "Meter instance";
-	case TF_RESC_TYPE_HW_MIRROR:
-		return "Mirror";
-	case TF_RESC_TYPE_HW_UPAR:
-		return "UPAR";
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		return "Source properties tcam";
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		return "L2 Function";
-	case TF_RESC_TYPE_HW_FKB:
-		return "FKB";
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		return "Table scope";
-	case TF_RESC_TYPE_HW_EPOCH0:
-		return "EPOCH0";
-	case TF_RESC_TYPE_HW_EPOCH1:
-		return "EPOCH1";
-	case TF_RESC_TYPE_HW_METADATA:
-		return "Metadata";
-	case TF_RESC_TYPE_HW_CT_STATE:
-		return "Connection tracking state";
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		return "Range profile";
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		return "Range entry";
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		return "LAG";
-	default:
-		return "Invalid identifier";
-	}
-}
-
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
-{
-	switch (sram_type) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		return "Full action";
-	case TF_RESC_TYPE_SRAM_MCG:
-		return "MCG";
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		return "Encap 8B";
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		return "Encap 16B";
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		return "Encap 64B";
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		return "Source properties SMAC";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		return "Source properties SMAC IPv4";
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		return "Source properties IPv6";
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		return "Counter 64B";
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		return "NAT source port";
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		return "NAT destination port";
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		return "NAT source IPv4";
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		return "NAT destination IPv4";
-	default:
-		return "Invalid identifier";
-	}
-}
+struct tf_rm_element {
+	/**
+	 * RM Element configuration type. If Private then the
+	 * hcapi_type can be ignored. If Null then the element is not
+	 * valid for the device.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
 
-/**
- * Helper function to perform a HW HCAPI resource type lookup against
- * the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
- */
-static int
-tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
-{
-	uint32_t value = -EOPNOTSUPP;
+	/**
+	 * HCAPI RM Type for the element.
+	 */
+	uint16_t hcapi_type;
 
-	switch (index) {
-	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_PROF_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_EM_REC:
-		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
-		break;
-	case TF_RESC_TYPE_HW_WC_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_METER_INST:
-		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
-		break;
-	case TF_RESC_TYPE_HW_MIRROR:
-		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
-		break;
-	case TF_RESC_TYPE_HW_UPAR:
-		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
-		break;
-	case TF_RESC_TYPE_HW_SP_TCAM:
-		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
-		break;
-	case TF_RESC_TYPE_HW_L2_FUNC:
-		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
-		break;
-	case TF_RESC_TYPE_HW_FKB:
-		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
-		break;
-	case TF_RESC_TYPE_HW_TBL_SCOPE:
-		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH0:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
-		break;
-	case TF_RESC_TYPE_HW_EPOCH1:
-		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
-		break;
-	case TF_RESC_TYPE_HW_METADATA:
-		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
-		break;
-	case TF_RESC_TYPE_HW_CT_STATE:
-		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_PROF:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
-		break;
-	case TF_RESC_TYPE_HW_RANGE_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
-		break;
-	case TF_RESC_TYPE_HW_LAG_ENTRY:
-		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
-		break;
-	default:
-		break;
-	}
+	/**
+	 * HCAPI RM allocated range information for the element.
+	 */
+	struct tf_rm_alloc_info alloc;
 
-	return value;
-}
+	/**
+	 * Bit allocator pool for the element. Pool size is controlled
+	 * by the struct tf_session_resources at time of session creation.
+	 * Null indicates that the element is not used for the device.
+	 */
+	struct bitalloc *pool;
+};
 
 /**
- * Helper function to perform a SRAM HCAPI resource type lookup
- * against the reserved value of the same static type.
- *
- * Returns:
- *   -EOPNOTSUPP - Reserved resource type not supported
- *   Value       - Integer value of the reserved value for the requested type
+ * TF RM DB definition
  */
-static int
-tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
-{
-	uint32_t value = -EOPNOTSUPP;
-
-	switch (index) {
-	case TF_RESC_TYPE_SRAM_FULL_ACTION:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
-		break;
-	case TF_RESC_TYPE_SRAM_MCG:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_8B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_16B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_ENCAP_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
-		break;
-	case TF_RESC_TYPE_SRAM_COUNTER_64B:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_SPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_DPORT:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
-		break;
-	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
-		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
-		break;
-	default:
-		break;
-	}
-
-	return value;
-}
+struct tf_rm_new_db {
+	/**
+	 * Number of elements in the DB
+	 */
+	uint16_t num_entries;
 
-/**
- * Helper function to print all the HW resource qcaps errors reported
- * in the error_flag.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] error_flag
- *   Pointer to the hw error flags created at time of the query check
- */
-static void
-tf_rm_print_hw_qcaps_error(enum tf_dir dir,
-			   struct tf_rm_hw_query *hw_query,
-			   uint32_t *error_flag)
-{
-	int i;
+	/**
+	 * Direction this DB controls.
+	 */
+	enum tf_dir dir;
 
-	TFP_DRV_LOG(ERR, "QCAPS errors HW\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
+	/**
+	 * Module type, used for logging purposes.
+	 */
+	enum tf_device_module_type type;
 
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_hw_2_str(i),
-				    hw_query->hw_query[i].max,
-				    tf_rm_rsvd_hw_value(dir, i));
-	}
-}
+	/**
+	 * The DB consists of an array of elements
+	 */
+	struct tf_rm_element *db;
+};
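A minimal sketch of how these two structures might be consumed, using a hypothetical lookup helper (the real tf_rm accessors are not reproduced in this excerpt):

/* Hypothetical helper, illustrative only. */
static struct tf_rm_element *
tf_rm_element_lookup(struct tf_rm_new_db *rm_db, uint16_t subtype)
{
	if (rm_db == NULL || subtype >= rm_db->num_entries)
		return NULL;

	/* Elements configured as NULL are not valid for this device. */
	if (rm_db->db[subtype].cfg_type == TF_RM_ELEM_CFG_NULL)
		return NULL;

	return &rm_db->db[subtype];
}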
 
 /**
- * Helper function to print all the SRAM resource qcaps errors
- * reported in the error_flag.
+ * Adjust an index according to the allocation information.
  *
- * [in] dir
- *   Receive or transmit direction
+ * All resources are controlled in a 0-based pool. Some resources, by
+ * design, are not 0-based, e.g. Full Action Records (SRAM), thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] error_flag
- *   Pointer to the sram error flags created at time of the query check
- */
-static void
-tf_rm_print_sram_qcaps_error(enum tf_dir dir,
-			     struct tf_rm_sram_query *sram_query,
-			     uint32_t *error_flag)
-{
-	int i;
-
-	TFP_DRV_LOG(ERR, "QCAPS errors SRAM\n");
-	TFP_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
-	TFP_DRV_LOG(ERR, "  Elements:\n");
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (*error_flag & 1 << i)
-			TFP_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
-				    tf_hcapi_sram_2_str(i),
-				    sram_query->sram_query[i].max,
-				    tf_rm_rsvd_sram_value(dir, i));
-	}
-}
-
-/**
- * Performs a HW resource check between what firmware capability
- * reports and what the core expects is available.
+ * [in] cfg
+ *   Pointer to the DB configuration
  *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
+ * [in] reservations
+ *   Pointer to the allocation values associated with the module
  *
- * [in] query
- *   Pointer to HW Query data structure. Query holds what the firmware
- *   offers of the HW resources.
+ * [in] count
+ *   Number of DB configuration elements
  *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
+ * [out] valid_count
+ *   Number of HCAPI entries with a reservation value greater than 0
  *
  * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
-			    enum tf_dir dir,
-			    uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
-			     TF_RSVD_L2_CTXT_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_FUNC,
-			     TF_RSVD_PROF_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_PROF_TCAM,
-			     TF_RSVD_PROF_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_PROF_ID,
-			     TF_RSVD_EM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EM_REC,
-			     TF_RSVD_EM_REC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
-			     TF_RSVD_WC_TCAM_PROF_ID,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_WC_TCAM,
-			     TF_RSVD_WC_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_PROF,
-			     TF_RSVD_METER_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METER_INST,
-			     TF_RSVD_METER_INST,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_MIRROR,
-			     TF_RSVD_MIRROR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_UPAR,
-			     TF_RSVD_UPAR,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_SP_TCAM,
-			     TF_RSVD_SP_TCAM,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_L2_FUNC,
-			     TF_RSVD_L2_FUNC,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_FKB,
-			     TF_RSVD_FKB,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_TBL_SCOPE,
-			     TF_RSVD_TBL_SCOPE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH0,
-			     TF_RSVD_EPOCH0,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_EPOCH1,
-			     TF_RSVD_EPOCH1,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_METADATA,
-			     TF_RSVD_METADATA,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_CT_STATE,
-			     TF_RSVD_CT_STATE,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_PROF,
-			     TF_RSVD_RANGE_PROF,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_RANGE_ENTRY,
-			     TF_RSVD_RANGE_ENTRY,
-			     error_flag);
-
-	TF_RM_CHECK_HW_ALLOC(query,
-			     dir,
-			     TF_RESC_TYPE_HW_LAG_ENTRY,
-			     TF_RSVD_LAG_ENTRY,
-			     error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Performs a SRAM resource check between what firmware capability
- * reports and what the core expects is available.
- *
- * Firmware performs the resource carving at AFM init time and the
- * resource capability is reported in the TruFlow qcaps msg.
- *
- * [in] query
- *   Pointer to SRAM Query data structure. Query holds what the
- *   firmware offers of the SRAM resources.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in/out] error_flag
- *   Pointer to a bit array indicating the error of a single HCAPI
- *   resource type. When a bit is set to 1, the HCAPI resource type
- *   failed static allocation.
- *
- * Returns:
- *  0       - Success
- *  -ENOMEM - Failure on one of the allocated resources. Check the
- *            error_flag for what types are flagged errored.
- */
-static int
-tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
-			      enum tf_dir dir,
-			      uint32_t *error_flag)
-{
-	*error_flag = 0;
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_FULL_ACTION,
-			       TF_RSVD_SRAM_FULL_ACTION,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_MCG,
-			       TF_RSVD_SRAM_MCG,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_8B,
-			       TF_RSVD_SRAM_ENCAP_8B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_16B,
-			       TF_RSVD_SRAM_ENCAP_16B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_ENCAP_64B,
-			       TF_RSVD_SRAM_ENCAP_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC,
-			       TF_RSVD_SRAM_SP_SMAC,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
-			       TF_RSVD_SRAM_SP_SMAC_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
-			       TF_RSVD_SRAM_SP_SMAC_IPV6,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_COUNTER_64B,
-			       TF_RSVD_SRAM_COUNTER_64B,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_SPORT,
-			       TF_RSVD_SRAM_NAT_SPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_DPORT,
-			       TF_RSVD_SRAM_NAT_DPORT,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
-			       TF_RSVD_SRAM_NAT_S_IPV4,
-			       error_flag);
-
-	TF_RM_CHECK_SRAM_ALLOC(query,
-			       dir,
-			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
-			       TF_RSVD_SRAM_NAT_D_IPV4,
-			       error_flag);
-
-	if (*error_flag != 0)
-		return -ENOMEM;
-
-	return 0;
-}
-
-/**
- * Internal function to mark pool entries used.
+ *     Nothing; the count of HCAPI entries with a non-zero reservation
+ *     is returned through valid_count.
  */
 static void
-tf_rm_reserve_range(uint32_t count,
-		    uint32_t rsv_begin,
-		    uint32_t rsv_end,
-		    uint32_t max,
-		    struct bitalloc *pool)
+tf_rm_count_hcapi_reservations(enum tf_dir dir,
+			       enum tf_device_module_type type,
+			       struct tf_rm_element_cfg *cfg,
+			       uint16_t *reservations,
+			       uint16_t count,
+			       uint16_t *valid_count)
 {
-	uint32_t i;
+	int i;
+	uint16_t cnt = 0;
 
-	/* If no resources has been requested we mark everything
-	 * 'used'
-	 */
-	if (count == 0)	{
-		for (i = 0; i < max; i++)
-			ba_alloc_index(pool, i);
-	} else {
-		/* Support 2 main modes
-		 * Reserved range starts from bottom up (with
-		 * pre-reserved value or not)
-		 * - begin = 0 to end xx
-		 * - begin = 1 to end xx
-		 *
-		 * Reserved range starts from top down
-		 * - begin = yy to end max
-		 */
+	for (i = 0; i < count; i++) {
+		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		    reservations[i] > 0)
+			cnt++;
 
-		/* Bottom up check, start from 0 */
-		if (rsv_begin == 0) {
-			for (i = rsv_end + 1; i < max; i++)
-				ba_alloc_index(pool, i);
-		}
-
-		/* Bottom up check, start from 1 or higher OR
-		 * Top Down
+		/* Only log a message if a type is requested to be
+		 * reserved but is not supported. The EM module is
+		 * ignored as it uses a split configuration array and
+		 * would fail this type of check.
 		 */
-		if (rsv_begin >= 1) {
-			/* Allocate from 0 until start */
-			for (i = 0; i < rsv_begin; i++)
-				ba_alloc_index(pool, i);
-
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
-}
-
-/**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
- */
-static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
-	uint32_t end = 0;
-
-	/* l2 ctxt rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
-
-	/* l2 ctxt tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the profile tcam and profile func
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
-	uint32_t end = 0;
-
-	/* profile func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
-
-	/* profile func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_FUNC,
-			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_PROF_TCAM;
-
-	/* profile tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
-
-	/* profile tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_PROF_TCAM,
-			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the em profile id allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_em_prof(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
-	uint32_t end = 0;
-
-	/* em prof id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
-
-	/* em prof id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EM_PROF_ID,
-			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the wildcard tcam and profile id
- * resources that Truflow does not own.
- */
-static void
-tf_rm_rsvd_wc(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
-	uint32_t end = 0;
-
-	/* wc profile id rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
-
-	/* wc profile id tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_PROF_ID,
-			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_WC_TCAM;
-
-	/* wc tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_RX);
-
-	/* wc tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_WC_TCAM_ROW,
-			    tfs->TF_WC_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the meter resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_meter(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
-	uint32_t end = 0;
-
-	/* meter profiles rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_RX);
-
-	/* meter profiles tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER_PROF,
-			    tfs->TF_METER_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_METER_INST;
-
-	/* meter rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_RX);
-
-	/* meter tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METER,
-			    tfs->TF_METER_INST_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the mirror resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_mirror(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
-	uint32_t end = 0;
-
-	/* mirror rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_RX);
-
-	/* mirror tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_MIRROR,
-			    tfs->TF_MIRROR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the upar resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_upar(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_UPAR;
-	uint32_t end = 0;
-
-	/* upar rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_RX);
-
-	/* upar tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_UPAR,
-			    tfs->TF_UPAR_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp tcam resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
-	uint32_t end = 0;
-
-	/* sp tcam rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_RX);
-
-	/* sp tcam tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_SP_TCAM,
-			    tfs->TF_SP_TCAM_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 func resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
-	uint32_t end = 0;
-
-	/* l2 func rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
-
-	/* l2 func tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the fkb resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_fkb(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_FKB;
-	uint32_t end = 0;
-
-	/* fkb rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_RX);
-
-	/* fkb tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_FKB,
-			    tfs->TF_FKB_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the tbld scope resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
-	uint32_t end = 0;
-
-	/* tbl scope rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
-
-	/* tbl scope tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_TBL_SCOPE,
-			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the l2 epoch resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_epoch(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
-	uint32_t end = 0;
-
-	/* epoch0 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_RX);
-
-	/* epoch0 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH0,
-			    tfs->TF_EPOCH0_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_EPOCH1;
-
-	/* epoch1 rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_RX);
-
-	/* epoch1 tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_EPOCH1,
-			    tfs->TF_EPOCH1_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the metadata resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_metadata(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_METADATA;
-	uint32_t end = 0;
-
-	/* metadata rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_RX);
-
-	/* metadata tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_METADATA,
-			    tfs->TF_METADATA_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the ct state resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_ct_state(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
-	uint32_t end = 0;
-
-	/* ct state rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_RX);
-
-	/* ct state tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_CT_STATE,
-			    tfs->TF_CT_STATE_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the range resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_range(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
-	uint32_t end = 0;
-
-	/* range profile rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
-
-	/* range profile tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_PROF,
-			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
-
-	/* range entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
-
-	/* range entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_RANGE_ENTRY,
-			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the lag resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_lag_entry(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
-	uint32_t end = 0;
-
-	/* lag entry rx direction */
-	if (tfs->resc.rx.hw_entry[index].stride > 0)
-		end = tfs->resc.rx.hw_entry[index].start +
-			tfs->resc.rx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
-			    tfs->resc.rx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
-
-	/* lag entry tx direction */
-	if (tfs->resc.tx.hw_entry[index].stride > 0)
-		end = tfs->resc.tx.hw_entry[index].start +
-			tfs->resc.tx.hw_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
-			    tfs->resc.tx.hw_entry[index].start,
-			    end,
-			    TF_NUM_LAG_ENTRY,
-			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the full action resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
-	uint16_t end = 0;
-
-	/* full action rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_RX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
-
-	/* full action tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_FULL_ACTION_TX,
-			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the multicast group resources
- * allocated that Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
-	uint16_t end = 0;
-
-	/* multicast group rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_MCG_RX,
-			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
-
-	/* Multicast Group on TX is not supported */
-}
-
-/**
- * Internal function to mark all the encap resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_encap(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
-	uint16_t end = 0;
-
-	/* encap 8b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_RX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
-
-	/* encap 8b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_8B_TX,
-			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
-
-	/* encap 16b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_RX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
-
-	/* encap 16b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_16B_TX,
-			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
-
-	/* Encap 64B not supported on RX */
-
-	/* Encap 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_ENCAP_64B_TX,
-			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the sp resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_sp(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
-	uint16_t end = 0;
-
-	/* sp smac rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_RX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
-
-	/* sp smac tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_TX,
-			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-
-	/* SP SMAC IPv4 not supported on RX */
-
-	/* sp smac ipv4 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-
-	/* SP SMAC IPv6 not supported on RX */
-
-	/* sp smac ipv6 tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
-			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the stat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_stats(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
-	uint16_t end = 0;
-
-	/* counter 64b rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_RX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
-
-	/* counter 64b tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_COUNTER_64B_TX,
-			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
-}
-
-/**
- * Internal function to mark all the nat resources allocated that
- * Truflow does not own.
- */
-static void
-tf_rm_rsvd_sram_nat(struct tf_session *tfs)
-{
-	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
-	uint16_t end = 0;
-
-	/* nat source port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_RX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
-
-	/* nat source port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_SPORT_TX,
-			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
-
-	/* nat destination port rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_RX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
-
-	/* nat destination port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_DPORT_TX,
-			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-
-	/* nat source port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
-
-	/* nat source ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
-			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
-
-	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-
-	/* nat destination port ipv4 rx direction */
-	if (tfs->resc.rx.sram_entry[index].stride > 0)
-		end = tfs->resc.rx.sram_entry[index].start +
-			tfs->resc.rx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
-
-	/* nat destination ipv4 port tx direction */
-	if (tfs->resc.tx.sram_entry[index].stride > 0)
-		end = tfs->resc.tx.sram_entry[index].start +
-			tfs->resc.tx.sram_entry[index].stride - 1;
-
-	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
-			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
-			    end,
-			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
-			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
-}
-
-/**
- * Internal function used to validate the HW allocated resources
- * against the requested values.
- */
-static int
-tf_rm_hw_alloc_validate(enum tf_dir dir,
-			struct tf_rm_hw_alloc *hw_alloc,
-			struct tf_rm_entry *hw_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+		if (type != TF_DEVICE_MODULE_TYPE_EM &&
+		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
+		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				"%s, %s, %s allocation not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				hw_alloc->hw_num[i],
-				hw_entry[i].stride);
-			error = -1;
-		}
-	}
-
-	return error;
-}
-
-/**
- * Internal function used to validate the SRAM allocated resources
- * against the requested values.
- */
-static int
-tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,
-			  struct tf_rm_sram_alloc *sram_alloc,
-			  struct tf_rm_entry *sram_entry)
-{
-	int error = 0;
-	int i;
-
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
-			TFP_DRV_LOG(ERR,
-				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_device_module_type_subtype_2_str(type, i));
+			printf("%s, %s, %s allocation of %d not supported\n",
+				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-				i,
-				sram_alloc->sram_num[i],
-				sram_entry[i].stride);
-			error = -1;
+			       tf_device_module_type_subtype_2_str(type, i),
+			       reservations[i]);
+
 		}
 	}
 
-	return error;
+	*valid_count = cnt;
 }
 
 /**
- * Internal function used to mark all the HW resources allocated that
- * Truflow does not own.
+ * Resource Manager base index adjustment action definitions.
  */
-static void
-tf_rm_reserve_hw(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_l2_ctxt(tfs);
-	tf_rm_rsvd_prof(tfs);
-	tf_rm_rsvd_em_prof(tfs);
-	tf_rm_rsvd_wc(tfs);
-	tf_rm_rsvd_mirror(tfs);
-	tf_rm_rsvd_meter(tfs);
-	tf_rm_rsvd_upar(tfs);
-	tf_rm_rsvd_sp_tcam(tfs);
-	tf_rm_rsvd_l2_func(tfs);
-	tf_rm_rsvd_fkb(tfs);
-	tf_rm_rsvd_tbl_scope(tfs);
-	tf_rm_rsvd_epoch(tfs);
-	tf_rm_rsvd_metadata(tfs);
-	tf_rm_rsvd_ct_state(tfs);
-	tf_rm_rsvd_range(tfs);
-	tf_rm_rsvd_lag_entry(tfs);
-}
-
-/**
- * Internal function used to mark all the SRAM resources allocated
- * that Truflow does not own.
- */
-static void
-tf_rm_reserve_sram(struct tf *tfp)
-{
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* TBD
-	 * There is no direct AFM resource allocation as it is carved
-	 * statically at AFM boot time. Thus the bit allocators work
-	 * on the full HW resource amount and we just mark everything
-	 * used except the resources that Truflow took ownership off.
-	 */
-	tf_rm_rsvd_sram_full_action(tfs);
-	tf_rm_rsvd_sram_mcg(tfs);
-	tf_rm_rsvd_sram_encap(tfs);
-	tf_rm_rsvd_sram_sp(tfs);
-	tf_rm_rsvd_sram_stats(tfs);
-	tf_rm_rsvd_sram_nat(tfs);
-}
-
-/**
- * Internal function used to allocate and validate all HW resources.
- */
-static int
-tf_rm_allocate_validate_hw(struct tf *tfp,
-			   enum tf_dir dir)
-{
-	int rc;
-	int i;
-	struct tf_rm_hw_query hw_query;
-	struct tf_rm_hw_alloc hw_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *hw_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		hw_entries = tfs->resc.rx.hw_entry;
-	else
-		hw_entries = tfs->resc.tx.hw_entry;
-
-	/* Query for Session HW Resources */
-
-	memset(&hw_query, 0, sizeof(hw_query)); /* RSXX */
-	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, HW QCAPS validation failed,"
-			"error_flag:0x%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
-		goto cleanup;
-	}
-
-	/* Post process HW capability */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
-		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
-
-	/* Allocate Session HW Resources */
-	/* Perform HW allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, HW Resource validation failed, rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
-	}
-
-	return 0;
-
- cleanup:
-
-	return -1;
-}
+enum tf_rm_adjust_type {
+	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
+	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
+};
 
 /**
- * Internal function used to allocate and validate all SRAM resources.
+ * Adjust an index according to the allocation information.
  *
- * [in] tfp
- *   Pointer to TF handle
+ * All resources are controlled in a 0 based pool. Some resources, by
+ * design, are not 0 based, e.g. Full Action Records (SRAM), thus they
+ * need to be adjusted before they are handed out.
  *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
+ *
+ * [in] action
+ *   Adjust action
+ *
+ * [in] db_index
+ *   DB index for the element type
+ *
+ * [in] index
+ *   Index to convert
+ *
+ * [out] adj_index
+ *   Adjusted index
  *
  * Returns:
- *   0  - Success
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_allocate_validate_sram(struct tf *tfp,
-			     enum tf_dir dir)
+tf_rm_adjust_index(struct tf_rm_element *db,
+		   enum tf_rm_adjust_type action,
+		   uint32_t db_index,
+		   uint32_t index,
+		   uint32_t *adj_index)
 {
-	int rc;
-	int i;
-	struct tf_rm_sram_query sram_query;
-	struct tf_rm_sram_alloc sram_alloc;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-	struct tf_rm_entry *sram_entries;
-	uint32_t error_flag;
-
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
-
-	memset(&sram_query, 0, sizeof(sram_query)); /* RSXX */
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed,"
-			"error_flag:%x, rc:%s\n",
-			tf_dir_2_str(dir),
-			error_flag,
-			strerror(-rc));
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
-	}
+	int rc = 0;
+	uint32_t base_index;
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	base_index = db[db_index].alloc.entry.start;
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		TFP_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed,"
-			    " rc:%s\n",
-			    tf_dir_2_str(dir),
-			    strerror(-rc));
-		goto cleanup;
+	switch (action) {
+	case TF_RM_ADJUST_RM_BASE:
+		*adj_index = index - base_index;
+		break;
+	case TF_RM_ADJUST_ADD_BASE:
+		*adj_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
 	}
 
-	return 0;
-
- cleanup:
-
-	return -1;
+	return rc;
 }
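
A worked example of the two adjustments, assuming (purely for illustration) a DB
entry whose reserved range starts at 1000 with a stride of 512; rm_db and db_index
are assumed variables from the surrounding context.

    uint32_t hw_index = 0;
    uint32_t pool_index = 0;

    /* Pool index 5 -> device index 1005 when programming the entry */
    tf_rm_adjust_index(rm_db->db, TF_RM_ADJUST_ADD_BASE, db_index,
                       5, &hw_index);

    /* Device index 1005 -> pool index 5 when returning it to the
     * 0 based bit allocator
     */
    tf_rm_adjust_index(rm_db->db, TF_RM_ADJUST_RM_BASE, db_index,
                       1005, &pool_index);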
 
 /**
- * Helper function used to prune a HW resource array to only hold
- * elements that needs to be flushed.
- *
- * [in] tfs
- *   Session handle
+ * Logs an array of found residual entries to the console.
  *
  * [in] dir
  *   Receive or transmit direction
  *
- * [in] hw_entries
- *   Master HW Resource database
+ * [in] type
+ *   Type of Device Module
  *
- * [in/out] flush_entries
- *   Pruned HW Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master HW
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in] count
+ *   Number of entries in the residual array
  *
- * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ * [in] residuals
+ *   Pointer to an array of residual entries. The array is indexed the
+ *   same as the DB in which this function is used. Each entry holds the
+ *   residual value for that entry.
  */
-static int
-tf_rm_hw_to_flush(struct tf_session *tfs,
-		  enum tf_dir dir,
-		  struct tf_rm_entry *hw_entries,
-		  struct tf_rm_entry *flush_entries)
+static void
+tf_rm_log_residuals(enum tf_dir dir,
+		    enum tf_device_module_type type,
+		    uint16_t count,
+		    uint16_t *residuals)
 {
-	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
+	int i;
 
-	/* Check all the hw resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
+	/* Walk the residual array and log the types that wasn't
+	 * cleaned up to the console.
 	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_CTXT_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_PROF_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
-	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_PROF_ID_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_WC_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METER_INST_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_MIRROR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_UPAR_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
-		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SP_TCAM_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_L2_FUNC_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_FKB_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
-		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_TBL_SCOPE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
-	} else {
-		TFP_DRV_LOG(ERR, "%s, TBL_SCOPE free_cnt:%d, entries:%d\n",
-			    tf_dir_2_str(dir),
-			    free_cnt,
-			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH0_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_EPOCH1_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_METADATA_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
-		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_CT_STATE_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_PROF_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_RANGE_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_LAG_ENTRY_POOL_NAME,
-			rc);
-	if (rc)
-		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
-		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
-	} else {
-		flush_rc = 1;
+	for (i = 0; i < count; i++) {
+		if (residuals[i] != 0)
+			TFP_DRV_LOG(ERR,
+				"%s, %s was not cleaned up, %d outstanding\n",
+				tf_dir_2_str(dir),
+				tf_device_module_type_subtype_2_str(type, i),
+				residuals[i]);
 	}
-
-	return flush_rc;
 }
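
A small illustration of the logging behaviour; the residual values below are made
up for the example.

    uint16_t residuals[4] = { 0, 3, 0, 1 };

    /* Emits one error line for index 1 (3 outstanding) and one for
     * index 3 (1 outstanding); entries with a zero residual are skipped.
     */
    tf_rm_log_residuals(TF_DIR_RX, TF_DEVICE_MODULE_TYPE_EM, 4, residuals);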
 
 /**
- * Helper function used to prune a SRAM resource array to only hold
- * elements that needs to be flushed.
+ * Performs a check of the passed-in DB for any lingering elements. If
+ * a resource type is found not to have been cleaned up by the caller,
+ * its residual values are recorded, logged and passed back in an
+ * allocated reservation array that the caller can pass to the FW for
+ * cleanup.
  *
- * [in] tfs
- *   Session handle
- *
- * [in] dir
- *   Receive or transmit direction
+ * [in] db
+ *   Pointer to the db, used for the lookup
  *
- * [in] hw_entries
- *   Master SRAM Resource data base
+ * [out] resv_size
+ *   Pointer to the reservation size of the generated reservation
+ *   array.
  *
- * [in/out] flush_entries
- *   Pruned SRAM Resource database of entries to be flushed. This
- *   array should be passed in as a complete copy of the master SRAM
- *   Resource database. The outgoing result will be a pruned version
- *   based on the result of the requested checking
+ * [in/out] resv
+ *   Pointer to a pointer to a reservation array. The reservation array is
+ *   allocated after the residual scan and holds any found residual
+ *   entries. Thus it can be smaller than the DB that the check was
+ *   performed on. Array must be freed by the caller.
+ *
+ * [out] residuals_present
+ *   Pointer to a bool flag indicating if residual was present in the
+ *   DB
  *
  * Returns:
- *    0 - Success, no flush required
- *    1 - Success, flush required
- *   -1 - Internal error
+ *     0          - Success
+ *   - EOPNOTSUPP - Operation not supported
  */
 static int
-tf_rm_sram_to_flush(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    struct tf_rm_entry *sram_entries,
-		    struct tf_rm_entry *flush_entries)
+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
+		      uint16_t *resv_size,
+		      struct tf_rm_resc_entry **resv,
+		      bool *residuals_present)
 {
 	int rc;
-	int flush_rc = 0;
-	int free_cnt;
-	struct bitalloc *pool;
-
-	/* Check all the sram resource pools and check for left over
-	 * elements. Any found will result in the complete pool of a
-	 * type to get invalidated.
-	 */
-
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_FULL_ACTION_POOL_NAME,
-			rc);
+	int i;
+	int f;
+	uint16_t count;
+	uint16_t found;
+	uint16_t *residuals = NULL;
+	uint16_t hcapi_type;
+	struct tf_rm_get_inuse_count_parms iparms;
+	struct tf_rm_get_alloc_info_parms aparms;
+	struct tf_rm_get_hcapi_parms hparms;
+	struct tf_rm_alloc_info info;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_entry *local_resv = NULL;
+
+	/* Create array to hold the entries that have residuals */
+	cparms.nitems = rm_db->num_entries;
+	cparms.size = sizeof(uint16_t);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
-	} else {
-		flush_rc = 1;
+
+	residuals = (uint16_t *)cparms.mem_va;
+
+	/* Traverse the DB and collect any residual elements */
+	iparms.rm_db = rm_db;
+	iparms.count = &count;
+	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
+		iparms.db_index = i;
+		rc = tf_rm_get_inuse_count(&iparms);
+		/* Not a device supported entry, just skip */
+		if (rc == -ENOTSUP)
+			continue;
+		if (rc)
+			goto cleanup_residuals;
+
+		if (count) {
+			found++;
+			residuals[i] = count;
+			*residuals_present = true;
+		}
 	}
 
-	/* Only pools for RX direction */
-	if (dir == TF_DIR_RX) {
-		TF_RM_GET_POOLS_RX(tfs, &pool,
-				   TF_SRAM_MCG_POOL_NAME);
+	if (*residuals_present) {
+		/* Populate a reduced resv array with only the entries
+		 * that have residuals.
+		 */
+		cparms.nitems = found;
+		cparms.size = sizeof(struct tf_rm_resc_entry);
+		cparms.alignment = 0;
+		rc = tfp_calloc(&cparms);
 		if (rc)
 			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
-		} else {
-			flush_rc = 1;
+
+		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+		aparms.rm_db = rm_db;
+		hparms.rm_db = rm_db;
+		hparms.hcapi_type = &hcapi_type;
+		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
+			if (residuals[i] == 0)
+				continue;
+			aparms.db_index = i;
+			aparms.info = &info;
+			rc = tf_rm_get_info(&aparms);
+			if (rc)
+				goto cleanup_all;
+
+			hparms.db_index = i;
+			rc = tf_rm_get_hcapi_type(&hparms);
+			if (rc)
+				goto cleanup_all;
+
+			local_resv[f].type = hcapi_type;
+			local_resv[f].start = info.entry.start;
+			local_resv[f].stride = info.entry.stride;
+			f++;
 		}
-	} else {
-		/* Always prune TX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		*resv_size = found;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_8B_POOL_NAME,
-			rc);
+	tf_rm_log_residuals(rm_db->dir,
+			    rm_db->type,
+			    rm_db->num_entries,
+			    residuals);
+
+	tfp_free((void *)residuals);
+	*resv = local_resv;
+
+	return 0;
+
+ cleanup_all:
+	tfp_free((void *)local_resv);
+	*resv = NULL;
+ cleanup_residuals:
+	tfp_free((void *)residuals);
+
+	return rc;
+}
+
+int
+tf_rm_create_db(struct tf *tfp,
+		struct tf_rm_create_db_parms *parms)
+{
+	int rc;
+	int i;
+	int j;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	uint16_t max_types;
+	struct tfp_calloc_parms cparms;
+	struct tf_rm_resc_req_entry *query;
+	enum tf_rm_resc_resv_strategy resv_strategy;
+	struct tf_rm_resc_req_entry *req;
+	struct tf_rm_resc_entry *resv;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_element *db;
+	uint32_t pool_size;
+	uint16_t hcapi_items;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_ENCAP_16B_POOL_NAME,
-			rc);
+	/* Retrieve device information */
+	rc = tf_session_get_device(tfs, &dev);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
-	}
+	/* Need device max number of elements for the RM QCAPS */
+	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
+	if (rc)
+		return rc;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_SP_SMAC_POOL_NAME,
-			rc);
+	cparms.nitems = max_types;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
-	}
+	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	/* Only pools for TX direction */
-	if (dir == TF_DIR_TX) {
-		TF_RM_GET_POOLS_TX(tfs, &pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		if (rc)
-			return rc;
-		free_cnt = ba_free_count(pool);
-		if (free_cnt ==
-		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
-				0;
-		} else {
-			flush_rc = 1;
-		}
-	} else {
-		/* Always prune RX direction */
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	/* Get Firmware Capabilities */
+	rc = tf_msg_session_resc_qcaps(tfp,
+				       parms->dir,
+				       max_types,
+				       query,
+				       &resv_strategy);
+	if (rc)
+		return rc;
+
+	/* Process capabilities against DB requirements. However, as a
+	 * DB can hold elements that are not HCAPI we can reduce the
+	 * req msg content by removing those from the request, while
+	 * the DB still holds them all so as to give a fast lookup. We
+	 * can also remove entries for which no elements were requested.
+	 */
+	tf_rm_count_hcapi_reservations(parms->dir,
+				       parms->type,
+				       parms->cfg,
+				       parms->alloc_cnt,
+				       parms->num_elements,
+				       &hcapi_items);
+
+	/* Handle the case where a DB create request really ends up
+	 * being empty. An unsupported (if not rare) case, but it is
+	 * possible that no resources are necessary for a 'direction'.
+	 */
+	if (hcapi_items == 0) {
+		TFP_DRV_LOG(ERR,
+			"%s: DB create request for Zero elements, DB Type:%s\n",
+			tf_dir_2_str(parms->dir),
+			tf_device_module_type_2_str(parms->type));
+
+		parms->rm_db = NULL;
+		return -ENOMEM;
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_STATS_64B_POOL_NAME,
-			rc);
+	/* Alloc request, alignment already set */
+	cparms.nitems = (size_t)hcapi_items;
+	cparms.size = sizeof(struct tf_rm_resc_req_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_SPORT_POOL_NAME,
-			rc);
+	/* Alloc reservation, alignment and nitems already set */
+	cparms.size = sizeof(struct tf_rm_resc_entry);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
-	} else {
-		flush_rc = 1;
+	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
+
+	/* Build the request */
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		/* Skip any non HCAPI cfg elements */
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+			/* Only perform reservation for entries that
+			 * have been requested
+			 */
+			if (parms->alloc_cnt[i] == 0)
+				continue;
+
+			/* Verify that we can get the full amount
+			 * allocated per the qcaps availability.
+			 */
+			if (parms->alloc_cnt[i] <=
+			    query[parms->cfg[i].hcapi_type].max) {
+				req[j].type = parms->cfg[i].hcapi_type;
+				req[j].min = parms->alloc_cnt[i];
+				req[j].max = parms->alloc_cnt[i];
+				j++;
+			} else {
+				TFP_DRV_LOG(ERR,
+					    "%s: Resource failure, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    parms->cfg[i].hcapi_type);
+				TFP_DRV_LOG(ERR,
+					"req:%d, avail:%d\n",
+					parms->alloc_cnt[i],
+					query[parms->cfg[i].hcapi_type].max);
+				return -EINVAL;
+			}
+		}
 	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_DPORT_POOL_NAME,
-			rc);
+	rc = tf_msg_session_resc_alloc(tfp,
+				       parms->dir,
+				       hcapi_items,
+				       req,
+				       resv);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_S_IPV4_POOL_NAME,
-			rc);
+	/* Build the RM DB per the request */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_rm_new_db);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db = (void *)cparms.mem_va;
 
-	TF_RM_GET_POOLS(tfs, dir, &pool,
-			TF_SRAM_NAT_D_IPV4_POOL_NAME,
-			rc);
+	/* Build the DB within RM DB */
+	cparms.nitems = parms->num_elements;
+	cparms.size = sizeof(struct tf_rm_element);
+	rc = tfp_calloc(&cparms);
 	if (rc)
 		return rc;
-	free_cnt = ba_free_count(pool);
-	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
-		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
-	} else {
-		flush_rc = 1;
-	}
+	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
 
-	return flush_rc;
-}
+	db = rm_db->db;
+	for (i = 0, j = 0; i < parms->num_elements; i++) {
+		db[i].cfg_type = parms->cfg[i].cfg_type;
+		db[i].hcapi_type = parms->cfg[i].hcapi_type;
 
-/**
- * Helper function used to generate an error log for the HW types that
- * needs to be flushed. The types should have been cleaned up ahead of
- * invoking tf_close_session.
- *
- * [in] hw_entries
- *   HW Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_hw_flush(enum tf_dir dir,
-		   struct tf_rm_entry *hw_entries)
-{
-	int i;
+		/* Skip any non HCAPI types as we didn't include them
+		 * in the reservation request.
+		 */
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+			continue;
 
-	/* Walk the hw flush array and log the types that wasn't
-	 * cleaned up.
-	 */
-	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
-		if (hw_entries[i].stride != 0)
+		/* If the element didn't request an allocation no need
+		 * to create a pool nor verify if we got a reservation.
+		 */
+		if (parms->alloc_cnt[i] == 0)
+			continue;
+
+		/* If the element had requested an allocation and that
+		 * allocation was a success (full amount) then
+		 * allocate the pool.
+		 */
+		if (parms->alloc_cnt[i] == resv[j].stride) {
+			db[i].alloc.entry.start = resv[j].start;
+			db[i].alloc.entry.stride = resv[j].stride;
+
+			printf("Entry:%d Start:%d Stride:%d\n",
+			       i,
+			       resv[j].start,
+			       resv[j].stride);
+
+			/* Create pool */
+			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+				     sizeof(struct bitalloc));
+			/* Alloc request, alignment already set */
+			cparms.nitems = pool_size;
+			cparms.size = sizeof(struct bitalloc);
+			rc = tfp_calloc(&cparms);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool alloc failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+			rc = ba_init(db[i].pool, resv[j].stride);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "%s: Pool init failed, type:%d\n",
+					    tf_dir_2_str(parms->dir),
+					    db[i].cfg_type);
+				goto fail;
+			}
+			j++;
+		} else {
+			/* Bail out as we want what we requested for
+			 * all elements, not any less.
+			 */
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_hw_2_str(i));
+				    "%s: Alloc failed, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    db[i].cfg_type);
+			TFP_DRV_LOG(ERR,
+				    "req:%d, alloc:%d\n",
+				    parms->alloc_cnt[i],
+				    resv[j].stride);
+			goto fail;
+		}
 	}
+
+	rm_db->num_entries = parms->num_elements;
+	rm_db->dir = parms->dir;
+	rm_db->type = parms->type;
+	*parms->rm_db = (void *)rm_db;
+
+	printf("%s: type:%d num_entries:%d\n",
+	       tf_dir_2_str(parms->dir),
+	       parms->type,
+	       i);
+
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+
+	return 0;
+
+ fail:
+	tfp_free((void *)req);
+	tfp_free((void *)resv);
+	tfp_free((void *)db->pool);
+	tfp_free((void *)db);
+	tfp_free((void *)rm_db);
+	parms->rm_db = NULL;
+
+	return -EINVAL;
 }
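
For illustration only (not part of this patch), a minimal sketch of how a
device module could create its RM DB at bind time; the module type and
HCAPI type values below are placeholders, not real CFA resource types:

/* Illustrative only. A device module builds a static TF to HCAPI
 * mapping (cfg[]) and passes the per-session counts (alloc_cnt[])
 * when creating its DB at bind time.
 */
#define EXAMPLE_NUM_ELEM     2
#define EXAMPLE_HCAPI_TYPE_A 10	/* placeholder, not a real CFA type */

static int
example_module_bind(struct tf *tfp, enum tf_dir dir, void **rm_db)
{
	struct tf_rm_element_cfg cfg[EXAMPLE_NUM_ELEM] = {
		{ TF_RM_ELEM_CFG_HCAPI, EXAMPLE_HCAPI_TYPE_A },
		{ TF_RM_ELEM_CFG_NULL, 0 },	/* not valid on this device */
	};
	uint16_t alloc_cnt[EXAMPLE_NUM_ELEM] = { 8, 0 };
	struct tf_rm_create_db_parms parms = { 0 };

	parms.type = TF_DEVICE_MODULE_TYPE_IDENTIFIER;	/* assumed enum value */
	parms.dir = dir;
	parms.num_elements = EXAMPLE_NUM_ELEM;
	parms.cfg = cfg;
	parms.alloc_cnt = alloc_cnt;
	parms.rm_db = rm_db;

	return tf_rm_create_db(tfp, &parms);
}
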
 
-/**
- * Helper function used to generate an error log for the SRAM types
- * that needs to be flushed. The types should have been cleaned up
- * ahead of invoking tf_close_session.
- *
- * [in] sram_entries
- *   SRAM Resource database holding elements to be flushed
- */
-static void
-tf_rm_log_sram_flush(enum tf_dir dir,
-		     struct tf_rm_entry *sram_entries)
+int
+tf_rm_free_db(struct tf *tfp,
+	      struct tf_rm_free_db_parms *parms)
 {
+	int rc;
 	int i;
+	uint16_t resv_size = 0;
+	struct tf_rm_new_db *rm_db;
+	struct tf_rm_resc_entry *resv;
+	bool residuals_found = false;
+
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	/* Device unbind happens when the TF Session is closed and the
+	 * session ref count is 0. Device unbind will clean up each of
+	 * its support modules, i.e. Identifier, thus we're ending up
+	 * here to close the DB.
+	 *
+	 * On TF Session close it is assumed that the session has already
+	 * cleaned up all its resources, individually, while
+	 * destroying its flows.
+	 *
+	 * To assist in the 'cleanup checking' the DB is checked for any
+	 * remaining elements and logged if found to be the case.
+	 *
+	 * Any such elements will need to be 'cleared' ahead of
+	 * returning the resources to the HCAPI RM.
+	 *
+	 * RM will signal FW to flush the DB resources. FW will
+	 * perform the invalidation. TF Session close will return the
+	 * previous allocated elements to the RM and then close the
+	 * HCAPI RM registration. That then saves several 'free' msgs
+	 * from being required.
+	 */
 
-	/* Walk the sram flush array and log the types that wasn't
-	 * cleaned up.
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+
+	/* Check for residuals that the client didn't clean up */
+	rc = tf_rm_check_residuals(rm_db,
+				   &resv_size,
+				   &resv,
+				   &residuals_found);
+	if (rc)
+		return rc;
+
+	/* Invalidate any residuals followed by a DB traversal for
+	 * pool cleanup.
 	 */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
-		if (sram_entries[i].stride != 0)
+	if (residuals_found) {
+		rc = tf_msg_session_resc_flush(tfp,
+					       parms->dir,
+					       resv_size,
+					       resv);
+		tfp_free((void *)resv);
+		/* On failure we still have to clean up, so we can only
+		 * log that FW failed.
+		 */
+		if (rc)
 			TFP_DRV_LOG(ERR,
-				    "%s, %s was not cleaned up\n",
-				    tf_dir_2_str(dir),
-				    tf_hcapi_sram_2_str(i));
+				    "%s: Internal Flush error, module:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    tf_device_module_type_2_str(rm_db->type));
 	}
-}
 
-void
-tf_rm_init(struct tf *tfp __rte_unused)
-{
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	for (i = 0; i < rm_db->num_entries; i++)
+		tfp_free((void *)rm_db->db[i].pool);
 
-	/* This version is host specific and should be checked against
-	 * when attaching as there is no guarantee that a secondary
-	 * would run from same image version.
-	 */
-	tfs->ver.major = TF_SESSION_VER_MAJOR;
-	tfs->ver.minor = TF_SESSION_VER_MINOR;
-	tfs->ver.update = TF_SESSION_VER_UPDATE;
-
-	tfs->session_id.id = 0;
-	tfs->ref_count = 0;
-
-	/* Initialization of Table Scopes */
-	/* ll_init(&tfs->tbl_scope_ll); */
-
-	/* Initialization of HW and SRAM resource DB */
-	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
-
-	/* Initialization of HW Resource Pools */
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records should not be controlled by way of a pool */
-
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Initialization of SRAM Resource Pools
-	 * These pools are set to the TFLIB defined MAX sizes not
-	 * AFM's HW max as to limit the memory consumption
-	 */
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		TF_RSVD_SRAM_FULL_ACTION_RX);
-	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		TF_RSVD_SRAM_FULL_ACTION_TX);
-	/* Only Multicast Group on RX is supported */
-	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
-		TF_RSVD_SRAM_MCG_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_8B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_8B_TX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		TF_RSVD_SRAM_ENCAP_16B_RX);
-	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_16B_TX);
-	/* Only Encap 64B on TX is supported */
-	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_ENCAP_64B_TX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		TF_RSVD_SRAM_SP_SMAC_RX);
-	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_TX);
-	/* Only SP SMAC IPv4 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-	/* Only SP SMAC IPv6 on TX is supported */
-	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
-		TF_RSVD_SRAM_COUNTER_64B_RX);
-	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
-		TF_RSVD_SRAM_COUNTER_64B_TX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_SPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_SPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_DPORT_RX);
-	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_DPORT_TX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_S_IPV4_TX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/* Initialization of pools local to TF Core */
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	tfp_free((void *)parms->rm_db);
+
+	return rc;
 }
 
 int
-tf_rm_allocate_validate(struct tf *tfp)
+tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 {
 	int rc;
-	int i;
+	int id;
+	uint32_t index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		rc = tf_rm_allocate_validate_hw(tfp, i);
-		if (rc)
-			return rc;
-		rc = tf_rm_allocate_validate_sram(tfp, i);
-		if (rc)
-			return rc;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
+
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	/* With both HW and SRAM allocated and validated we can
-	 * 'scrub' the reservation on the pools.
+	/*
+	 * priority  0: allocate from top of the tcam, i.e. the lowest
+	 *              index (highest lookup priority)
+	 * priority !0: allocate from the bottom, i.e. the highest
+	 *              index (lowest lookup priority)
 	 */
-	tf_rm_reserve_hw(tfp);
-	tf_rm_reserve_sram(tfp);
+	if (parms->priority)
+		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
+	else
+		id = ba_alloc(rm_db->db[parms->db_index].pool);
+	if (id == BA_FAIL) {
+		rc = -ENOMEM;
+		TFP_DRV_LOG(ERR,
+			    "%s: Allocation failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_ADD_BASE,
+				parms->db_index,
+				id,
+				&index);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Alloc adjust of base index failed, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    strerror(-rc));
+		return -EINVAL;
+	}
+
+	*parms->index = index;
 
 	return rc;
 }
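
For illustration only (not part of this patch), a minimal sketch of the
allocate call with both priorities; the returned indices are already
normalized (base adjusted):

/* Illustrative only: allocate one entry from each end of the same pool. */
static int
example_alloc(void *rm_db, uint16_t db_index,
	      uint32_t *low_idx, uint32_t *high_idx)
{
	struct tf_rm_allocate_parms aparms = { 0 };
	int rc;

	aparms.rm_db = rm_db;
	aparms.db_index = db_index;

	/* priority 0: lowest available index */
	aparms.priority = 0;
	aparms.index = low_idx;
	rc = tf_rm_allocate(&aparms);
	if (rc)
		return rc;

	/* priority != 0: highest available index */
	aparms.priority = 1;
	aparms.index = high_idx;
	return tf_rm_allocate(&aparms);
}
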
 
 int
-tf_rm_close(struct tf *tfp)
+tf_rm_free(struct tf_rm_free_parms *parms)
 {
 	int rc;
-	int rc_close = 0;
-	int i;
-	struct tf_rm_entry *hw_entries;
-	struct tf_rm_entry *hw_flush_entries;
-	struct tf_rm_entry *sram_entries;
-	struct tf_rm_entry *sram_flush_entries;
-	struct tf_session *tfs __rte_unused =
-		(struct tf_session *)(tfp->session->core_data);
-
-	struct tf_rm_db flush_resc = tfs->resc;
-
-	/* On close it is assumed that the session has already cleaned
-	 * up all its resources, individually, while destroying its
-	 * flows. No checking is performed thus the behavior is as
-	 * follows.
-	 *
-	 * Session RM will signal FW to release session resources. FW
-	 * will perform invalidation of all the allocated entries
-	 * (assures any outstanding resources has been cleared, then
-	 * free the FW RM instance.
-	 *
-	 * Session will then be freed by tf_close_session() thus there
-	 * is no need to clean each resource pool as the whole session
-	 * is going away.
-	 */
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		if (i == TF_DIR_RX) {
-			hw_entries = tfs->resc.rx.hw_entry;
-			hw_flush_entries = flush_resc.rx.hw_entry;
-			sram_entries = tfs->resc.rx.sram_entry;
-			sram_flush_entries = flush_resc.rx.sram_entry;
-		} else {
-			hw_entries = tfs->resc.tx.hw_entry;
-			hw_flush_entries = flush_resc.tx.hw_entry;
-			sram_entries = tfs->resc.tx.sram_entry;
-			sram_flush_entries = flush_resc.tx.sram_entry;
-		}
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-		/* Check for any not previously freed HW resources and
-		 * flush if required.
-		 */
-		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering HW resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-			/* Log the entries to be flushed */
-			tf_rm_log_hw_flush(i, hw_flush_entries);
-		}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-		/* Check for any not previously freed SRAM resources
-		 * and flush if required.
-		 */
-		rc = tf_rm_sram_to_flush(tfs,
-					 i,
-					 sram_entries,
-					 sram_flush_entries);
-		if (rc) {
-			rc_close = -ENOTEMPTY;
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "%s, lingering SRAM resources, rc:%s\n",
-				    tf_dir_2_str(i),
-				    strerror(-rc));
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
 
-			/* Log the entries to be flushed */
-			tf_rm_log_sram_flush(i, sram_flush_entries);
-		}
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
+		return rc;
 	}
 
-	return rc_close;
-}
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
 
-#if (TF_SHADOW == 1)
-int
-tf_rm_shadow_db_init(struct tf_session *tfs)
-{
-	rc = 1;
+	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
+	/* No logging direction matters and that is not available here */
+	if (rc)
+		return rc;
 
 	return rc;
 }
-#endif /* TF_SHADOW */
 
 int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool)
+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	int rc;
+	uint32_t adj_index;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_L2_CTXT_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_PROF_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_PROF_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_WC_TCAM:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_WC_TCAM_POOL_NAME,
-				rc);
-		break;
-	case TF_TCAM_TBL_TYPE_VEB_TCAM:
-	case TF_TCAM_TBL_TYPE_SP_TCAM:
-	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail out if the pool is not valid, should never happen */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		rc = -ENOTSUP;
 		TFP_DRV_LOG(ERR,
-			    "%s, Tcam type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
+			    "%s: Invalid pool for this type:%d, rc:%s\n",
+			    tf_dir_2_str(rm_db->dir),
+			    parms->db_index,
+			    strerror(-rc));
 		return rc;
 	}
 
-	return 0;
+	/* Adjust for any non zero start value */
+	rc = tf_rm_adjust_index(rm_db->db,
+				TF_RM_ADJUST_RM_BASE,
+				parms->db_index,
+				parms->index,
+				&adj_index);
+	if (rc)
+		return rc;
+
+	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
+				     adj_index);
+
+	return rc;
 }
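
For illustration only (not part of this patch), a minimal check-then-free
sketch; note that tf_rm_free() expects the same normalized index that
tf_rm_allocate() returned:

/* Illustrative only: verify an index is in use before releasing it. */
static int
example_release(void *rm_db, uint16_t db_index, uint32_t index)
{
	struct tf_rm_is_allocated_parms cparms = { 0 };
	struct tf_rm_free_parms fparms = { 0 };
	int allocated = 0;
	int rc;

	cparms.rm_db = rm_db;
	cparms.db_index = db_index;
	cparms.index = index;
	cparms.allocated = &allocated;
	rc = tf_rm_is_allocated(&cparms);
	if (rc)
		return rc;
	if (!allocated)
		return -EINVAL;

	fparms.rm_db = rm_db;
	fparms.db_index = db_index;
	fparms.index = (uint16_t)index;
	return tf_rm_free(&fparms);
}
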
 
 int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool)
+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 {
-	int rc = -EOPNOTSUPP;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	*pool = NULL;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_FULL_ACTION_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_TX)
-			break;
-		TF_RM_GET_POOLS_RX(tfs, pool,
-				   TF_SRAM_MCG_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_8B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_ENCAP_16B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		/* No pools for RX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_ENCAP_64B_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_SP_SMAC_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		/* No pools for TX direction, so bail out */
-		if (dir == TF_DIR_RX)
-			break;
-		TF_RM_GET_POOLS_TX(tfs, pool,
-				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
-		rc = 0;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_STATS_64B_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_SPORT_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_S_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_SRAM_NAT_D_IPV4_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METER_INST_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_MIRROR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_UPAR:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_UPAR_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH0_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_EPOCH1_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_METADATA:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_METADATA_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_CT_STATE_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_PROF_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_RANGE_ENTRY_POOL_NAME,
-				rc);
-		break;
-	case TF_TBL_TYPE_LAG:
-		TF_RM_GET_POOLS(tfs, dir, pool,
-				TF_LAG_ENTRY_POOL_NAME,
-				rc);
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-		break;
-	/* No bitalloc pools for these types */
-	case TF_TBL_TYPE_EXT:
-	default:
-		break;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	if (rc == -EOPNOTSUPP) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type not supported, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	} else if (rc == -1) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table type lookup failed, type:%d\n",
-			    tf_dir_2_str(dir),
-			    type);
-		return rc;
-	}
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	memcpy(parms->info,
+	       &rm_db->db[parms->db_index].alloc,
+	       sizeof(struct tf_rm_alloc_info));
 
 	return 0;
 }
 
 int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type)
+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 {
-	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
-		break;
-	case TF_TBL_TYPE_MCAST_GROUPS:
-		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
-		break;
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
-		break;
-	case TF_TBL_TYPE_ACT_STATS_64:
-		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
-		break;
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
-		break;
-	case TF_TBL_TYPE_METER_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
-		break;
-	case TF_TBL_TYPE_METER_INST:
-		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
-		break;
-	case TF_TBL_TYPE_UPAR:
-		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
-		break;
-	case TF_TBL_TYPE_EPOCH0:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
-		break;
-	case TF_TBL_TYPE_EPOCH1:
-		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
-		break;
-	case TF_TBL_TYPE_METADATA:
-		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
-		break;
-	case TF_TBL_TYPE_CT_STATE:
-		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
-		break;
-	case TF_TBL_TYPE_RANGE_PROF:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
-		break;
-	case TF_TBL_TYPE_RANGE_ENTRY:
-		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
-		break;
-	case TF_TBL_TYPE_LAG:
-		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		*hcapi_type = -1;
-		rc = -EOPNOTSUPP;
-	}
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	return rc;
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
+
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
+
+	return 0;
 }
 
 int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index)
+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 {
-	int rc;
-	struct tf_rm_resc *resc;
-	uint32_t hcapi_type;
-	uint32_t base_index;
+	int rc = 0;
+	struct tf_rm_new_db *rm_db;
+	enum tf_rm_elem_cfg_type cfg_type;
 
-	if (dir == TF_DIR_RX)
-		resc = &tfs->resc.rx;
-	else if (dir == TF_DIR_TX)
-		resc = &tfs->resc.tx;
-	else
-		return -EOPNOTSUPP;
+	TF_CHECK_PARMS2(parms, parms->rm_db);
 
-	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
-	if (rc)
-		return -1;
-
-	switch (type) {
-	case TF_TBL_TYPE_FULL_ACT_RECORD:
-	case TF_TBL_TYPE_MCAST_GROUPS:
-	case TF_TBL_TYPE_ACT_ENCAP_8B:
-	case TF_TBL_TYPE_ACT_ENCAP_16B:
-	case TF_TBL_TYPE_ACT_ENCAP_32B:
-	case TF_TBL_TYPE_ACT_ENCAP_64B:
-	case TF_TBL_TYPE_ACT_SP_SMAC:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
-	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
-	case TF_TBL_TYPE_ACT_STATS_64:
-	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
-	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
-		base_index = resc->sram_entry[hcapi_type].start;
-		break;
-	case TF_TBL_TYPE_MIRROR_CONFIG:
-	case TF_TBL_TYPE_METER_PROF:
-	case TF_TBL_TYPE_METER_INST:
-	case TF_TBL_TYPE_UPAR:
-	case TF_TBL_TYPE_EPOCH0:
-	case TF_TBL_TYPE_EPOCH1:
-	case TF_TBL_TYPE_METADATA:
-	case TF_TBL_TYPE_CT_STATE:
-	case TF_TBL_TYPE_RANGE_PROF:
-	case TF_TBL_TYPE_RANGE_ENTRY:
-	case TF_TBL_TYPE_LAG:
-		base_index = resc->hw_entry[hcapi_type].start;
-		break;
-	/* Not yet supported */
-	case TF_TBL_TYPE_VNIC_SVIF:
-	case TF_TBL_TYPE_EXT:   /* No pools for this type */
-	default:
-		return -EOPNOTSUPP;
-	}
+	rm_db = (struct tf_rm_new_db *)parms->rm_db;
+	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	switch (c_type) {
-	case TF_RM_CONVERT_RM_BASE:
-		*convert_index = index - base_index;
-		break;
-	case TF_RM_CONVERT_ADD_BASE:
-		*convert_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
+	/* Bail out if not controlled by RM */
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+		return -ENOTSUP;
+
+	/* Bail silently (no logging); if the pool is not valid then
+	 * no elements were allocated for it.
+	 */
+	if (rm_db->db[parms->db_index].pool == NULL) {
+		*parms->count = 0;
+		return 0;
 	}
 
-	return 0;
+	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
+
+	return rc;
+
 }
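
For illustration only (not part of this patch), a minimal sketch of the
accessor trio used together, e.g. to translate a DB index into the data
needed for a FW request or a debug dump:

/* Illustrative only: query HCAPI type, allocated range and in-use count. */
static int
example_query(void *rm_db, uint16_t db_index)
{
	struct tf_rm_get_hcapi_parms hparms = { 0 };
	struct tf_rm_get_alloc_info_parms aparms = { 0 };
	struct tf_rm_get_inuse_count_parms iparms = { 0 };
	struct tf_rm_alloc_info info = { 0 };
	uint16_t hcapi_type = 0;
	uint16_t count = 0;
	int rc;

	hparms.rm_db = rm_db;
	hparms.db_index = db_index;
	hparms.hcapi_type = &hcapi_type;
	rc = tf_rm_get_hcapi_type(&hparms);
	if (rc)
		return rc;

	aparms.rm_db = rm_db;
	aparms.db_index = db_index;
	aparms.info = &info;
	rc = tf_rm_get_info(&aparms);
	if (rc)
		return rc;

	iparms.rm_db = rm_db;
	iparms.db_index = db_index;
	iparms.count = &count;
	rc = tf_rm_get_inuse_count(&iparms);
	if (rc)
		return rc;

	/* hcapi_type, info.entry.start/stride and count are now valid */
	return 0;
}
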
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 1a09f13a7..5cb68892a 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -3,301 +3,444 @@
  * All rights reserved.
  */
 
-#ifndef TF_RM_H_
-#define TF_RM_H_
+#ifndef TF_RM_NEW_H_
+#define TF_RM_NEW_H_
 
-#include "tf_resources.h"
 #include "tf_core.h"
 #include "bitalloc.h"
+#include "tf_device.h"
 
 struct tf;
-struct tf_session;
 
-/* Internal macro to determine appropriate allocation pools based on
- * DIRECTION parm, also performs error checking for DIRECTION parm. The
- * SESSION_POOL and SESSION pointers are set appropriately upon
- * successful return (the GLOBAL_POOL is used to globally manage
- * resource allocation and the SESSION_POOL is used to track the
- * resources that have been allocated to the session)
+/**
+ * The Resource Manager (RM) module provides basic DB handling for
+ * internal resources. These resources exists within the actual device
+ * and are controlled by the HCAPI Resource Manager running on the
+ * firmware.
+ *
+ * The RM DBs are all intended to be indexed using TF types, therefore
+ * a lookup requires no additional conversion. The DB configuration
+ * specifies the TF Type to HCAPI Type mapping and it becomes the
+ * responsibility of the DB initialization to handle this static
+ * mapping.
+ *
+ * Accessor functions provide access to the DB, thus hiding the
+ * implementation.
  *
- * parameters:
- *   struct tfp        *tfp
- *   enum tf_dir        direction
- *   struct bitalloc  **session_pool
- *   string             base_pool_name - used to form pointers to the
- *					 appropriate bit allocation
- *					 pools, both directions of the
- *					 session pools must have same
- *					 base name, for example if
- *					 POOL_NAME is feat_pool: - the
- *					 ptr's to the session pools
- *					 are feat_pool_rx feat_pool_tx
+ * The RM DB will work on its initially allocated sizes, so dynamically
+ * growing a particular resource is not possible. If this capability
+ * later becomes a requirement then the MAX pool size of the Chip needs
+ * to be added to the tf_rm_elem_info structure and several new APIs
+ * would need to be added to allow for growth of a single TF resource
+ * type.
  *
- *  int                  rc - return code
- *			      0 - Success
- *			     -1 - invalid DIRECTION parm
+ * The access functions do not check for NULL pointers as this is a
+ * support module, not called directly.
  */
-#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
-		(rc) = 0;						\
-		if ((direction) == TF_DIR_RX) {				\
-			*(session_pool) = (tfs)->pool_name ## _RX;	\
-		} else if ((direction) == TF_DIR_TX) {			\
-			*(session_pool) = (tfs)->pool_name ## _TX;	\
-		} else {						\
-			rc = -1;					\
-		}							\
-	} while (0)
 
-#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _RX)
+/**
+ * Resource reservation single entry result. Used when accessing HCAPI
+ * RM on the firmware.
+ */
+struct tf_rm_new_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
 
-#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
-	(*(session_pool) = (tfs)->pool_name ## _TX)
+/**
+ * RM Element configuration enumeration. Used by the Device to
+ * indicate how the RM elements that the DB consists of are to be
+ * configured at the time of DB creation. The TF may present types to
+ * the ULP layer that are not controlled by HCAPI within the Firmware.
+ */
+enum tf_rm_elem_cfg_type {
+	/** No configuration */
+	TF_RM_ELEM_CFG_NULL,
+	/** HCAPI 'controlled', uses a Pool for internal storage */
+	TF_RM_ELEM_CFG_HCAPI,
+	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
+	TF_RM_ELEM_CFG_PRIVATE,
+	/**
+	 * Shared element thus it belongs to a shared FW Session and
+	 * is not controlled by the Host.
+	 */
+	TF_RM_ELEM_CFG_SHARED,
+	TF_RM_TYPE_MAX
+};
 
 /**
- * Resource query single entry
+ * RM Reservation strategy enumeration. Type of strategy comes from
+ * the HCAPI RM QCAPS handshake.
  */
-struct tf_rm_query_entry {
-	/** Minimum guaranteed number of elements */
-	uint16_t min;
-	/** Maximum non-guaranteed number of elements */
-	uint16_t max;
+enum tf_rm_resc_resv_strategy {
+	TF_RM_RESC_RESV_STATIC_PARTITION,
+	TF_RM_RESC_RESV_STRATEGY_1,
+	TF_RM_RESC_RESV_STRATEGY_2,
+	TF_RM_RESC_RESV_STRATEGY_3,
+	TF_RM_RESC_RESV_MAX
 };
 
 /**
- * Resource single entry
+ * RM Element configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI RM
+ * of the same type.
  */
-struct tf_rm_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
+struct tf_rm_element_cfg {
+	/**
+	 * RM Element config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_rm_elem_cfg_type cfg_type;
+
+	/* If a HCAPI to TF type conversion is required then TF type
+	 * can be added here.
+	 */
+
+	/**
+	 * HCAPI RM Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
 };
 
 /**
- * Resource query array of HW entities
+ * Allocation information for a single element.
  */
-struct tf_rm_hw_query {
-	/** array of HW resource entries */
-	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_alloc_info {
+	/**
+	 * HCAPI RM allocated range information.
+	 *
+	 * NOTE:
+	 * In case of dynamic allocation support this would have
+	 * to be changed to linked list of tf_rm_entry instead.
+	 */
+	struct tf_rm_new_entry entry;
 };
 
 /**
- * Resource allocation array of HW entities
+ * Create RM DB parameters
  */
-struct tf_rm_hw_alloc {
-	/** array of HW resource entries */
-	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+struct tf_rm_create_db_parms {
+	/**
+	 * [in] Device module type. Used for logging purposes.
+	 */
+	enum tf_device_module_type type;
+	/**
+	 * [in] Receive or transmit direction.
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Number of elements.
+	 */
+	uint16_t num_elements;
+	/**
+	 * [in] Parameter structure array. Array size is num_elements.
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * [in] Resource allocation count array. This array content
+	 * originates from the tf_session_resources that is passed in
+	 * on session open.
+	 * Array size is num_elements.
+	 */
+	uint16_t *alloc_cnt;
+	/**
+	 * [out] RM DB Handle
+	 */
+	void **rm_db;
 };
 
 /**
- * Resource query array of SRAM entities
+ * Free RM DB parameters
  */
-struct tf_rm_sram_query {
-	/** array of SRAM resource entries */
-	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_db_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
 };
 
 /**
- * Resource allocation array of SRAM entities
+ * Allocate RM parameters for a single element
  */
-struct tf_rm_sram_alloc {
-	/** array of SRAM resource entries */
-	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_allocate_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to where the allocated index is returned, in
+	 * normalized form. Normalized means the index has been adjusted,
+	 * i.e. Full Action Record offsets.
+	 */
+	uint32_t *index;
+	/**
+	 * [in] Priority, indicates the priority of the entry
+	 * priority  0: allocate from top of the tcam (from index 0
+	 *              or lowest available index)
+	 * priority !0: allocate from bottom of the tcam (from highest
+	 *              available index)
+	 */
+	uint32_t priority;
 };
 
 /**
- * Resource Manager arrays for a single direction
+ * Free RM parameters for a single element
  */
-struct tf_rm_resc {
-	/** array of HW resource entries */
-	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
-	/** array of SRAM resource entries */
-	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+struct tf_rm_free_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t index;
 };
 
 /**
- * Resource Manager Database
+ * Is Allocated parameters for a single element
  */
-struct tf_rm_db {
-	struct tf_rm_resc rx;
-	struct tf_rm_resc tx;
+struct tf_rm_is_allocated_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [in] Index to check
+	 */
+	uint32_t index;
+	/**
+	 * [out] Pointer to a flag that indicates the state of the query
+	 */
+	int *allocated;
 };
 
 /**
- * Helper function used to convert HW HCAPI resource type to a string.
+ * Get Allocation information for a single element
  */
-const char
-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+struct tf_rm_get_alloc_info_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the requested allocation information for
+	 * the specified db_index
+	 */
+	struct tf_rm_alloc_info *info;
+};
 
 /**
- * Helper function used to convert SRAM HCAPI resource type to a string.
+ * Get HCAPI type parameters for a single element
  */
-const char
-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+struct tf_rm_get_hcapi_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
 
 /**
- * Initializes the Resource Manager and the associated database
- * entries for HW and SRAM resources. Must be called before any other
- * Resource Manager functions.
+ * Get InUse count parameters for single element
+ */
+struct tf_rm_get_inuse_count_parms {
+	/**
+	 * [in] RM DB Handle
+	 */
+	void *rm_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the inuse count for the specified db_index
+	 */
+	uint16_t *count;
+};
+
+/**
+ * @page rm Resource Manager
  *
- * [in] tfp
- *   Pointer to TF handle
+ * @ref tf_rm_create_db
+ *
+ * @ref tf_rm_free_db
+ *
+ * @ref tf_rm_allocate
+ *
+ * @ref tf_rm_free
+ *
+ * @ref tf_rm_is_allocated
+ *
+ * @ref tf_rm_get_info
+ *
+ * @ref tf_rm_get_hcapi_type
+ *
+ * @ref tf_rm_get_inuse_count
  */
-void tf_rm_init(struct tf *tfp);
 
 /**
- * Allocates and validates both HW and SRAM resources per the NVM
- * configuration. If any allocation fails all resources for the
- * session is deallocated.
+ * Creates and fills a Resource Manager (RM) DB with requested
+ * elements. The DB is indexed per the parms structure.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to create parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_rm_allocate_validate(struct tf *tfp);
+/*
+ * NOTE:
+ * - Fail on parameter check
+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
+ * - Fail on DB creation if DB already exist
+ *
+ * - Allocs local DB
+ * - Does hcapi qcaps
+ * - Does hcapi reservation
+ * - Populates the pool with allocated elements
+ * - Returns handle to the created DB
+ */
+int tf_rm_create_db(struct tf *tfp,
+		    struct tf_rm_create_db_parms *parms);
 
 /**
- * Closes the Resource Manager and frees all allocated resources per
- * the associated database.
+ * Closes the Resource Manager (RM) DB and frees all allocated
+ * resources per the associated database.
  *
  * [in] tfp
- *   Pointer to TF handle
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to free parameters
  *
  * Returns
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
- *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
-int tf_rm_close(struct tf *tfp);
+int tf_rm_free_db(struct tf *tfp,
+		  struct tf_rm_free_db_parms *parms);
 
-#if (TF_SHADOW == 1)
 /**
- * Initializes Shadow DB of configuration elements
+ * Allocates a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to allocate parameters
  *
- * Returns:
- *  0  - Success
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if pool is empty
  */
-int tf_rm_shadow_db_init(struct tf_session *tfs);
-#endif /* TF_SHADOW */
+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Tcam table type.
- *
- * Function will print error msg if tcam type is unsupported or lookup
- * failed.
+ * Frees a single element for the type specified, within the DB.
  *
- * [in] tfs
- *   Pointer to TF Session
+ * [in] parms
+ *   Pointer to free parameters
  *
- * [in] type
- *   Type of the object
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
-			    enum tf_dir dir,
-			    enum tf_tcam_tbl_type type,
-			    struct bitalloc **pool);
+int tf_rm_free(struct tf_rm_free_parms *parms);
 
 /**
- * Perform a Session Pool lookup using the Table type.
- *
- * Function will print error msg if table type is unsupported or
- * lookup failed.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] type
- *   Type of the object
+ * Performs an allocation verification check on a specified element.
  *
- * [in] dir
- *    Receive or transmit direction
+ * [in] parms
+ *   Pointer to is allocated parameters
  *
- * [in/out]  session_pool
- *   Session pool
- *
- * Returns:
- *  0           - Success will set the **pool
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
-			   enum tf_dir dir,
-			   enum tf_tbl_type type,
-			   struct bitalloc **pool);
+/*
+ * NOTE:
+ *  - If pool is set to Chip MAX, then the query index must be checked
+ *    against the allocated range and query index must be allocated as well.
+ *  - If pool is allocated size only, then check if query index is allocated.
+ */
+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
 
 /**
- * Converts the TF Table Type to internal HCAPI_TYPE
- *
- * [in] type
- *   Type to be converted
+ * Retrieves an element's allocation information from the Resource
+ * Manager (RM) DB.
  *
- * [in/out] hcapi_type
- *   Converted type
+ * [in] parms
+ *   Pointer to get info parameters
  *
- * Returns:
- *  0           - Success will set the *hcapi_type
- *  -EOPNOTSUPP - Type is not supported
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_tbl_type(enum tf_tbl_type type,
-		       uint32_t *hcapi_type);
+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
 
 /**
- * TF RM Convert of index methods.
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type.
+ *
+ * [in] parms
+ *   Pointer to get hcapi parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-enum tf_rm_convert_type {
-	/** Adds the base of the Session Pool to the index */
-	TF_RM_CONVERT_ADD_BASE,
-	/** Removes the Session Pool base from the index */
-	TF_RM_CONVERT_RM_BASE
-};
+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
 
 /**
- * Provides conversion of the Table Type index in relation to the
- * Session Pool base.
- *
- * [in] tfs
- *   Pointer to TF Session
- *
- * [in] dir
- *    Receive or transmit direction
- *
- * [in] type
- *   Type of the object
+ * Performs a lookup in the Resource Manager DB and retrieves the
+ * requested HCAPI RM type in-use count.
  *
- * [in] c_type
- *   Type of conversion to perform
+ * [in] parms
+ *   Pointer to get inuse parameters
  *
- * [in] index
- *   Index to be converted
- *
- * [in/out]  convert_index
- *   Pointer to the converted index
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
  */
-int
-tf_rm_convert_index(struct tf_session *tfs,
-		    enum tf_dir dir,
-		    enum tf_tbl_type type,
-		    enum tf_rm_convert_type c_type,
-		    uint32_t index,
-		    uint32_t *convert_index);
+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
 
-#endif /* TF_RM_H_ */
+#endif /* TF_RM_NEW_H_ */
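
For reference, a minimal caller-side sketch of the per-element API above; the
rm_db handle and db_index are assumed to come from a prior tf_rm_create_db()
call and the helper name is illustrative only:

/* Illustrative sketch: allocate one element, verify it, then free it. */
static int tf_rm_example_cycle(void *rm_db, uint16_t db_index)
{
	int rc;
	int allocated = 0;
	uint32_t index;
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_is_allocated_parms iparms = { 0 };
	struct tf_rm_free_parms fparms = { 0 };

	aparms.rm_db = rm_db;
	aparms.db_index = db_index;
	aparms.priority = 0;	/* 0 allocates from the low end of the pool */
	aparms.index = &index;
	rc = tf_rm_allocate(&aparms);
	if (rc)
		return rc;

	iparms.rm_db = rm_db;
	iparms.db_index = db_index;
	iparms.index = index;
	iparms.allocated = &allocated;
	rc = tf_rm_is_allocated(&iparms);
	if (rc || !allocated)
		return rc ? rc : -EINVAL;

	fparms.rm_db = rm_db;
	fparms.db_index = db_index;
	fparms.index = index;	/* tf_rm_free_parms.index is 16 bit */
	return tf_rm_free(&fparms);
}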
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c
deleted file mode 100644
index 2d9be654a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.c
+++ /dev/null
@@ -1,907 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <string.h>
-
-#include <rte_common.h>
-
-#include <cfa_resource_types.h>
-
-#include "tf_rm_new.h"
-#include "tf_common.h"
-#include "tf_util.h"
-#include "tf_session.h"
-#include "tf_device.h"
-#include "tfp.h"
-#include "tf_msg.h"
-
-/**
- * Generic RM Element data type that an RM DB is build upon.
- */
-struct tf_rm_element {
-	/**
-	 * RM Element configuration type. If Private then the
-	 * hcapi_type can be ignored. If Null then the element is not
-	 * valid for the device.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/**
-	 * HCAPI RM Type for the element.
-	 */
-	uint16_t hcapi_type;
-
-	/**
-	 * HCAPI RM allocated range information for the element.
-	 */
-	struct tf_rm_alloc_info alloc;
-
-	/**
-	 * Bit allocator pool for the element. Pool size is controlled
-	 * by the struct tf_session_resources at time of session creation.
-	 * Null indicates that the element is not used for the device.
-	 */
-	struct bitalloc *pool;
-};
-
-/**
- * TF RM DB definition
- */
-struct tf_rm_new_db {
-	/**
-	 * Number of elements in the DB
-	 */
-	uint16_t num_entries;
-
-	/**
-	 * Direction this DB controls.
-	 */
-	enum tf_dir dir;
-
-	/**
-	 * Module type, used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-
-	/**
-	 * The DB consists of an array of elements
-	 */
-	struct tf_rm_element *db;
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] cfg
- *   Pointer to the DB configuration
- *
- * [in] reservations
- *   Pointer to the allocation values associated with the module
- *
- * [in] count
- *   Number of DB configuration elements
- *
- * [out] valid_count
- *   Number of HCAPI entries with a reservation value greater than 0
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static void
-tf_rm_count_hcapi_reservations(enum tf_dir dir,
-			       enum tf_device_module_type type,
-			       struct tf_rm_element_cfg *cfg,
-			       uint16_t *reservations,
-			       uint16_t count,
-			       uint16_t *valid_count)
-{
-	int i;
-	uint16_t cnt = 0;
-
-	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
-		    reservations[i] > 0)
-			cnt++;
-
-		/* Only log msg if a type is attempted reserved and
-		 * not supported. We ignore EM module as its using a
-		 * split configuration array thus it would fail for
-		 * this type of check.
-		 */
-		if (type != TF_DEVICE_MODULE_TYPE_EM &&
-		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
-		    reservations[i] > 0) {
-			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-		}
-	}
-
-	*valid_count = cnt;
-}
-
-/**
- * Resource Manager Adjust of base index definitions.
- */
-enum tf_rm_adjust_type {
-	TF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */
-	TF_RM_ADJUST_RM_BASE   /**< Removes base from the index */
-};
-
-/**
- * Adjust an index according to the allocation information.
- *
- * All resources are controlled in a 0 based pool. Some resources, by
- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they
- * need to be adjusted before they are handed out.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [in] action
- *   Adjust action
- *
- * [in] db_index
- *   DB index for the element type
- *
- * [in] index
- *   Index to convert
- *
- * [out] adj_index
- *   Adjusted index
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_adjust_index(struct tf_rm_element *db,
-		   enum tf_rm_adjust_type action,
-		   uint32_t db_index,
-		   uint32_t index,
-		   uint32_t *adj_index)
-{
-	int rc = 0;
-	uint32_t base_index;
-
-	base_index = db[db_index].alloc.entry.start;
-
-	switch (action) {
-	case TF_RM_ADJUST_RM_BASE:
-		*adj_index = index - base_index;
-		break;
-	case TF_RM_ADJUST_ADD_BASE:
-		*adj_index = index + base_index;
-		break;
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	return rc;
-}
-
-/**
- * Logs an array of found residual entries to the console.
- *
- * [in] dir
- *   Receive or transmit direction
- *
- * [in] type
- *   Type of Device Module
- *
- * [in] count
- *   Number of entries in the residual array
- *
- * [in] residuals
- *   Pointer to an array of residual entries. Array is index same as
- *   the DB in which this function is used. Each entry holds residual
- *   value for that entry.
- */
-static void
-tf_rm_log_residuals(enum tf_dir dir,
-		    enum tf_device_module_type type,
-		    uint16_t count,
-		    uint16_t *residuals)
-{
-	int i;
-
-	/* Walk the residual array and log the types that wasn't
-	 * cleaned up to the console.
-	 */
-	for (i = 0; i < count; i++) {
-		if (residuals[i] != 0)
-			TFP_DRV_LOG(ERR,
-				"%s, %s was not cleaned up, %d outstanding\n",
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i),
-				residuals[i]);
-	}
-}
-
-/**
- * Performs a check of the passed in DB for any lingering elements. If
- * a resource type was found to not have been cleaned up by the caller
- * then its residual values are recorded, logged and passed back in an
- * allocate reservation array that the caller can pass to the FW for
- * cleanup.
- *
- * [in] db
- *   Pointer to the db, used for the lookup
- *
- * [out] resv_size
- *   Pointer to the reservation size of the generated reservation
- *   array.
- *
- * [in/out] resv
- *   Pointer Pointer to a reservation array. The reservation array is
- *   allocated after the residual scan and holds any found residual
- *   entries. Thus it can be smaller than the DB that the check was
- *   performed on. Array must be freed by the caller.
- *
- * [out] residuals_present
- *   Pointer to a bool flag indicating if residual was present in the
- *   DB
- *
- * Returns:
- *     0          - Success
- *   - EOPNOTSUPP - Operation not supported
- */
-static int
-tf_rm_check_residuals(struct tf_rm_new_db *rm_db,
-		      uint16_t *resv_size,
-		      struct tf_rm_resc_entry **resv,
-		      bool *residuals_present)
-{
-	int rc;
-	int i;
-	int f;
-	uint16_t count;
-	uint16_t found;
-	uint16_t *residuals = NULL;
-	uint16_t hcapi_type;
-	struct tf_rm_get_inuse_count_parms iparms;
-	struct tf_rm_get_alloc_info_parms aparms;
-	struct tf_rm_get_hcapi_parms hparms;
-	struct tf_rm_alloc_info info;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_entry *local_resv = NULL;
-
-	/* Create array to hold the entries that have residuals */
-	cparms.nitems = rm_db->num_entries;
-	cparms.size = sizeof(uint16_t);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	residuals = (uint16_t *)cparms.mem_va;
-
-	/* Traverse the DB and collect any residual elements */
-	iparms.rm_db = rm_db;
-	iparms.count = &count;
-	for (i = 0, found = 0; i < rm_db->num_entries; i++) {
-		iparms.db_index = i;
-		rc = tf_rm_get_inuse_count(&iparms);
-		/* Not a device supported entry, just skip */
-		if (rc == -ENOTSUP)
-			continue;
-		if (rc)
-			goto cleanup_residuals;
-
-		if (count) {
-			found++;
-			residuals[i] = count;
-			*residuals_present = true;
-		}
-	}
-
-	if (*residuals_present) {
-		/* Populate a reduced resv array with only the entries
-		 * that have residuals.
-		 */
-		cparms.nitems = found;
-		cparms.size = sizeof(struct tf_rm_resc_entry);
-		cparms.alignment = 0;
-		rc = tfp_calloc(&cparms);
-		if (rc)
-			return rc;
-
-		local_resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-		aparms.rm_db = rm_db;
-		hparms.rm_db = rm_db;
-		hparms.hcapi_type = &hcapi_type;
-		for (i = 0, f = 0; i < rm_db->num_entries; i++) {
-			if (residuals[i] == 0)
-				continue;
-			aparms.db_index = i;
-			aparms.info = &info;
-			rc = tf_rm_get_info(&aparms);
-			if (rc)
-				goto cleanup_all;
-
-			hparms.db_index = i;
-			rc = tf_rm_get_hcapi_type(&hparms);
-			if (rc)
-				goto cleanup_all;
-
-			local_resv[f].type = hcapi_type;
-			local_resv[f].start = info.entry.start;
-			local_resv[f].stride = info.entry.stride;
-			f++;
-		}
-		*resv_size = found;
-	}
-
-	tf_rm_log_residuals(rm_db->dir,
-			    rm_db->type,
-			    rm_db->num_entries,
-			    residuals);
-
-	tfp_free((void *)residuals);
-	*resv = local_resv;
-
-	return 0;
-
- cleanup_all:
-	tfp_free((void *)local_resv);
-	*resv = NULL;
- cleanup_residuals:
-	tfp_free((void *)residuals);
-
-	return rc;
-}
-
-int
-tf_rm_create_db(struct tf *tfp,
-		struct tf_rm_create_db_parms *parms)
-{
-	int rc;
-	int i;
-	int j;
-	struct tf_session *tfs;
-	struct tf_dev_info *dev;
-	uint16_t max_types;
-	struct tfp_calloc_parms cparms;
-	struct tf_rm_resc_req_entry *query;
-	enum tf_rm_resc_resv_strategy resv_strategy;
-	struct tf_rm_resc_req_entry *req;
-	struct tf_rm_resc_entry *resv;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_element *db;
-	uint32_t pool_size;
-	uint16_t hcapi_items;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
-	if (rc)
-		return rc;
-
-	/* Retrieve device information */
-	rc = tf_session_get_device(tfs, &dev);
-	if (rc)
-		return rc;
-
-	/* Need device max number of elements for the RM QCAPS */
-	rc = dev->ops->tf_dev_get_max_types(tfp, &max_types);
-	if (rc)
-		return rc;
-
-	cparms.nitems = max_types;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	cparms.alignment = 0;
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-
-	query = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Get Firmware Capabilities */
-	rc = tf_msg_session_resc_qcaps(tfp,
-				       parms->dir,
-				       max_types,
-				       query,
-				       &resv_strategy);
-	if (rc)
-		return rc;
-
-	/* Process capabilities against DB requirements. However, as a
-	 * DB can hold elements that are not HCAPI we can reduce the
-	 * req msg content by removing those out of the request yet
-	 * the DB holds them all as to give a fast lookup. We can also
-	 * remove entries where there are no request for elements.
-	 */
-	tf_rm_count_hcapi_reservations(parms->dir,
-				       parms->type,
-				       parms->cfg,
-				       parms->alloc_cnt,
-				       parms->num_elements,
-				       &hcapi_items);
-
-	/* Alloc request, alignment already set */
-	cparms.nitems = (size_t)hcapi_items;
-	cparms.size = sizeof(struct tf_rm_resc_req_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	req = (struct tf_rm_resc_req_entry *)cparms.mem_va;
-
-	/* Alloc reservation, alignment and nitems already set */
-	cparms.size = sizeof(struct tf_rm_resc_entry);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	resv = (struct tf_rm_resc_entry *)cparms.mem_va;
-
-	/* Build the request */
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
-			/* Only perform reservation for entries that
-			 * has been requested
-			 */
-			if (parms->alloc_cnt[i] == 0)
-				continue;
-
-			/* Verify that we can get the full amount
-			 * allocated per the qcaps availability.
-			 */
-			if (parms->alloc_cnt[i] <=
-			    query[parms->cfg[i].hcapi_type].max) {
-				req[j].type = parms->cfg[i].hcapi_type;
-				req[j].min = parms->alloc_cnt[i];
-				req[j].max = parms->alloc_cnt[i];
-				j++;
-			} else {
-				TFP_DRV_LOG(ERR,
-					    "%s: Resource failure, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    parms->cfg[i].hcapi_type);
-				TFP_DRV_LOG(ERR,
-					"req:%d, avail:%d\n",
-					parms->alloc_cnt[i],
-					query[parms->cfg[i].hcapi_type].max);
-				return -EINVAL;
-			}
-		}
-	}
-
-	rc = tf_msg_session_resc_alloc(tfp,
-				       parms->dir,
-				       hcapi_items,
-				       req,
-				       resv);
-	if (rc)
-		return rc;
-
-	/* Build the RM DB per the request */
-	cparms.nitems = 1;
-	cparms.size = sizeof(struct tf_rm_new_db);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db = (void *)cparms.mem_va;
-
-	/* Build the DB within RM DB */
-	cparms.nitems = parms->num_elements;
-	cparms.size = sizeof(struct tf_rm_element);
-	rc = tfp_calloc(&cparms);
-	if (rc)
-		return rc;
-	rm_db->db = (struct tf_rm_element *)cparms.mem_va;
-
-	db = rm_db->db;
-	for (i = 0, j = 0; i < parms->num_elements; i++) {
-		db[i].cfg_type = parms->cfg[i].cfg_type;
-		db[i].hcapi_type = parms->cfg[i].hcapi_type;
-
-		/* Skip any non HCAPI types as we didn't include them
-		 * in the reservation request.
-		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
-			continue;
-
-		/* If the element didn't request an allocation no need
-		 * to create a pool nor verify if we got a reservation.
-		 */
-		if (parms->alloc_cnt[i] == 0)
-			continue;
-
-		/* If the element had requested an allocation and that
-		 * allocation was a success (full amount) then
-		 * allocate the pool.
-		 */
-		if (parms->alloc_cnt[i] == resv[j].stride) {
-			db[i].alloc.entry.start = resv[j].start;
-			db[i].alloc.entry.stride = resv[j].stride;
-
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			j++;
-		} else {
-			/* Bail out as we want what we requested for
-			 * all elements, not any less.
-			 */
-			TFP_DRV_LOG(ERR,
-				    "%s: Alloc failed, type:%d\n",
-				    tf_dir_2_str(parms->dir),
-				    db[i].cfg_type);
-			TFP_DRV_LOG(ERR,
-				    "req:%d, alloc:%d\n",
-				    parms->alloc_cnt[i],
-				    resv[j].stride);
-			goto fail;
-		}
-	}
-
-	rm_db->num_entries = parms->num_elements;
-	rm_db->dir = parms->dir;
-	rm_db->type = parms->type;
-	*parms->rm_db = (void *)rm_db;
-
-	printf("%s: type:%d num_entries:%d\n",
-	       tf_dir_2_str(parms->dir),
-	       parms->type,
-	       i);
-
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-
-	return 0;
-
- fail:
-	tfp_free((void *)req);
-	tfp_free((void *)resv);
-	tfp_free((void *)db->pool);
-	tfp_free((void *)db);
-	tfp_free((void *)rm_db);
-	parms->rm_db = NULL;
-
-	return -EINVAL;
-}
-
-int
-tf_rm_free_db(struct tf *tfp,
-	      struct tf_rm_free_db_parms *parms)
-{
-	int rc;
-	int i;
-	uint16_t resv_size = 0;
-	struct tf_rm_new_db *rm_db;
-	struct tf_rm_resc_entry *resv;
-	bool residuals_found = false;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	/* Device unbind happens when the TF Session is closed and the
-	 * session ref count is 0. Device unbind will cleanup each of
-	 * its support modules, i.e. Identifier, thus we're ending up
-	 * here to close the DB.
-	 *
-	 * On TF Session close it is assumed that the session has already
-	 * cleaned up all its resources, individually, while
-	 * destroying its flows.
-	 *
-	 * To assist in the 'cleanup checking' the DB is checked for any
-	 * remaining elements and logged if found to be the case.
-	 *
-	 * Any such elements will need to be 'cleared' ahead of
-	 * returning the resources to the HCAPI RM.
-	 *
-	 * RM will signal FW to flush the DB resources. FW will
-	 * perform the invalidation. TF Session close will return the
-	 * previous allocated elements to the RM and then close the
-	 * HCAPI RM registration. That then saves several 'free' msgs
-	 * from being required.
-	 */
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-
-	/* Check for residuals that the client didn't clean up */
-	rc = tf_rm_check_residuals(rm_db,
-				   &resv_size,
-				   &resv,
-				   &residuals_found);
-	if (rc)
-		return rc;
-
-	/* Invalidate any residuals followed by a DB traversal for
-	 * pool cleanup.
-	 */
-	if (residuals_found) {
-		rc = tf_msg_session_resc_flush(tfp,
-					       parms->dir,
-					       resv_size,
-					       resv);
-		tfp_free((void *)resv);
-		/* On failure we still have to cleanup so we can only
-		 * log that FW failed.
-		 */
-		if (rc)
-			TFP_DRV_LOG(ERR,
-				    "%s: Internal Flush error, module:%s\n",
-				    tf_dir_2_str(parms->dir),
-				    tf_device_module_type_2_str(rm_db->type));
-	}
-
-	for (i = 0; i < rm_db->num_entries; i++)
-		tfp_free((void *)rm_db->db[i].pool);
-
-	tfp_free((void *)parms->rm_db);
-
-	return rc;
-}
-
-int
-tf_rm_allocate(struct tf_rm_allocate_parms *parms)
-{
-	int rc;
-	int id;
-	uint32_t index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/*
-	 * priority  0: allocate from top of the tcam i.e. high
-	 * priority !0: allocate index from bottom i.e lowest
-	 */
-	if (parms->priority)
-		id = ba_alloc_reverse(rm_db->db[parms->db_index].pool);
-	else
-		id = ba_alloc(rm_db->db[parms->db_index].pool);
-	if (id == BA_FAIL) {
-		rc = -ENOMEM;
-		TFP_DRV_LOG(ERR,
-			    "%s: Allocation failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_ADD_BASE,
-				parms->db_index,
-				id,
-				&index);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Alloc adjust of base index failed, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    strerror(-rc));
-		return -EINVAL;
-	}
-
-	*parms->index = index;
-
-	return rc;
-}
-
-int
-tf_rm_free(struct tf_rm_free_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	rc = ba_free(rm_db->db[parms->db_index].pool, adj_index);
-	/* No logging direction matters and that is not available here */
-	if (rc)
-		return rc;
-
-	return rc;
-}
-
-int
-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
-{
-	int rc;
-	uint32_t adj_index;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail out if the pool is not valid, should never happen */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		rc = -ENOTSUP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Invalid pool for this type:%d, rc:%s\n",
-			    tf_dir_2_str(rm_db->dir),
-			    parms->db_index,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Adjust for any non zero start value */
-	rc = tf_rm_adjust_index(rm_db->db,
-				TF_RM_ADJUST_RM_BASE,
-				parms->db_index,
-				parms->index,
-				&adj_index);
-	if (rc)
-		return rc;
-
-	*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,
-				     adj_index);
-
-	return rc;
-}
-
-int
-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	memcpy(parms->info,
-	       &rm_db->db[parms->db_index].alloc,
-	       sizeof(struct tf_rm_alloc_info));
-
-	return 0;
-}
-
-int
-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
-{
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
-
-	return 0;
-}
-
-int
-tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
-{
-	int rc = 0;
-	struct tf_rm_new_db *rm_db;
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	TF_CHECK_PARMS2(parms, parms->rm_db);
-
-	rm_db = (struct tf_rm_new_db *)parms->rm_db;
-	cfg_type = rm_db->db[parms->db_index].cfg_type;
-
-	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
-		return -ENOTSUP;
-
-	/* Bail silently (no logging), if the pool is not valid there
-	 * was no elements allocated for it.
-	 */
-	if (rm_db->db[parms->db_index].pool == NULL) {
-		*parms->count = 0;
-		return 0;
-	}
-
-	*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);
-
-	return rc;
-
-}
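
The base adjustment performed by the removed tf_rm_adjust_index() is plain
offset arithmetic against the reserved range; a condensed sketch of the same
idea, assuming the start value is obtained via tf_rm_get_info(), with
illustrative helper names:

/* Illustrative sketch of the add/remove-base adjustment. */
static inline uint32_t rm_add_base(uint32_t index, uint32_t start)
{
	return index + start;	/* pool index -> absolute HW index */
}

static inline uint32_t rm_remove_base(uint32_t index, uint32_t start)
{
	return index - start;	/* absolute HW index -> pool index */
}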
diff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h
deleted file mode 100644
index 5cb68892a..000000000
--- a/drivers/net/bnxt/tf_core/tf_rm_new.h
+++ /dev/null
@@ -1,446 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_RM_NEW_H_
-#define TF_RM_NEW_H_
-
-#include "tf_core.h"
-#include "bitalloc.h"
-#include "tf_device.h"
-
-struct tf;
-
-/**
- * The Resource Manager (RM) module provides basic DB handling for
- * internal resources. These resources exists within the actual device
- * and are controlled by the HCAPI Resource Manager running on the
- * firmware.
- *
- * The RM DBs are all intended to be indexed using TF types there for
- * a lookup requires no additional conversion. The DB configuration
- * specifies the TF Type to HCAPI Type mapping and it becomes the
- * responsibility of the DB initialization to handle this static
- * mapping.
- *
- * Accessor functions are providing access to the DB, thus hiding the
- * implementation.
- *
- * The RM DB will work on its initial allocated sizes so the
- * capability of dynamically growing a particular resource is not
- * possible. If this capability later becomes a requirement then the
- * MAX pool size of the Chip needs to be added to the tf_rm_elem_info
- * structure and several new APIs would need to be added to allow for
- * growth of a single TF resource type.
- *
- * The access functions does not check for NULL pointers as it's a
- * support module, not called directly.
- */
-
-/**
- * Resource reservation single entry result. Used when accessing HCAPI
- * RM on the firmware.
- */
-struct tf_rm_new_entry {
-	/** Starting index of the allocated resource */
-	uint16_t start;
-	/** Number of allocated elements */
-	uint16_t stride;
-};
-
-/**
- * RM Element configuration enumeration. Used by the Device to
- * indicate how the RM elements the DB consists off, are to be
- * configured at time of DB creation. The TF may present types to the
- * ULP layer that is not controlled by HCAPI within the Firmware.
- */
-enum tf_rm_elem_cfg_type {
-	/** No configuration */
-	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
-	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
-	/**
-	 * Shared element thus it belongs to a shared FW Session and
-	 * is not controlled by the Host.
-	 */
-	TF_RM_ELEM_CFG_SHARED,
-	TF_RM_TYPE_MAX
-};
-
-/**
- * RM Reservation strategy enumeration. Type of strategy comes from
- * the HCAPI RM QCAPS handshake.
- */
-enum tf_rm_resc_resv_strategy {
-	TF_RM_RESC_RESV_STATIC_PARTITION,
-	TF_RM_RESC_RESV_STRATEGY_1,
-	TF_RM_RESC_RESV_STRATEGY_2,
-	TF_RM_RESC_RESV_STRATEGY_3,
-	TF_RM_RESC_RESV_MAX
-};
-
-/**
- * RM Element configuration structure, used by the Device to configure
- * how an individual TF type is configured in regard to the HCAPI RM
- * of same type.
- */
-struct tf_rm_element_cfg {
-	/**
-	 * RM Element config controls how the DB for that element is
-	 * processed.
-	 */
-	enum tf_rm_elem_cfg_type cfg_type;
-
-	/* If a HCAPI to TF type conversion is required then TF type
-	 * can be added here.
-	 */
-
-	/**
-	 * HCAPI RM Type for the element. Used for TF to HCAPI type
-	 * conversion.
-	 */
-	uint16_t hcapi_type;
-};
-
-/**
- * Allocation information for a single element.
- */
-struct tf_rm_alloc_info {
-	/**
-	 * HCAPI RM allocated range information.
-	 *
-	 * NOTE:
-	 * In case of dynamic allocation support this would have
-	 * to be changed to linked list of tf_rm_entry instead.
-	 */
-	struct tf_rm_new_entry entry;
-};
-
-/**
- * Create RM DB parameters
- */
-struct tf_rm_create_db_parms {
-	/**
-	 * [in] Device module type. Used for logging purposes.
-	 */
-	enum tf_device_module_type type;
-	/**
-	 * [in] Receive or transmit direction.
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Number of elements.
-	 */
-	uint16_t num_elements;
-	/**
-	 * [in] Parameter structure array. Array size is num_elements.
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Resource allocation count array. This array content
-	 * originates from the tf_session_resources that is passed in
-	 * on session open.
-	 * Array size is num_elements.
-	 */
-	uint16_t *alloc_cnt;
-	/**
-	 * [out] RM DB Handle
-	 */
-	void **rm_db;
-};
-
-/**
- * Free RM DB parameters
- */
-struct tf_rm_free_db_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-};
-
-/**
- * Allocate RM parameters for a single element
- */
-struct tf_rm_allocate_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Pointer to the allocated index in normalized
-	 * form. Normalized means the index has been adjusted,
-	 * i.e. Full Action Record offsets.
-	 */
-	uint32_t *index;
-	/**
-	 * [in] Priority, indicates the prority of the entry
-	 * priority  0: allocate from top of the tcam (from index 0
-	 *              or lowest available index)
-	 * priority !0: allocate from bottom of the tcam (from highest
-	 *              available index)
-	 */
-	uint32_t priority;
-};
-
-/**
- * Free RM parameters for a single element
- */
-struct tf_rm_free_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint16_t index;
-};
-
-/**
- * Is Allocated parameters for a single element
- */
-struct tf_rm_is_allocated_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t index;
-	/**
-	 * [in] Pointer to flag that indicates the state of the query
-	 */
-	int *allocated;
-};
-
-/**
- * Get Allocation information for a single element
- */
-struct tf_rm_get_alloc_info_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the requested allocation information for
-	 * the specified db_index
-	 */
-	struct tf_rm_alloc_info *info;
-};
-
-/**
- * Get HCAPI type parameters for a single element
- */
-struct tf_rm_get_hcapi_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the hcapi type for the specified db_index
-	 */
-	uint16_t *hcapi_type;
-};
-
-/**
- * Get InUse count parameters for single element
- */
-struct tf_rm_get_inuse_count_parms {
-	/**
-	 * [in] RM DB Handle
-	 */
-	void *rm_db;
-	/**
-	 * [in] DB Index, indicates which DB entry to perform the
-	 * action on.
-	 */
-	uint16_t db_index;
-	/**
-	 * [out] Pointer to the inuse count for the specified db_index
-	 */
-	uint16_t *count;
-};
-
-/**
- * @page rm Resource Manager
- *
- * @ref tf_rm_create_db
- *
- * @ref tf_rm_free_db
- *
- * @ref tf_rm_allocate
- *
- * @ref tf_rm_free
- *
- * @ref tf_rm_is_allocated
- *
- * @ref tf_rm_get_info
- *
- * @ref tf_rm_get_hcapi_type
- *
- * @ref tf_rm_get_inuse_count
- */
-
-/**
- * Creates and fills a Resource Manager (RM) DB with requested
- * elements. The DB is indexed per the parms structure.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to create parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- * - Fail on parameter check
- * - Fail on DB creation, i.e. alloc amount is not possible or validation fails
- * - Fail on DB creation if DB already exist
- *
- * - Allocs local DB
- * - Does hcapi qcaps
- * - Does hcapi reservation
- * - Populates the pool with allocated elements
- * - Returns handle to the created DB
- */
-int tf_rm_create_db(struct tf *tfp,
-		    struct tf_rm_create_db_parms *parms);
-
-/**
- * Closes the Resource Manager (RM) DB and frees all allocated
- * resources per the associated database.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free_db(struct tf *tfp,
-		  struct tf_rm_free_db_parms *parms);
-
-/**
- * Allocates a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to allocate parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- *   - (-ENOMEM) if pool is empty
- */
-int tf_rm_allocate(struct tf_rm_allocate_parms *parms);
-
-/**
- * Free's a single element for the type specified, within the DB.
- *
- * [in] parms
- *   Pointer to free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_free(struct tf_rm_free_parms *parms);
-
-/**
- * Performs an allocation verification check on a specified element.
- *
- * [in] parms
- *   Pointer to is allocated parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-/*
- * NOTE:
- *  - If pool is set to Chip MAX, then the query index must be checked
- *    against the allocated range and query index must be allocated as well.
- *  - If pool is allocated size only, then check if query index is allocated.
- */
-int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);
-
-/**
- * Retrieves an elements allocation information from the Resource
- * Manager (RM) DB.
- *
- * [in] parms
- *   Pointer to get info parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI RM type.
- *
- * [in] parms
- *   Pointer to get hcapi parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);
-
-/**
- * Performs a lookup in the Resource Manager DB and retrives the
- * requested HCAPI RM type inuse count.
- *
- * [in] parms
- *   Pointer to get inuse parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);
-
-#endif /* TF_RM_NEW_H_ */
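
The query side of the API follows the same pattern; a minimal sketch, assuming
an existing rm_db handle and db_index (helper name illustrative only):

/* Illustrative sketch: query allocation info, HCAPI type and in-use count. */
static int tf_rm_example_query(void *rm_db, uint16_t db_index)
{
	int rc;
	uint16_t hcapi_type;
	uint16_t count;
	struct tf_rm_alloc_info info;
	struct tf_rm_get_alloc_info_parms aparms = { 0 };
	struct tf_rm_get_hcapi_parms hparms = { 0 };
	struct tf_rm_get_inuse_count_parms cparms = { 0 };

	aparms.rm_db = rm_db;
	aparms.db_index = db_index;
	aparms.info = &info;
	rc = tf_rm_get_info(&aparms);
	if (rc)
		return rc;

	hparms.rm_db = rm_db;
	hparms.db_index = db_index;
	hparms.hcapi_type = &hcapi_type;
	rc = tf_rm_get_hcapi_type(&hparms);
	if (rc)
		return rc;

	cparms.rm_db = rm_db;
	cparms.db_index = db_index;
	cparms.count = &count;
	return tf_rm_get_inuse_count(&cparms);
}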
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 705bb0955..e4472ed7f 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -14,6 +14,7 @@
 #include "tf_device.h"
 #include "tf_rm.h"
 #include "tf_tbl.h"
+#include "tf_resources.h"
 #include "stack.h"
 
 /**
@@ -43,7 +44,8 @@
 #define TF_SESSION_EM_POOL_SIZE \
 	(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)
 
-/** Session
+/**
+ * Session
  *
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
@@ -99,216 +101,6 @@ struct tf_session {
 	/** Device handle */
 	struct tf_dev_info dev;
 
-	/** Session HW and SRAM resources */
-	struct tf_rm_db resc;
-
-	/* Session HW resource pools */
-
-	/** RX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 CTXT TCAM Pool */
-	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** RX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
-	/** TX Profile Func Pool */
-	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
-
-	/** RX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
-	/** TX Profile TCAM Pool */
-	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
-
-	/** RX EM Profile ID Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
-	/** TX EM Key Pool */
-	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
-
-	/** RX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
-	/** TX WC Profile Pool */
-	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
-
-	/* TBD, how do we want to handle EM records ?*/
-	/* EM Records are not controlled by way of a pool */
-
-	/** RX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
-	/** TX WC TCAM Pool */
-	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
-
-	/** RX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
-	/** TX Meter Profile Pool */
-	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
-
-	/** RX Meter Instance Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
-	/** TX Meter Pool */
-	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
-
-	/** RX Mirror Configuration Pool*/
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
-	/** RX Mirror Configuration Pool */
-	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
-
-	/** RX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
-	/** TX UPAR Pool */
-	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
-
-	/** RX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
-	/** TX SP TCAM Pool */
-	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
-
-	/** RX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
-	/** TX FKB Pool */
-	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
-
-	/** RX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
-	/** TX Table Scope Pool */
-	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
-
-	/** RX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
-	/** TX L2 Func Pool */
-	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
-
-	/** RX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
-	/** TX Epoch0 Pool */
-	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
-
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
-	/** TX Epoch1 Pool */
-	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
-
-	/** RX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
-	/** TX MetaData Profile Pool */
-	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
-
-	/** RX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
-	/** TX Connection Tracking State Pool */
-	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
-
-	/** RX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
-	/** TX Range Profile Pool */
-	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
-
-	/** RX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
-	/** TX Range Pool */
-	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
-
-	/** RX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
-	/** TX LAG Pool */
-	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
-
-	/* Session SRAM pools */
-
-	/** RX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
-		      TF_RSVD_SRAM_FULL_ACTION_RX);
-	/** TX Full Action Record Pool */
-	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
-		      TF_RSVD_SRAM_FULL_ACTION_TX);
-
-	/** RX Multicast Group Pool, only RX is supported */
-	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
-		      TF_RSVD_SRAM_MCG_RX);
-
-	/** RX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_8B_RX);
-	/** TX Encap 8B Pool*/
-	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_8B_TX);
-
-	/** RX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_ENCAP_16B_RX);
-	/** TX Encap 16B Pool */
-	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_16B_TX);
-
-	/** TX Encap 64B Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_ENCAP_64B_TX);
-
-	/** RX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
-		      TF_RSVD_SRAM_SP_SMAC_RX);
-	/** TX Source Properties SMAC Pool */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_TX);
-
-	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
-
-	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
-	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
-		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
-
-	/** RX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
-		      TF_RSVD_SRAM_COUNTER_64B_RX);
-	/** TX Counter 64B Pool */
-	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
-		      TF_RSVD_SRAM_COUNTER_64B_TX);
-
-	/** RX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_SPORT_RX);
-	/** TX NAT Source Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_SPORT_TX);
-
-	/** RX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_DPORT_RX);
-	/** TX NAT Destination Port Pool */
-	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_DPORT_TX);
-
-	/** RX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
-	/** TX NAT Source IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
-
-	/** RX NAT Destination IPv4 Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
-	/** TX NAT IPv4 Destination Pool */
-	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
-		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
-
-	/**
-	 * Pools not allocated from HCAPI RM
-	 */
-
-	/** RX L2 Ctx Remap ID  Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
-	/** TX L2 Ctx Remap ID Pool */
-	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
-
-	/** CRC32 seed table */
-#define TF_LKUP_SEED_MEM_SIZE 512
-	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
-
-	/** Lookup3 init values */
-	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index d7f5de4c4..05e866dc6 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -5,175 +5,413 @@
 
 /* Truflow Table APIs and supporting code */
 
-#include <stdio.h>
-#include <string.h>
-#include <stdbool.h>
-#include <math.h>
-#include <sys/param.h>
 #include <rte_common.h>
-#include <rte_errno.h>
-#include "hsi_struct_def_dpdk.h"
 
-#include "tf_core.h"
+#include "tf_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
 #include "tf_util.h"
-#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
-#include "hwrm_tf.h"
-#include "bnxt.h"
-#include "tf_resources.h"
-#include "tf_rm.h"
-#include "stack.h"
-#include "tf_common.h"
+
+
+struct tf;
+
+/**
+ * Table DBs.
+ */
+static void *tbl_db[TF_DIR_MAX];
+
+/**
+ * Table Shadow DBs
+ */
+/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
 
 /**
- * Internal function to get a Table Entry. Supports all Table Types
- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
+ * Shadow init flag, set on bind and cleared on unbind
  */
-static int
-tf_bulk_get_tbl_entry_internal(struct tf *tfp,
-			  struct tf_bulk_get_tbl_entry_parms *parms)
+/* static uint8_t shadow_init; */
+
+int
+tf_tbl_bind(struct tf *tfp,
+	    struct tf_tbl_cfg_parms *parms)
+{
+	int rc;
+	int i;
+	struct tf_rm_create_db_parms db_cfg = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Table DB already initialized\n");
+		return -EINVAL;
+	}
+
+	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
+	db_cfg.num_elements = parms->num_elements;
+	db_cfg.cfg = parms->cfg;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		db_cfg.dir = i;
+		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
+		db_cfg.rm_db = &tbl_db[i];
+		rc = tf_rm_create_db(tfp, &db_cfg);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table DB creation failed\n",
+				    tf_dir_2_str(i));
+
+			return rc;
+		}
+	}
+
+	init = 1;
+
+	printf("Table Type - initialized\n");
+
+	return 0;
+}
+
+int
+tf_tbl_unbind(struct tf *tfp)
 {
 	int rc;
-	int id;
-	uint32_t index;
-	struct bitalloc *session_pool;
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
-
-	/* Lookup the pool using the table type of the element */
-	rc = tf_rm_lookup_tbl_type_pool(tfs,
-					parms->dir,
-					parms->type,
-					&session_pool);
-	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	int i;
+	struct tf_rm_free_db_parms fparms = { 0 };
+
+	TF_CHECK_PARMS1(tfp);
+
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		fparms.dir = i;
+		fparms.rm_db = tbl_db[i];
+		rc = tf_rm_free_db(tfp, &fparms);
+		if (rc)
+			return rc;
+
+		tbl_db[i] = NULL;
+	}
+
+	init = 0;
+
+	return 0;
+}
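
A device-layer caller would typically pair these as below; a hedged sketch
only, where the cfg array, type count and session resources are placeholders
taken from the device configuration:

/* Illustrative sketch: bind the table module, later unbind it on close. */
static int example_tbl_setup(struct tf *tfp,
			     struct tf_rm_element_cfg *tbl_cfg,
			     uint16_t num_types,
			     struct tf_session_resources *resources)
{
	struct tf_tbl_cfg_parms parms = { 0 };

	parms.num_elements = num_types;
	parms.cfg = tbl_cfg;
	parms.shadow_copy = 0;	/* shadow copy not requested */
	parms.resources = resources;

	return tf_tbl_bind(tfp, &parms);
}

/* ...and on session close: tf_tbl_unbind(tfp); */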
+
+int
+tf_tbl_alloc(struct tf *tfp __rte_unused,
+	     struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t idx;
+	struct tf_rm_allocate_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Allocate requested element */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = &idx;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed allocate, type:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type);
+		return rc;
+	}
+
+	*parms->idx = idx;
+
+	return 0;
+}
+
+int
+tf_tbl_free(struct tf *tfp __rte_unused,
+	    struct tf_tbl_free_parms *parms)
+{
+	int rc;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_free_parms fparms = { 0 };
+	int allocated = 0;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Check if element is in use */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
 	if (rc)
 		return rc;
 
-	index = parms->starting_idx;
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Entry already free, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    parms->idx);
+		return rc;
+	}
 
-	/*
-	 * Adjust the returned index/offset as there is no guarantee
-	 * that the start is 0 at time of RM allocation
-	 */
-	tf_rm_convert_index(tfs,
-			    parms->dir,
+	/* Free requested element */
+	fparms.rm_db = tbl_db[parms->dir];
+	fparms.db_index = parms->type;
+	fparms.index = parms->idx;
+	rc = tf_rm_free(&fparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Free failed, type:%d, index:%d\n",
+			    tf_dir_2_str(parms->dir),
 			    parms->type,
-			    TF_RM_CONVERT_RM_BASE,
-			    parms->starting_idx,
-			    &index);
+			    parms->idx);
+		return rc;
+	}
+
+	return 0;
+}
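
Caller-side, allocation and free of a table index mirror each other; a minimal
sketch, assuming the table module is already bound (helper name illustrative):

/* Illustrative sketch: allocate a table index, then free it again. */
static int example_tbl_idx_cycle(struct tf *tfp, enum tf_dir dir,
				 enum tf_tbl_type type)
{
	int rc;
	uint32_t idx;
	struct tf_tbl_alloc_parms aparms = { 0 };
	struct tf_tbl_free_parms fparms = { 0 };

	aparms.dir = dir;
	aparms.type = type;
	aparms.idx = &idx;
	rc = tf_tbl_alloc(tfp, &aparms);
	if (rc)
		return rc;

	fparms.dir = dir;
	fparms.type = type;
	fparms.idx = idx;
	return tf_tbl_free(tfp, &fparms);
}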
+
+int
+tf_tbl_alloc_search(struct tf *tfp __rte_unused,
+		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
+{
+	return 0;
+}
+
+int
+tf_tbl_set(struct tf *tfp,
+	   struct tf_tbl_set_parms *parms)
+{
+	int rc;
+	int allocated = 0;
+	uint16_t hcapi_type;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
 	/* Verify that the entry has been previously allocated */
-	id = ba_inuse(session_pool, index);
-	if (id != 1) {
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
+
+	if (!allocated) {
 		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, starting_idx:%d\n",
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
 		   parms->type,
-		   index);
+		   parms->idx);
 		return -EINVAL;
 	}
 
-	/* Get the entry */
-	rc = tf_msg_bulk_get_tbl_entry(tfp, parms);
+	/* Set the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
-	return rc;
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
 }
 
-#if (TF_SHADOW == 1)
-/**
- * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is incremente and
- * returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count incremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
-			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+int
+tf_tbl_get(struct tf *tfp,
+	   struct tf_tbl_get_parms *parms)
 {
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Alloc with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	int rc;
+	uint16_t hcapi_type;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	return -EOPNOTSUPP;
-}
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
 
-/**
- * Free Tbl entry from the Shadow DB. Shadow DB is searched for
- * the requested entry. If found the ref count is decremente and
- * new ref_count returned.
- *
- * [in] tfs
- *   Pointer to session
- * [in] parms
- *   Allocation parameters
- *
- * Return:
- *  0       - Success, entry found and ref count decremented
- *  -ENOENT - Failure, entry not found
- */
-static int
-tf_free_tbl_entry_shadow(struct tf_session *tfs,
-			 struct tf_free_tbl_entry_parms *parms)
-{
-	TFP_DRV_LOG(ERR,
-		    "%s, Entry Free with search not supported\n",
-		    tf_dir_2_str(parms->dir));
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
 
-	return -EOPNOTSUPP;
-}
-#endif /* TF_SHADOW */
+	/* Verify that the entry has been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.index = parms->idx;
+	aparms.allocated = &allocated;
+	rc = tf_rm_is_allocated(&aparms);
+	if (rc)
+		return rc;
 
+	if (!allocated) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	/* Retrieve the HCAPI type for the entry */
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  hcapi_type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
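
A set followed by a get of the same index is the typical access pattern; a
hedged sketch, where the data buffer and size are placeholders and the
tf_tbl_set_parms/tf_tbl_get_parms field names follow the usage above:

/* Illustrative sketch: write then read back one internal table entry. */
static int example_tbl_rw(struct tf *tfp, enum tf_dir dir,
			  enum tf_tbl_type type, uint32_t idx,
			  uint8_t *data, uint16_t size)
{
	int rc;
	struct tf_tbl_set_parms sparms = { 0 };
	struct tf_tbl_get_parms gparms = { 0 };

	sparms.dir = dir;
	sparms.type = type;
	sparms.idx = idx;
	sparms.data = data;
	sparms.data_sz_in_bytes = size;
	rc = tf_tbl_set(tfp, &sparms);
	if (rc)
		return rc;

	gparms.dir = dir;
	gparms.type = type;
	gparms.idx = idx;
	gparms.data = data;
	gparms.data_sz_in_bytes = size;
	return tf_tbl_get(tfp, &gparms);
}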
 
- /* API defined in tf_core.h */
 int
-tf_bulk_get_tbl_entry(struct tf *tfp,
-		 struct tf_bulk_get_tbl_entry_parms *parms)
+tf_tbl_bulk_get(struct tf *tfp,
+		struct tf_tbl_get_bulk_parms *parms)
 {
-	int rc = 0;
+	int rc;
+	int i;
+	uint16_t hcapi_type;
+	uint32_t idx;
+	int allocated = 0;
+	struct tf_rm_is_allocated_parms aparms = { 0 };
+	struct tf_rm_get_hcapi_parms hparms = { 0 };
 
-	TF_CHECK_PARMS_SESSION(tfp, parms);
+	TF_CHECK_PARMS2(tfp, parms);
 
-	if (parms->type == TF_TBL_TYPE_EXT) {
-		/* Not supported, yet */
+	if (!init) {
 		TFP_DRV_LOG(ERR,
-			    "%s, External table type not supported\n",
+			    "%s: No Table DBs created\n",
 			    tf_dir_2_str(parms->dir));
 
-		rc = -EOPNOTSUPP;
-	} else {
-		/* Internal table type processing */
-		rc = tf_bulk_get_tbl_entry_internal(tfp, parms);
+		return -EINVAL;
+	}
+	/* Verify that the entries have been previously allocated */
+	aparms.rm_db = tbl_db[parms->dir];
+	aparms.db_index = parms->type;
+	aparms.allocated = &allocated;
+	idx = parms->starting_idx;
+	for (i = 0; i < parms->num_entries; i++) {
+		aparms.index = idx;
+		rc = tf_rm_is_allocated(&aparms);
 		if (rc)
+			return rc;
+
+		if (!allocated) {
 			TFP_DRV_LOG(ERR,
-				    "%s, Bulk get failed, type:%d, rc:%s\n",
+				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
 				    parms->type,
-				    strerror(-rc));
+				    idx);
+			return -EINVAL;
+		}
+		idx++;
+	}
+
+	hparms.rm_db = tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_rm_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entries */
+	rc = tf_msg_bulk_get_tbl_entry(tfp,
+				       parms->dir,
+				       hcapi_type,
+				       parms->starting_idx,
+				       parms->num_entries,
+				       parms->entry_sz_in_bytes,
+				       parms->physical_mem_addr);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Bulk get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
 	}
 
 	return rc;
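
The bulk variant takes a starting index, an entry count and a physical address
for the destination buffer; a minimal sketch, with the field names taken from
the usage above and the physical_mem_addr type assumed to be a 64-bit DMA
address obtained from a DMA-capable allocation:

/* Illustrative sketch: bulk read num entries starting at start_idx. */
static int example_tbl_bulk(struct tf *tfp, enum tf_dir dir,
			    enum tf_tbl_type type, uint32_t start_idx,
			    uint16_t num, uint16_t entry_sz,
			    uint64_t dma_addr)
{
	struct tf_tbl_get_bulk_parms bparms = { 0 };

	bparms.dir = dir;
	bparms.type = type;
	bparms.starting_idx = start_idx;
	bparms.num_entries = num;
	bparms.entry_sz_in_bytes = entry_sz;
	bparms.physical_mem_addr = dma_addr;

	return tf_tbl_bulk_get(tfp, &bparms);
}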
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index b17557345..eb560ffa7 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -3,17 +3,21 @@
  * All rights reserved.
  */
 
-#ifndef _TF_TBL_H_
-#define _TF_TBL_H_
-
-#include <stdint.h>
+#ifndef TF_TBL_TYPE_H_
+#define TF_TBL_TYPE_H_
 
 #include "tf_core.h"
 #include "stack.h"
 
-struct tf_session;
+struct tf;
+
+/**
+ * The Table module provides processing of Internal TF table types.
+ */
 
-/** table scope control block content */
+/**
+ * Table scope control block content
+ */
 struct tf_em_caps {
 	uint32_t flags;
 	uint32_t supported;
@@ -35,66 +39,364 @@ struct tf_em_caps {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
-	struct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
-	struct tf_em_caps          em_caps[TF_DIR_MAX];
-	struct stack               ext_act_pool[TF_DIR_MAX];
-	uint32_t                  *ext_act_pool_mem[TF_DIR_MAX];
+	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps em_caps[TF_DIR_MAX];
+	struct stack ext_act_pool[TF_DIR_MAX];
+	uint32_t *ext_act_pool_mem[TF_DIR_MAX];
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_rm_element_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+	/**
+	 * Session resource allocations
+	 */
+	struct tf_session_resources *resources;
+};
+
+/**
+ * Table allocation parameters
+ */
+struct tf_tbl_alloc_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t *idx;
+};
+
+/**
+ * Table free parameters
+ */
+struct tf_tbl_free_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
 };
 
-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
- * Round-down other page sizes to the lower hardware page size supported.
- */
-#define TF_EM_PAGE_SIZE_4K 12
-#define TF_EM_PAGE_SIZE_8K 13
-#define TF_EM_PAGE_SIZE_64K 16
-#define TF_EM_PAGE_SIZE_256K 18
-#define TF_EM_PAGE_SIZE_1M 20
-#define TF_EM_PAGE_SIZE_2M 21
-#define TF_EM_PAGE_SIZE_4M 22
-#define TF_EM_PAGE_SIZE_1G 30
-
-/* Set page size */
-#define BNXT_TF_PAGE_SIZE TF_EM_PAGE_SIZE_2M
-
-#if (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4K)	/** 4K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_8K)	/** 8K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_64K)	/** 64K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_256K)	/** 256K */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1M)	/** 1M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_2M)	/** 2M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_4M)	/** 4M */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
-#elif (BNXT_TF_PAGE_SIZE == TF_EM_PAGE_SIZE_1G)	/** 1G */
-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G
-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
-#else
-#error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
-#endif
-
-#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
-#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
-
-/**
- * Initialize table pool structure to indicate
- * no table scope has been associated with the
- * external pool of indexes.
- *
- * [in] session
- */
-void
-tf_init_tbl_pool(struct tf_session *session);
-
-#endif /* _TF_TBL_H_ */
+/**
+ * Table allocate search parameters
+ */
+struct tf_tbl_alloc_search_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table set parameters
+ */
+struct tf_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get parameters
+ */
+struct tf_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * Table get bulk parameters
+ */
+struct tf_tbl_get_bulk_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Starting index to read from
+	 */
+	uint32_t starting_idx;
+	/**
+	 * [in] Number of sequential entries
+	 */
+	uint16_t num_entries;
+	/**
+	 * [in] Size of the single entry
+	 */
+	uint16_t entry_sz_in_bytes;
+	/**
+	 * [out] Host physical address, where the data
+	 * will be copied to by the firmware.
+	 * Use tfp_calloc() API and mem_pa
+	 * variable of the tfp_calloc_parms
+	 * structure for the physical address.
+	 */
+	uint64_t physical_mem_addr;
+};
+
+/**
+ * @page tbl Table
+ *
+ * @ref tf_tbl_bind
+ *
+ * @ref tf_tbl_unbind
+ *
+ * @ref tf_tbl_alloc
+ *
+ * @ref tf_tbl_free
+ *
+ * @ref tf_tbl_alloc_search
+ *
+ * @ref tf_tbl_set
+ *
+ * @ref tf_tbl_get
+ *
+ * @ref tf_tbl_bulk_get
+ */
+
+/**
+ * Initializes the Table module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bind(struct tf *tfp,
+		struct tf_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_unbind(struct tf *tfp);
+
+/**
+ * Allocates the requested table type from the internal RM DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table allocation parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Frees the requested table type and returns it to the DB. If the
+ * shadow DB is enabled, it is searched first and, if found, the element
+ * refcount is decremented. If the refcount goes to 0, the entry is
+ * returned to the table type DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table free parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Supported only if the Shadow DB is configured. Searches the Shadow DB
+ * for a matching element. If found, the refcount in the shadow DB is
+ * updated accordingly. If not found, a new element is allocated and
+ * installed into the shadow DB.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_alloc_search(struct tf *tfp,
+			struct tf_tbl_alloc_search_parms *parms);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_set(struct tf *tfp,
+	       struct tf_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_get(struct tf *tfp,
+	       struct tf_tbl_get_parms *parms);
+
+/**
+ * Retrieves a bulk block of elements by sending a firmware request to
+ * get the elements.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get bulk parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_bulk_get(struct tf *tfp,
+		    struct tf_tbl_get_bulk_parms *parms);
+
+#endif /* TF_TBL_TYPE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c
deleted file mode 100644
index 2f5af6060..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c
+++ /dev/null
@@ -1,342 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#include <rte_common.h>
-
-#include "tf_tbl_type.h"
-#include "tf_common.h"
-#include "tf_rm_new.h"
-#include "tf_util.h"
-#include "tf_msg.h"
-#include "tfp.h"
-
-struct tf;
-
-/**
- * Table DBs.
- */
-static void *tbl_db[TF_DIR_MAX];
-
-/**
- * Table Shadow DBs
- */
-/* static void *shadow_tbl_db[TF_DIR_MAX]; */
-
-/**
- * Init flag, set on bind and cleared on unbind
- */
-static uint8_t init;
-
-/**
- * Shadow init flag, set on bind and cleared on unbind
- */
-/* static uint8_t shadow_init; */
-
-int
-tf_tbl_bind(struct tf *tfp,
-	    struct tf_tbl_cfg_parms *parms)
-{
-	int rc;
-	int i;
-	struct tf_rm_create_db_parms db_cfg = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (init) {
-		TFP_DRV_LOG(ERR,
-			    "Table already initialized\n");
-		return -EINVAL;
-	}
-
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;
-	db_cfg.num_elements = parms->num_elements;
-	db_cfg.cfg = parms->cfg;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		db_cfg.dir = i;
-		db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
-		db_cfg.rm_db = &tbl_db[i];
-		rc = tf_rm_create_db(tfp, &db_cfg);
-		if (rc) {
-			TFP_DRV_LOG(ERR,
-				    "%s: Table DB creation failed\n",
-				    tf_dir_2_str(i));
-
-			return rc;
-		}
-	}
-
-	init = 1;
-
-	printf("Table Type - initialized\n");
-
-	return 0;
-}
-
-int
-tf_tbl_unbind(struct tf *tfp __rte_unused)
-{
-	int rc;
-	int i;
-	struct tf_rm_free_db_parms fparms = { 0 };
-
-	TF_CHECK_PARMS1(tfp);
-
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "No Table DBs created\n");
-		return -EINVAL;
-	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		fparms.dir = i;
-		fparms.rm_db = tbl_db[i];
-		rc = tf_rm_free_db(tfp, &fparms);
-		if (rc)
-			return rc;
-
-		tbl_db[i] = NULL;
-	}
-
-	init = 0;
-
-	return 0;
-}
-
-int
-tf_tbl_alloc(struct tf *tfp __rte_unused,
-	     struct tf_tbl_alloc_parms *parms)
-{
-	int rc;
-	uint32_t idx;
-	struct tf_rm_allocate_parms aparms = { 0 };
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Allocate requested element */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = &idx;
-	rc = tf_rm_allocate(&aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Failed allocate, type:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type);
-		return rc;
-	}
-
-	*parms->idx = idx;
-
-	return 0;
-}
-
-int
-tf_tbl_free(struct tf *tfp __rte_unused,
-	    struct tf_tbl_free_parms *parms)
-{
-	int rc;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_free_parms fparms = { 0 };
-	int allocated = 0;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Check if element is in use */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Entry already free, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	/* Free requested element */
-	fparms.rm_db = tbl_db[parms->dir];
-	fparms.db_index = parms->type;
-	fparms.index = parms->idx;
-	rc = tf_rm_free(&fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Free failed, type:%d, index:%d\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    parms->idx);
-		return rc;
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_alloc_search(struct tf *tfp __rte_unused,
-		    struct tf_tbl_alloc_search_parms *parms __rte_unused)
-{
-	return 0;
-}
-
-int
-tf_tbl_set(struct tf *tfp,
-	   struct tf_tbl_set_parms *parms)
-{
-	int rc;
-	int allocated = 0;
-	uint16_t hcapi_type;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	rc = tf_msg_set_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Set failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
-
-int
-tf_tbl_get(struct tf *tfp,
-	   struct tf_tbl_get_parms *parms)
-{
-	int rc;
-	uint16_t hcapi_type;
-	int allocated = 0;
-	struct tf_rm_is_allocated_parms aparms = { 0 };
-	struct tf_rm_get_hcapi_parms hparms = { 0 };
-
-	TF_CHECK_PARMS3(tfp, parms, parms->data);
-
-	if (!init) {
-		TFP_DRV_LOG(ERR,
-			    "%s: No Table DBs created\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Verify that the entry has been previously allocated */
-	aparms.rm_db = tbl_db[parms->dir];
-	aparms.db_index = parms->type;
-	aparms.index = parms->idx;
-	aparms.allocated = &allocated;
-	rc = tf_rm_is_allocated(&aparms);
-	if (rc)
-		return rc;
-
-	if (!allocated) {
-		TFP_DRV_LOG(ERR,
-		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
-		   tf_dir_2_str(parms->dir),
-		   parms->type,
-		   parms->idx);
-		return -EINVAL;
-	}
-
-	/* Set the entry */
-	hparms.rm_db = tbl_db[parms->dir];
-	hparms.db_index = parms->type;
-	hparms.hcapi_type = &hcapi_type;
-	rc = tf_rm_get_hcapi_type(&hparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Failed type lookup, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-		return rc;
-	}
-
-	/* Get the entry */
-	rc = tf_msg_get_tbl_entry(tfp,
-				  parms->dir,
-				  hcapi_type,
-				  parms->data_sz_in_bytes,
-				  parms->data,
-				  parms->idx);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s, Get failed, type:%d, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    parms->type,
-			    strerror(-rc));
-	}
-
-	return 0;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h
deleted file mode 100644
index 3474489a6..000000000
--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h
+++ /dev/null
@@ -1,318 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019-2020 Broadcom
- * All rights reserved.
- */
-
-#ifndef TF_TBL_TYPE_H_
-#define TF_TBL_TYPE_H_
-
-#include "tf_core.h"
-
-struct tf;
-
-/**
- * The Table module provides processing of Internal TF table types.
- */
-
-/**
- * Table configuration parameters
- */
-struct tf_tbl_cfg_parms {
-	/**
-	 * Number of table types in each of the configuration arrays
-	 */
-	uint16_t num_elements;
-	/**
-	 * Table Type element configuration array
-	 */
-	struct tf_rm_element_cfg *cfg;
-	/**
-	 * Shadow table type configuration array
-	 */
-	struct tf_shadow_tbl_cfg *shadow_cfg;
-	/**
-	 * Boolean controlling the request shadow copy.
-	 */
-	bool shadow_copy;
-	/**
-	 * Session resource allocations
-	 */
-	struct tf_session_resources *resources;
-};
-
-/**
- * Table allocation parameters
- */
-struct tf_tbl_alloc_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t *idx;
-};
-
-/**
- * Table free parameters
- */
-struct tf_tbl_free_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation type
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Index to free
-	 */
-	uint32_t idx;
-	/**
-	 * [out] Reference count after free, only valid if session has been
-	 * created with shadow_copy.
-	 */
-	uint16_t ref_cnt;
-};
-
-/**
- * Table allocate search parameters
- */
-struct tf_tbl_alloc_search_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of the allocation
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
-	 */
-	uint32_t tbl_scope_id;
-	/**
-	 * [in] Enable search for matching entry. If the table type is
-	 * internal the shadow copy will be searched before
-	 * alloc. Session must be configured with shadow copy enabled.
-	 */
-	uint8_t search_enable;
-	/**
-	 * [in] Result data to search for (if search_enable)
-	 */
-	uint8_t *result;
-	/**
-	 * [in] Result data size in bytes (if search_enable)
-	 */
-	uint16_t result_sz_in_bytes;
-	/**
-	 * [out] If search_enable, set if matching entry found
-	 */
-	uint8_t hit;
-	/**
-	 * [out] Current ref count after allocation (if search_enable)
-	 */
-	uint16_t ref_cnt;
-	/**
-	 * [out] Idx of allocated entry or found entry (if search_enable)
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table set parameters
- */
-struct tf_tbl_set_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to set
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [in] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [in] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to write to
-	 */
-	uint32_t idx;
-};
-
-/**
- * Table get parameters
- */
-struct tf_tbl_get_parms {
-	/**
-	 * [in] Receive or transmit direction
-	 */
-	enum tf_dir dir;
-	/**
-	 * [in] Type of object to get
-	 */
-	enum tf_tbl_type type;
-	/**
-	 * [out] Entry data
-	 */
-	uint8_t *data;
-	/**
-	 * [out] Entry size
-	 */
-	uint16_t data_sz_in_bytes;
-	/**
-	 * [in] Entry index to read
-	 */
-	uint32_t idx;
-};
-
-/**
- * @page tbl Table
- *
- * @ref tf_tbl_bind
- *
- * @ref tf_tbl_unbind
- *
- * @ref tf_tbl_alloc
- *
- * @ref tf_tbl_free
- *
- * @ref tf_tbl_alloc_search
- *
- * @ref tf_tbl_set
- *
- * @ref tf_tbl_get
- */
-
-/**
- * Initializes the Table module with the requested DBs. Must be
- * invoked as the first thing before any of the access functions.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table configuration parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_bind(struct tf *tfp,
-		struct tf_tbl_cfg_parms *parms);
-
-/**
- * Cleans up the private DBs and releases all the data.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_unbind(struct tf *tfp);
-
-/**
- * Allocates the requested table type from the internal RM DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table allocation parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc(struct tf *tfp,
-		 struct tf_tbl_alloc_parms *parms);
-
-/**
- * Free's the requested table type and returns it to the DB. If shadow
- * DB is enabled its searched first and if found the element refcount
- * is decremented. If refcount goes to 0 then its returned to the
- * table type DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table free parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_free(struct tf *tfp,
-		struct tf_tbl_free_parms *parms);
-
-/**
- * Supported if Shadow DB is configured. Searches the Shadow DB for
- * any matching element. If found the refcount in the shadow DB is
- * updated accordingly. If not found a new element is allocated and
- * installed into the shadow DB.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_alloc_search(struct tf *tfp,
-			struct tf_tbl_alloc_search_parms *parms);
-
-/**
- * Configures the requested element by sending a firmware request which
- * then installs it into the device internal structures.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_set(struct tf *tfp,
-	       struct tf_tbl_set_parms *parms);
-
-/**
- * Retrieves the requested element by sending a firmware request to get
- * the element.
- *
- * [in] tfp
- *   Pointer to TF handle, used for HCAPI communication
- *
- * [in] parms
- *   Pointer to Table get parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_get(struct tf *tfp,
-	       struct tf_tbl_get_parms *parms);
-
-#endif /* TF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index a1761ad56..fc047f8f8 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -9,7 +9,7 @@
 #include "tf_tcam.h"
 #include "tf_common.h"
 #include "tf_util.h"
-#include "tf_rm_new.h"
+#include "tf_rm.h"
 #include "tf_device.h"
 #include "tfp.h"
 #include "tf_session.h"
@@ -49,7 +49,7 @@ tf_tcam_bind(struct tf *tfp,
 
 	if (init) {
 		TFP_DRV_LOG(ERR,
-			    "TCAM already initialized\n");
+			    "TCAM DB already initialized\n");
 		return -EINVAL;
 	}
 
@@ -86,11 +86,12 @@ tf_tcam_unbind(struct tf *tfp)
 
 	TF_CHECK_PARMS1(tfp);
 
-	/* Bail if nothing has been initialized done silent as to
-	 * allow for creation cleanup.
-	 */
-	if (!init)
-		return -EINVAL;
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No TCAM DBs created\n");
+		return 0;
+	}
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 24/51] net/bnxt: update RM to support HCAPI only
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (22 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 23/51] net/bnxt: update table get to use new design Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 25/51] net/bnxt: remove table scope from session Ajit Khaparde
                           ` (28 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- For the EM module, only the EM records need to be allocated through
  HCAPI RM; storage control stays outside of the RM DB.
- Add TF_RM_ELEM_CFG_HCAPI_BA.
- Return an error from tf_tcam_bind when the number of reserved WC TCAM
  entries is odd.
- Remove em_pool from the session.
- Use the RM-provided start offset and size (see the sketch after this
  list).
- HCAPI returns an entry index instead of a row index for the WC TCAM.
- Move resource type conversion to the HWRM set/free TCAM functions.
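
For reviewers, a minimal stand-alone sketch (not driver code) of the new
pool arithmetic referenced above. TF_EM_BLOCK_SZ and the sample offsets
are assumptions chosen only to illustrate how the RM-provided start
offset and block size drive the indexes pushed onto the EM pool:

/* Illustration only -- mirrors the fill loop in tf_create_em_pool():
 * the pool now holds absolute record offsets, stepped by the EM entry
 * block size, starting at the RM-provided offset.
 */
#include <stdint.h>
#include <stdio.h>

#define TF_EM_BLOCK_SZ 4	/* stand-in for TF_SESSION_EM_ENTRY_SIZE */

static void fill_em_pool(uint32_t start, uint32_t num_entries)
{
	uint32_t i;
	uint32_t j = start + num_entries - TF_EM_BLOCK_SZ;

	/* Push the highest block first so the lowest offset pops first */
	for (i = 0; i < num_entries / TF_EM_BLOCK_SZ; i++) {
		printf("push %u\n", j);
		j -= TF_EM_BLOCK_SZ;
	}
}

int main(void)
{
	/* e.g. RM reserved 16 records starting at offset 64:
	 * prints push 76, 72, 68, 64
	 */
	fill_em_pool(64, 16);
	return 0;
}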

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_device_p4.c   |   2 +
 drivers/net/bnxt/tf_core/tf_device_p4.h   |  54 ++++-----
 drivers/net/bnxt/tf_core/tf_em_internal.c | 131 ++++++++++++++--------
 drivers/net/bnxt/tf_core/tf_msg.c         |   6 +-
 drivers/net/bnxt/tf_core/tf_rm.c          |  81 ++++++-------
 drivers/net/bnxt/tf_core/tf_rm.h          |  14 ++-
 drivers/net/bnxt/tf_core/tf_session.h     |   5 -
 drivers/net/bnxt/tf_core/tf_tcam.c        |  21 ++++
 8 files changed, 190 insertions(+), 124 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index e3526672f..1eaf18212 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -68,6 +68,8 @@ tf_dev_p4_get_tcam_slice_info(struct tf *tfp __rte_unused,
 		*num_slices_per_row = CFA_P4_WC_TCAM_SLICES_PER_ROW;
 		if (key_sz > *num_slices_per_row * CFA_P4_WC_TCAM_SLICE_SIZE)
 			return -ENOTSUP;
+
+		*num_slices_per_row = 1;
 	} else { /* for other type of tcam */
 		*num_slices_per_row = 1;
 	}
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 473e4eae5..8fae18012 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -12,19 +12,19 @@
 #include "tf_rm.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_FUNC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_EM_PROF_ID },
 	/* CFA_RESOURCE_TYPE_P4_L2_FUNC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID }
 };
 
 struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_WC_TCAM },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_PROF_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_WC_TCAM },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_TCAM },
 	/* CFA_RESOURCE_TYPE_P4_CT_RULE_TCAM */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_VEB_TCAM */
@@ -32,26 +32,26 @@ struct tf_rm_element_cfg tf_tcam_p4[TF_TCAM_TBL_TYPE_MAX] = {
 };
 
 struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MCG },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_FULL_ACTION },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MCG },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_8B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_16B },
 	/* CFA_RESOURCE_TYPE_P4_ENCAP_32B */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER_PROF },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_METER },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_MIRROR },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_ENCAP_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_COUNTER_64B },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_SPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
 	/* CFA_RESOURCE_TYPE_P4_UPAR */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 	/* CFA_RESOURCE_TYPE_P4_EPOC */
@@ -79,7 +79,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 struct tf_rm_element_cfg tf_em_ext_p4[TF_EM_TBL_TYPE_MAX] = {
 	/* CFA_RESOURCE_TYPE_P4_EM_REC */
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
-	{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
+	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_TBL_SCOPE },
 };
 
 struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
diff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c
index 1c514747d..3129fbe31 100644
--- a/drivers/net/bnxt/tf_core/tf_em_internal.c
+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c
@@ -23,20 +23,28 @@
  */
 static void *em_db[TF_DIR_MAX];
 
+#define TF_EM_DB_EM_REC 0
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
 static uint8_t init;
 
+
+/**
+ * EM Pool
+ */
+static struct stack em_pool[TF_DIR_MAX];
+
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  * [in] num_entries
  *   number of entries to write
+ * [in] start
+ *   starting offset
  *
  * Return:
  *  0       - Success, entry allocated - no search support
@@ -44,54 +52,66 @@ static uint8_t init;
  *          - Failure, entry not allocated, out of resources
  */
 static int
-tf_create_em_pool(struct tf_session *session,
-		  enum tf_dir dir,
-		  uint32_t num_entries)
+tf_create_em_pool(enum tf_dir dir,
+		  uint32_t num_entries,
+		  uint32_t start)
 {
 	struct tfp_calloc_parms parms;
 	uint32_t i, j;
 	int rc = 0;
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 
-	parms.nitems = num_entries;
+	/* Assumes that num_entries has been checked before we get here */
+	parms.nitems = num_entries / TF_SESSION_EM_ENTRY_SIZE;
 	parms.size = sizeof(uint32_t);
 	parms.alignment = 0;
 
 	rc = tfp_calloc(&parms);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool allocation failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool allocation failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Create empty stack
 	 */
-	rc = stack_init(num_entries, (uint32_t *)parms.mem_va, pool);
+	rc = stack_init(num_entries / TF_SESSION_EM_ENTRY_SIZE,
+			(uint32_t *)parms.mem_va,
+			pool);
 
 	if (rc) {
-		TFP_DRV_LOG(ERR, "EM pool stack init failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack init failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
 
 	/* Fill pool with indexes
 	 */
-	j = num_entries - 1;
+	j = start + num_entries - TF_SESSION_EM_ENTRY_SIZE;
 
-	for (i = 0; i < num_entries; i++) {
+	for (i = 0; i < (num_entries / TF_SESSION_EM_ENTRY_SIZE); i++) {
 		rc = stack_push(pool, j);
 		if (rc) {
-			TFP_DRV_LOG(ERR, "EM pool stack push failure %s\n",
+			TFP_DRV_LOG(ERR,
+				    "%s, EM pool stack push failure %s\n",
+				    tf_dir_2_str(dir),
 				    strerror(-rc));
 			goto cleanup;
 		}
-		j--;
+
+		j -= TF_SESSION_EM_ENTRY_SIZE;
 	}
 
 	if (!stack_is_full(pool)) {
 		rc = -EINVAL;
-		TFP_DRV_LOG(ERR, "EM pool stack failure %s\n",
+		TFP_DRV_LOG(ERR,
+			    "%s, EM pool stack failure %s\n",
+			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		goto cleanup;
 	}
@@ -105,18 +125,15 @@ tf_create_em_pool(struct tf_session *session,
 /**
  * Create EM Tbl pool of memory indexes.
  *
- * [in] session
- *   Pointer to session
  * [in] dir
  *   direction
  *
  * Return:
  */
 static void
-tf_free_em_pool(struct tf_session *session,
-		enum tf_dir dir)
+tf_free_em_pool(enum tf_dir dir)
 {
-	struct stack *pool = &session->em_pool[dir];
+	struct stack *pool = &em_pool[dir];
 	uint32_t *ptr;
 
 	ptr = stack_items(pool);
@@ -140,22 +157,19 @@ tf_em_insert_int_entry(struct tf *tfp,
 	uint16_t rptr_index = 0;
 	uint8_t rptr_entry = 0;
 	uint8_t num_of_entries = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 	uint32_t index;
 
 	rc = stack_pop(pool, &index);
 
 	if (rc) {
-		PMD_DRV_LOG
-		  (ERR,
-		   "dir:%d, EM entry index allocation failed\n",
-		   parms->dir);
+		PMD_DRV_LOG(ERR,
+			    "%s, EM entry index allocation failed\n",
+			    tf_dir_2_str(parms->dir));
 		return rc;
 	}
 
-	rptr_index = index * TF_SESSION_EM_ENTRY_SIZE;
+	rptr_index = index;
 	rc = tf_msg_insert_em_internal_entry(tfp,
 					     parms,
 					     &rptr_index,
@@ -166,8 +180,9 @@ tf_em_insert_int_entry(struct tf *tfp,
 
 	PMD_DRV_LOG
 		  (ERR,
-		   "Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
-		   index * TF_SESSION_EM_ENTRY_SIZE,
+		   "%s, Internal entry @ Index:%d rptr_index:0x%x rptr_entry:0x%x num_of_entries:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   index,
 		   rptr_index,
 		   rptr_entry,
 		   num_of_entries);
@@ -204,15 +219,13 @@ tf_em_delete_int_entry(struct tf *tfp,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	int rc = 0;
-	struct tf_session *session =
-		(struct tf_session *)(tfp->session->core_data);
-	struct stack *pool = &session->em_pool[parms->dir];
+	struct stack *pool = &em_pool[parms->dir];
 
 	rc = tf_msg_delete_em_entry(tfp, parms);
 
 	/* Return resource to pool */
 	if (rc == 0)
-		stack_push(pool, parms->index / TF_SESSION_EM_ENTRY_SIZE);
+		stack_push(pool, parms->index);
 
 	return rc;
 }
@@ -224,8 +237,9 @@ tf_em_int_bind(struct tf *tfp,
 	int rc;
 	int i;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
-	struct tf_session *session;
 	uint8_t db_exists = 0;
+	struct tf_rm_get_alloc_info_parms iparms;
+	struct tf_rm_alloc_info info;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -235,14 +249,6 @@ tf_em_int_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
-		tf_create_em_pool(session,
-				  i,
-				  TF_SESSION_EM_POOL_SIZE);
-	}
-
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_EM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -257,6 +263,18 @@ tf_em_int_bind(struct tf *tfp,
 		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)
 			continue;
 
+		if (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] %
+		    TF_SESSION_EM_ENTRY_SIZE != 0) {
+			rc = -ENOMEM;
+			TFP_DRV_LOG(ERR,
+				    "%s, EM Allocation must be in blocks of %d, failure %s\n",
+				    tf_dir_2_str(i),
+				    TF_SESSION_EM_ENTRY_SIZE,
+				    strerror(-rc));
+
+			return rc;
+		}
+
 		db_cfg.rm_db = &em_db[i];
 		rc = tf_rm_create_db(tfp, &db_cfg);
 		if (rc) {
@@ -272,6 +290,28 @@ tf_em_int_bind(struct tf *tfp,
 	if (db_exists)
 		init = 1;
 
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		iparms.rm_db = em_db[i];
+		iparms.db_index = TF_EM_DB_EM_REC;
+		iparms.info = &info;
+
+		rc = tf_rm_get_info(&iparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: EM DB get info failed\n",
+				    tf_dir_2_str(i));
+			return rc;
+		}
+
+		rc = tf_create_em_pool(i,
+				       iparms.info->entry.stride,
+				       iparms.info->entry.start);
+		/* Logging handled in tf_create_em_pool */
+		if (rc)
+			return rc;
+	}
+
+
 	return 0;
 }
 
@@ -281,7 +321,6 @@ tf_em_int_unbind(struct tf *tfp)
 	int rc;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
-	struct tf_session *session;
 
 	TF_CHECK_PARMS1(tfp);
 
@@ -292,10 +331,8 @@ tf_em_int_unbind(struct tf *tfp)
 		return 0;
 	}
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	for (i = 0; i < TF_DIR_MAX; i++)
-		tf_free_em_pool(session, i);
+		tf_free_em_pool(i);
 
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		fparms.dir = i;
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 02d8a4971..7fffb6baf 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -857,12 +857,12 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < size)
+	if (tfp_le_to_cpu_32(resp.size) != size)
 		return -EINVAL;
 
 	tfp_memcpy(data,
 		   &resp.data,
-		   resp.size);
+		   size);
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
@@ -919,7 +919,7 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 		return rc;
 
 	/* Verify that we got enough buffer to return the requested data */
-	if (resp.size < data_size)
+	if (tfp_le_to_cpu_32(resp.size) != data_size)
 		return -EINVAL;
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e0469b653..e7af9eb84 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -106,7 +106,8 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 	uint16_t cnt = 0;
 
 	for (i = 0; i < count; i++) {
-		if (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&
+		if ((cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		     cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) &&
 		    reservations[i] > 0)
 			cnt++;
 
@@ -467,7 +468,8 @@ tf_rm_create_db(struct tf *tfp,
 	/* Build the request */
 	for (i = 0, j = 0; i < parms->num_elements; i++) {
 		/* Skip any non HCAPI cfg elements */
-		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {
+		if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI ||
+		    parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 			/* Only perform reservation for entries that
 			 * has been requested
 			 */
@@ -529,7 +531,8 @@ tf_rm_create_db(struct tf *tfp,
 		/* Skip any non HCAPI types as we didn't include them
 		 * in the reservation request.
 		 */
-		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)
+		if (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI &&
+		    parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 			continue;
 
 		/* If the element didn't request an allocation no need
@@ -551,29 +554,32 @@ tf_rm_create_db(struct tf *tfp,
 			       resv[j].start,
 			       resv[j].stride);
 
-			/* Create pool */
-			pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
-				     sizeof(struct bitalloc));
-			/* Alloc request, alignment already set */
-			cparms.nitems = pool_size;
-			cparms.size = sizeof(struct bitalloc);
-			rc = tfp_calloc(&cparms);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool alloc failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
-			}
-			db[i].pool = (struct bitalloc *)cparms.mem_va;
-
-			rc = ba_init(db[i].pool, resv[j].stride);
-			if (rc) {
-				TFP_DRV_LOG(ERR,
-					    "%s: Pool init failed, type:%d\n",
-					    tf_dir_2_str(parms->dir),
-					    db[i].cfg_type);
-				goto fail;
+			/* Only allocate BA pool if so requested */
+			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
+				/* Create pool */
+				pool_size = (BITALLOC_SIZEOF(resv[j].stride) /
+					     sizeof(struct bitalloc));
+				/* Alloc request, alignment already set */
+				cparms.nitems = pool_size;
+				cparms.size = sizeof(struct bitalloc);
+				rc = tfp_calloc(&cparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool alloc failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
+				db[i].pool = (struct bitalloc *)cparms.mem_va;
+
+				rc = ba_init(db[i].pool, resv[j].stride);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+					     "%s: Pool init failed, type:%d\n",
+					     tf_dir_2_str(parms->dir),
+					     db[i].cfg_type);
+					goto fail;
+				}
 			}
 			j++;
 		} else {
@@ -682,6 +688,9 @@ tf_rm_free_db(struct tf *tfp,
 				    tf_device_module_type_2_str(rm_db->type));
 	}
 
+	/* No need to check the configuration type; even if there is no
+	 * BA pool, freeing a NULL pointer is harmless.
+	 */
 	for (i = 0; i < rm_db->num_entries; i++)
 		tfp_free((void *)rm_db->db[i].pool);
 
@@ -705,8 +714,7 @@ tf_rm_allocate(struct tf_rm_allocate_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -770,8 +778,7 @@ tf_rm_free(struct tf_rm_free_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -816,8 +823,7 @@ tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail out if the pool is not valid, should never happen */
@@ -857,9 +863,9 @@ tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	memcpy(parms->info,
@@ -880,9 +886,9 @@ tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)
 	rm_db = (struct tf_rm_new_db *)parms->rm_db;
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
-	/* Bail out if not controlled by RM */
+	/* Bail out if not controlled by HCAPI */
 	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	    cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;
@@ -903,8 +909,7 @@ tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)
 	cfg_type = rm_db->db[parms->db_index].cfg_type;
 
 	/* Bail out if not controlled by RM */
-	if (cfg_type != TF_RM_ELEM_CFG_HCAPI &&
-	    cfg_type != TF_RM_ELEM_CFG_PRIVATE)
+	if (cfg_type != TF_RM_ELEM_CFG_HCAPI_BA)
 		return -ENOTSUP;
 
 	/* Bail silently (no logging), if the pool is not valid there
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5cb68892a..f44fcca70 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -56,12 +56,18 @@ struct tf_rm_new_entry {
  * ULP layer that is not controlled by HCAPI within the Firmware.
  */
 enum tf_rm_elem_cfg_type {
-	/** No configuration */
+	/**
+	 * No configuration
+	 */
 	TF_RM_ELEM_CFG_NULL,
-	/** HCAPI 'controlled', uses a Pool for internal storage */
+	/** HCAPI 'controlled', no RM storage; the Device Module
+	 *  using the RM can choose to handle storage locally.
+	 */
 	TF_RM_ELEM_CFG_HCAPI,
-	/** Private thus not HCAPI 'controlled', creates a Pool for storage */
-	TF_RM_ELEM_CFG_PRIVATE,
+	/** HCAPI 'controlled', uses a Bit Allocator Pool for internal
+	 *  storage in the RM.
+	 */
+	TF_RM_ELEM_CFG_HCAPI_BA,
 	/**
 	 * Shared element thus it belongs to a shared FW Session and
 	 * is not controlled by the Host.
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index e4472ed7f..ebee4db8c 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -103,11 +103,6 @@ struct tf_session {
 
 	/** Table scope array */
 	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
-
-	/**
-	 * EM Pools
-	 */
-	struct stack em_pool[TF_DIR_MAX];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index fc047f8f8..d5bb4eec1 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -43,6 +43,7 @@ tf_tcam_bind(struct tf *tfp,
 {
 	int rc;
 	int i;
+	struct tf_tcam_resources *tcam_cnt;
 	struct tf_rm_create_db_parms db_cfg = { 0 };
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -53,6 +54,14 @@ tf_tcam_bind(struct tf *tfp,
 		return -EINVAL;
 	}
 
+	tcam_cnt = parms->resources->tcam_cnt;
+	if ((tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2) ||
+	    (tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] % 2)) {
+		TFP_DRV_LOG(ERR,
+			    "Number of WC TCAM entries cannot be odd num\n");
+		return -EINVAL;
+	}
+
 	db_cfg.type = TF_DEVICE_MODULE_TYPE_TCAM;
 	db_cfg.num_elements = parms->num_elements;
 	db_cfg.cfg = parms->cfg;
@@ -168,6 +177,18 @@ tf_tcam_alloc(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM &&
+	    (parms->idx % 2) != 0) {
+		rc = tf_rm_allocate(&aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Failed tcam, type:%d\n",
+				    tf_dir_2_str(parms->dir),
+				    parms->type);
+			return rc;
+		}
+	}
+
 	parms->idx *= num_slice_per_row;
 
 	return 0;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 25/51] net/bnxt: remove table scope from session
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (23 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
                           ` (27 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Randy Schacher, Venkat Duvvuru

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Remove table scope data from the session; it now lives in EEM (see the
  sketch after this list).
- Complete the move of the table scope base and range to RM.
- Fix some error message strings.
- Fix the TCAM logging messages.
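
As a quick illustration of the caller-side effect (a sketch only, error
handling trimmed), the table scope control block lookup now needs
nothing from the session:

	struct tf_tbl_scope_cb *cb;

	/* Lookup by table scope ID alone; no session pointer required */
	cb = tbl_scope_cb_find(parms->tbl_scope_id);
	if (cb == NULL) {
		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
		return -EINVAL;
	}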

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c      |  2 +-
 drivers/net/bnxt/tf_core/tf_em.h        |  1 -
 drivers/net/bnxt/tf_core/tf_em_common.c | 16 +++++++----
 drivers/net/bnxt/tf_core/tf_em_common.h |  5 +---
 drivers/net/bnxt/tf_core/tf_em_host.c   | 38 ++++++++++---------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 12 +++-----
 drivers/net/bnxt/tf_core/tf_session.h   |  3 --
 drivers/net/bnxt/tf_core/tf_tcam.c      |  6 ++--
 8 files changed, 35 insertions(+), 48 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 8727900c4..6410843f6 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -573,7 +573,7 @@ tf_free_tcam_entry(struct tf *tfp,
 	rc = dev->ops->tf_dev_free_tcam(tfp, &fparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: TCAM allocation failed, rc:%s\n",
+			    "%s: TCAM free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    strerror(-rc));
 		return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 6bfcbd59e..617b07587 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -9,7 +9,6 @@
 #include "tf_core.h"
 #include "tf_session.h"
 
-#define TF_HACK_TBL_SCOPE_BASE 68
 #define SUPPORT_CFA_HW_P4 1
 #define SUPPORT_CFA_HW_P58 0
 #define SUPPORT_CFA_HW_P59 0
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index d0d80daeb..e31a63b46 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,6 +29,8 @@
  */
 void *eem_db[TF_DIR_MAX];
 
+#define TF_EEM_DB_TBL_SCOPE 1
+
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -39,10 +41,12 @@ static uint8_t init;
  */
 static enum tf_mem_type mem_type;
 
+/** Table scope array */
+struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
 /* API defined in tf_em.h */
 struct tf_tbl_scope_cb *
-tbl_scope_cb_find(struct tf_session *session,
-		  uint32_t tbl_scope_id)
+tbl_scope_cb_find(uint32_t tbl_scope_id)
 {
 	int i;
 	struct tf_rm_is_allocated_parms parms;
@@ -50,8 +54,8 @@ tbl_scope_cb_find(struct tf_session *session,
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	parms.index = tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
@@ -60,8 +64,8 @@ tbl_scope_cb_find(struct tf_session *session,
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
-		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
-			return &session->tbl_scopes[i];
+		if (tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &tbl_scopes[i];
 	}
 
 	return NULL;
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index 45699a7c3..bf01df9b8 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -14,8 +14,6 @@
  * Function to search for table scope control block structure
  * with specified table scope ID.
  *
- * [in] session
- *   Session to use for the search of the table scope control block
  * [in] tbl_scope_id
  *   Table scope ID to search for
  *
@@ -23,8 +21,7 @@
  *  Pointer to the found table scope control block struct or NULL if
  *   table scope control block struct not found
  */
-struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
-					  uint32_t tbl_scope_id);
+struct tf_tbl_scope_cb *tbl_scope_cb_find(uint32_t tbl_scope_id);
 
 /**
  * Create and initialize a stack to use for action entries
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 8be39afdd..543edb54a 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,6 +48,9 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
+#define TF_EEM_DB_TBL_SCOPE 1
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
 /**
  * Function to free a page table
@@ -934,14 +937,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_entry(struct tf *tfp,
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb =
-	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
-			  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -957,14 +958,12 @@ tf_em_insert_ext_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_entry(struct tf *tfp,
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
 		       struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb =
-	tbl_scope_cb_find((struct tf_session *)(tfp->session->core_data),
-			  parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -981,16 +980,13 @@ tf_em_ext_host_alloc(struct tf *tfp,
 	enum tf_dir dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 	struct hcapi_cfa_em_table *em_tables;
-	struct tf_session *session;
 	struct tf_free_tbl_scope_parms free_parms;
 	struct tf_rm_allocate_parms aparms = { 0 };
 	struct tf_rm_free_parms fparms = { 0 };
 
-	session = (struct tf_session *)tfp->session->core_data;
-
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -999,8 +995,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 		return rc;
 	}
 
-	parms->tbl_scope_id -= TF_HACK_TBL_SCOPE_BASE;
-	tbl_scope_cb = &session->tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
 	tbl_scope_cb->index = parms->tbl_scope_id;
 	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
 
@@ -1092,8 +1087,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	fparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
 }
@@ -1105,13 +1100,9 @@ tf_em_ext_host_free(struct tf *tfp,
 	int rc = 0;
 	enum tf_dir  dir;
 	struct tf_tbl_scope_cb *tbl_scope_cb;
-	struct tf_session *session;
 	struct tf_rm_free_parms aparms = { 0 };
 
-	session = (struct tf_session *)(tfp->session->core_data);
-
-	tbl_scope_cb = tbl_scope_cb_find(session,
-					 parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Table scope error\n");
@@ -1120,8 +1111,8 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = 1/**** TYPE TABLE-SCOPE??? ****/;
-	aparms.index = parms->tbl_scope_id + TF_HACK_TBL_SCOPE_BASE;
+	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
@@ -1142,5 +1133,6 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index ee18a0c70..6dd115470 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -63,14 +63,12 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
  *    -EINVAL - Error
  */
 int
-tf_em_insert_ext_sys_entry(struct tf *tfp,
+tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_insert_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
@@ -87,14 +85,12 @@ tf_em_insert_ext_sys_entry(struct tf *tfp,
  *    -EINVAL - Error
  */
 int
-tf_em_delete_ext_sys_entry(struct tf *tfp,
+tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
 			   struct tf_delete_em_entry_parms *parms)
 {
 	struct tf_tbl_scope_cb *tbl_scope_cb;
 
-	tbl_scope_cb = tbl_scope_cb_find
-		((struct tf_session *)(tfp->session->core_data),
-		parms->tbl_scope_id);
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
 	if (tbl_scope_cb == NULL) {
 		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
 		return -EINVAL;
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index ebee4db8c..a303fde51 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -100,9 +100,6 @@ struct tf_session {
 
 	/** Device handle */
 	struct tf_dev_info dev;
-
-	/** Table scope array */
-	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index d5bb4eec1..b67159a54 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -287,7 +287,8 @@ tf_tcam_free(struct tf *tfp,
 	rc = tf_msg_tcam_entry_free(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d free failed, rc:%s\n",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
@@ -382,7 +383,8 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	rc = tf_msg_tcam_entry_set(tfp, parms);
 	if (rc) {
 		/* Log error */
-		TFP_DRV_LOG(ERR, "%s: %s: Entry %d free failed with err %s",
+		TFP_DRV_LOG(ERR,
+			    "%s: %s: Entry %d set failed, rc:%s",
 			    tf_dir_2_str(parms->dir),
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v5 26/51] net/bnxt: add external action alloc and free
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (24 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 25/51] net/bnxt: remove table scope from session Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
                           ` (26 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Link external action alloc and free to new hcapi interface
- Add parameter range checking
- Fix issues with index allocation check
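
Not part of the patch, just context for review: a minimal usage sketch of the
new external path, assuming the tf_core.h parameter fields visible in the
hunks below (dir, type, tbl_scope_id, idx). With type set to TF_TBL_TYPE_EXT
the core now routes to the tf_dev_*_ext_tbl ops, which draw indices from the
table scope's per-direction external action pool.

	#include "tf_core.h"

	/* Sketch only: allocate and free one external action record. */
	static int example_ext_action_alloc_free(struct tf *tfp,
						 uint32_t tbl_scope_id)
	{
		struct tf_alloc_tbl_entry_parms aparms = { 0 };
		struct tf_free_tbl_entry_parms fparms = { 0 };
		int rc;

		aparms.dir = TF_DIR_RX;
		aparms.type = TF_TBL_TYPE_EXT;      /* routed to tf_dev_alloc_ext_tbl */
		aparms.tbl_scope_id = tbl_scope_id; /* required for external tables */
		rc = tf_alloc_tbl_entry(tfp, &aparms);
		if (rc)
			return rc;

		/* aparms.idx now holds the index popped from the scope's
		 * external action pool (stack_pop() in tf_tbl_ext_alloc()).
		 */
		fparms.dir = TF_DIR_RX;
		fparms.type = TF_TBL_TYPE_EXT;
		fparms.tbl_scope_id = tbl_scope_id;
		fparms.idx = aparms.idx;
		return tf_free_tbl_entry(tfp, &fparms);
	}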

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c       | 163 ++++++++++++++++-------
 drivers/net/bnxt/tf_core/tf_core.h       |   4 -
 drivers/net/bnxt/tf_core/tf_device.h     |  58 ++++++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   6 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   2 -
 drivers/net/bnxt/tf_core/tf_em.h         |  95 +++++++++++++
 drivers/net/bnxt/tf_core/tf_em_common.c  | 120 ++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  80 ++++++++++-
 drivers/net/bnxt/tf_core/tf_em_system.c  |   6 +
 drivers/net/bnxt/tf_core/tf_identifier.c |   4 +-
 drivers/net/bnxt/tf_core/tf_rm.h         |   5 +
 drivers/net/bnxt/tf_core/tf_tbl.c        |  10 +-
 drivers/net/bnxt/tf_core/tf_tbl.h        |  12 ++
 drivers/net/bnxt/tf_core/tf_tcam.c       |   8 +-
 drivers/net/bnxt/tf_core/tf_util.c       |   4 -
 15 files changed, 499 insertions(+), 78 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6410843f6..45accb0ab 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -617,25 +617,48 @@ tf_alloc_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_alloc_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	aparms.dir = parms->dir;
 	aparms.type = parms->type;
 	aparms.idx = &idx;
-	rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table allocation failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	aparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_alloc_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_ext_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: External table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+
+	} else {
+		if (dev->ops->tf_dev_alloc_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table allocation failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	parms->idx = idx;
@@ -677,25 +700,47 @@ tf_free_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_free_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	fparms.dir = parms->dir;
 	fparms.type = parms->type;
 	fparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table free failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	fparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_free_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_ext_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_free_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table free failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return 0;
@@ -735,27 +780,49 @@ tf_set_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	if (dev->ops->tf_dev_set_tbl == NULL) {
-		rc = -EOPNOTSUPP;
-		TFP_DRV_LOG(ERR,
-			    "%s: Operation not supported, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return -EOPNOTSUPP;
-	}
-
 	sparms.dir = parms->dir;
 	sparms.type = parms->type;
 	sparms.data = parms->data;
 	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
 	sparms.idx = parms->idx;
-	rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
-	if (rc) {
-		TFP_DRV_LOG(ERR,
-			    "%s: Table set failed, rc:%s\n",
-			    tf_dir_2_str(parms->dir),
-			    strerror(-rc));
-		return rc;
+	sparms.tbl_scope_id = parms->tbl_scope_id;
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		if (dev->ops->tf_dev_set_ext_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_ext_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
+	} else {
+		if (dev->ops->tf_dev_set_tbl == NULL) {
+			rc = -EOPNOTSUPP;
+			TFP_DRV_LOG(ERR,
+				    "%s: Operation not supported, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return -EOPNOTSUPP;
+		}
+
+		rc = dev->ops->tf_dev_set_tbl(tfp, &sparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s: Table set failed, rc:%s\n",
+				    tf_dir_2_str(parms->dir),
+				    strerror(-rc));
+			return rc;
+		}
 	}
 
 	return rc;
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index a7a7bd38a..e898f19a0 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -211,10 +211,6 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
 	/** Wh+/SR Action _Modify L4 Dest Port */
 	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
-	/** Action Modify IPv6 Source */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
-	/** Action Modify IPv6 Destination */
-	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
 	/** Meter Profiles */
 	TF_TBL_TYPE_METER_PROF,
 	/** Meter Instance */
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 93f3627d4..58b7a4ab2 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -216,6 +216,26 @@ struct tf_dev_ops {
 	int (*tf_dev_alloc_tbl)(struct tf *tfp,
 				struct tf_tbl_alloc_parms *parms);
 
+	/**
+	 * Allocation of an external table type element.
+	 *
+	 * This API allocates the specified table type element from a
+	 * device specific table type DB. The allocated element is
+	 * returned.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table allocation parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_alloc_ext_tbl)(struct tf *tfp,
+				    struct tf_tbl_alloc_parms *parms);
+
 	/**
 	 * Free of a table type element.
 	 *
@@ -235,6 +255,25 @@ struct tf_dev_ops {
 	int (*tf_dev_free_tbl)(struct tf *tfp,
 			       struct tf_tbl_free_parms *parms);
 
+	/**
+	 * Free of an external table type element.
+	 *
+	 * This API frees a previously allocated table type element from a
+	 * device specific table type DB.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table free parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_free_ext_tbl)(struct tf *tfp,
+				   struct tf_tbl_free_parms *parms);
+
 	/**
 	 * Searches for the specified table type element in a shadow DB.
 	 *
@@ -276,6 +315,25 @@ struct tf_dev_ops {
 	int (*tf_dev_set_tbl)(struct tf *tfp,
 			      struct tf_tbl_set_parms *parms);
 
+	/**
+	 * Sets the specified external table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_ext_tbl)(struct tf *tfp,
+				  struct tf_tbl_set_parms *parms);
+
 	/**
 	 * Retrieves the specified table type element.
 	 *
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 1eaf18212..9a3230787 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -85,10 +85,13 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_get_tcam_slice_info = tf_dev_p4_get_tcam_slice_info,
 	.tf_dev_alloc_ident = NULL,
 	.tf_dev_free_ident = NULL,
+	.tf_dev_alloc_ext_tbl = NULL,
 	.tf_dev_alloc_tbl = NULL,
+	.tf_dev_free_ext_tbl = NULL,
 	.tf_dev_free_tbl = NULL,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = NULL,
+	.tf_dev_set_ext_tbl = NULL,
 	.tf_dev_get_tbl = NULL,
 	.tf_dev_get_bulk_tbl = NULL,
 	.tf_dev_alloc_tcam = NULL,
@@ -113,9 +116,12 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_alloc_ident = tf_ident_alloc,
 	.tf_dev_free_ident = tf_ident_free,
 	.tf_dev_alloc_tbl = tf_tbl_alloc,
+	.tf_dev_alloc_ext_tbl = tf_tbl_ext_alloc,
 	.tf_dev_free_tbl = tf_tbl_free,
+	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 8fae18012..298e100f3 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -47,8 +47,6 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_DPORT },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV4 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV4 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_S_IPV6 },
-	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_NAT_D_IPV6 },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER_PROF },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_METER },
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_MIRROR },
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 617b07587..39a216341 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -456,4 +456,99 @@ int tf_em_ext_common_free(struct tf *tfp,
  */
 int tf_em_ext_common_alloc(struct tf *tfp,
 			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms);
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms);
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data by invoking the
+ * firmware.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_system_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
+
 #endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index e31a63b46..39a8412b3 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -29,8 +29,6 @@
  */
 void *eem_db[TF_DIR_MAX];
 
-#define TF_EEM_DB_TBL_SCOPE 1
-
 /**
  * Init flag, set on bind and cleared on unbind
  */
@@ -54,13 +52,13 @@ tbl_scope_cb_find(uint32_t tbl_scope_id)
 
 	/* Check that id is valid */
 	parms.rm_db = eem_db[TF_DIR_RX];
-	parms.db_index = TF_EEM_DB_TBL_SCOPE;
+	parms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	parms.index = tbl_scope_id;
 	parms.allocated = &allocated;
 
 	i = tf_rm_is_allocated(&parms);
 
-	if (i < 0 || !allocated)
+	if (i < 0 || allocated != TF_RM_ALLOCATED_ENTRY_IN_USE)
 		return NULL;
 
 	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
@@ -158,6 +156,111 @@ tf_destroy_tbl_pool_external(enum tf_dir dir,
 	tfp_free(ext_act_pool_mem);
 }
 
+/**
+ * Allocate External Tbl entry from the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+int
+tf_tbl_ext_alloc(struct tf *tfp,
+		 struct tf_tbl_alloc_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, Allocation failed, type:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type);
+		return rc;
+	}
+
+	*parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the scope pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+int
+tf_tbl_ext_free(struct tf *tfp,
+		struct tf_tbl_free_parms *parms)
+{
+	int rc = 0;
+	uint32_t index;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_act_pool[parms->dir];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		TFP_DRV_LOG(ERR,
+		   "%s, consistency error, stack full, type:%d, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
 uint32_t
 tf_em_get_key_mask(int num_entries)
 {
@@ -273,6 +376,15 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms)
+{
+	if (mem_type == TF_EEM_MEM_TYPE_HOST)
+		return tf_tbl_ext_host_set(tfp, parms);
+	else
+		return tf_tbl_ext_system_set(tfp, parms);
+}
+
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 543edb54a..d7c147a15 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -48,7 +48,6 @@
  * EM DBs.
  */
 extern void *eem_db[TF_DIR_MAX];
-#define TF_EEM_DB_TBL_SCOPE 1
 
 extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
 
@@ -986,7 +985,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 	/* Get Table Scope control block from the session pool */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = (uint32_t *)&parms->tbl_scope_id;
 	rc = tf_rm_allocate(&aparms);
 	if (rc) {
@@ -1087,7 +1086,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 cleanup:
 	/* Free Table control block */
 	fparms.rm_db = eem_db[TF_DIR_RX];
-	fparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	fparms.index = parms->tbl_scope_id;
 	tf_rm_free(&fparms);
 	return -EINVAL;
@@ -1111,7 +1110,7 @@ tf_em_ext_host_free(struct tf *tfp,
 
 	/* Free Table control block */
 	aparms.rm_db = eem_db[TF_DIR_RX];
-	aparms.db_index = TF_EEM_DB_TBL_SCOPE;
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
 	aparms.index = parms->tbl_scope_id;
 	rc = tf_rm_free(&aparms);
 	if (rc) {
@@ -1133,6 +1132,77 @@ tf_em_ext_host_free(struct tf *tfp,
 		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
 	}
 
-	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = -1;
+	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
+	return rc;
+}
+
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_host_set(struct tf *tfp,
+			struct tf_tbl_set_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 6dd115470..10768df03 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -112,3 +112,9 @@ tf_em_ext_system_free(struct tf *tfp __rte_unused,
 {
 	return 0;
 }
+
+int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
+			  struct tf_tbl_set_parms *parms __rte_unused)
+{
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 211371081..219839272 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -159,13 +159,13 @@ tf_ident_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->id);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index f44fcca70..fd044801f 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -12,6 +12,11 @@
 
 struct tf;
 
+/** RM return codes */
+#define TF_RM_ALLOCATED_ENTRY_FREE        0
+#define TF_RM_ALLOCATED_ENTRY_IN_USE      1
+#define TF_RM_ALLOCATED_NO_ENTRY_FOUND   -1
+
 /**
  * The Resource Manager (RM) module provides basic DB handling for
  * internal resources. These resources exists within the actual device
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 05e866dc6..3a3277329 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -172,13 +172,13 @@ tf_tbl_free(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -233,7 +233,7 @@ tf_tbl_set(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -301,7 +301,7 @@ tf_tbl_get(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 		   "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 		   tf_dir_2_str(parms->dir),
@@ -374,7 +374,7 @@ tf_tbl_bulk_get(struct tf *tfp,
 		if (rc)
 			return rc;
 
-		if (!allocated) {
+		if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 			TFP_DRV_LOG(ERR,
 				    "%s, Invalid or not allocated index, type:%d, idx:%d\n",
 				    tf_dir_2_str(parms->dir),
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index eb560ffa7..2a10b47ce 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -83,6 +83,10 @@ struct tf_tbl_alloc_parms {
 	 * [in] Type of the allocation
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [out] Idx of allocated entry or found entry (if search_enable)
 	 */
@@ -101,6 +105,10 @@ struct tf_tbl_free_parms {
 	 * [in] Type of the allocation type
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [in] Index to free
 	 */
@@ -168,6 +176,10 @@ struct tf_tbl_set_parms {
 	 * [in] Type of object to set
 	 */
 	enum tf_tbl_type type;
+	/**
+	 * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+	 */
+	uint32_t tbl_scope_id;
 	/**
 	 * [in] Entry data
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b67159a54..b1092cd9d 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -252,13 +252,13 @@ tf_tcam_free(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry already free, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Free requested element */
@@ -362,13 +362,13 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	if (rc)
 		return rc;
 
-	if (!allocated) {
+	if (allocated != TF_RM_ALLOCATED_ENTRY_IN_USE) {
 		TFP_DRV_LOG(ERR,
 			    "%s: Entry is not allocated, type:%d, index:%d\n",
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    parms->idx);
-		return rc;
+		return -EINVAL;
 	}
 
 	/* Convert TF type to HCAPI RM type */
diff --git a/drivers/net/bnxt/tf_core/tf_util.c b/drivers/net/bnxt/tf_core/tf_util.c
index 5472a9aac..85f6e25f4 100644
--- a/drivers/net/bnxt/tf_core/tf_util.c
+++ b/drivers/net/bnxt/tf_core/tf_util.c
@@ -92,10 +92,6 @@ tf_tbl_type_2_str(enum tf_tbl_type tbl_type)
 		return "NAT IPv4 Source";
 	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
 		return "NAT IPv4 Destination";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
-		return "NAT IPv6 Source";
-	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
-		return "NAT IPv6 Destination";
 	case TF_TBL_TYPE_METER_PROF:
 		return "Meter Profile";
 	case TF_TBL_TYPE_METER_INST:
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v5 27/51] net/bnxt: align CFA resources with RM
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (25 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
                           ` (25 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher, Venkat Duvvuru

From: Randy Schacher <stuart.schacher@broadcom.com>

- Align HCAPI resource types with the Resource Manager numbering
- Clean up unnecessary debug messages
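
As a side note (not part of the patch): the cleanup keeps the RM dump
helpers in the tree but gates them behind local compile-time defines, so
they cost nothing in normal builds. A minimal sketch of the pattern used
in tf_msg.c and tf_rm.c:

	#include <stdio.h>

	/* Logging define, as in tf_msg.c; flip to 1 locally to re-enable dumps */
	#define TF_RM_MSG_DEBUG  0

	static void example_dump_resc_entry(int type, int start, int stride)
	{
	#if (TF_RM_MSG_DEBUG == 1)
		printf("type: %d(0x%x) start:%d stride:%d\n",
		       type, type, start, stride);
	#else
		/* compiled out in normal builds */
		(void)type;
		(void)start;
		(void)stride;
	#endif
	}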

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/cfa_resource_types.h | 250 +++++++++---------
 drivers/net/bnxt/tf_core/tf_identifier.c      |   3 +-
 drivers/net/bnxt/tf_core/tf_msg.c             |  37 ++-
 drivers/net/bnxt/tf_core/tf_rm.c              |  21 +-
 drivers/net/bnxt/tf_core/tf_tbl.c             |   3 +-
 drivers/net/bnxt/tf_core/tf_tcam.c            |  28 +-
 6 files changed, 197 insertions(+), 145 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/cfa_resource_types.h b/drivers/net/bnxt/tf_core/cfa_resource_types.h
index 6e79facec..6d6651fde 100644
--- a/drivers/net/bnxt/tf_core/cfa_resource_types.h
+++ b/drivers/net/bnxt/tf_core/cfa_resource_types.h
@@ -48,232 +48,246 @@
 #define CFA_RESOURCE_TYPE_P59_TBL_SCOPE       0xdUL
 /* L2 Func */
 #define CFA_RESOURCE_TYPE_P59_L2_FUNC         0xeUL
-/* EPOCH */
-#define CFA_RESOURCE_TYPE_P59_EPOCH           0xfUL
+/* EPOCH 0 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH0          0xfUL
+/* EPOCH 1 */
+#define CFA_RESOURCE_TYPE_P59_EPOCH1          0x10UL
 /* Metadata */
-#define CFA_RESOURCE_TYPE_P59_METADATA        0x10UL
+#define CFA_RESOURCE_TYPE_P59_METADATA        0x11UL
 /* Connection Tracking Rule TCAM */
-#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x11UL
+#define CFA_RESOURCE_TYPE_P59_CT_RULE_TCAM    0x12UL
 /* Range Profile */
-#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x12UL
+#define CFA_RESOURCE_TYPE_P59_RANGE_PROF      0x13UL
 /* Range */
-#define CFA_RESOURCE_TYPE_P59_RANGE           0x13UL
+#define CFA_RESOURCE_TYPE_P59_RANGE           0x14UL
 /* Link Aggrigation */
-#define CFA_RESOURCE_TYPE_P59_LAG             0x14UL
+#define CFA_RESOURCE_TYPE_P59_LAG             0x15UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x15UL
+#define CFA_RESOURCE_TYPE_P59_VEB_TCAM        0x16UL
 #define CFA_RESOURCE_TYPE_P59_LAST           CFA_RESOURCE_TYPE_P59_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P58_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P58_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P58_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P58_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P58_SP_MAC_IPV6         0x6UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_SPORT       0x7UL
+#define CFA_RESOURCE_TYPE_P58_NAT_SPORT           0x7UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P58_NAT_DPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P58_NAT_DPORT           0x8UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4      0x9UL
+#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV4          0x9UL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4      0xaUL
-/* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_S_IPV6      0xbUL
-/* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV6      0xcUL
+#define CFA_RESOURCE_TYPE_P58_NAT_D_IPV4          0xaUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_METER           0xdUL
+#define CFA_RESOURCE_TYPE_P58_METER               0xbUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P58_FLOW_STATE      0xeUL
+#define CFA_RESOURCE_TYPE_P58_FLOW_STATE          0xcUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P58_FULL_ACTION     0xfUL
+#define CFA_RESOURCE_TYPE_P58_FULL_ACTION         0xdUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION 0x10UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_0_ACTION     0xeUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P58_EXT_FORMAT_0_ACTION 0xfUL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_1_ACTION     0x10UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_2_ACTION     0x11UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_3_ACTION     0x12UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P58_FORMAT_4_ACTION     0x13UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_5_ACTION     0x14UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P58_FORMAT_6_ACTION     0x15UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM    0x14UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_TCAM        0x16UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP   0x15UL
+#define CFA_RESOURCE_TYPE_P58_L2_CTXT_REMAP       0x17UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P58_PROF_FUNC       0x16UL
+#define CFA_RESOURCE_TYPE_P58_PROF_FUNC           0x18UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P58_PROF_TCAM       0x17UL
+#define CFA_RESOURCE_TYPE_P58_PROF_TCAM           0x19UL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID      0x18UL
+#define CFA_RESOURCE_TYPE_P58_EM_PROF_ID          0x1aUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID 0x19UL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM_PROF_ID     0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P58_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P58_EM_REC              0x1cUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P58_WC_TCAM         0x1bUL
+#define CFA_RESOURCE_TYPE_P58_WC_TCAM             0x1dUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P58_METER_PROF      0x1cUL
+#define CFA_RESOURCE_TYPE_P58_METER_PROF          0x1eUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P58_MIRROR          0x1dUL
+#define CFA_RESOURCE_TYPE_P58_MIRROR              0x1fUL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P58_SP_TCAM         0x1eUL
+#define CFA_RESOURCE_TYPE_P58_SP_TCAM             0x20UL
 /* Exact Match Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_EM_FKB          0x1fUL
+#define CFA_RESOURCE_TYPE_P58_EM_FKB              0x21UL
 /* Wildcard Flexible Key Builder */
-#define CFA_RESOURCE_TYPE_P58_WC_FKB          0x20UL
+#define CFA_RESOURCE_TYPE_P58_WC_FKB              0x22UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P58_VEB_TCAM        0x21UL
-#define CFA_RESOURCE_TYPE_P58_LAST           CFA_RESOURCE_TYPE_P58_VEB_TCAM
+#define CFA_RESOURCE_TYPE_P58_VEB_TCAM            0x23UL
+#define CFA_RESOURCE_TYPE_P58_LAST               CFA_RESOURCE_TYPE_P58_VEB_TCAM
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P45_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P45_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P45_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P45_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P45_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P45_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P45_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P45_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P45_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P45_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P45_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P45_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P45_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P45_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P45_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P45_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P45_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P45_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P45_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P45_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P45_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P45_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P45_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P45_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P45_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P45_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P45_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P45_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P45_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P45_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P45_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P45_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P45_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P45_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P45_SP_TCAM             0x21UL
 /* VEB TCAM */
-#define CFA_RESOURCE_TYPE_P45_VEB_TCAM        0x20UL
+#define CFA_RESOURCE_TYPE_P45_VEB_TCAM            0x22UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE       0x21UL
-#define CFA_RESOURCE_TYPE_P45_LAST           CFA_RESOURCE_TYPE_P45_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P45_TBL_SCOPE           0x23UL
+#define CFA_RESOURCE_TYPE_P45_LAST               CFA_RESOURCE_TYPE_P45_TBL_SCOPE
 
 
 /* Multicast Group */
-#define CFA_RESOURCE_TYPE_P4_MCG             0x0UL
+#define CFA_RESOURCE_TYPE_P4_MCG                 0x0UL
 /* Encap 8 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_8B        0x1UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_8B            0x1UL
 /* Encap 16 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_16B       0x2UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_16B           0x2UL
 /* Encap 64 byte record */
-#define CFA_RESOURCE_TYPE_P4_ENCAP_64B       0x3UL
+#define CFA_RESOURCE_TYPE_P4_ENCAP_64B           0x3UL
 /* Source Property MAC */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC          0x4UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC              0x4UL
 /* Source Property MAC and IPv4 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4     0x5UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4         0x5UL
 /* Source Property MAC and IPv6 */
-#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6     0x6UL
+#define CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6         0x6UL
 /* 64B Counters */
-#define CFA_RESOURCE_TYPE_P4_COUNTER_64B     0x7UL
+#define CFA_RESOURCE_TYPE_P4_COUNTER_64B         0x7UL
 /* Network Address Translation Source Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_SPORT       0x8UL
+#define CFA_RESOURCE_TYPE_P4_NAT_SPORT           0x8UL
 /* Network Address Translation Destination Port */
-#define CFA_RESOURCE_TYPE_P4_NAT_DPORT       0x9UL
+#define CFA_RESOURCE_TYPE_P4_NAT_DPORT           0x9UL
 /* Network Address Translation Source IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4      0xaUL
+#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV4          0xaUL
 /* Network Address Translation Destination IPv4 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4      0xbUL
-/* Network Address Translation Source IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_S_IPV6      0xcUL
-/* Network Address Translation Destination IPv6 address */
-#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV6      0xdUL
+#define CFA_RESOURCE_TYPE_P4_NAT_D_IPV4          0xbUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_METER           0xeUL
+#define CFA_RESOURCE_TYPE_P4_METER               0xcUL
 /* Flow State */
-#define CFA_RESOURCE_TYPE_P4_FLOW_STATE      0xfUL
+#define CFA_RESOURCE_TYPE_P4_FLOW_STATE          0xdUL
 /* Full Action Records */
-#define CFA_RESOURCE_TYPE_P4_FULL_ACTION     0x10UL
+#define CFA_RESOURCE_TYPE_P4_FULL_ACTION         0xeUL
 /* Action Record Format 0 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION 0x11UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_0_ACTION     0xfUL
+/* Action Record Ext Format 0 */
+#define CFA_RESOURCE_TYPE_P4_EXT_FORMAT_0_ACTION 0x10UL
+/* Action Record Format 1 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_1_ACTION     0x11UL
 /* Action Record Format 2 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION 0x12UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_2_ACTION     0x12UL
 /* Action Record Format 3 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION 0x13UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_3_ACTION     0x13UL
 /* Action Record Format 4 */
-#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION 0x14UL
+#define CFA_RESOURCE_TYPE_P4_FORMAT_4_ACTION     0x14UL
+/* Action Record Format 5 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_5_ACTION     0x15UL
+/* Action Record Format 6 */
+#define CFA_RESOURCE_TYPE_P4_FORMAT_6_ACTION     0x16UL
 /* L2 Context TCAM */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM    0x15UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_TCAM        0x17UL
 /* L2 Context REMAP */
-#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP   0x16UL
+#define CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP       0x18UL
 /* Profile Func */
-#define CFA_RESOURCE_TYPE_P4_PROF_FUNC       0x17UL
+#define CFA_RESOURCE_TYPE_P4_PROF_FUNC           0x19UL
 /* Profile TCAM */
-#define CFA_RESOURCE_TYPE_P4_PROF_TCAM       0x18UL
+#define CFA_RESOURCE_TYPE_P4_PROF_TCAM           0x1aUL
 /* Exact Match Profile Id */
-#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID      0x19UL
+#define CFA_RESOURCE_TYPE_P4_EM_PROF_ID          0x1bUL
 /* Exact Match Record */
-#define CFA_RESOURCE_TYPE_P4_EM_REC          0x1aUL
+#define CFA_RESOURCE_TYPE_P4_EM_REC              0x1cUL
 /* Wildcard Profile Id */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID 0x1bUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM_PROF_ID     0x1dUL
 /* Wildcard TCAM */
-#define CFA_RESOURCE_TYPE_P4_WC_TCAM         0x1cUL
+#define CFA_RESOURCE_TYPE_P4_WC_TCAM             0x1eUL
 /* Meter profile */
-#define CFA_RESOURCE_TYPE_P4_METER_PROF      0x1dUL
+#define CFA_RESOURCE_TYPE_P4_METER_PROF          0x1fUL
 /* Meter */
-#define CFA_RESOURCE_TYPE_P4_MIRROR          0x1eUL
+#define CFA_RESOURCE_TYPE_P4_MIRROR              0x20UL
 /* Source Property TCAM */
-#define CFA_RESOURCE_TYPE_P4_SP_TCAM         0x1fUL
+#define CFA_RESOURCE_TYPE_P4_SP_TCAM             0x21UL
 /* Table Scope */
-#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE       0x20UL
-#define CFA_RESOURCE_TYPE_P4_LAST           CFA_RESOURCE_TYPE_P4_TBL_SCOPE
+#define CFA_RESOURCE_TYPE_P4_TBL_SCOPE           0x22UL
+#define CFA_RESOURCE_TYPE_P4_LAST               CFA_RESOURCE_TYPE_P4_TBL_SCOPE
 
 
 #endif /* _CFA_RESOURCE_TYPES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 219839272..2cc43b40f 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -59,7 +59,8 @@ tf_ident_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Identifier - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Identifier - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 7fffb6baf..659065de3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,6 +18,9 @@
 #include "hwrm_tf.h"
 #include "tf_em.h"
 
+/* Logging defines */
+#define TF_RM_MSG_DEBUG  0
+
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -215,7 +218,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -225,31 +228,39 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	data = (struct tf_rm_resc_req_entry *)qcaps_buf.va_addr;
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nQCAPS\n");
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	for (i = 0; i < size; i++) {
 		query[i].type = tfp_le_to_cpu_32(data[i].type);
 		query[i].min = tfp_le_to_cpu_16(data[i].min);
 		query[i].max = tfp_le_to_cpu_16(data[i].max);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("type: %d(0x%x) %d %d\n",
 		       query[i].type,
 		       query[i].type,
 		       query[i].min,
 		       query[i].max);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	}
 
 	*resv_strategy = resp.flags &
 	      HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RESV_STRATEGY_MASK;
 
+cleanup:
 	tf_msg_free_dma_buf(&qcaps_buf);
 
 	return rc;
@@ -293,8 +304,10 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	dma_size = size * sizeof(struct tf_rm_resc_entry);
 	rc = tf_msg_alloc_dma_buf(&resv_buf, dma_size);
-	if (rc)
+	if (rc) {
+		tf_msg_free_dma_buf(&req_buf);
 		return rc;
+	}
 
 	/* Populate the request */
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
@@ -320,7 +333,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc)
-		return rc;
+		goto cleanup;
 
 	/* Process the response
 	 * Should always get expected number of entries
@@ -330,11 +343,14 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-EINVAL));
-		return -EINVAL;
+		rc = -EINVAL;
+		goto cleanup;
 	}
 
+#if (TF_RM_MSG_DEBUG == 1)
 	printf("\nRESV\n");
 	printf("size: %d\n", tfp_le_to_cpu_32(resp.size));
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 
 	/* Post process the response */
 	resv_data = (struct tf_rm_resc_entry *)resv_buf.va_addr;
@@ -343,14 +359,17 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		resv[i].start = tfp_le_to_cpu_16(resv_data[i].start);
 		resv[i].stride = tfp_le_to_cpu_16(resv_data[i].stride);
 
+#if (TF_RM_MSG_DEBUG == 1)
 		printf("%d type: %d(0x%x) %d %d\n",
 		       i,
 		       resv[i].type,
 		       resv[i].type,
 		       resv[i].start,
 		       resv[i].stride);
+#endif /* (TF_RM_MSG_DEBUG == 1) */
 	}
 
+cleanup:
 	tf_msg_free_dma_buf(&req_buf);
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -412,8 +431,6 @@ tf_msg_session_resc_flush(struct tf *tfp,
 	parms.mailbox = TF_KONG_MB;
 
 	rc = tfp_send_msg_direct(tfp, &parms);
-	if (rc)
-		return rc;
 
 	tf_msg_free_dma_buf(&resv_buf);
 
@@ -434,7 +451,7 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
-	uint32_t flags;
+	uint16_t flags;
 
 	req.fw_session_id =
 		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
@@ -480,7 +497,7 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
-	uint32_t flags;
+	uint16_t flags;
 	struct tf_session *tfs =
 		(struct tf_session *)(tfp->session->core_data);
 
@@ -726,8 +743,6 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 
 	rc = tfp_send_msg_direct(tfp,
 				 &mparms);
-	if (rc)
-		goto cleanup;
 
 cleanup:
 	tf_msg_free_dma_buf(&buf);
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index e7af9eb84..30313e2ea 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -17,6 +17,9 @@
 #include "tfp.h"
 #include "tf_msg.h"
 
+/* Logging defines */
+#define TF_RM_DEBUG  0
+
 /**
  * Generic RM Element data type that an RM DB is build upon.
  */
@@ -120,16 +123,11 @@ tf_rm_count_hcapi_reservations(enum tf_dir dir,
 		    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&
 		    reservations[i] > 0) {
 			TFP_DRV_LOG(ERR,
-				"%s, %s, %s allocation not supported\n",
-				tf_device_module_type_2_str(type),
-				tf_dir_2_str(dir),
-				tf_device_module_type_subtype_2_str(type, i));
-			printf("%s, %s, %s allocation of %d not supported\n",
+				"%s, %s, %s allocation of %d not supported\n",
 				tf_device_module_type_2_str(type),
 				tf_dir_2_str(dir),
-			       tf_device_module_type_subtype_2_str(type, i),
-			       reservations[i]);
-
+				tf_device_module_type_subtype_2_str(type, i),
+				reservations[i]);
 		}
 	}
 
@@ -549,11 +547,6 @@ tf_rm_create_db(struct tf *tfp,
 			db[i].alloc.entry.start = resv[j].start;
 			db[i].alloc.entry.stride = resv[j].stride;
 
-			printf("Entry:%d Start:%d Stride:%d\n",
-			       i,
-			       resv[j].start,
-			       resv[j].stride);
-
 			/* Only allocate BA pool if so requested */
 			if (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI_BA) {
 				/* Create pool */
@@ -603,10 +596,12 @@ tf_rm_create_db(struct tf *tfp,
 	rm_db->type = parms->type;
 	*parms->rm_db = (void *)rm_db;
 
+#if (TF_RM_DEBUG == 1)
 	printf("%s: type:%d num_entries:%d\n",
 	       tf_dir_2_str(parms->dir),
 	       parms->type,
 	       i);
+#endif /* (TF_RM_DEBUG == 1) */
 
 	tfp_free((void *)req);
 	tfp_free((void *)resv);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 3a3277329..7d4daaf2d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -74,7 +74,8 @@ tf_tbl_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("Table Type - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "Table Type - initialized\n");
 
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index b1092cd9d..1c48b5363 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -81,7 +81,8 @@ tf_tcam_bind(struct tf *tfp,
 
 	init = 1;
 
-	printf("TCAM - initialized\n");
+	TFP_DRV_LOG(INFO,
+		    "TCAM - initialized\n");
 
 	return 0;
 }
@@ -275,6 +276,31 @@ tf_tcam_free(struct tf *tfp,
 		return rc;
 	}
 
+	if (parms->type == TF_TCAM_TBL_TYPE_WC_TCAM) {
+		int i;
+
+		for (i = -1; i < 3; i += 3) {
+			aparms.index += i;
+			rc = tf_rm_is_allocated(&aparms);
+			if (rc)
+				return rc;
+
+			if (allocated == TF_RM_ALLOCATED_ENTRY_IN_USE) {
+				/* Free requested element */
+				fparms.index = aparms.index;
+				rc = tf_rm_free(&fparms);
+				if (rc) {
+					TFP_DRV_LOG(ERR,
+						    "%s: Free failed, type:%d, index:%d\n",
+						    tf_dir_2_str(parms->dir),
+						    parms->type,
+						    fparms.index);
+					return rc;
+				}
+			}
+		}
+	}
+
 	/* Convert TF type to HCAPI RM type */
 	hparms.rm_db = tcam_db[parms->dir];
 	hparms.db_index = parms->type;
-- 
2.21.1 (Apple Git-122.3)



* [dpdk-dev] [PATCH v5 28/51] net/bnxt: implement IF tables set and get
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (26 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
                           ` (24 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Implement set/get for PROF_SPIF_CTXT, LKUP_PF_DFLT_ARP,
  PROF_PF_ERR_ARP with tunneled HWRM messages
- Add IF table for PROF_PARIF_DFLT_ARP
- Fix page size offset in the HCAPI code
- Fix Entry offset calculation
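
For context only: a hypothetical caller-side sketch of the new IF-table set
API. The tf_core.h additions appear only in the diffstat below, so the
struct, enum and field names here are assumptions, not verbatim code from
the patch.

	/* Program a default action record pointer for a PARIF (sketch). */
	struct tf_set_if_tbl_entry_parms sparms = { 0 };  /* name assumed */
	uint32_t act_rec_ptr = 0;                          /* hypothetical value */
	int rc;

	sparms.dir = TF_DIR_RX;
	sparms.type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR; /* assumed enum */
	sparms.idx = 0;                          /* PARIF index to program */
	sparms.data = (uint8_t *)&act_rec_ptr;
	sparms.data_sz_in_bytes = sizeof(act_rec_ptr);

	rc = tf_set_if_tbl_entry(tfp, &sparms);  /* tunneled HWRM underneath */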

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/cfa_p40_tbl.h     |  53 +++++
 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h  |  12 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c    |   8 +-
 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h    |  18 +-
 drivers/net/bnxt/meson.build             |   2 +-
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  63 +++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 116 +++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       | 104 ++++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  21 ++
 drivers/net/bnxt/tf_core/tf_device.h     |  39 ++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   5 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |  10 +
 drivers/net/bnxt/tf_core/tf_em_common.c  |   5 +-
 drivers/net/bnxt/tf_core/tf_em_host.c    |  12 +-
 drivers/net/bnxt/tf_core/tf_identifier.c |   3 +-
 drivers/net/bnxt/tf_core/tf_if_tbl.c     | 178 +++++++++++++++++
 drivers/net/bnxt/tf_core/tf_if_tbl.h     | 236 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 186 +++++++++++++++---
 drivers/net/bnxt/tf_core/tf_msg.h        |  30 +++
 drivers/net/bnxt/tf_core/tf_session.c    |  14 +-
 21 files changed, 1060 insertions(+), 57 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h

diff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
index c30e4f49c..3243b3f2b 100644
--- a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h
@@ -127,6 +127,11 @@ const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_remap_mem_layout[] = {
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_PROFILE_ID_NUM_BITS},
 	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_BITPOS,
 	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_TCAM_KEY_ID_NUM_BITS},
+	/* Fields below not generated through automation */
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_BYPASS_OPT_NUM_BITS},
+	{CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_BITPOS,
+	 CFA_P40_PROF_PROFILE_TCAM_REMAP_MEM_ACT_REC_PTR_NUM_BITS},
 };
 
 const struct hcapi_cfa_field cfa_p40_prof_profile_tcam_layout[] = {
@@ -247,4 +252,52 @@ const struct hcapi_cfa_field cfa_p40_eem_key_tbl_layout[] = {
 	 CFA_P40_EEM_KEY_TBL_AR_PTR_NUM_BITS},
 
 };
+
+const struct hcapi_cfa_field cfa_p40_mirror_tbl_layout[] = {
+	{CFA_P40_MIRROR_TBL_SP_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_SP_PTR_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_IGN_DROP_BITPOS,
+	 CFA_P40_MIRROR_TBL_IGN_DROP_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_COPY_BITPOS,
+	 CFA_P40_MIRROR_TBL_COPY_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_EN_BITPOS,
+	 CFA_P40_MIRROR_TBL_EN_NUM_BITS},
+
+	{CFA_P40_MIRROR_TBL_AR_PTR_BITPOS,
+	 CFA_P40_MIRROR_TBL_AR_PTR_NUM_BITS},
+};
+
+/* P45 Defines */
+
+const struct hcapi_cfa_field cfa_p45_prof_l2_ctxt_tcam_layout[] = {
+	{CFA_P45_PROF_L2_CTXT_TCAM_VALID_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_VALID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SPARIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_KEY_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_TUN_HDR_TYPE_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_L2_NUMTAGS_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC1_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC1_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_T_IVID_NUM_BITS},
+	{CFA_P45_PROF_L2_CTXT_TCAM_SVIF_BITPOS,
+	 CFA_P45_PROF_L2_CTXT_TCAM_SVIF_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_MAC0_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_MAC0_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_OVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_OVID_NUM_BITS},
+	{CFA_P40_PROF_L2_CTXT_TCAM_IVID_BITPOS,
+	 CFA_P40_PROF_L2_CTXT_TCAM_IVID_NUM_BITS},
+};
 #endif /* _CFA_P40_TBL_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
index ea8d99d01..53a284887 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
@@ -35,10 +35,6 @@
 
 #define CFA_GLOBAL_CFG_DATA_SZ (100)
 
-#if SUPPORT_CFA_HW_P4 && SUPPORT_CFA_HW_P58 && SUPPORT_CFA_HW_P59
-#define SUPPORT_CFA_HW_ALL (1)
-#endif
-
 #include "hcapi_cfa_p4.h"
 #define CFA_PROF_L2CTXT_TCAM_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_TCAM_MAX_FLD
 #define CFA_PROF_L2CTXT_REMAP_MAX_FIELD_CNT CFA_P40_PROF_L2_CTXT_RMP_DR_MAX_FLD
@@ -121,6 +117,8 @@ struct hcapi_cfa_layout {
 	const struct hcapi_cfa_field *field_array;
 	/** [out] number of HW field entries in the HW layout field array */
 	uint32_t array_sz;
+	/** [out] layout_id - layout id associated with the layout */
+	uint16_t layout_id;
 };
 
 /**
@@ -247,6 +245,8 @@ struct hcapi_cfa_key_tbl {
 	 *  applicable for newer chip
 	 */
 	uint8_t *base1;
+	/** [in] Page size for EEM tables */
+	uint32_t page_size;
 };
 
 /**
@@ -267,7 +267,7 @@ struct hcapi_cfa_key_obj {
 struct hcapi_cfa_key_data {
 	/** [in] For on-chip key table, it is the offset in unit of smallest
 	 *  key. For off-chip key table, it is the byte offset relative
-	 *  to the key record memory base.
+	 *  to the key record memory base and adjusted for page and entry size.
 	 */
 	uint32_t offset;
 	/** [in] HW key data buffer pointer */
@@ -668,5 +668,5 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 			struct hcapi_cfa_key_loc *key_loc);
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset);
+			      uint32_t page);
 #endif /* HCAPI_CFA_DEFS_H_ */
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
index 42b37da0f..a01bbdbbb 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
@@ -13,7 +13,6 @@
 #include "hcapi_cfa_defs.h"
 
 #define HCAPI_CFA_LKUP_SEED_MEM_SIZE 512
-#define TF_EM_PAGE_SIZE (1 << 21)
 uint32_t hcapi_cfa_lkup_lkup3_init_cfg;
 uint32_t hcapi_cfa_lkup_em_seed_mem[HCAPI_CFA_LKUP_SEED_MEM_SIZE];
 bool hcapi_cfa_lkup_init;
@@ -199,10 +198,9 @@ static uint32_t hcapi_cfa_lookup3_hash(uint8_t *in_key)
 
 
 uint64_t hcapi_get_table_page(struct hcapi_cfa_em_table *mem,
-			      uint32_t offset)
+			      uint32_t page)
 {
 	int level = 0;
-	int page = offset / TF_EM_PAGE_SIZE;
 	uint64_t addr;
 
 	if (mem == NULL)
@@ -362,7 +360,9 @@ int hcapi_cfa_key_hw_op(struct hcapi_cfa_hwop *op,
 	op->hw.base_addr =
 		hcapi_get_table_page((struct hcapi_cfa_em_table *)
 				     key_tbl->base0,
-				     key_obj->offset);
+				     key_obj->offset / key_tbl->page_size);
+	/* Offset is adjusted to be the offset into the page */
+	key_obj->offset = key_obj->offset % key_tbl->page_size;
 
 	if (op->hw.base_addr == 0)
 		return -1;
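
The net effect of the hunks above: callers now pass the raw byte offset
plus a page size, and hcapi_cfa_key_hw_op() derives the page index and
the in-page offset itself. A standalone sketch of that arithmetic, with
made-up numbers:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t page_size = 1 << 21;		/* TF_EM_PAGE_SIZE, 2 MB */
	uint32_t offset = 5 * page_size + 128;	/* raw byte offset (example) */
	uint32_t page = offset / page_size;	/* page table lookup index */
	uint32_t in_page = offset % page_size;	/* offset within that page */

	printf("page %u, in-page offset %u\n", page, in_page);
	return 0;
}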
diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
index 0661d6363..c6113707f 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
@@ -21,6 +21,10 @@ enum cfa_p4_tbl_id {
 	CFA_P4_TBL_WC_TCAM_REMAP,
 	CFA_P4_TBL_VEB_TCAM,
 	CFA_P4_TBL_SP_TCAM,
+	CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT,
+	CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR,
+	CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR,
+	CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR,
 	CFA_P4_TBL_MAX
 };
 
@@ -333,17 +337,29 @@ enum cfa_p4_action_sram_entry_type {
 	 */
 
 	/** SRAM Action Record */
-	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ACT,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FULL_ACTION,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_0_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_1_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_2_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_3_ACTION,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_FORMAT_4_ACTION,
+
 	/** SRAM Action Encap 8 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_8B,
 	/** SRAM Action Encap 16 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_16B,
 	/** SRAM Action Encap 64 Bytes */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_ENCAP_64B,
+
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_SRC,
+	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_PORT_DEST,
+
 	/** SRAM Action Modify IPv4 Source */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_SRC,
 	/** SRAM Action Modify IPv4 Destination */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_MODIFY_IPV4_DEST,
+
 	/** SRAM Action Source Properties SMAC */
 	CFA_P4_ACTION_SRAM_ENTRY_TYPE_SP_SMAC,
 	/** SRAM Action Source Properties SMAC IPv4 */
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 7f3ec6204..f25a9448d 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -43,7 +43,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_shadow_tcam.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
-	'tf_core/tf_rm.c',
+	'tf_core/tf_if_tbl.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 9ba60e1c2..1924bef02 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -25,6 +25,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
@@ -33,3 +34,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 26836e488..32f152314 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -16,7 +16,9 @@ typedef enum tf_subtype {
 	HWRM_TFT_REG_GET = 821,
 	HWRM_TFT_REG_SET = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
-	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_BULK_GET,
+	HWRM_TFT_IF_TBL_SET = 827,
+	HWRM_TFT_IF_TBL_GET = 828,
+	TF_SUBTYPE_LAST = HWRM_TFT_IF_TBL_GET,
 } tf_subtype_t;
 
 /* Request and Response compile time checking */
@@ -46,7 +48,17 @@ typedef enum tf_subtype {
 /* WC DMA Address Type */
 #define TF_DEV_DATA_TYPE_TF_WC_DMA_ADDR			0x30d0UL
 /* WC Entry */
-#define TF_DEV_DATA_TYPE_TF_WC_ENTRY			0x30d1UL
+#define TF_DEV_DATA_TYPE_TF_WC_ENTRY				0x30d1UL
+/* SPIF DFLT L2 CTXT Entry */
+#define TF_DEV_DATA_TYPE_SPIF_DFLT_L2_CTXT		  0x3131UL
+/* PARIF DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_DFLT_ACT_REC		0x3132UL
+/* PARIF ERR DFLT ACT REC PTR Entry */
+#define TF_DEV_DATA_TYPE_PARIF_ERR_DFLT_ACT_REC	 0x3133UL
+/* ILT Entry */
+#define TF_DEV_DATA_TYPE_ILT				0x3134UL
+/* VNIC SVIF entry */
+#define TF_DEV_DATA_TYPE_VNIC_SVIF			0x3135UL
 /* Action Data */
 #define TF_DEV_DATA_TYPE_TF_ACTION_DATA			0x3170UL
 #define TF_DEV_DATA_TYPE_LAST   TF_DEV_DATA_TYPE_TF_ACTION_DATA
@@ -56,6 +68,9 @@ typedef enum tf_subtype {
 
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
+struct tf_if_tbl_set_input;
+struct tf_if_tbl_get_input;
+struct tf_if_tbl_get_output;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
@@ -85,4 +100,48 @@ typedef struct tf_tbl_type_bulk_get_output {
 	uint16_t			 size;
 } tf_tbl_type_bulk_get_output_t, *ptf_tbl_type_bulk_get_output_t;
 
+/* Input params for if tbl set */
+typedef struct tf_if_tbl_set_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the operation applies to RX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the operation applies to TX */
+#define TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* index of table entry */
+	uint16_t			 idx;
+	/* size of the data written to the table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* data to write into table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_set_input_t, *ptf_if_tbl_set_input_t;
+
+/* Input params for if tbl get */
+typedef struct tf_if_tbl_get_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the operation applies to RX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX			  (0x0)
+	/* When set to 1, indicates the operation applies to TX */
+#define TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX			  (0x1)
+	/* if table type */
+	uint16_t			 tf_if_tbl_type;
+	/* size of the data read from the table entry */
+	uint32_t			 data_sz_in_bytes;
+	/* index of table entry */
+	uint16_t			 idx;
+} tf_if_tbl_get_input_t, *ptf_if_tbl_get_input_t;
+
+/* output params for if tbl get */
+typedef struct tf_if_tbl_get_output {
+	/* Value read from table entry */
+	uint32_t			 data[2];
+} tf_if_tbl_get_output_t, *ptf_if_tbl_get_output_t;
+
 #endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 45accb0ab..a980a2056 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -1039,3 +1039,119 @@ tf_free_tbl_scope(struct tf *tfp,
 
 	return rc;
 }
+
+int
+tf_set_if_tbl_entry(struct tf *tfp,
+		    struct tf_set_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_set_parms sparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_set_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	sparms.dir = parms->dir;
+	sparms.type = parms->type;
+	sparms.idx = parms->idx;
+	sparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	sparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_set_if_tbl(tfp, &sparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_get_if_tbl_entry(struct tf *tfp,
+		    struct tf_get_if_tbl_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_if_tbl_get_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (dev->ops->tf_dev_get_if_tbl == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.idx = parms->idx;
+	gparms.data_sz_in_bytes = parms->data_sz_in_bytes;
+	gparms.data = parms->data;
+
+	rc = dev->ops->tf_dev_get_if_tbl(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: If_tbl get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e898f19a0..e3d46bd45 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1556,4 +1556,108 @@ int tf_delete_em_entry(struct tf *tfp,
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
+/**
+ * @page if_tbl Interface Table Access
+ *
+ * @ref tf_set_if_tbl_entry
+ *
+ * @ref tf_get_if_tbl_entry
+ *
+ * @ref tf_restore_if_tbl_entry
+ */
+/**
+ * Enumeration of TruFlow interface table types.
+ */
+enum tf_if_tbl_type {
+	/** Default Profile L2 Context Entry */
+	TF_IF_TBL_TYPE_PROF_SPIF_DFLT_L2_CTXT,
+	/** Default Profile TCAM/Lookup Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	/** Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	/** Default Error Profile TCAM Miss Action Record Pointer Table */
+	TF_IF_TBL_TYPE_LKUP_PARIF_DFLT_ACT_REC_PTR,
+	/** SR2 Ingress lookup table */
+	TF_IF_TBL_TYPE_ILT,
+	/** SR2 VNIC/SVIF Table */
+	TF_IF_TBL_TYPE_VNIC_SVIF,
+	TF_IF_TBL_TYPE_MAX
+};
+
+/**
+ * tf_set_if_tbl_entry parameter definition
+ */
+struct tf_set_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Interface to write
+	 */
+	uint32_t idx;
+};
+
+/**
+ * set interface table entry
+ *
+ * Used to set an interface table entry. This API manages tables indexed
+ * by SVIF/SPIF/PARIF interfaces. In the current implementation only the
+ * value is set.
+ * Returns success or failure code.
+ */
+int tf_set_if_tbl_entry(struct tf *tfp,
+			struct tf_set_if_tbl_entry_parms *parms);
+
+/**
+ * tf_get_if_tbl_entry parameter definition
+ */
+struct tf_get_if_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of table to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * get interface table entry
+ *
+ * Used to retrieve an interface table entry.
+ *
+ * Reads the interface table entry value.
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_if_tbl_entry(struct tf *tfp,
+			struct tf_get_if_tbl_entry_parms *parms);
+
 #endif /* _TF_CORE_H_ */
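
A matching read-back sketch for the get side of the new API (again
illustrative only; the 8-byte buffer mirrors the data[2] payload carried
by the tunneled message, per the "buffer too small" note above):

#include <stdint.h>
#include "tf_core.h"

/* Hypothetical helper: read back an IF table entry on an open session. */
static int example_get_if_tbl(struct tf *tfp, enum tf_if_tbl_type type,
			      uint32_t idx, uint32_t data[2])
{
	struct tf_get_if_tbl_entry_parms gparms = { 0 };

	gparms.dir = TF_DIR_RX;
	gparms.type = type;
	gparms.idx = idx;
	gparms.data = data;
	gparms.data_sz_in_bytes = 2 * sizeof(uint32_t);

	return tf_get_if_tbl_entry(tfp, &gparms);
}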
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index 20b0c5948..a3073c826 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -44,6 +44,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tbl_cfg_parms tbl_cfg;
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
+	struct tf_if_tbl_cfg_parms if_tbl_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -114,6 +115,19 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * IF_TBL
+	 */
+	if_tbl_cfg.num_elements = TF_IF_TBL_TYPE_MAX;
+	if_tbl_cfg.cfg = tf_if_tbl_p4;
+	if_tbl_cfg.shadow_copy = shadow_copy;
+	rc = tf_if_tbl_bind(tfp, &if_tbl_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "IF Table initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -186,6 +200,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_if_tbl_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, IF Table Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 58b7a4ab2..5a0943ad7 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -10,6 +10,7 @@
 #include "tf_identifier.h"
 #include "tf_tbl.h"
 #include "tf_tcam.h"
+#include "tf_if_tbl.h"
 
 struct tf;
 struct tf_session;
@@ -567,6 +568,44 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_free_tbl_scope)(struct tf *tfp,
 				     struct tf_free_tbl_scope_parms *parms);
+
+	/**
+	 * Sets the specified interface table type element.
+	 *
+	 * This API sets the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to interface table set parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_set_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_set_parms *parms);
+
+	/**
+	 * Retrieves the specified interface table type element.
+	 *
+	 * This API retrieves the specified element data by invoking the
+	 * firmware.
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to table get parameters
+	 *
+	 * Returns
+	 *   - (0) if successful.
+	 *   - (-EINVAL) on failure.
+	 */
+	int (*tf_dev_get_if_tbl)(struct tf *tfp,
+				 struct tf_if_tbl_get_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 9a3230787..2dc34b853 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_em.h"
+#include "tf_if_tbl.h"
 
 /**
  * Device specific function that retrieves the MAX number of HCAPI
@@ -105,6 +106,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_delete_ext_em_entry = NULL,
 	.tf_dev_alloc_tbl_scope = NULL,
 	.tf_dev_free_tbl_scope = NULL,
+	.tf_dev_set_if_tbl = NULL,
+	.tf_dev_get_if_tbl = NULL,
 };
 
 /**
@@ -135,4 +138,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_delete_ext_em_entry = tf_em_delete_ext_entry,
 	.tf_dev_alloc_tbl_scope = tf_em_ext_common_alloc,
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
+	.tf_dev_set_if_tbl = tf_if_tbl_set,
+	.tf_dev_get_if_tbl = tf_if_tbl_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 298e100f3..3b03a7c4e 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -10,6 +10,7 @@
 
 #include "tf_core.h"
 #include "tf_rm.h"
+#include "tf_if_tbl.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -86,4 +87,13 @@ struct tf_rm_element_cfg tf_em_int_p4[TF_EM_TBL_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },
 };
 
+struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_SPIF_DFLT_L2CTXT },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_PROF_PARIF_ERR_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG, CFA_P4_TBL_LKUP_PARIF_DFLT_ACT_REC_PTR },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID },
+	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
+};
+
 #endif /* _TF_DEVICE_P4_H_ */
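
The two TF_IF_TBL_CFG_NULL rows mark the ILT and VNIC/SVIF types as
unsupported on this (Whitney+) device, so the IF table module rejects
them before any firmware message is built. A hedged sketch of that
gating, mirroring the conversion helper added in tf_if_tbl.c further
down (the function name here is illustrative):

#include <errno.h>
#include "tf_if_tbl.h"

static int example_check_supported(struct tf_if_tbl_cfg *cfg,
				   enum tf_if_tbl_type type,
				   uint16_t *hcapi_type)
{
	if (cfg[type].cfg_type != TF_IF_TBL_CFG)
		return -ENOTSUP;	/* e.g. TF_IF_TBL_TYPE_ILT on Wh+ */

	*hcapi_type = cfg[type].hcapi_type;	/* e.g. a CFA_P4_TBL_* id */
	return 0;
}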
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 39a8412b3..23a7fc9c2 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -337,11 +337,10 @@ tf_em_ext_common_bind(struct tf *tfp,
 		db_exists = 1;
 	}
 
-	if (db_exists) {
-		mem_type = parms->mem_type;
+	if (db_exists)
 		init = 1;
-	}
 
+	mem_type = parms->mem_type;
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index d7c147a15..2626a59fe 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -831,7 +831,8 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	op.opcode = HCAPI_CFA_HWOPS_ADD;
 	key_tbl.base0 = (uint8_t *)
 		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = (uint8_t *)&key_entry;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -847,8 +848,7 @@ tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 
 		key_tbl.base0 = (uint8_t *)
 		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset =
-			(index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 
 		rc = hcapi_cfa_key_hw_op(&op,
 					 &key_tbl,
@@ -914,7 +914,8 @@ tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
 	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
 							  TF_KEY0_TABLE :
 							  TF_KEY1_TABLE)];
-	key_obj.offset = (index * TF_EM_KEY_RECORD_SIZE) % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
 	key_obj.data = NULL;
 	key_obj.size = TF_EM_KEY_RECORD_SIZE;
 
@@ -1195,7 +1196,8 @@ int tf_tbl_ext_host_set(struct tf *tfp,
 	op.opcode = HCAPI_CFA_HWOPS_PUT;
 	key_tbl.base0 =
 		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_obj.offset = parms->idx % TF_EM_PAGE_SIZE;
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
 	key_obj.data = parms->data;
 	key_obj.size = parms->data_sz_in_bytes;
 
diff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c
index 2cc43b40f..90aeaa468 100644
--- a/drivers/net/bnxt/tf_core/tf_identifier.c
+++ b/drivers/net/bnxt/tf_core/tf_identifier.c
@@ -68,7 +68,7 @@ tf_ident_bind(struct tf *tfp,
 int
 tf_ident_unbind(struct tf *tfp)
 {
-	int rc;
+	int rc = 0;
 	int i;
 	struct tf_rm_free_db_parms fparms = { 0 };
 
@@ -89,7 +89,6 @@ tf_ident_unbind(struct tf *tfp)
 			TFP_DRV_LOG(ERR,
 				    "rm free failed on unbind\n");
 		}
-
 		ident_db[i] = NULL;
 	}
 
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.c b/drivers/net/bnxt/tf_core/tf_if_tbl.c
new file mode 100644
index 000000000..dc73ba2d0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.c
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_if_tbl.h"
+#include "tf_common.h"
+#include "tf_rm.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+
+/**
+ * IF Table DBs.
+ */
+static void *if_tbl_db[TF_DIR_MAX];
+
+/**
+ * IF Table Shadow DBs
+ */
+/* static void *shadow_if_tbl_db[TF_DIR_MAX]; */
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Shadow init flag, set on bind and cleared on unbind
+ */
+/* static uint8_t shadow_init; */
+
+/**
+ * Convert if_tbl_type to hwrm type.
+ *
+ * [in] if_tbl_type
+ *   Interface table type
+ *
+ * [out] hwrm_type
+ *   HWRM device data type
+ *
+ * Returns:
+ *    0          - Success
+ *   -ENOTSUP    - Type not supported
+ */
+static int
+tf_if_tbl_get_hcapi_type(struct tf_if_tbl_get_hcapi_parms *parms)
+{
+	struct tf_if_tbl_cfg *tbl_cfg;
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	tbl_cfg = (struct tf_if_tbl_cfg *)parms->tbl_db;
+	cfg_type = tbl_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_IF_TBL_CFG)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = tbl_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_if_tbl_bind(struct tf *tfp __rte_unused,
+	       struct tf_if_tbl_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "IF TBL DB already initialized\n");
+		return -EINVAL;
+	}
+
+	if_tbl_db[TF_DIR_RX] = parms->cfg;
+	if_tbl_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "Table Type - initialized\n");
+
+	return 0;
+}
+
+int
+tf_if_tbl_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Table DBs created\n");
+		return 0;
+	}
+
+	if_tbl_db[TF_DIR_RX] = NULL;
+	if_tbl_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_if_tbl_set(struct tf *tfp,
+	      struct tf_if_tbl_set_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	rc = tf_msg_set_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
+
+int
+tf_if_tbl_get(struct tf *tfp,
+	      struct tf_if_tbl_get_parms *parms)
+{
+	int rc;
+	struct tf_if_tbl_get_hcapi_parms hparms;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->data);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Table DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.tbl_db = if_tbl_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &parms->hcapi_type;
+	rc = tf_if_tbl_get_hcapi_type(&hparms);
+	if (rc)
+		return rc;
+
+	/* Get the entry */
+	rc = tf_msg_get_if_tbl_entry(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, If Tbl get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
new file mode 100644
index 000000000..54d4c37f5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_IF_TBL_TYPE_H_
+#define TF_IF_TBL_TYPE_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/*
+ * This is the constant used to define invalid CFA
+ * types across all devices.
+ */
+#define CFA_IF_TBL_TYPE_INVALID 65535
+
+struct tf;
+
+/**
+ * The IF Table module provides processing of Internal TF interface table types.
+ */
+
+/**
+ * IF table configuration enumeration.
+ */
+enum tf_if_tbl_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_IF_TBL_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_IF_TBL_CFG,
+};
+
+/**
+ * IF table configuration structure, used by the Device to configure
+ * how an individual TF type is configured in regard to the HCAPI type.
+ */
+struct tf_if_tbl_cfg {
+	/**
+	 * IF table config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_if_tbl_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_if_tbl_get_hcapi_parms {
+	/**
+	 * [in] IF Tbl DB Handle
+	 */
+	void *tbl_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Table configuration parameters
+ */
+struct tf_if_tbl_cfg_parms {
+	/**
+	 * Number of table types in each of the configuration arrays
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_if_tbl_cfg *cfg;
+	/**
+	 * Shadow table type configuration array
+	 */
+	struct tf_shadow_if_tbl_cfg *shadow_cfg;
+	/**
+	 * Boolean controlling the request shadow copy.
+	 */
+	bool shadow_copy;
+};
+
+/**
+ * IF Table set parameters
+ */
+struct tf_if_tbl_set_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [in] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/**
+ * IF Table get parameters
+ */
+struct tf_if_tbl_get_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_if_tbl_type type;
+	/**
+	 * [in] Type of HCAPI
+	 */
+	uint16_t hcapi_type;
+	/**
+	 * [out] Entry data
+	 */
+	uint32_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/**
+ * @page if tbl Table
+ *
+ * @ref tf_if_tbl_bind
+ *
+ * @ref tf_if_tbl_unbind
+ *
+ * @ref tf_if_tbl_set
+ *
+ * @ref tf_if_tbl_get
+ *
+ * @ref tf_tbl_restore
+ */
+/**
+ * Initializes the IF Table module with the requested DBs. Must be
+ * invoked before any of the access functions are used.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_bind(struct tf *tfp,
+		   struct tf_if_tbl_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_unbind(struct tf *tfp);
+
+/**
+ * Configures the requested element by sending a firmware request which
+ * then installs it into the device internal structures.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Interface Table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_set(struct tf *tfp,
+		  struct tf_if_tbl_set_parms *parms);
+
+/**
+ * Retrieves the requested element by sending a firmware request to get
+ * the element.
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to Table get parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_if_tbl_get(struct tf *tfp,
+		  struct tf_if_tbl_get_parms *parms);
+
+#endif /* TF_IF_TBL_TYPE_H */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 659065de3..6600a14c8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -125,12 +125,19 @@ tf_msg_session_close(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_close_input req = { 0 };
 	struct hwrm_tf_session_close_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_CLOSE;
 	parms.req_data = (uint32_t *)&req;
@@ -150,12 +157,19 @@ tf_msg_session_qcfg(struct tf *tfp)
 	int rc;
 	struct hwrm_tf_session_qcfg_input req = { 0 };
 	struct hwrm_tf_session_qcfg_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	parms.tf_type = HWRM_TF_SESSION_QCFG,
 	parms.req_data = (uint32_t *)&req;
@@ -448,13 +462,22 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct hwrm_tf_em_insert_input req = { 0 };
 	struct hwrm_tf_em_insert_output resp = { 0 };
-	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
 	struct tf_em_64b_entry *em_result =
 		(struct tf_em_64b_entry *)em_parms->em_record;
 	uint16_t flags;
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	tfp_memcpy(req.em_key,
 		   em_parms->key,
 		   ((em_parms->key_sz_in_bits + 7) / 8));
@@ -498,11 +521,19 @@ tf_msg_delete_em_entry(struct tf *tfp,
 	struct hwrm_tf_em_delete_input req = { 0 };
 	struct hwrm_tf_em_delete_output resp = { 0 };
 	uint16_t flags;
-	struct tf_session *tfs =
-		(struct tf_session *)(tfp->session->core_data);
+	uint8_t fw_session_id;
 
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_DELETE_INPUT_FLAGS_DIR_TX :
@@ -789,21 +820,19 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_set_input req = { 0 };
 	struct hwrm_tf_tbl_type_set_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.size = tfp_cpu_to_le_16(size);
@@ -840,21 +869,19 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 	struct hwrm_tf_tbl_type_get_input req = { 0 };
 	struct hwrm_tf_tbl_type_get_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
-	struct tf_session *tfs;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.index = tfp_cpu_to_le_32(index);
@@ -897,22 +924,20 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 	struct tfp_send_msg_parms parms = { 0 };
 	struct tf_tbl_type_bulk_get_input req = { 0 };
 	struct tf_tbl_type_bulk_get_output resp = { 0 };
-	struct tf_session *tfs;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
-	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
-			    "%s: Failed to lookup session, rc:%s\n",
+			    "%s: Unable to lookup FW id, rc:%s\n",
 			    tf_dir_2_str(dir),
 			    strerror(-rc));
 		return rc;
 	}
 
 	/* Populate the request */
-	req.fw_session_id =
-		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.flags = tfp_cpu_to_le_16(dir);
 	req.type = tfp_cpu_to_le_32(hcapi_type);
 	req.start_index = tfp_cpu_to_le_32(starting_idx);
@@ -939,3 +964,102 @@ tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+int
+tf_msg_get_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_get_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_get_input_t req = { 0 };
+	tf_if_tbl_get_output_t resp;
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_16(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_16(params->data_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_IF_TBL_GET,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	if (parms.tf_resp_code != 0)
+		return tfp_le_to_cpu_32(parms.tf_resp_code);
+
+	tfp_memcpy(&params->data[0], resp.data, req.data_sz_in_bytes);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_if_tbl_entry(struct tf *tfp,
+			struct tf_if_tbl_set_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_if_tbl_set_input_t req = { 0 };
+	uint32_t flags = 0;
+	struct tf_session *tfs;
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = flags;
+	req.tf_if_tbl_type = params->hcapi_type;
+	req.idx = tfp_cpu_to_le_32(params->idx);
+	req.data_sz_in_bytes = tfp_cpu_to_le_32(params->data_sz_in_bytes);
+	tfp_memcpy(&req.data[0], params->data, params->data_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_IF_TBL_SET,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 7432873d7..37f291016 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -428,4 +428,34 @@ int tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			      uint16_t entry_sz_in_bytes,
 			      uint64_t physical_mem_addr);
 
+/**
+ * Sends a set message for an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table set parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_set_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_set_parms *params);
+
+/**
+ * Sends a get message for an IF Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to IF table get parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_get_if_tbl_entry(struct tf *tfp,
+			    struct tf_if_tbl_get_parms *params);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index b08d06306..529ea5083 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -70,14 +70,24 @@ tf_session_open_session(struct tf *tfp,
 		goto cleanup;
 	}
 	tfp->session->core_data = cparms.mem_va;
+	session_id = &parms->open_cfg->session_id;
+
+	/* Update Session Info, which is what is visible to the caller */
+	tfp->session->ver.major = 0;
+	tfp->session->ver.minor = 0;
+	tfp->session->ver.update = 0;
 
-	/* Initialize Session and Device */
+	tfp->session->session_id.internal.domain = session_id->internal.domain;
+	tfp->session->session_id.internal.bus = session_id->internal.bus;
+	tfp->session->session_id.internal.device = session_id->internal.device;
+	tfp->session->session_id.internal.fw_session_id = fw_session_id;
+
+	/* Initialize Session and Device, which is private */
 	session = (struct tf_session *)tfp->session->core_data;
 	session->ver.major = 0;
 	session->ver.minor = 0;
 	session->ver.update = 0;
 
-	session_id = &parms->open_cfg->session_id;
 	session->session_id.internal.domain = session_id->internal.domain;
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 29/51] net/bnxt: add TF register and unregister
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (27 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
                           ` (23 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru, Randy Schacher

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TF register/unregister support. The session now keeps a list of
  session clients to track the attached ctrl channels/functions (a
  short conceptual sketch follows below).
- Add supporting code to the tfp layer
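
As an illustration of the session vs. session-client split (conceptual
sketch only, not part of the diff; error handling is omitted and the
ctrl channel names are placeholders):

#include <string.h>
#include "tf_core.h"

static void example_session_clients(struct tf *tfp)
{
	struct tf_open_session_parms pf = { 0 };
	struct tf_open_session_parms rep = { 0 };

	strcpy(pf.ctrl_chan_name, "0000:02:00.0");
	tf_open_session(tfp, &pf);	/* creates the session + client #1 */

	strcpy(rep.ctrl_chan_name, "0000:02:00.1");
	tf_open_session(tfp, &rep);	/* session exists: registers client #2 */

	tf_close_session(tfp);		/* drops one session client */
	tf_close_session(tfp);		/* last client: session is torn down */
}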

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_core/Makefile     |   1 +
 drivers/net/bnxt/tf_core/ll.c         |  52 +++
 drivers/net/bnxt/tf_core/ll.h         |  46 +++
 drivers/net/bnxt/tf_core/tf_core.c    |  26 +-
 drivers/net/bnxt/tf_core/tf_core.h    | 105 +++--
 drivers/net/bnxt/tf_core/tf_msg.c     |  84 +++-
 drivers/net/bnxt/tf_core/tf_msg.h     |  42 +-
 drivers/net/bnxt/tf_core/tf_rm.c      |   2 +-
 drivers/net/bnxt/tf_core/tf_session.c | 569 ++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.h | 201 ++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.c     |   2 +
 drivers/net/bnxt/tf_core/tf_tcam.c    |   8 +-
 drivers/net/bnxt/tf_core/tfp.c        |  17 +
 drivers/net/bnxt/tf_core/tfp.h        |  15 +
 15 files changed, 1075 insertions(+), 96 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/ll.c
 create mode 100644 drivers/net/bnxt/tf_core/ll.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index f25a9448d..54564e02e 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -44,6 +44,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_tcam.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
+	'tf_core/ll.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 1924bef02..6210bc70e 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -8,6 +8,7 @@
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/ll.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/ll.c b/drivers/net/bnxt/tf_core/ll.c
new file mode 100644
index 000000000..6f58662f5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.c
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Functions */
+
+#include <stdio.h>
+#include "ll.h"
+
+/* init linked list */
+void ll_init(struct ll *ll)
+{
+	ll->head = NULL;
+	ll->tail = NULL;
+}
+
+/* insert entry in linked list */
+void ll_insert(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == NULL) {
+		ll->head = entry;
+		ll->tail = entry;
+		entry->next = NULL;
+		entry->prev = NULL;
+	} else {
+		entry->next = ll->head;
+		entry->prev = NULL;
+		entry->next->prev = entry;
+		ll->head = entry->next->prev;
+	}
+}
+
+/* delete entry from linked list */
+void ll_delete(struct ll *ll,
+	       struct ll_entry *entry)
+{
+	if (ll->head == entry && ll->tail == entry) {
+		ll->head = NULL;
+		ll->tail = NULL;
+	} else if (ll->head == entry) {
+		ll->head = entry->next;
+		ll->head->prev = NULL;
+	} else if (ll->tail == entry) {
+		ll->tail = entry->prev;
+		ll->tail->next = NULL;
+	} else {
+		entry->prev->next = entry->next;
+		entry->next->prev = entry->prev;
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/ll.h b/drivers/net/bnxt/tf_core/ll.h
new file mode 100644
index 000000000..d70917850
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/ll.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Linked List Header File */
+
+#ifndef _LL_H_
+#define _LL_H_
+
+/* linked list entry */
+struct ll_entry {
+	struct ll_entry *prev;
+	struct ll_entry *next;
+};
+
+/* linked list */
+struct ll {
+	struct ll_entry *head;
+	struct ll_entry *tail;
+};
+
+/**
+ * Linked list initialization.
+ *
+ * [in] ll, linked list to be initialized
+ */
+void ll_init(struct ll *ll);
+
+/**
+ * Linked list insert
+ *
+ * [in] ll, linked list where element is inserted
+ * [in] entry, entry to be added
+ */
+void ll_insert(struct ll *ll, struct ll_entry *entry);
+
+/**
+ * Linked list delete
+ *
+ * [in] ll, linked list where element is removed
+ * [in] entry, entry to be deleted
+ */
+void ll_delete(struct ll *ll, struct ll_entry *entry);
+
+#endif /* _LL_H_ */
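
The list is intrusive: callers embed struct ll_entry at the start of
their own record and cast back from the list pointer. A small,
self-contained usage sketch (the example_client type is hypothetical):

#include <stdio.h>
#include "ll.h"

struct example_client {
	struct ll_entry ll_entry;	/* must be the first member */
	int id;
};

int main(void)
{
	struct ll list;
	struct example_client a = { .id = 1 }, b = { .id = 2 };
	struct ll_entry *cur;

	ll_init(&list);
	ll_insert(&list, &a.ll_entry);	/* new entries go to the head */
	ll_insert(&list, &b.ll_entry);

	for (cur = list.head; cur != NULL; cur = cur->next)
		printf("client %d\n", ((struct example_client *)cur)->id);

	ll_delete(&list, &a.ll_entry);
	ll_delete(&list, &b.ll_entry);
	return 0;
}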
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index a980a2056..489c461d1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -58,21 +58,20 @@ tf_open_session(struct tf *tfp,
 	parms->session_id.internal.device = device;
 	oparms.open_cfg = parms;
 
+	/* Session vs session client is decided in
+	 * tf_session_open_session()
+	 */
+	printf("TF_OPEN, %s\n", parms->ctrl_chan_name);
 	rc = tf_session_open_session(tfp, &oparms);
 	/* Logging handled by tf_session_open_session */
 	if (rc)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Session created, session_id:%d\n",
-		    parms->session_id.id);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    parms->session_id.internal.domain,
 		    parms->session_id.internal.bus,
-		    parms->session_id.internal.device,
-		    parms->session_id.internal.fw_session_id);
+		    parms->session_id.internal.device);
 
 	return 0;
 }
@@ -152,6 +151,9 @@ tf_close_session(struct tf *tfp)
 
 	cparms.ref_count = &ref_count;
 	cparms.session_id = &session_id;
+	/* Session vs session client is decided in
+	 * tf_session_close_session()
+	 */
 	rc = tf_session_close_session(tfp,
 				      &cparms);
 	/* Logging handled by tf_session_close_session */
@@ -159,16 +161,10 @@ tf_close_session(struct tf *tfp)
 		return rc;
 
 	TFP_DRV_LOG(INFO,
-		    "Closed session, session_id:%d, ref_count:%d\n",
-		    cparms.session_id->id,
-		    *cparms.ref_count);
-
-	TFP_DRV_LOG(INFO,
-		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    "domain:%d, bus:%d, device:%d\n",
 		    cparms.session_id->internal.domain,
 		    cparms.session_id->internal.bus,
-		    cparms.session_id->internal.device,
-		    cparms.session_id->internal.fw_session_id);
+		    cparms.session_id->internal.device);
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index e3d46bd45..fea222bee 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -72,7 +72,6 @@ enum tf_mem {
  * @ref tf_close_session
  */
 
-
 /**
  * Session Version defines
  *
@@ -113,6 +112,21 @@ union tf_session_id {
 	} internal;
 };
 
+/**
+ * Session Client Identifier
+ *
+ * Unique identifier for a client within a session. Session Client ID
+ * is constructed from the passed in session and a firmware allocated
+ * fw_session_client_id. Done by TruFlow on tf_open_session().
+ */
+union tf_session_client_id {
+	uint16_t id;
+	struct {
+		uint8_t fw_session_id;
+		uint8_t fw_session_client_id;
+	} internal;
+};
+
 /**
  * Session Version
  *
@@ -368,8 +382,8 @@ struct tf_session_info {
  *
  * Contains a pointer to the session info. Allocated by ULP and passed
  * to TruFlow using tf_open_session(). TruFlow will populate the
- * session info at that time. Additional 'opens' can be done using
- * same session_info by using tf_attach_session().
+ * session info at that time. A TruFlow Session can be shared by more
+ * than one PF/VF, each of which calls tf_open_session().
  *
  * It is expected that ULP allocates this memory as shared memory.
  *
@@ -506,36 +520,62 @@ struct tf_open_session_parms {
 	 * The session_id allows a session to be shared between devices.
 	 */
 	union tf_session_id session_id;
+	/**
+	 * [in/out] session_client_id
+	 *
+	 * Session_client_id is unique per client.
+	 *
+	 * Session_client_id is composed of the session_id and a
+	 * firmware-allocated fw_session_client_id. The construction is
+	 * done by parsing the ctrl_chan_name together with allocation
+	 * of a fw_session_client_id during tf_open_session().
+	 *
+	 * A reference count will be incremented in the session on
+	 * which a client is created.
+	 *
+	 * A session can only be closed when there is a single Session
+	 * Client left. Session Clients should be closed using
+	 * tf_close_session().
+	 */
+	union tf_session_client_id session_client_id;
 	/**
 	 * [in] device type
 	 *
-	 * Device type is passed, one of Wh+, SR, Thor, SR2
+	 * Device type for the session.
 	 */
 	enum tf_device_type device_type;
-	/** [in] resources
+	/**
+	 * [in] resources
 	 *
-	 * Resource allocation
+	 * Resource allocation for the session.
 	 */
 	struct tf_session_resources resources;
 };
 
 /**
- * Opens a new TruFlow management session.
+ * Opens a new TruFlow Session or session client.
+ *
+ * What gets created depends on the passed in tfp content. If the tfp
+ * does not have prior session data, a new session with an associated
+ * session client is created. If tfp already has a session, only an
+ * additional session client is created. In both cases the session
+ * client is created using the provided ctrl_chan_name.
  *
- * TruFlow will allocate session specific memory, shared memory, to
- * hold its session data. This data is private to TruFlow.
+ * In case of session creation TruFlow will allocate session specific
+ * memory, shared memory, to hold its session data. This data is
+ * private to TruFlow.
  *
- * Multiple PFs can share the same session. An association, refcount,
- * between session and PFs is maintained within TruFlow. Thus, a PF
- * can attach to an existing session, see tf_attach_session().
+ * No other TruFlow APIs will succeed unless this API is first called
+ * and succeeds.
  *
- * No other TruFlow APIs will succeed unless this API is first called and
- * succeeds.
+ * tf_open_session() returns a session id and session client id that
+ * is used on all other TF APIs.
  *
- * tf_open_session() returns a session id that can be used on attach.
+ * A Session or session client can be closed using tf_close_session().
  *
  * [in] tfp
  *   Pointer to TF handle
+ *
  * [in] parms
  *   Pointer to open parameters
  *
@@ -546,6 +586,11 @@ struct tf_open_session_parms {
 int tf_open_session(struct tf *tfp,
 		    struct tf_open_session_parms *parms);
 
+/**
+ * Experimental
+ *
+ * tf_attach_session parameters definition.
+ */
 struct tf_attach_session_parms {
 	/**
 	 * [in] ctrl_chan_name
@@ -595,15 +640,18 @@ struct tf_attach_session_parms {
 };
 
 /**
- * Attaches to an existing session. Used when more than one PF wants
- * to share a single session. In that case all TruFlow management
- * traffic will be sent to the TruFlow firmware using the 'PF' that
- * did the attach not the session ctrl channel.
+ * Experimental
+ *
+ * Allows a 2nd application instance to attach to an existing
+ * session. Used when a session is to be shared between two processes.
  *
  * Attach will increment a ref count as to manage the shared session data.
  *
- * [in] tfp, pointer to TF handle
- * [in] parms, pointer to attach parameters
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to attach parameters
  *
  * Returns
  *   - (0) if successful.
@@ -613,9 +661,15 @@ int tf_attach_session(struct tf *tfp,
 		      struct tf_attach_session_parms *parms);
 
 /**
- * Closes an existing session. Cleans up all hardware and firmware
- * state associated with the TruFlow application session when the last
- * PF associated with the session results in refcount to be zero.
+ * Closes an existing session client or the session itself. The
+ * session client is closed by default, and if the session reference
+ * count then reaches 0 the session is closed as well.
+ *
+ * On session close all hardware and firmware state associated with
+ * the TruFlow application is cleaned up.
+ *
+ * The session client is extracted from the tfp. Thus tf_close_session()
+ * cannot close a session client on behalf of another function.
  *
  * Returns success or failure code.
  */
@@ -1056,9 +1110,10 @@ int tf_free_tcam_entry(struct tf *tfp,
  * @ref tf_set_tbl_entry
  *
  * @ref tf_get_tbl_entry
+ *
+ * @ref tf_bulk_get_tbl_entry
  */
 
-
 /**
  * tf_alloc_tbl_entry parameter definition
  */
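
A minimal usage sketch of the reworked open semantics (illustrative
only, not part of the patch; the tfp handle, the headers and the PCI
name are assumptions, and ctrl_chan_name is assumed to be a fixed-size
array in tf_open_session_parms):

#include <stdio.h>
#include <string.h>
#include "tf_core.h"

static int example_open(struct tf *tfp)
{
	struct tf_open_session_parms oparms = { 0 };
	int rc;

	/* tfp is assumed to be the TruFlow handle embedded in the
	 * driver private data (struct bnxt), as the FID lookup in
	 * session create relies on that embedding.
	 */
	strncpy(oparms.ctrl_chan_name, "0000:03:00.0",
		sizeof(oparms.ctrl_chan_name) - 1);
	oparms.device_type = TF_DEVICE_TYPE_WH;

	/* First call on this tfp creates the session and its first
	 * session client; a later call on the same tfp only adds a
	 * session client for the given ctrl_chan_name.
	 */
	rc = tf_open_session(tfp, &oparms);
	if (rc)
		return rc;

	printf("session_id:%u session_client_id:%u\n",
	       oparms.session_id.id, oparms.session_client_id.id);

	/* tf_close_session(tfp) closes the calling client and tears
	 * the whole session down once it is the last client.
	 */
	return 0;
}
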
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 6600a14c8..8c2dff8ad 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -84,7 +84,8 @@ tf_msg_free_dma_buf(struct tf_msg_dma_buf *buf)
 int
 tf_msg_session_open(struct tf *tfp,
 		    char *ctrl_chan_name,
-		    uint8_t *fw_session_id)
+		    uint8_t *fw_session_id,
+		    uint8_t *fw_session_client_id)
 {
 	int rc;
 	struct hwrm_tf_session_open_input req = { 0 };
@@ -106,7 +107,8 @@ tf_msg_session_open(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	*fw_session_id = resp.fw_session_id;
+	*fw_session_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
+	*fw_session_client_id = (uint8_t)tfp_le_to_cpu_32(resp.fw_session_id);
 
 	return rc;
 }
@@ -119,6 +121,84 @@ tf_msg_session_attach(struct tf *tfp __rte_unused,
 	return -1;
 }
 
+int
+tf_msg_session_client_register(struct tf *tfp,
+			       char *ctrl_channel_name,
+			       uint8_t *fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_register_input req = { 0 };
+	struct hwrm_tf_session_register_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	tfp_memcpy(&req.session_client_name,
+		   ctrl_channel_name,
+		   TF_SESSION_NAME_MAX);
+
+	parms.tf_type = HWRM_TF_SESSION_REGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*fw_session_client_id =
+		(uint8_t)tfp_le_to_cpu_32(resp.fw_session_client_id);
+
+	return rc;
+}
+
+int
+tf_msg_session_client_unregister(struct tf *tfp,
+				 uint8_t fw_session_client_id)
+{
+	int rc;
+	struct hwrm_tf_session_unregister_input req = { 0 };
+	struct hwrm_tf_session_unregister_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Unable to lookup FW id, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.fw_session_client_id = tfp_cpu_to_le_32(fw_session_client_id);
+
+	parms.tf_type = HWRM_TF_SESSION_UNREGISTER;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+
+	return rc;
+}
+
 int
 tf_msg_session_close(struct tf *tfp)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 37f291016..c02a5203c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -34,7 +34,8 @@ struct tf;
  */
 int tf_msg_session_open(struct tf *tfp,
 			char *ctrl_chan_name,
-			uint8_t *fw_session_id);
+			uint8_t *fw_session_id,
+			uint8_t *fw_session_client_id);
 
 /**
  * Sends session close request to Firmware
@@ -42,6 +43,9 @@ int tf_msg_session_open(struct tf *tfp,
  * [in] session
  *   Pointer to session handle
  *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
  * [in] fw_session_id
  *   Pointer to the fw_session_id that is assigned to the session at
  *   time of session open
@@ -53,6 +57,42 @@ int tf_msg_session_attach(struct tf *tfp,
 			  char *ctrl_channel_name,
 			  uint8_t tf_fw_session_id);
 
+/**
+ * Sends session client register request to Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
+ * [in/out] fw_session_client_id
+ *   Pointer to the fw_session_client_id that is allocated on firmware
+ *   side
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_register(struct tf *tfp,
+				   char *ctrl_channel_name,
+				   uint8_t *fw_session_client_id);
+
+/**
+ * Sends session client unregister request to Firmware
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [in] fw_session_client_id
+ *   fw_session_client_id that was allocated on the firmware
+ *   side
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_session_client_unregister(struct tf *tfp,
+				     uint8_t fw_session_client_id);
+
 /**
  * Sends session close request to Firmware
  *
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 30313e2ea..fdb87ecb8 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -389,7 +389,7 @@ tf_rm_create_db(struct tf *tfp,
 	TF_CHECK_PARMS2(tfp, parms);
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 529ea5083..932a14a41 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -12,14 +12,49 @@
 #include "tf_msg.h"
 #include "tfp.h"
 
-int
-tf_session_open_session(struct tf *tfp,
-			struct tf_session_open_session_parms *parms)
+struct tf_session_client_create_parms {
+	/**
+	 * [in] Pointer to the control channel name string
+	 */
+	char *ctrl_chan_name;
+
+	/**
+	 * [out] Firmware Session Client ID
+	 */
+	union tf_session_client_id *session_client_id;
+};
+
+struct tf_session_client_destroy_parms {
+	/**
+	 * FW Session Client Identifier
+	 */
+	union tf_session_client_id session_client_id;
+};
+
+/**
+ * Creates a Session and the associated client.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session client create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if max session clients has been reached.
+ */
+static int
+tf_session_create(struct tf *tfp,
+		  struct tf_session_open_session_parms *parms)
 {
 	int rc;
 	struct tf_session *session = NULL;
+	struct tf_session_client *client;
 	struct tfp_calloc_parms cparms;
 	uint8_t fw_session_id;
+	uint8_t fw_session_client_id;
 	union tf_session_id *session_id;
 
 	TF_CHECK_PARMS2(tfp, parms);
@@ -27,7 +62,8 @@ tf_session_open_session(struct tf *tfp,
 	/* Open FW session and get a new session_id */
 	rc = tf_msg_session_open(tfp,
 				 parms->open_cfg->ctrl_chan_name,
-				 &fw_session_id);
+				 &fw_session_id,
+				 &fw_session_client_id);
 	if (rc) {
 		/* Log error */
 		if (rc == -EEXIST)
@@ -92,15 +128,46 @@ tf_session_open_session(struct tf *tfp,
 	session->session_id.internal.bus = session_id->internal.bus;
 	session->session_id.internal.device = session_id->internal.device;
 	session->session_id.internal.fw_session_id = fw_session_id;
-	/* Return the allocated fw session id */
-	session_id->internal.fw_session_id = fw_session_id;
+	/* Return the allocated session id */
+	session_id->id = session->session_id.id;
 
 	session->shadow_copy = parms->open_cfg->shadow_copy;
 
-	tfp_memcpy(session->ctrl_chan_name,
+	/* Init session client list */
+	ll_init(&session->client_ll);
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		/* Log error */
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	client->session_client_id.internal.fw_session_id = fw_session_id;
+	client->session_client_id.internal.fw_session_client_id =
+		fw_session_client_id;
+
+	tfp_memcpy(client->ctrl_chan_name,
 		   parms->open_cfg->ctrl_chan_name,
 		   TF_SESSION_NAME_MAX);
 
+	ll_insert(&session->client_ll, &client->ll_entry);
+	session->ref_count++;
+
 	rc = tf_dev_bind(tfp,
 			 parms->open_cfg->device_type,
 			 session->shadow_copy,
@@ -110,7 +177,7 @@ tf_session_open_session(struct tf *tfp,
 	if (rc)
 		return rc;
 
-	session->ref_count++;
+	session->dev_init = true;
 
 	return 0;
 
@@ -121,6 +188,235 @@ tf_session_open_session(struct tf *tfp,
 	return rc;
 }
 
+/**
+ * Creates a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to session client create parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOMEM) if max session clients has been reached.
+ */
+static int
+tf_session_client_create(struct tf *tfp,
+			 struct tf_session_client_create_parms *parms)
+{
+	int rc;
+	struct tf_session *session = NULL;
+	struct tf_session_client *client;
+	struct tfp_calloc_parms cparms;
+	union tf_session_client_id session_client_id;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &session);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	client = tf_session_find_session_client_by_name(session,
+							parms->ctrl_chan_name);
+	if (client) {
+		TFP_DRV_LOG(ERR,
+			    "Client %s, already registered with this session\n",
+			    parms->ctrl_chan_name);
+		return -EOPNOTSUPP;
+	}
+
+	rc = tf_msg_session_client_register
+		    (tfp,
+		    parms->ctrl_chan_name,
+		    &session_client_id.internal.fw_session_client_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to create client on session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Create the local session client, initialize and attach to
+	 * the session
+	 */
+	cparms.nitems = 1;
+	cparms.size = sizeof(struct tf_session_client);
+	cparms.alignment = 0;
+	rc = tfp_calloc(&cparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate session client, rc:%s\n",
+			    strerror(-rc));
+		goto cleanup;
+	}
+	client = cparms.mem_va;
+
+	/* Register FID with the client */
+	rc = tfp_get_fid(tfp, &client->fw_fid);
+	if (rc)
+		return rc;
+
+	/* Build the Session Client ID by adding the fw_session_id */
+	rc = tf_session_get_fw_session_id
+			(tfp,
+			&session_client_id.internal.fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Session Firmware id lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tfp_memcpy(client->ctrl_chan_name,
+		   parms->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	client->session_client_id.id = session_client_id.id;
+
+	ll_insert(&session->client_ll, &client->ll_entry);
+
+	session->ref_count++;
+
+	/* Build the return value */
+	parms->session_client_id->id = session_client_id.id;
+
+ cleanup:
+	/* TBD - Add code to unregister newly created client from fw */
+
+	return rc;
+}
+
+
+/**
+ * Destroys a Session Client on an existing Session.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to the session client destroy parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOTFOUND) error, client not owned by the session.
+ *   - (-ENOTSUPP) error, unable to destroy client as it is the last
+ *                 client. Please use tf_session_close() instead.
+ */
+static int
+tf_session_client_destroy(struct tf *tfp,
+			  struct tf_session_client_destroy_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_session_client *client;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Check session owns this client and that we're not the last client */
+	client = tf_session_get_session_client(tfs,
+					       parms->session_client_id);
+	if (client == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "Client %d, not found within this session\n",
+			    parms->session_client_id.id);
+		return -EINVAL;
+	}
+
+	/* If last client the request is rejected and cleanup should
+	 * be done by session close.
+	 */
+	if (tfs->ref_count == 1)
+		return -EOPNOTSUPP;
+
+	rc = tf_msg_session_client_unregister
+			(tfp,
+			parms->session_client_id.internal.fw_session_client_id);
+
+	/* Log error, but continue. If FW fails we do not really have
+	 * a way to fix this but the client would no longer be valid
+	 * thus we remove from the session.
+	 */
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Client destroy on FW Failed, rc:%s\n",
+			    strerror(-rc));
+	}
+
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+
+	/* Decrement the session ref_count */
+	tfs->ref_count--;
+
+	tfp_free(client);
+
+	return rc;
+}
+
+int
+tf_session_open_session(struct tf *tfp,
+			struct tf_session_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session_client_create_parms scparms;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Decide if we're creating a new session or session client */
+	if (tfp->session == NULL) {
+		rc = tf_session_create(tfp, parms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to create session, ctrl_chan_name:%s, rc:%s\n",
+				    parms->open_cfg->ctrl_chan_name,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+		       "Session created, session_client_id:%d, session_id:%d\n",
+		       parms->open_cfg->session_client_id.id,
+		       parms->open_cfg->session_id.id);
+	} else {
+		scparms.ctrl_chan_name = parms->open_cfg->ctrl_chan_name;
+		scparms.session_client_id = &parms->open_cfg->session_client_id;
+
+		/* Create the new client and get it associated with
+		 * the session.
+		 */
+		rc = tf_session_client_create(tfp, &scparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+			      "Failed to create client on session %d, rc:%s\n",
+			      parms->open_cfg->session_id.id,
+			      strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Session Client:%d created on session:%d\n",
+			    parms->open_cfg->session_client_id.id,
+			    parms->open_cfg->session_id.id);
+	}
+
+	return 0;
+}
+
 int
 tf_session_attach_session(struct tf *tfp __rte_unused,
 			  struct tf_session_attach_session_parms *parms __rte_unused)
@@ -141,7 +437,10 @@ tf_session_close_session(struct tf *tfp,
 {
 	int rc;
 	struct tf_session *tfs = NULL;
+	struct tf_session_client *client;
 	struct tf_dev_info *tfd;
+	struct tf_session_client_destroy_parms scdparms;
+	uint16_t fid;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
@@ -161,7 +460,49 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	tfs->ref_count--;
+	/* Get the client, we need it independently of the closure
+	 * type (client or session closure).
+	 *
+	 * We find the client by way of the fid. Thus one cannot close
+	 * a client on behalf of someone else.
+	 */
+	rc = tfp_get_fid(tfp, &fid);
+	if (rc)
+		return rc;
+
+	client = tf_session_find_session_client_by_fid(tfs,
+						       fid);
+	/* In case of multiple clients we choose to close those first */
+	if (tfs->ref_count > 1) {
+		/* Linaro gcc can't static init this structure */
+		memset(&scdparms,
+		       0,
+		       sizeof(struct tf_session_client_destroy_parms));
+
+		scdparms.session_client_id = client->session_client_id;
+		/* Destroy requested client so it is no longer
+		 * registered with this session.
+		 */
+		rc = tf_session_client_destroy(tfp, &scdparms);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "Failed to unregister Client %d, rc:%s\n",
+				    client->session_client_id.id,
+				    strerror(-rc));
+			return rc;
+		}
+
+		TFP_DRV_LOG(INFO,
+			    "Closed session client, session_client_id:%d\n",
+			    client->session_client_id.id);
+
+		TFP_DRV_LOG(INFO,
+			    "session_id:%d, ref_count:%d\n",
+			    tfs->session_id.id,
+			    tfs->ref_count);
+
+		return 0;
+	}
 
 	/* Record the session we're closing so the caller knows the
 	 * details.
@@ -176,23 +517,6 @@ tf_session_close_session(struct tf *tfp,
 		return rc;
 	}
 
-	if (tfs->ref_count > 0) {
-		/* In case we're attached only the session client gets
-		 * closed.
-		 */
-		rc = tf_msg_session_close(tfp);
-		if (rc) {
-			/* Log error */
-			TFP_DRV_LOG(ERR,
-				    "FW Session close failed, rc:%s\n",
-				    strerror(-rc));
-		}
-
-		return 0;
-	}
-
-	/* Final cleanup as we're last user of the session */
-
 	/* Unbind the device */
 	rc = tf_dev_unbind(tfp, tfd);
 	if (rc) {
@@ -202,7 +526,6 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
-	/* In case we're attached only the session client gets closed */
 	rc = tf_msg_session_close(tfp);
 	if (rc) {
 		/* Log error */
@@ -211,6 +534,21 @@ tf_session_close_session(struct tf *tfp,
 			    strerror(-rc));
 	}
 
+	/* Final cleanup as we're last user of the session thus we
+	 * also delete the last client.
+	 */
+	ll_delete(&tfs->client_ll, &client->ll_entry);
+	tfp_free(client);
+
+	tfs->ref_count--;
+
+	TFP_DRV_LOG(INFO,
+		    "Closed session, session_id:%d, ref_count:%d\n",
+		    tfs->session_id.id,
+		    tfs->ref_count);
+
+	tfs->dev_init = false;
+
 	tfp_free(tfp->session->core_data);
 	tfp_free(tfp->session);
 	tfp->session = NULL;
@@ -218,12 +556,31 @@ tf_session_close_session(struct tf *tfp,
 	return 0;
 }
 
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return true;
+	}
+
+	return false;
+}
+
 int
-tf_session_get_session(struct tf *tfp,
-		       struct tf_session **tfs)
+tf_session_get_session_internal(struct tf *tfp,
+				struct tf_session **tfs)
 {
-	int rc;
+	int rc = 0;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL || tfp->session->core_data == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -234,7 +591,113 @@ tf_session_get_session(struct tf *tfp,
 
 	*tfs = (struct tf_session *)(tfp->session->core_data);
 
-	return 0;
+	return rc;
+}
+
+int
+tf_session_get_session(struct tf *tfp,
+		       struct tf_session **tfs)
+{
+	int rc;
+	uint16_t fw_fid;
+	bool supported = false;
+
+	rc = tf_session_get_session_internal(tfp,
+					     tfs);
+	/* Logging done by tf_session_get_session_internal */
+	if (rc)
+		return rc;
+
+	/* As session sharing among functions aka 'individual clients'
+	 * is supported we have to assure that the client is indeed
+	 * registered before we get deep in the TruFlow api stack.
+	 */
+	rc = tfp_get_fid(tfp, &fw_fid);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Internal FID lookup failed, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	supported = tf_session_is_fid_supported(*tfs, fw_fid);
+	if (!supported) {
+		TFP_DRV_LOG
+			(ERR,
+			"Ctrl channel not registered with session, rc:%s\n",
+			strerror(-rc));
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->session_client_id.id == session_client_id.id)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL || ctrl_chan_name == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (strncmp(client->ctrl_chan_name,
+			    ctrl_chan_name,
+			    TF_SESSION_NAME_MAX) == 0)
+			return client;
+	}
+
+	return NULL;
+}
+
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid)
+{
+	struct ll_entry *c_entry;
+	struct tf_session_client *client;
+
+	/* Skip using the check macro as we just want to return */
+	if (tfs == NULL)
+		return NULL;
+
+	for (c_entry = tfs->client_ll.head;
+	     c_entry != NULL;
+	     c_entry = c_entry->next) {
+		client = (struct tf_session_client *)c_entry;
+		if (client->fw_fid == fid)
+			return client;
+	}
+
+	return NULL;
 }
 
 int
@@ -253,6 +716,7 @@ tf_session_get_fw_session_id(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs = NULL;
 
+	/* Skip using the check macro as we want to control the error msg */
 	if (tfp->session == NULL) {
 		rc = -EINVAL;
 		TFP_DRV_LOG(ERR,
@@ -261,7 +725,15 @@ tf_session_get_fw_session_id(struct tf *tfp,
 		return rc;
 	}
 
-	rc = tf_session_get_session(tfp, &tfs);
+	if (fw_session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -269,3 +741,36 @@ tf_session_get_fw_session_id(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_session_get_session_id(struct tf *tfp,
+			  union tf_session_id *session_id)
+{
+	int rc;
+	struct tf_session *tfs = NULL;
+
+	if (tfp->session == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Session not created, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (session_id == NULL) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Invalid Argument(s), rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Using internal version as session client may not exist yet */
+	rc = tf_session_get_session_internal(tfp, &tfs);
+	if (rc)
+		return rc;
+
+	*session_id = tfs->session_id;
+
+	return 0;
+}
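
For reference, a small sketch of how a session client id is assembled
from the two firmware ids, mirroring tf_session_client_create() above
(illustrative only; the example id values are arbitrary):

union tf_session_client_id scid = { 0 };
uint8_t fw_session_id = 1;        /* from tf_session_get_fw_session_id() */
uint8_t fw_session_client_id = 2; /* from tf_msg_session_client_register() */

scid.internal.fw_session_id = fw_session_id;
scid.internal.fw_session_client_id = fw_session_client_id;

/* scid.id is the flat value handed back to the caller via
 * parms->session_client_id->id.
 */
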
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index a303fde51..aa7a27877 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -16,6 +16,7 @@
 #include "tf_tbl.h"
 #include "tf_resources.h"
 #include "stack.h"
+#include "ll.h"
 
 /**
  * The Session module provides session control support. A session is
@@ -29,7 +30,6 @@
 
 /** Session defines
  */
-#define TF_SESSIONS_MAX	          1          /** max # sessions */
 #define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
 
 /**
@@ -50,7 +50,7 @@
  * Shared memory containing private TruFlow session information.
  * Through this structure the session can keep track of resource
  * allocations and (if so configured) any shadow copy of flow
- * information.
+ * information. It also holds info about Session Clients.
  *
  * Memory is assigned to the Truflow instance by way of
  * tf_open_session. Memory is allocated and owned by i.e. ULP.
@@ -65,17 +65,10 @@ struct tf_session {
 	 */
 	struct tf_session_version ver;
 
-	/** Session ID, allocated by FW on tf_open_session() */
-	union tf_session_id session_id;
-
 	/**
-	 * String containing name of control channel interface to be
-	 * used for this session to communicate with firmware.
-	 *
-	 * ctrl_chan_name will be used as part of a name for any
-	 * shared memory allocation.
+	 * Session ID, allocated by FW on tf_open_session()
 	 */
-	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+	union tf_session_id session_id;
 
 	/**
 	 * Boolean controlling the use and availability of shadow
@@ -92,14 +85,67 @@ struct tf_session {
 
 	/**
 	 * Session Reference Count. To keep track of functions per
-	 * session the ref_count is incremented. There is also a
+	 * session the ref_count is updated. There is also a
 	 * parallel TruFlow Firmware ref_count in case the TruFlow
 	 * Core goes away without informing the Firmware.
 	 */
 	uint8_t ref_count;
 
-	/** Device handle */
+	/**
+	 * Session Reference Count for attached sessions. To keep
+	 * track of application sharing of a session the
+	 * ref_count_attach is updated.
+	 */
+	uint8_t ref_count_attach;
+
+	/**
+	 * Device handle
+	 */
 	struct tf_dev_info dev;
+	/**
+	 * Device init flag. False if Device is not fully initialized,
+	 * else true.
+	 */
+	bool dev_init;
+
+	/**
+	 * Linked list of clients registered for this session
+	 */
+	struct ll client_ll;
+};
+
+/**
+ * Session Client
+ *
+ * Shared memory for each of the Session Clients. A session can have
+ * one or more clients.
+ */
+struct tf_session_client {
+	/**
+	 * Linked list of clients
+	 */
+	struct ll_entry ll_entry; /* For inserting in linked list, must be
+				   * first field of struct.
+				   */
+
+	/**
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/**
+	 * Firmware FID, learned at time of Session Client create.
+	 */
+	uint16_t fw_fid;
+
+	/**
+	 * Session Client ID, allocated by FW on tf_register_session()
+	 */
+	union tf_session_client_id session_client_id;
 };
 
 /**
@@ -126,7 +172,13 @@ struct tf_session_attach_session_parms {
  * Session close parameter definition
  */
 struct tf_session_close_session_parms {
+	/**
+	 * [out] Session reference count after the close
+	 */
 	uint8_t *ref_count;
+	/**
+	 * [out] Session id of the session being closed
+	 */
 	union tf_session_id *session_id;
 };
 
@@ -139,11 +191,23 @@ struct tf_session_close_session_parms {
  *
  * @ref tf_session_close_session
  *
+ * @ref tf_session_is_fid_supported
+ *
+ * @ref tf_session_get_session_internal
+ *
  * @ref tf_session_get_session
  *
+ * @ref tf_session_get_session_client
+ *
+ * @ref tf_session_find_session_client_by_name
+ *
+ * @ref tf_session_find_session_client_by_fid
+ *
  * @ref tf_session_get_device
  *
  * @ref tf_session_get_fw_session_id
+ *
+ * @ref tf_session_get_session_id
  */
 
 /**
@@ -179,7 +243,8 @@ int tf_session_attach_session(struct tf *tfp,
 			      struct tf_session_attach_session_parms *parms);
 
 /**
- * Closes a previous created session.
+ * Closes a previously created session. Only possible once previously
+ * registered Clients have been unregistered first.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -189,13 +254,53 @@ int tf_session_attach_session(struct tf *tfp,
  *
  * Returns
  *   - (0) if successful.
+ *   - (-EUSERS) if clients are still registered with the session.
  *   - (-EINVAL) on failure.
  */
 int tf_session_close_session(struct tf *tfp,
 			     struct tf_session_close_session_parms *parms);
 
 /**
- * Looks up the private session information from the TF session info.
+ * Verifies that the fid is supported by the session. Used to ensure
+ * that a function, i.e. a client/control channel, is registered with
+ * the session.
+ *
+ * [in] tfs
+ *   Pointer to TF Session handle
+ *
+ * [in] fid
+ *   FID value to check
+ *
+ * Returns
+ *   - (true) if the fid is registered with the session
+ *   - (false) otherwise.
+ */
+bool
+tf_session_is_fid_supported(struct tf_session *tfs,
+			    uint16_t fid);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Does not perform a fid check against the registered
+ * clients. Should be used if tf_session_get_session() was used
+ * previously i.e. at the TF API boundary.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] tfs
+ *   Pointer to a pointer to the session
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_internal(struct tf *tfp,
+				    struct tf_session **tfs);
+
+/**
+ * Looks up the private session information from the TF session
+ * info. Performs a fid check against the clients on the session.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -210,6 +315,53 @@ int tf_session_close_session(struct tf *tfp,
 int tf_session_get_session(struct tf *tfp,
 			   struct tf_session **tfs);
 
+/**
+ * Looks up client within the session.
+ *
+ * [in] tfs
+ *   Pointer to the session
+ *
+ * [in] session_client_id
+ *   Client id to look for within the session
+ *
+ * Returns
+ *   client if successful.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_get_session_client(struct tf_session *tfs,
+			      union tf_session_client_id session_client_id);
+
+/**
+ * Looks up client using name within the session.
+ *
+ * [in] session, pointer to the session
+ *
+ * [in] session_client_name, name of the client to lookup in the session
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_name(struct tf_session *tfs,
+				       const char *ctrl_chan_name);
+
+/**
+ * Looks up client using the fid.
+ *
+ * [in] session, pointer to the session
+ *
+ * [in] fid, fid of the client to find
+ *
+ * Returns:
+ *   - Pointer to the session client, if found.
+ *   - (NULL) on failure, client not found.
+ */
+struct tf_session_client *
+tf_session_find_session_client_by_fid(struct tf_session *tfs,
+				      uint16_t fid);
+
 /**
  * Looks up the device information from the TF Session.
  *
@@ -227,8 +379,7 @@ int tf_session_get_device(struct tf_session *tfs,
 			  struct tf_dev_info **tfd);
 
 /**
- * Looks up the FW session id of the firmware connection for the
- * requested TF handle.
+ * Looks up the FW Session id for the requested TF handle.
  *
  * [in] tfp
  *   Pointer to TF handle
@@ -243,4 +394,20 @@ int tf_session_get_device(struct tf_session *tfs,
 int tf_session_get_fw_session_id(struct tf *tfp,
 				 uint8_t *fw_session_id);
 
+/**
+ * Looks up the Session id for the requested TF handle.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [out] session_id
+ *   Pointer to the session_id
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_session_get_session_id(struct tf *tfp,
+			      union tf_session_id *session_id);
+
 #endif /* _TF_SESSION_H_ */
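
The note that ll_entry must be the first field of tf_session_client
exists because the lookup helpers in tf_session.c cast a list entry
pointer directly to a client pointer; a minimal illustration of that
assumption (tfs is assumed to come from tf_session_get_session()):

struct ll_entry *c_entry;
struct tf_session_client *client;

for (c_entry = tfs->client_ll.head;
     c_entry != NULL;
     c_entry = c_entry->next) {
	/* Valid only because ll_entry is the first member, so the
	 * entry and the client share the same address.
	 */
	client = (struct tf_session_client *)c_entry;
	/* ... use client->fw_fid or client->session_client_id ... */
}
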
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 7d4daaf2d..2b4a7c561 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -269,6 +269,7 @@ tf_tbl_set(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -338,6 +339,7 @@ tf_tbl_get(struct tf *tfp,
 			    tf_dir_2_str(parms->dir),
 			    parms->type,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c
index 1c48b5363..cbfaa94ee 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_tcam.c
@@ -138,7 +138,7 @@ tf_tcam_alloc(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -218,7 +218,7 @@ tf_tcam_free(struct tf *tfp,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -319,6 +319,7 @@ tf_tcam_free(struct tf *tfp,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
@@ -353,7 +354,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 	}
 
 	/* Retrieve the session information */
-	rc = tf_session_get_session(tfp, &tfs);
+	rc = tf_session_get_session_internal(tfp, &tfs);
 	if (rc)
 		return rc;
 
@@ -415,6 +416,7 @@ tf_tcam_set(struct tf *tfp __rte_unused,
 			    tf_tcam_tbl_2_str(parms->type),
 			    parms->idx,
 			    strerror(-rc));
+		return rc;
 	}
 
 	return 0;
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 69d1c9a1f..426a182a9 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -161,3 +161,20 @@ tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
 {
 	rte_spinlock_unlock(&parms->slock);
 }
+
+int
+tfp_get_fid(struct tf *tfp, uint16_t *fw_fid)
+{
+	struct bnxt *bp = NULL;
+
+	if (tfp == NULL || fw_fid == NULL)
+		return -EINVAL;
+
+	bp = container_of(tfp, struct bnxt, tfp);
+	if (bp == NULL)
+		return -EINVAL;
+
+	*fw_fid = bp->fw_fid;
+
+	return 0;
+}
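
A hedged usage sketch of the new helper: tfp must be the handle
embedded as the tfp member of struct bnxt, otherwise the container_of()
lookup is invalid (error handling trimmed):

uint16_t fid = 0;
int rc;

rc = tfp_get_fid(tfp, &fid);    /* on success fid holds bp->fw_fid */
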
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index fe49b6304..8789eba1f 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -238,4 +238,19 @@ int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
 #define tfp_bswap_32(val) rte_bswap32(val)
 #define tfp_bswap_64(val) rte_bswap64(val)
 
+/**
+ * Lookup of the FID in the platform specific structure.
+ *
+ * [in] session
+ *   Pointer to session handle
+ *
+ * [out] fw_fid
+ *   Pointer to the fw_fid
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tfp_get_fid(struct tf *tfp, uint16_t *fw_fid);
+
 #endif /* _TFP_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 30/51] net/bnxt: add global config set and get APIs
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (28 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
                           ` (22 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Jay Ding, Venkat Duvvuru, Randy Schacher

From: Jay Ding <jay.ding@broadcom.com>

- Add support to update global configuration for ACT_TECT
  and ACT_ABCR.
- Add support to allow Tunnel and Action global configuration.
- Remove the register read and write operations and support.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/hcapi/hcapi_cfa.h       |   3 +
 drivers/net/bnxt/meson.build             |   1 +
 drivers/net/bnxt/tf_core/Makefile        |   2 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       |  54 +++++-
 drivers/net/bnxt/tf_core/tf_core.c       | 137 ++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h       |  77 +++++++++
 drivers/net/bnxt/tf_core/tf_device.c     |  20 +++
 drivers/net/bnxt/tf_core/tf_device.h     |  33 ++++
 drivers/net/bnxt/tf_core/tf_device_p4.c  |   4 +
 drivers/net/bnxt/tf_core/tf_device_p4.h  |   5 +
 drivers/net/bnxt/tf_core/tf_global_cfg.c | 199 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_global_cfg.h | 170 +++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        | 109 ++++++++++++-
 drivers/net/bnxt/tf_core/tf_msg.h        |  31 ++++
 14 files changed, 840 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h

diff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h
index 7a67493bd..3d895f088 100644
--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h
+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h
@@ -245,6 +245,9 @@ int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,
 				   struct hcapi_cfa_data *obj_data);
 int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,
 			     struct hcapi_cfa_data *mirror);
+int hcapi_cfa_p4_global_cfg_hwop(struct hcapi_cfa_hwop *op,
+				 uint32_t type,
+				 struct hcapi_cfa_data *config);
 #endif /* SUPPORT_CFA_HW_P4 */
 /**
  *  HCAPI CFA device HW operation function callback definition
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 54564e02e..ace7353be 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -45,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_util.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
+	'tf_core/tf_global_cfg.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 6210bc70e..202db4150 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -27,6 +27,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_if_tbl.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_global_cfg.c
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
@@ -36,3 +37,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_if_tbl.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_global_cfg.h
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index 32f152314..7ade9927a 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -13,8 +13,8 @@ typedef enum tf_type {
 } tf_type_t;
 
 typedef enum tf_subtype {
-	HWRM_TFT_REG_GET = 821,
-	HWRM_TFT_REG_SET = 822,
+	HWRM_TFT_GET_GLOBAL_CFG = 821,
+	HWRM_TFT_SET_GLOBAL_CFG = 822,
 	HWRM_TFT_TBL_TYPE_BULK_GET = 825,
 	HWRM_TFT_IF_TBL_SET = 827,
 	HWRM_TFT_IF_TBL_GET = 828,
@@ -66,18 +66,66 @@ typedef enum tf_subtype {
 #define TF_BITS2BYTES(x) (((x) + 7) >> 3)
 #define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
 
+struct tf_set_global_cfg_input;
+struct tf_get_global_cfg_input;
+struct tf_get_global_cfg_output;
 struct tf_tbl_type_bulk_get_input;
 struct tf_tbl_type_bulk_get_output;
 struct tf_if_tbl_set_input;
 struct tf_if_tbl_get_input;
 struct tf_if_tbl_get_output;
+/* Input params for global config set */
+typedef struct tf_set_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config type */
+	uint32_t			 type;
+	/* Offset of the type */
+	uint32_t			 offset;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+	/* Data to set */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_set_global_cfg_input_t, *ptf_set_global_cfg_input_t;
+
+/* Input params for global config to get */
+typedef struct tf_get_global_cfg_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX		  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX		  (0x1)
+	/* Config to retrieve */
+	uint32_t			 type;
+	/* Offset to retrieve */
+	uint32_t			 offset;
+	/* Size of the data to get in bytes */
+	uint16_t			 size;
+} tf_get_global_cfg_input_t, *ptf_get_global_cfg_input_t;
+
+/* Output params for global config */
+typedef struct tf_get_global_cfg_output {
+	/* Size of the total data read in bytes */
+	uint16_t			 size;
+	/* Data to get */
+	uint8_t			  data[TF_BULK_SEND];
+} tf_get_global_cfg_output_t, *ptf_get_global_cfg_output_t;
 
 /* Input params for table type get */
 typedef struct tf_tbl_type_bulk_get_input {
 	/* Session Id */
 	uint32_t			 fw_session_id;
 	/* flags */
-	uint16_t			 flags;
+	uint32_t			 flags;
 	/* When set to 0, indicates the get apply to RX */
 #define TF_TBL_TYPE_BULK_GET_INPUT_FLAGS_DIR_RX	   (0x0)
 	/* When set to 1, indicates the get apply to TX */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 489c461d1..0f119b45f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_em.h"
 #include "tf_rm.h"
+#include "tf_global_cfg.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "bitalloc.h"
@@ -277,6 +278,142 @@ int tf_delete_em_entry(struct tf *tfp,
 	return rc;
 }
 
+/** Get global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_get_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_get_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg get failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
+/** Set global configuration API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	struct tf_dev_info *dev;
+	struct tf_dev_global_cfg_parms gparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup session, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Retrieve the device information */
+	rc = tf_session_get_device(tfs, &dev);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Failed to lookup device, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	if (parms->config == NULL ||
+	   parms->config_sz_in_bytes == 0) {
+		TFP_DRV_LOG(ERR, "Invalid Argument(s)\n");
+		return -EINVAL;
+	}
+
+	if (dev->ops->tf_dev_set_global_cfg == NULL) {
+		rc = -EOPNOTSUPP;
+		TFP_DRV_LOG(ERR,
+			    "%s: Operation not supported, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return -EOPNOTSUPP;
+	}
+
+	gparms.dir = parms->dir;
+	gparms.type = parms->type;
+	gparms.offset = parms->offset;
+	gparms.config = parms->config;
+	gparms.config_sz_in_bytes = parms->config_sz_in_bytes;
+	rc = dev->ops->tf_dev_set_global_cfg(tfp, &gparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Global Cfg set failed, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	return rc;
+}
+
 int
 tf_alloc_identifier(struct tf *tfp,
 		    struct tf_alloc_identifier_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index fea222bee..3f54ab16b 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1611,6 +1611,83 @@ int tf_delete_em_entry(struct tf *tfp,
 int tf_search_em_entry(struct tf *tfp,
 		       struct tf_search_em_entry_parms *parms);
 
+/**
+ * @page global Global Configuration
+ *
+ * @ref tf_set_global_cfg
+ *
+ * @ref tf_get_global_cfg
+ */
+/**
+ * Tunnel Encapsulation Offsets
+ */
+enum tf_tunnel_encap_offsets {
+	TF_TUNNEL_ENCAP_L2,
+	TF_TUNNEL_ENCAP_NAT,
+	TF_TUNNEL_ENCAP_MPLS,
+	TF_TUNNEL_ENCAP_VXLAN,
+	TF_TUNNEL_ENCAP_GENEVE,
+	TF_TUNNEL_ENCAP_NVGRE,
+	TF_TUNNEL_ENCAP_GRE,
+	TF_TUNNEL_ENCAP_FULL_GENERIC
+};
+/**
+ * Global Configuration Table Types
+ */
+enum tf_global_config_type {
+	TF_TUNNEL_ENCAP,  /**< Tunnel Encap Config(TECT) */
+	TF_ACTION_BLOCK,  /**< Action Block Config(ABCR) */
+	TF_GLOBAL_CFG_TYPE_MAX
+};
+
+/**
+ * tf_global_cfg parameter definition
+ */
+struct tf_global_cfg_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Offset @ the type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Value of the configuration
+	 * set - Read, Modify and Write
+	 * get - Read the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] struct containing size
+	 * [in] Size of the configuration data in bytes
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * Get global configuration
+ *
+ * Retrieve the configuration
+ *
+ * Returns success or failure code.
+ */
+int tf_get_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
+/**
+ * Update the global configuration table
+ *
+ * Read, modify, and write the value.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_global_cfg(struct tf *tfp,
+		      struct tf_global_cfg_parms *parms);
+
 /**
  * @page if_tbl Interface Table Access
  *
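
A minimal usage sketch of the two new APIs (illustrative only; the
buffer size, zero offset and RX direction are arbitrary choices, and
tfp is an already-opened TruFlow handle):

uint8_t cfg[8] = { 0 };
struct tf_global_cfg_parms gparms = { 0 };
int rc;

gparms.dir = TF_DIR_RX;
gparms.type = TF_TUNNEL_ENCAP;
gparms.offset = 0;
gparms.config = cfg;
gparms.config_sz_in_bytes = sizeof(cfg);

/* Read the current tunnel encap configuration */
rc = tf_get_global_cfg(tfp, &gparms);

/* ... modify cfg ..., then write it back (read-modify-write) */
if (rc == 0)
	rc = tf_set_global_cfg(tfp, &gparms);
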
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index a3073c826..ead958418 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -45,6 +45,7 @@ tf_dev_bind_p4(struct tf *tfp,
 	struct tf_tcam_cfg_parms tcam_cfg;
 	struct tf_em_cfg_parms em_cfg;
 	struct tf_if_tbl_cfg_parms if_tbl_cfg;
+	struct tf_global_cfg_cfg_parms global_cfg;
 
 	dev_handle->type = TF_DEVICE_TYPE_WH;
 	/* Initial function initialization */
@@ -128,6 +129,18 @@ tf_dev_bind_p4(struct tf *tfp,
 		goto fail;
 	}
 
+	/*
+	 * GLOBAL_CFG
+	 */
+	global_cfg.num_elements = TF_GLOBAL_CFG_TYPE_MAX;
+	global_cfg.cfg = tf_global_cfg_p4;
+	rc = tf_global_cfg_bind(tfp, &global_cfg);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg initialization failure\n");
+		goto fail;
+	}
+
 	/* Final function initialization */
 	dev_handle->ops = &tf_dev_ops_p4;
 
@@ -207,6 +220,13 @@ tf_dev_unbind_p4(struct tf *tfp)
 		fail = true;
 	}
 
+	rc = tf_global_cfg_unbind(tfp);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Device unbind failed, Global Cfg Type\n");
+		fail = true;
+	}
+
 	if (fail)
 		return -1;
 
diff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h
index 5a0943ad7..1740a271f 100644
--- a/drivers/net/bnxt/tf_core/tf_device.h
+++ b/drivers/net/bnxt/tf_core/tf_device.h
@@ -11,6 +11,7 @@
 #include "tf_tbl.h"
 #include "tf_tcam.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 struct tf_session;
@@ -606,6 +607,38 @@ struct tf_dev_ops {
 	 */
 	int (*tf_dev_get_if_tbl)(struct tf *tfp,
 				 struct tf_if_tbl_get_parms *parms);
+
+	/**
+	 * Update global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_set_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
+
+	/**
+	 * Get global cfg
+	 *
+	 * [in] tfp
+	 *   Pointer to TF handle
+	 *
+	 * [in] parms
+	 *   Pointer to global cfg parameters
+	 *
+	 *    returns:
+	 *    0       - Success
+	 *    -EINVAL - Error
+	 */
+	int (*tf_dev_get_global_cfg)(struct tf *tfp,
+				     struct tf_dev_global_cfg_parms *parms);
 };
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 2dc34b853..652608264 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -108,6 +108,8 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {
 	.tf_dev_free_tbl_scope = NULL,
 	.tf_dev_set_if_tbl = NULL,
 	.tf_dev_get_if_tbl = NULL,
+	.tf_dev_set_global_cfg = NULL,
+	.tf_dev_get_global_cfg = NULL,
 };
 
 /**
@@ -140,4 +142,6 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_tbl_scope = tf_em_ext_common_free,
 	.tf_dev_set_if_tbl = tf_if_tbl_set,
 	.tf_dev_get_if_tbl = tf_if_tbl_get,
+	.tf_dev_set_global_cfg = tf_global_cfg_set,
+	.tf_dev_get_global_cfg = tf_global_cfg_get,
 };
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h
index 3b03a7c4e..7fabb4ba8 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.h
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h
@@ -11,6 +11,7 @@
 #include "tf_core.h"
 #include "tf_rm.h"
 #include "tf_if_tbl.h"
+#include "tf_global_cfg.h"
 
 struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {
 	{ TF_RM_ELEM_CFG_HCAPI_BA, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },
@@ -96,4 +97,8 @@ struct tf_if_tbl_cfg tf_if_tbl_p4[TF_IF_TBL_TYPE_MAX] = {
 	{ TF_IF_TBL_CFG_NULL, CFA_IF_TBL_TYPE_INVALID }
 };
 
+struct tf_global_cfg_cfg tf_global_cfg_p4[TF_GLOBAL_CFG_TYPE_MAX] = {
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_TUNNEL_ENCAP },
+	{ TF_GLOBAL_CFG_CFG_HCAPI, TF_ACTION_BLOCK },
+};
 #endif /* _TF_DEVICE_P4_H_ */
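
The table above is indexed by tf_global_config_type, which is how the
global cfg module later resolves the HCAPI type for a request; a short
illustration of that mapping (assuming the p4 device):

struct tf_global_cfg_cfg *cfg = tf_global_cfg_p4;

/* Entry order must match enum tf_global_config_type, since the TF
 * type is used directly as the DB index.
 */
uint16_t hcapi_type = cfg[TF_TUNNEL_ENCAP].hcapi_type;
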
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.c b/drivers/net/bnxt/tf_core/tf_global_cfg.c
new file mode 100644
index 000000000..4ed4039db
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.c
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+
+#include "tf_global_cfg.h"
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tf_msg.h"
+#include "tfp.h"
+
+struct tf;
+/**
+ * Global Cfg DBs.
+ */
+static void *global_cfg_db[TF_DIR_MAX];
+
+/**
+ * Init flag, set on bind and cleared on unbind
+ */
+static uint8_t init;
+
+/**
+ * Get HCAPI type parameters for a single element
+ */
+struct tf_global_cfg_get_hcapi_parms {
+	/**
+	 * [in] Global Cfg DB Handle
+	 */
+	void *global_cfg_db;
+	/**
+	 * [in] DB Index, indicates which DB entry to perform the
+	 * action on.
+	 */
+	uint16_t db_index;
+	/**
+	 * [out] Pointer to the hcapi type for the specified db_index
+	 */
+	uint16_t *hcapi_type;
+};
+
+/**
+ * Check global_cfg_type and return hwrm type.
+ *
+ * [in] global_cfg_type
+ *   Global Cfg type
+ *
+ * [out] hwrm_type
+ *   HWRM device data type
+ *
+ * Returns:
+ *    0          - Success
+ *   -EOPNOTSUPP - Type not supported
+ */
+static int
+tf_global_cfg_get_hcapi_type(struct tf_global_cfg_get_hcapi_parms *parms)
+{
+	struct tf_global_cfg_cfg *global_cfg;
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	global_cfg = (struct tf_global_cfg_cfg *)parms->global_cfg_db;
+	cfg_type = global_cfg[parms->db_index].cfg_type;
+
+	if (cfg_type != TF_GLOBAL_CFG_CFG_HCAPI)
+		return -ENOTSUP;
+
+	*parms->hcapi_type = global_cfg[parms->db_index].hcapi_type;
+
+	return 0;
+}
+
+int
+tf_global_cfg_bind(struct tf *tfp __rte_unused,
+		   struct tf_global_cfg_cfg_parms *parms)
+{
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (init) {
+		TFP_DRV_LOG(ERR,
+			    "Global Cfg DB already initialized\n");
+		return -EINVAL;
+	}
+
+	global_cfg_db[TF_DIR_RX] = parms->cfg;
+	global_cfg_db[TF_DIR_TX] = parms->cfg;
+
+	init = 1;
+
+	TFP_DRV_LOG(INFO,
+		    "Global Cfg - initialized\n");
+
+	return 0;
+}
+
+int
+tf_global_cfg_unbind(struct tf *tfp __rte_unused)
+{
+	/* Bail if nothing has been initialized */
+	if (!init) {
+		TFP_DRV_LOG(INFO,
+			    "No Global Cfg DBs created\n");
+		return 0;
+	}
+
+	global_cfg_db[TF_DIR_RX] = NULL;
+	global_cfg_db[TF_DIR_TX] = NULL;
+	init = 0;
+
+	return 0;
+}
+
+int
+tf_global_cfg_set(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Convert TF type to HCAPI type */
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	rc = tf_msg_set_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Set failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
+}
+
+int
+tf_global_cfg_get(struct tf *tfp,
+		  struct tf_dev_global_cfg_parms *parms)
+
+{
+	int rc;
+	struct tf_global_cfg_get_hcapi_parms hparms;
+	uint16_t hcapi_type;
+
+	TF_CHECK_PARMS3(tfp, parms, parms->config);
+
+	if (!init) {
+		TFP_DRV_LOG(ERR,
+			    "%s: No Global Cfg DBs created\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	hparms.global_cfg_db = global_cfg_db[parms->dir];
+	hparms.db_index = parms->type;
+	hparms.hcapi_type = &hcapi_type;
+	rc = tf_global_cfg_get_hcapi_type(&hparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Failed type lookup, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_global_cfg(tfp, parms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s, Get failed, type:%d, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    parms->type,
+			    strerror(-rc));
+	}
+
+	return 0;
+}
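
The module keeps one static DB pointer per direction, so bind/unbind is
strictly once per device bind; a hedged sketch of the lifecycle as
driven from tf_dev_bind_p4()/tf_dev_unbind_p4() shown earlier:

struct tf_global_cfg_cfg_parms gcfg = { 0 };
int rc;

gcfg.num_elements = TF_GLOBAL_CFG_TYPE_MAX;
gcfg.cfg = tf_global_cfg_p4;          /* device-specific table */

rc = tf_global_cfg_bind(tfp, &gcfg);  /* a second bind returns -EINVAL */

/* ... tf_global_cfg_set()/tf_global_cfg_get() calls ... */

rc = tf_global_cfg_unbind(tfp);       /* clears the DBs and init flag */
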
diff --git a/drivers/net/bnxt/tf_core/tf_global_cfg.h b/drivers/net/bnxt/tf_core/tf_global_cfg.h
new file mode 100644
index 000000000..5c73bb115
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_global_cfg.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_GLOBAL_CFG_H_
+#define TF_GLOBAL_CFG_H_
+
+#include "tf_core.h"
+#include "stack.h"
+
+/**
+ * The global cfg module provides processing of global cfg types.
+ */
+
+struct tf;
+
+/**
+ * Global cfg configuration enumeration.
+ */
+enum tf_global_cfg_cfg_type {
+	/**
+	 * No configuration
+	 */
+	TF_GLOBAL_CFG_CFG_NULL,
+	/**
+	 * HCAPI 'controlled'
+	 */
+	TF_GLOBAL_CFG_CFG_HCAPI,
+};
+
+/**
+ * Global cfg configuration structure, used by the Device to configure
+ * how an individual global cfg type is configured in regard to the HCAPI type.
+ */
+struct tf_global_cfg_cfg {
+	/**
+	 * Global cfg config controls how the DB for that element is
+	 * processed.
+	 */
+	enum tf_global_cfg_cfg_type cfg_type;
+
+	/**
+	 * HCAPI Type for the element. Used for TF to HCAPI type
+	 * conversion.
+	 */
+	uint16_t hcapi_type;
+};
+
+/**
+ * Global Cfg configuration parameters
+ */
+struct tf_global_cfg_cfg_parms {
+	/**
+	 * Number of table types in the configuration array
+	 */
+	uint16_t num_elements;
+	/**
+	 * Table Type element configuration array
+	 */
+	struct tf_global_cfg_cfg *cfg;
+};
+
+/**
+ * global cfg parameters
+ */
+struct tf_dev_global_cfg_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Global config type
+	 */
+	enum tf_global_config_type type;
+	/**
+	 * [in] Offset within the given config type
+	 */
+	uint32_t offset;
+	/**
+	 * [in/out] Value of the configuration
+	 * set - Read, Modify and Write
+	 * get - Read the full configuration
+	 */
+	uint8_t *config;
+	/**
+	 * [in] struct containing size
+	 */
+	uint16_t config_sz_in_bytes;
+};
+
+/**
+ * @page global cfg
+ *
+ * @ref tf_global_cfg_bind
+ *
+ * @ref tf_global_cfg_unbind
+ *
+ * @ref tf_global_cfg_set
+ *
+ * @ref tf_global_cfg_get
+ *
+ */
+/**
+ * Initializes the Global Cfg module with the requested DBs. Must be
+ * invoked as the first thing before any of the access functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to Global Cfg configuration parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_bind(struct tf *tfp,
+		   struct tf_global_cfg_cfg_parms *parms);
+
+/**
+ * Cleans up the private DBs and releases all the data.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * The module must be bound again via tf_global_cfg_bind() before any
+ * further set/get calls can be made.
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int
+tf_global_cfg_unbind(struct tf *tfp);
+
+/**
+ * Updates the global configuration table
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_set(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+/**
+ * Get global configuration
+ *
+ * [in] tfp
+ *   Pointer to TF handle, used for HCAPI communication
+ *
+ * [in] parms
+ *   Pointer to global cfg parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_global_cfg_get(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *parms);
+
+#endif /* TF_GLOBAL_CFG_H_ */
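
As a reader aid (not part of the patch), a minimal sketch of the
read-modify-write flow implied by the [in/out] config field above. The
value 0 used for 'type' and the function name are placeholders; real
values come from enum tf_global_config_type in tf_core.h:

/* Illustrative only: read 4 bytes of a global cfg block, set a bit,
 * and write it back through the new module API. */
static int example_global_cfg_rmw(struct tf *tfp)
{
	uint32_t val = 0;
	struct tf_dev_global_cfg_parms parms = { 0 };
	int rc;

	parms.dir = TF_DIR_RX;
	parms.type = 0;			/* placeholder config type */
	parms.offset = 0;		/* offset within that config block */
	parms.config = (uint8_t *)&val;
	parms.config_sz_in_bytes = sizeof(val);

	rc = tf_global_cfg_get(tfp, &parms);	/* read current value */
	if (rc)
		return rc;

	val |= 0x1;				/* modify */

	return tf_global_cfg_set(tfp, &parms);	/* write back */
}
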
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 8c2dff8ad..035c0948d 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -991,6 +991,111 @@ tf_msg_get_tbl_entry(struct tf *tfp,
 
 /* HWRM Tunneled messages */
 
+int
+tf_msg_get_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_get_global_cfg_input_t req = { 0 };
+	tf_get_global_cfg_output_t resp = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+	uint16_t resp_size = 0;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_GET_GLOBAL_CFG,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	resp_size = tfp_le_to_cpu_16(resp.size);
+	if (resp_size < params->config_sz_in_bytes)
+		return -EINVAL;
+
+	if (params->config)
+		tfp_memcpy(params->config,
+			   resp.data,
+			   resp_size);
+	else
+		return -EFAULT;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_set_global_cfg(struct tf *tfp,
+		      struct tf_dev_global_cfg_parms *params)
+{
+	int rc = 0;
+	struct tfp_send_msg_parms parms = { 0 };
+	tf_set_global_cfg_input_t req = { 0 };
+	uint32_t flags = 0;
+	uint8_t fw_session_id;
+
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	flags = (params->dir == TF_DIR_TX ?
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
+		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.type = tfp_cpu_to_le_32(params->type);
+	req.offset = tfp_cpu_to_le_32(params->offset);
+	tfp_memcpy(req.data, params->config,
+		   params->config_sz_in_bytes);
+	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SET_GLOBAL_CFG,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+
+	if (rc != 0)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 int
 tf_msg_bulk_get_tbl_entry(struct tf *tfp,
 			  enum tf_dir dir,
@@ -1066,8 +1171,8 @@ tf_msg_get_if_tbl_entry(struct tf *tfp,
 		return rc;
 	}
 
-	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_SET_INPUT_FLAGS_DIR_TX :
-		 TF_IF_TBL_SET_INPUT_FLAGS_DIR_RX);
+	flags = (params->dir == TF_DIR_TX ? TF_IF_TBL_GET_INPUT_FLAGS_DIR_TX :
+		 TF_IF_TBL_GET_INPUT_FLAGS_DIR_RX);
 
 	/* Populate the request */
 	req.fw_session_id =
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index c02a5203c..195710eb8 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -12,6 +12,7 @@
 #include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_tcam.h"
+#include "tf_global_cfg.h"
 
 struct tf;
 
@@ -448,6 +449,36 @@ int tf_msg_get_tbl_entry(struct tf *tfp,
 
 /* HWRM Tunneled messages */
 
+/**
+ * Sends global cfg read request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to read parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_get_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
+/**
+ * Sends global cfg update request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] params
+ *   Pointer to write parameters
+ *
+ * Returns:
+ *   0 on Success else internal Truflow error
+ */
+int tf_msg_set_global_cfg(struct tf *tfp,
+			  struct tf_dev_global_cfg_parms *params);
+
 /**
  * Sends bulk get message of a Table Type element to the firmware.
  *
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 31/51] net/bnxt: add support for EEM System memory
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (29 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
                           ` (21 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, Randy Schacher, Venkat Duvvuru

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

- Select EEM Host or System memory via config parameter
- Add EEM system memory support for kernel memory
- Depends on DPDK changes that add support for the HWRM_OEM_CMD.

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 config/common_base                      |   1 +
 drivers/net/bnxt/Makefile               |   3 +
 drivers/net/bnxt/bnxt.h                 |   8 +
 drivers/net/bnxt/bnxt_hwrm.c            |  27 +
 drivers/net/bnxt/bnxt_hwrm.h            |   1 +
 drivers/net/bnxt/meson.build            |   2 +-
 drivers/net/bnxt/tf_core/Makefile       |   5 +-
 drivers/net/bnxt/tf_core/tf_core.c      |  13 +-
 drivers/net/bnxt/tf_core/tf_core.h      |   4 +-
 drivers/net/bnxt/tf_core/tf_device.c    |   5 +-
 drivers/net/bnxt/tf_core/tf_device_p4.c |   2 +-
 drivers/net/bnxt/tf_core/tf_em.h        | 113 +---
 drivers/net/bnxt/tf_core/tf_em_common.c | 683 ++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_common.h |  30 ++
 drivers/net/bnxt/tf_core/tf_em_host.c   | 689 +-----------------------
 drivers/net/bnxt/tf_core/tf_em_system.c | 541 ++++++++++++++++---
 drivers/net/bnxt/tf_core/tf_if_tbl.h    |   4 +-
 drivers/net/bnxt/tf_core/tf_msg.c       |  24 +
 drivers/net/bnxt/tf_core/tf_tbl.h       |   7 +
 drivers/net/bnxt/tf_core/tfp.c          |  12 +
 drivers/net/bnxt/tf_core/tfp.h          |  15 +
 21 files changed, 1319 insertions(+), 870 deletions(-)

diff --git a/config/common_base b/config/common_base
index fe30c515e..370a48f02 100644
--- a/config/common_base
+++ b/config/common_base
@@ -220,6 +220,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
 # Compile burst-oriented Broadcom BNXT PMD driver
 #
 CONFIG_RTE_LIBRTE_BNXT_PMD=y
+CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM=n
 
 #
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 349b09c36..6b9544b5d 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -50,6 +50,9 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_ulp -I$(SRCDIR)/tf_core -I$(SRCDIR)/hcapi
 include $(SRCDIR)/tf_ulp/Makefile
 include $(SRCDIR)/tf_core/Makefile
 include $(SRCDIR)/hcapi/Makefile
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), y)
+CFLAGS += -DTF_USE_SYSTEM_MEM
+endif
 endif
 
 #
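
For reference, selecting the system-memory EEM path with the legacy make
build means enabling the new option in config/common_base; the driver
Makefile above then defines TF_USE_SYSTEM_MEM, and the tf_core Makefile
later in this patch builds tf_em_system.c instead of tf_em_host.c:

CONFIG_RTE_LIBRTE_BNXT_PMD=y
CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM=y
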
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 65862abdc..43e5e7162 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -563,6 +563,13 @@ struct bnxt_rep_info {
 				     DEV_RX_OFFLOAD_SCATTER | \
 				     DEV_RX_OFFLOAD_RSS_HASH)
 
+#define  MAX_TABLE_SUPPORT 4
+#define  MAX_DIR_SUPPORT   2
+struct bnxt_dmabuf_info {
+	uint32_t entry_num;
+	int      fd[MAX_DIR_SUPPORT][MAX_TABLE_SUPPORT];
+};
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
 struct bnxt_flow_stat_info {
@@ -780,6 +787,7 @@ struct bnxt {
 	uint16_t		port_svif;
 
 	struct tf		tfp;
+	struct bnxt_dmabuf_info dmabuf;
 	struct bnxt_ulp_context	*ulp_ctx;
 	struct bnxt_flow_stat_info *flow_stat;
 	uint8_t			flow_xstat;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index e6a28d07c..2605ef039 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -5506,3 +5506,30 @@ int bnxt_hwrm_cfa_counter_qstats(struct bnxt *bp,
 
 	return 0;
 }
+
+#ifdef RTE_LIBRTE_BNXT_PMD_SYSTEM
+int
+bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num)
+{
+	struct hwrm_oem_cmd_input req = {0};
+	struct hwrm_oem_cmd_output *resp = bp->hwrm_cmd_resp_addr;
+	struct bnxt_dmabuf_info oem_data;
+	int rc = 0;
+
+	HWRM_PREP(&req, HWRM_OEM_CMD, BNXT_USE_CHIMP_MB);
+	req.IANA = 0x14e4;
+
+	memset(&oem_data, 0, sizeof(struct bnxt_dmabuf_info));
+	oem_data.entry_num = (entry_num);
+	memcpy(&req.oem_data[0], &oem_data, sizeof(struct bnxt_dmabuf_info));
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+	HWRM_CHECK_RESULT();
+
+	bp->dmabuf.entry_num = entry_num;
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+#endif /* RTE_LIBRTE_BNXT_PMD_SYSTEM */
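
A minimal sketch (not part of this patch) of how the new helper could be
invoked; the caller shown here is hypothetical, this change only adds the
helper itself:

#ifdef RTE_LIBRTE_BNXT_PMD_SYSTEM
/* Hypothetical caller: request 'entry_num' EEM entries via the OEM cmd. */
static int example_request_eem_dmabuf(struct bnxt *bp, uint32_t entry_num)
{
	int rc;

	rc = bnxt_hwrm_oem_cmd(bp, entry_num);
	if (rc)
		return rc;

	/* bp->dmabuf.entry_num now records the request; the fd[][] array
	 * is not populated by the helper as shown above. */
	return 0;
}
#endif
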
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 87cd40779..9e0b79904 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -276,4 +276,5 @@ int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 				 uint16_t *vnic_id, uint16_t *svif);
 int bnxt_hwrm_parent_pf_qcfg(struct bnxt *bp);
 int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp);
+int bnxt_hwrm_oem_cmd(struct bnxt *bp, uint32_t entry_num);
 #endif
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index ace7353be..8f6ed419e 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -31,7 +31,6 @@ sources = files('bnxt_cpr.c',
         'tf_core/tf_em_common.c',
         'tf_core/tf_em_host.c',
         'tf_core/tf_em_internal.c',
-        'tf_core/tf_em_system.c',
 	'tf_core/tf_rm.c',
 	'tf_core/tf_tbl.c',
 	'tf_core/tfp.c',
@@ -46,6 +45,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/tf_if_tbl.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
+	'tf_core/tf_em_host.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
index 202db4150..750c25c5e 100644
--- a/drivers/net/bnxt/tf_core/Makefile
+++ b/drivers/net/bnxt/tf_core/Makefile
@@ -16,8 +16,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_internal.c
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM), n)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_host.c
-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em_system.c
+else
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD_SYSTEM) += tf_core/tf_em_system.c
+endif
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_session.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 0f119b45f..00b2775ed 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -540,10 +540,12 @@ tf_alloc_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_alloc_parms aparms = { 0 };
+	struct tf_tcam_alloc_parms aparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&aparms, 0, sizeof(struct tf_tcam_alloc_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -598,10 +600,13 @@ tf_set_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_set_parms sparms = { 0 };
+	struct tf_tcam_set_parms sparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&sparms, 0, sizeof(struct tf_tcam_set_parms));
+
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
@@ -667,10 +672,12 @@ tf_free_tcam_entry(struct tf *tfp,
 	int rc;
 	struct tf_session *tfs;
 	struct tf_dev_info *dev;
-	struct tf_tcam_free_parms fparms = { 0 };
+	struct tf_tcam_free_parms fparms;
 
 	TF_CHECK_PARMS2(tfp, parms);
 
+	memset(&fparms, 0, sizeof(struct tf_tcam_free_parms));
+
 	/* Retrieve the session information */
 	rc = tf_session_get_session(tfp, &tfs);
 	if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 3f54ab16b..9e8042606 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -1731,7 +1731,7 @@ struct tf_set_if_tbl_entry_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -1768,7 +1768,7 @@ struct tf_get_if_tbl_entry_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_device.c b/drivers/net/bnxt/tf_core/tf_device.c
index ead958418..f08f7eba7 100644
--- a/drivers/net/bnxt/tf_core/tf_device.c
+++ b/drivers/net/bnxt/tf_core/tf_device.c
@@ -92,8 +92,11 @@ tf_dev_bind_p4(struct tf *tfp,
 	em_cfg.num_elements = TF_EM_TBL_TYPE_MAX;
 	em_cfg.cfg = tf_em_ext_p4;
 	em_cfg.resources = resources;
+#ifdef TF_USE_SYSTEM_MEM
+	em_cfg.mem_type = TF_EEM_MEM_TYPE_SYSTEM;
+#else
 	em_cfg.mem_type = TF_EEM_MEM_TYPE_HOST;
-
+#endif
 	rc = tf_em_ext_common_bind(tfp, &em_cfg);
 	if (rc) {
 		TFP_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index 652608264..dfe626c8a 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -126,7 +126,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
 	.tf_dev_free_ext_tbl = tf_tbl_ext_free,
 	.tf_dev_alloc_search_tbl = NULL,
 	.tf_dev_set_tbl = tf_tbl_set,
-	.tf_dev_set_ext_tbl = tf_tbl_ext_set,
+	.tf_dev_set_ext_tbl = tf_tbl_ext_common_set,
 	.tf_dev_get_tbl = tf_tbl_get,
 	.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,
 	.tf_dev_alloc_tcam = tf_tcam_alloc,
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
index 39a216341..089026178 100644
--- a/drivers/net/bnxt/tf_core/tf_em.h
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -16,6 +16,9 @@
 
 #include "hcapi/hcapi_cfa_defs.h"
 
+#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
 #define TF_HW_EM_KEY_MAX_SIZE 52
 #define TF_EM_KEY_RECORD_SIZE 64
 
@@ -69,8 +72,16 @@
 #error "Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define"
 #endif
 
+/*
+ * System memory always uses 4K pages
+ */
+#ifdef TF_USE_SYSTEM_MEM
+#define TF_EM_PAGE_SIZE (1 << TF_EM_PAGE_SIZE_4K)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SIZE_4K)
+#else
 #define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
 #define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
+#endif
 
 /*
  * Used to build GFID:
@@ -168,39 +179,6 @@ struct tf_em_cfg_parms {
  * @ref tf_em_ext_common_alloc
  */
 
-/**
- * Allocates EEM Table scope
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- *   -ENOMEM - Out of memory
- */
-int tf_alloc_eem_tbl_scope(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free's EEM Table scope control block
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_free_eem_tbl_scope_cb(struct tf *tfp,
-			     struct tf_free_tbl_scope_parms *parms);
-
 /**
  * Insert record in to internal EM table
  *
@@ -374,8 +352,8 @@ int tf_em_ext_common_unbind(struct tf *tfp);
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_alloc(struct tf *tfp,
-			 struct tf_alloc_tbl_scope_parms *parms);
+int tf_em_ext_alloc(struct tf *tfp,
+		    struct tf_alloc_tbl_scope_parms *parms);
 
 /**
  * Free for external EEM using host memory
@@ -390,40 +368,8 @@ int tf_em_ext_host_alloc(struct tf *tfp,
  *   0       - Success
  *   -EINVAL - Parameter error
  */
-int tf_em_ext_host_free(struct tf *tfp,
-			struct tf_free_tbl_scope_parms *parms);
-
-/**
- * Alloc for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_alloc(struct tf *tfp,
-			   struct tf_alloc_tbl_scope_parms *parms);
-
-/**
- * Free for external EEM using system memory
- *
- * [in] tfp
- *   Pointer to TruFlow handle
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-int tf_em_ext_system_free(struct tf *tfp,
-			  struct tf_free_tbl_scope_parms *parms);
+int tf_em_ext_free(struct tf *tfp,
+		   struct tf_free_tbl_scope_parms *parms);
 
 /**
  * Common free for external EEM using host or system memory
@@ -510,8 +456,8 @@ tf_tbl_ext_free(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms);
 
 /**
  * Sets the specified external table type element.
@@ -529,26 +475,11 @@ int tf_tbl_ext_set(struct tf *tfp,
  *   - (0) if successful.
  *   - (-EINVAL) on failure.
  */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms);
+int tf_tbl_ext_set(struct tf *tfp,
+		   struct tf_tbl_set_parms *parms);
 
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data by invoking the
- * firmware.
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_system_set(struct tf *tfp,
-			  struct tf_tbl_set_parms *parms);
+int
+tf_em_ext_system_bind(struct tf *tfp,
+		      struct tf_em_cfg_parms *parms);
 
 #endif /* _TF_EM_H_ */
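
To make the 4K-page choice above concrete, a small standalone sketch (not
driver code); the shift value 12 for TF_EM_PAGE_SIZE_4K is an assumption
that mirrors the driver define referenced above:

#include <stdio.h>

#define EXAMPLE_PAGE_SHIFT_4K	12	/* assumed TF_EM_PAGE_SIZE_4K */
#define EXAMPLE_PAGE_SIZE	(1 << EXAMPLE_PAGE_SHIFT_4K)	/* 4 KiB */
#define EXAMPLE_PAGE_PTRS	(EXAMPLE_PAGE_SIZE / sizeof(void *))

int main(void)
{
	printf("system-memory EM page size: %d bytes\n", EXAMPLE_PAGE_SIZE);
	printf("page-table fan-out: %zu pointers per page\n",
	       EXAMPLE_PAGE_PTRS);
	printf("EM entries: min %u, max %u\n", 1u << 15, 1u << 27);
	return 0;
}
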
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c
index 23a7fc9c2..8b02b8ba3 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.c
+++ b/drivers/net/bnxt/tf_core/tf_em_common.c
@@ -23,6 +23,8 @@
 
 #include "bnxt.h"
 
+/* Number of pointers per page_size */
+#define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
 /**
  * EM DBs.
@@ -281,19 +283,602 @@ tf_em_create_key_entry(struct cfa_p4_eem_entry_hdr *result,
 		       struct cfa_p4_eem_64b_entry *key_entry)
 {
 	key_entry->hdr.word1 = result->word1;
+	key_entry->hdr.pointer = result->pointer;
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
 
-	if (result->word1 & CFA_P4_EEM_ENTRY_ACT_REC_INT_MASK)
-		key_entry->hdr.pointer = result->pointer;
-	else
-		key_entry->hdr.pointer = result->pointer;
 
-	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+	return 0;
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == TF_PT_LVL_0) {
+		page_cnt[TF_PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == TF_PT_LVL_1) {
+		page_cnt[TF_PT_LVL_1] = num_data_pages;
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else if (max_lvl == TF_PT_LVL_2) {
+		page_cnt[TF_PT_LVL_2] = num_data_pages;
+		page_cnt[TF_PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
+		page_cnt[TF_PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = TF_PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == TF_PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == TF_PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+int
+tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		 uint32_t page_size)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == TF_RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(page_size,
+					  tbl->entry_size,
+					  tbl->num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		TFP_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
+			    page_size);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, page_size,
+				page_cnt);
+
+	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	TFP_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 \
+		    " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * page_size,
+		    num_data_pages,
+		    page_cnt[TF_PT_LVL_0],
+		    page_cnt[TF_PT_LVL_1],
+		    page_cnt[TF_PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR, "EEM: Invalid number of Rx requested: "
+				    "%u\n",
+		       num_entries);
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					(TF_KILOBYTE * TF_KILOBYTE)) /
+			(key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->rx_num_flows_in_k != 0 &&
+	    parms->rx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Rx key size required: %u\n",
+			    (parms->rx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+
+	if (parms->tx_num_flows_in_k != 0 &&
+	    parms->tx_max_key_sz_in_bits / 8 == 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: Tx key size required: %u\n",
+			    (parms->tx_max_key_sz_in_bits));
+		return -EINVAL;
+	}
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
+
+	return 0;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+static int
+tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t mask;
+	uint32_t key0_hash;
+	uint32_t key1_hash;
+	uint32_t key0_index;
+	uint32_t key1_index;
+	struct cfa_p4_eem_64b_entry key_entry;
+	uint32_t index;
+	enum hcapi_cfa_em_table_type table_type;
+	uint32_t gfid;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	uint64_t big_hash;
+	int rc;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+#ifdef TF_EEM_DEBUG
+	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
+#endif
+
+	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
+				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
+	key0_hash = (uint32_t)(big_hash >> 32);
+	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
+
+	key0_index = key0_hash & mask;
+	key1_index = key1_hash & mask;
 
 #ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)key_entry, TF_EM_KEY_RECORD_SIZE, "Create raw:");
+	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
+	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
 #endif
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Try to add to Key0 table, if that does not work then
+	 * try the key1 table.
+	 */
+	index = key0_index;
+	op.opcode = HCAPI_CFA_HWOPS_ADD;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = (uint8_t *)&key_entry;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc == 0) {
+		table_type = TF_KEY0_TABLE;
+	} else {
+		index = key1_index;
+
+		key_tbl.base0 =
+			(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
+		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+
+		rc = hcapi_cfa_key_hw_op(&op,
+					 &key_tbl,
+					 &key_obj,
+					 &key_loc);
+		if (rc != 0)
+			return rc;
+
+		table_type = TF_KEY1_TABLE;
+	}
+
+	TF_SET_GFID(gfid,
+		    index,
+		    table_type);
+	TF_SET_FLOW_ID(parms->flow_id,
+		       gfid,
+		       TF_GFID_TABLE_EXTERNAL,
+		       parms->dir);
+	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+				     0,
+				     0,
+				     0,
+				     index,
+				     0,
+				     table_type);
+
+	return 0;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+static int
+tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+		    struct tf_delete_em_entry_parms *parms)
+{
+	enum hcapi_cfa_em_table_type hash_type;
+	uint32_t index;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+	int rc;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	op.opcode = HCAPI_CFA_HWOPS_DEL;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables
+			[(hash_type == 0 ? TF_KEY0_TABLE : TF_KEY1_TABLE)];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
+	key_obj.data = NULL;
+	key_obj.size = TF_EM_KEY_RECORD_SIZE;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_insert_eem_entry
+		(tbl_scope_cb,
+		parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int
+tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+
+	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_eem_entry(tbl_scope_cb, parms);
 }
 
+
 int
 tf_em_ext_common_bind(struct tf *tfp,
 		      struct tf_em_cfg_parms *parms)
@@ -341,6 +926,7 @@ tf_em_ext_common_bind(struct tf *tfp,
 		init = 1;
 
 	mem_type = parms->mem_type;
+
 	return 0;
 }
 
@@ -375,31 +961,88 @@ tf_em_ext_common_unbind(struct tf *tfp)
 	return 0;
 }
 
-int tf_tbl_ext_set(struct tf *tfp,
-		   struct tf_tbl_set_parms *parms)
+/**
+ * Sets the specified external table type element.
+ *
+ * This API sets the specified element data
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] parms
+ *   Pointer to table set parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_tbl_ext_common_set(struct tf *tfp,
+			  struct tf_tbl_set_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_tbl_ext_host_set(tfp, parms);
-	else
-		return tf_tbl_ext_system_set(tfp, parms);
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	uint32_t tbl_scope_id;
+	struct hcapi_cfa_hwop op;
+	struct hcapi_cfa_key_tbl key_tbl;
+	struct hcapi_cfa_key_data key_obj;
+	struct hcapi_cfa_key_loc key_loc;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	if (parms->data == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, invalid parms->data\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tbl_scope_id = parms->tbl_scope_id;
+
+	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+		TFP_DRV_LOG(ERR,
+			    "%s, Table scope not allocated\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	/* Get the table scope control block associated with the
+	 * external pool
+	 */
+	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		TFP_DRV_LOG(ERR,
+			    "%s, table scope error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	op.opcode = HCAPI_CFA_HWOPS_PUT;
+	key_tbl.base0 =
+		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
+	key_tbl.page_size = TF_EM_PAGE_SIZE;
+	key_obj.offset = parms->idx;
+	key_obj.data = parms->data;
+	key_obj.size = parms->data_sz_in_bytes;
+
+	rc = hcapi_cfa_key_hw_op(&op,
+				 &key_tbl,
+				 &key_obj,
+				 &key_loc);
+
+	return rc;
 }
 
 int
 tf_em_ext_common_alloc(struct tf *tfp,
 		       struct tf_alloc_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_alloc(tfp, parms);
-	else
-		return tf_em_ext_system_alloc(tfp, parms);
+	return tf_em_ext_alloc(tfp, parms);
 }
 
 int
 tf_em_ext_common_free(struct tf *tfp,
 		      struct tf_free_tbl_scope_parms *parms)
 {
-	if (mem_type == TF_EEM_MEM_TYPE_HOST)
-		return tf_em_ext_host_free(tfp, parms);
-	else
-		return tf_em_ext_system_free(tfp, parms);
+	return tf_em_ext_free(tfp, parms);
 }
diff --git a/drivers/net/bnxt/tf_core/tf_em_common.h b/drivers/net/bnxt/tf_core/tf_em_common.h
index bf01df9b8..fa313c458 100644
--- a/drivers/net/bnxt/tf_core/tf_em_common.h
+++ b/drivers/net/bnxt/tf_core/tf_em_common.h
@@ -101,4 +101,34 @@ void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
 			   uint32_t offset,
 			   enum hcapi_cfa_em_table_type table_type);
 
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+int tf_em_size_table(struct hcapi_cfa_em_table *tbl,
+		     uint32_t page_size);
+
 #endif /* _TF_EM_COMMON_H_ */
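
A small sketch (not driver code) of the constraint tf_em_validate_num_entries()
enforces when an explicit flow count is supplied: the resulting entry count
must be a power of two in the 32K-128M range, so rx/tx_num_flows_in_k must be
one of 32, 64, ..., 131072 (TF_KILOBYTE is assumed to be 1024 here):

#include <stdbool.h>
#include <stdint.h>

static bool eem_flows_in_k_valid(uint32_t flows_in_k)
{
	uint64_t entries = (uint64_t)flows_in_k * 1024;	/* assumed TF_KILOBYTE */

	if (entries < (1ULL << 15) || entries > (1ULL << 27))
		return false;

	return (entries & (entries - 1)) == 0;	/* power of two */
}
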
diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 2626a59fe..8cc92c438 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -22,7 +22,6 @@
 
 #include "bnxt.h"
 
-
 #define PTU_PTE_VALID          0x1UL
 #define PTU_PTE_LAST           0x2UL
 #define PTU_PTE_NEXT_TO_LAST   0x4UL
@@ -30,20 +29,6 @@
 /* Number of pointers per page_size */
 #define MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
-#define TF_EM_PG_SZ_4K        (1 << 12)
-#define TF_EM_PG_SZ_8K        (1 << 13)
-#define TF_EM_PG_SZ_64K       (1 << 16)
-#define TF_EM_PG_SZ_256K      (1 << 18)
-#define TF_EM_PG_SZ_1M        (1 << 20)
-#define TF_EM_PG_SZ_2M        (1 << 21)
-#define TF_EM_PG_SZ_4M        (1 << 22)
-#define TF_EM_PG_SZ_1G        (1 << 30)
-
-#define TF_EM_CTX_ID_INVALID   0xFFFF
-
-#define TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
-#define TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
-
 /**
  * EM DBs.
  */
@@ -294,203 +279,6 @@ tf_em_setup_page_table(struct hcapi_cfa_em_table *tbl)
 	tbl->l0_dma_addr = tbl->pg_tbl[TF_PT_LVL_0].pg_pa_tbl[0];
 }
 
-/**
- * Given the page size, size of each data item (entry size),
- * and the total number of entries needed, determine the number
- * of page table levels and the number of data pages required.
- *
- * [in] page_size
- *   Page size
- *
- * [in] entry_size
- *   Entry size
- *
- * [in] num_entries
- *   Number of entries needed
- *
- * [out] num_data_pages
- *   Number of pages required
- *
- * Returns:
- *   Success  - Number of EM page levels required
- *   -ENOMEM  - Out of memory
- */
-static int
-tf_em_size_page_tbl_lvl(uint32_t page_size,
-			uint32_t entry_size,
-			uint32_t num_entries,
-			uint64_t *num_data_pages)
-{
-	uint64_t lvl_data_size = page_size;
-	int lvl = TF_PT_LVL_0;
-	uint64_t data_size;
-
-	*num_data_pages = 0;
-	data_size = (uint64_t)num_entries * entry_size;
-
-	while (lvl_data_size < data_size) {
-		lvl++;
-
-		if (lvl == TF_PT_LVL_1)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				page_size;
-		else if (lvl == TF_PT_LVL_2)
-			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
-				MAX_PAGE_PTRS(page_size) * page_size;
-		else
-			return -ENOMEM;
-	}
-
-	*num_data_pages = roundup(data_size, page_size) / page_size;
-
-	return lvl;
-}
-
-/**
- * Return the number of page table pages needed to
- * reference the given number of next level pages.
- *
- * [in] num_pages
- *   Number of EM pages
- *
- * [in] page_size
- *   Size of each EM page
- *
- * Returns:
- *   Number of EM page table pages
- */
-static uint32_t
-tf_em_page_tbl_pgcnt(uint32_t num_pages,
-		     uint32_t page_size)
-{
-	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
-		       MAX_PAGE_PTRS(page_size);
-	return 0;
-}
-
-/**
- * Given the number of data pages, page_size and the maximum
- * number of page table levels (already determined), size
- * the number of page table pages required at each level.
- *
- * [in] max_lvl
- *   Max number of levels
- *
- * [in] num_data_pages
- *   Number of EM data pages
- *
- * [in] page_size
- *   Size of an EM page
- *
- * [out] *page_cnt
- *   EM page count
- */
-static void
-tf_em_size_page_tbls(int max_lvl,
-		     uint64_t num_data_pages,
-		     uint32_t page_size,
-		     uint32_t *page_cnt)
-{
-	if (max_lvl == TF_PT_LVL_0) {
-		page_cnt[TF_PT_LVL_0] = num_data_pages;
-	} else if (max_lvl == TF_PT_LVL_1) {
-		page_cnt[TF_PT_LVL_1] = num_data_pages;
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else if (max_lvl == TF_PT_LVL_2) {
-		page_cnt[TF_PT_LVL_2] = num_data_pages;
-		page_cnt[TF_PT_LVL_1] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_2], page_size);
-		page_cnt[TF_PT_LVL_0] =
-		tf_em_page_tbl_pgcnt(page_cnt[TF_PT_LVL_1], page_size);
-	} else {
-		return;
-	}
-}
-
-/**
- * Size the EM table based on capabilities
- *
- * [in] tbl
- *   EM table to size
- *
- * Returns:
- *   0        - Success
- *   - EINVAL - Parameter error
- *   - ENOMEM - Out of memory
- */
-static int
-tf_em_size_table(struct hcapi_cfa_em_table *tbl)
-{
-	uint64_t num_data_pages;
-	uint32_t *page_cnt;
-	int max_lvl;
-	uint32_t num_entries;
-	uint32_t cnt = TF_EM_MIN_ENTRIES;
-
-	/* Ignore entry if both size and number are zero */
-	if (!tbl->entry_size && !tbl->num_entries)
-		return 0;
-
-	/* If only one is set then error */
-	if (!tbl->entry_size || !tbl->num_entries)
-		return -EINVAL;
-
-	/* Determine number of page table levels and the number
-	 * of data pages needed to process the given eem table.
-	 */
-	if (tbl->type == TF_RECORD_TABLE) {
-		/*
-		 * For action records just a memory size is provided. Work
-		 * backwards to resolve to number of entries
-		 */
-		num_entries = tbl->num_entries / tbl->entry_size;
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			num_entries = TF_EM_MIN_ENTRIES;
-		} else {
-			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
-				cnt *= 2;
-			num_entries = cnt;
-		}
-	} else {
-		num_entries = tbl->num_entries;
-	}
-
-	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
-					  tbl->entry_size,
-					  tbl->num_entries,
-					  &num_data_pages);
-	if (max_lvl < 0) {
-		TFP_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
-		TFP_DRV_LOG(WARNING,
-			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
-			    tbl->type, (uint64_t)num_entries * tbl->entry_size,
-			    TF_EM_PAGE_SIZE);
-		return -ENOMEM;
-	}
-
-	tbl->num_lvl = max_lvl + 1;
-	tbl->num_data_pages = num_data_pages;
-
-	/* Determine the number of pages needed at each level */
-	page_cnt = tbl->page_cnt;
-	memset(page_cnt, 0, sizeof(tbl->page_cnt));
-	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
-				page_cnt);
-
-	TFP_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
-	TFP_DRV_LOG(INFO,
-		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
-		    max_lvl + 1,
-		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
-		    num_data_pages,
-		    page_cnt[TF_PT_LVL_0],
-		    page_cnt[TF_PT_LVL_1],
-		    page_cnt[TF_PT_LVL_2]);
-
-	return 0;
-}
-
 /**
  * Unregisters EM Ctx in Firmware
  *
@@ -552,7 +340,7 @@ tf_em_ctx_reg(struct tf *tfp,
 		tbl = &ctxp->em_tables[i];
 
 		if (tbl->num_entries && tbl->entry_size) {
-			rc = tf_em_size_table(tbl);
+			rc = tf_em_size_table(tbl, TF_EM_PAGE_SIZE);
 
 			if (rc)
 				goto cleanup;
@@ -578,403 +366,8 @@ tf_em_ctx_reg(struct tf *tfp,
 	return rc;
 }
 
-
-/**
- * Validates EM number of entries requested
- *
- * [in] tbl_scope_cb
- *   Pointer to table scope control block to be populated
- *
- * [in] parms
- *   Pointer to input parameters
- *
- * Returns:
- *   0       - Success
- *   -EINVAL - Parameter error
- */
-static int
-tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
-			   struct tf_alloc_tbl_scope_parms *parms)
-{
-	uint32_t cnt;
-
-	if (parms->rx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
-		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->rx_mem_size_in_mb *
-					TF_MEGABYTE) / (key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
-				    "%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
-				    "%u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx flows "
-				    "requested:%u max:%u\n",
-				    parms->rx_num_flows_in_k * TF_KILOBYTE,
-			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		/* must be a power-of-2 supported value
-		 * in the range 32K - 128M
-		 */
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Rx requested: %u\n",
-				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->tx_mem_size_in_mb != 0) {
-		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
-		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
-				     + 1);
-		uint32_t num_entries = (parms->tx_mem_size_in_mb *
-					(TF_KILOBYTE * TF_KILOBYTE)) /
-			(key_b + action_b);
-
-		if (num_entries < TF_EM_MIN_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Insufficient memory requested:%uMB\n",
-				    parms->rx_mem_size_in_mb);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while (num_entries > cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-
-		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
-	} else {
-		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
-		    TF_EM_MIN_ENTRIES ||
-		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
-		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx flows "
-				    "requested:%u max:%u\n",
-				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
-			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
-			return -EINVAL;
-		}
-
-		cnt = TF_EM_MIN_ENTRIES;
-		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
-		       cnt <= TF_EM_MAX_ENTRIES)
-			cnt *= 2;
-
-		if (cnt > TF_EM_MAX_ENTRIES) {
-			TFP_DRV_LOG(ERR,
-				    "EEM: Invalid number of Tx requested: %u\n",
-		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
-			return -EINVAL;
-		}
-	}
-
-	if (parms->rx_num_flows_in_k != 0 &&
-	    (parms->rx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Rx key size required: %u\n",
-			    (parms->rx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-
-	if (parms->tx_num_flows_in_k != 0 &&
-	    (parms->tx_max_key_sz_in_bits / 8 == 0)) {
-		TFP_DRV_LOG(ERR,
-			    "EEM: Tx key size required: %u\n",
-			    (parms->tx_max_key_sz_in_bits));
-		return -EINVAL;
-	}
-	/* Rx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->rx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->rx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->rx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_EFC_TABLE].num_entries = 0;
-
-	/* Tx */
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY0_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_KEY1_TABLE].entry_size =
-		parms->tx_max_key_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].num_entries =
-		parms->tx_num_flows_in_k * TF_KILOBYTE;
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_RECORD_TABLE].entry_size =
-		parms->tx_max_action_entry_sz_in_bits / 8;
-
-	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[TF_EFC_TABLE].num_entries = 0;
-
-	return 0;
-}
-
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
- */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_insert_em_entry_parms *parms)
-{
-	uint32_t mask;
-	uint32_t key0_hash;
-	uint32_t key1_hash;
-	uint32_t key0_index;
-	uint32_t key1_index;
-	struct cfa_p4_eem_64b_entry key_entry;
-	uint32_t index;
-	enum hcapi_cfa_em_table_type table_type;
-	uint32_t gfid;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	uint64_t big_hash;
-	int rc;
-
-	/* Get mask to use on hash */
-	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE].num_entries);
-
-	if (!mask)
-		return -EINVAL;
-
-#ifdef TF_EEM_DEBUG
-	dump_raw((uint8_t *)parms->key, TF_HW_EM_KEY_MAX_SIZE + 4, "In Key");
-#endif
-
-	big_hash = hcapi_cfa_key_hash((uint64_t *)parms->key,
-				      (TF_HW_EM_KEY_MAX_SIZE + 4) * 8);
-	key0_hash = (uint32_t)(big_hash >> 32);
-	key1_hash = (uint32_t)(big_hash & 0xFFFFFFFF);
-
-	key0_index = key0_hash & mask;
-	key1_index = key1_hash & mask;
-
-#ifdef TF_EEM_DEBUG
-	TFP_DRV_LOG(DEBUG, "Key0 hash:0x%08x\n", key0_hash);
-	TFP_DRV_LOG(DEBUG, "Key1 hash:0x%08x\n", key1_hash);
-#endif
-	/*
-	 * Use the "result" arg to populate all of the key entry then
-	 * store the byte swapped "raw" entry in a local copy ready
-	 * for insertion in to the table.
-	 */
-	tf_em_create_key_entry((struct cfa_p4_eem_entry_hdr *)parms->em_record,
-				((uint8_t *)parms->key),
-				&key_entry);
-
-	/*
-	 * Try to add to Key0 table, if that does not work then
-	 * try the key1 table.
-	 */
-	index = key0_index;
-	op.opcode = HCAPI_CFA_HWOPS_ADD;
-	key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY0_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = (uint8_t *)&key_entry;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (rc == 0) {
-		table_type = TF_KEY0_TABLE;
-	} else {
-		index = key1_index;
-
-		key_tbl.base0 = (uint8_t *)
-		&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_KEY1_TABLE];
-		key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-
-		rc = hcapi_cfa_key_hw_op(&op,
-					 &key_tbl,
-					 &key_obj,
-					 &key_loc);
-		if (rc != 0)
-			return rc;
-
-		table_type = TF_KEY1_TABLE;
-	}
-
-	TF_SET_GFID(gfid,
-		    index,
-		    table_type);
-	TF_SET_FLOW_ID(parms->flow_id,
-		       gfid,
-		       TF_GFID_TABLE_EXTERNAL,
-		       parms->dir);
-	TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
-				     0,
-				     0,
-				     0,
-				     index,
-				     0,
-				     table_type);
-
-	return 0;
-}
-
-/** delete EEM hash entry API
- *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
- *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
- */
-static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
-		    struct tf_delete_em_entry_parms *parms)
-{
-	enum hcapi_cfa_em_table_type hash_type;
-	uint32_t index;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-	int rc;
-
-	if (parms->flow_handle == 0)
-		return -EINVAL;
-
-	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
-	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
-
-	op.opcode = HCAPI_CFA_HWOPS_DEL;
-	key_tbl.base0 = (uint8_t *)
-	&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[(hash_type == 0 ?
-							  TF_KEY0_TABLE :
-							  TF_KEY1_TABLE)];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = index * TF_EM_KEY_RECORD_SIZE;
-	key_obj.data = NULL;
-	key_obj.size = TF_EM_KEY_RECORD_SIZE;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	if (!rc)
-		return rc;
-
-	return 0;
-}
-
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_insert_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_insert_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_insert_eem_entry(tbl_scope_cb, parms);
-}
-
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
 int
-tf_em_delete_ext_entry(struct tf *tfp __rte_unused,
-		       struct tf_delete_em_entry_parms *parms)
-{
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
-	}
-
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
-}
-
-int
-tf_em_ext_host_alloc(struct tf *tfp,
-		     struct tf_alloc_tbl_scope_parms *parms)
+tf_em_ext_alloc(struct tf *tfp, struct tf_alloc_tbl_scope_parms *parms)
 {
 	int rc;
 	enum tf_dir dir;
@@ -1081,7 +474,7 @@ tf_em_ext_host_alloc(struct tf *tfp,
 
 cleanup_full:
 	free_parms.tbl_scope_id = parms->tbl_scope_id;
-	tf_em_ext_host_free(tfp, &free_parms);
+	tf_em_ext_free(tfp, &free_parms);
 	return -EINVAL;
 
 cleanup:
@@ -1094,8 +487,8 @@ tf_em_ext_host_alloc(struct tf *tfp,
 }
 
 int
-tf_em_ext_host_free(struct tf *tfp,
-		    struct tf_free_tbl_scope_parms *parms)
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
 {
 	int rc = 0;
 	enum tf_dir  dir;
@@ -1136,75 +529,3 @@ tf_em_ext_host_free(struct tf *tfp,
 	tbl_scopes[parms->tbl_scope_id].tbl_scope_id = TF_TBL_SCOPE_INVALID;
 	return rc;
 }
-
-/**
- * Sets the specified external table type element.
- *
- * This API sets the specified element data
- *
- * [in] tfp
- *   Pointer to TF handle
- *
- * [in] parms
- *   Pointer to table set parameters
- *
- * Returns
- *   - (0) if successful.
- *   - (-EINVAL) on failure.
- */
-int tf_tbl_ext_host_set(struct tf *tfp,
-			struct tf_tbl_set_parms *parms)
-{
-	int rc = 0;
-	struct tf_tbl_scope_cb *tbl_scope_cb;
-	uint32_t tbl_scope_id;
-	struct hcapi_cfa_hwop op;
-	struct hcapi_cfa_key_tbl key_tbl;
-	struct hcapi_cfa_key_data key_obj;
-	struct hcapi_cfa_key_loc key_loc;
-
-	TF_CHECK_PARMS2(tfp, parms);
-
-	if (parms->data == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, invalid parms->data\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	tbl_scope_id = parms->tbl_scope_id;
-
-	if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
-		TFP_DRV_LOG(ERR,
-			    "%s, Table scope not allocated\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	/* Get the table scope control block associated with the
-	 * external pool
-	 */
-	tbl_scope_cb = tbl_scope_cb_find(tbl_scope_id);
-
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR,
-			    "%s, table scope error\n",
-			    tf_dir_2_str(parms->dir));
-		return -EINVAL;
-	}
-
-	op.opcode = HCAPI_CFA_HWOPS_PUT;
-	key_tbl.base0 =
-		(uint8_t *)&tbl_scope_cb->em_ctx_info[parms->dir].em_tables[TF_RECORD_TABLE];
-	key_tbl.page_size = TF_EM_PAGE_SIZE;
-	key_obj.offset = parms->idx;
-	key_obj.data = parms->data;
-	key_obj.size = parms->data_sz_in_bytes;
-
-	rc = hcapi_cfa_key_hw_op(&op,
-				 &key_tbl,
-				 &key_obj,
-				 &key_loc);
-
-	return rc;
-}
diff --git a/drivers/net/bnxt/tf_core/tf_em_system.c b/drivers/net/bnxt/tf_core/tf_em_system.c
index 10768df03..c47c8b93f 100644
--- a/drivers/net/bnxt/tf_core/tf_em_system.c
+++ b/drivers/net/bnxt/tf_core/tf_em_system.c
@@ -4,11 +4,24 @@
  */
 
 #include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <stdbool.h>
+#include <math.h>
+#include <sys/param.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+#include <unistd.h>
+#include <string.h>
+
 #include <rte_common.h>
 #include <rte_errno.h>
 #include <rte_log.h>
 
 #include "tf_core.h"
+#include "tf_util.h"
+#include "tf_common.h"
 #include "tf_em.h"
 #include "tf_em_common.h"
 #include "tf_msg.h"
@@ -18,103 +31,503 @@
 
 #include "bnxt.h"
 
+enum tf_em_req_type {
+	TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ = 5,
+};
 
-/** insert EEM entry API
- *
- * returns:
- *  0
- *  TF_ERR	    - unable to get lock
- *
- * insert callback returns:
- *   0
- *   TF_ERR_EM_DUP  - key is already in table
+struct tf_em_bnxt_lfc_req_hdr {
+	uint32_t ver;
+	uint32_t bus;
+	uint32_t devfn;
+	enum tf_em_req_type req_type;
+};
+
+struct tf_em_bnxt_lfc_cfa_eem_std_hdr {
+	uint16_t version;
+	uint16_t size;
+	uint32_t flags;
+	#define TF_EM_BNXT_LFC_EEM_CFG_PRIMARY_FUNC     (1 << 0)
+};
+
+struct tf_em_bnxt_lfc_dmabuf_fd {
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+};
+
+#ifndef __user
+#define __user
+#endif
+
+struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req {
+	struct tf_em_bnxt_lfc_cfa_eem_std_hdr std;
+	uint8_t dir;
+	uint32_t flags;
+	void __user *dma_fd;
+};
+
+struct tf_em_bnxt_lfc_req {
+	struct tf_em_bnxt_lfc_req_hdr hdr;
+	union {
+		struct tf_em_bnxt_lfc_cfa_eem_dmabuf_export_req
+		       eem_dmabuf_export_req;
+		uint64_t hreq;
+	} req;
+};
+
+#define TF_EEM_BNXT_LFC_IOCTL_MAGIC     0x98
+#define BNXT_LFC_REQ    \
+	_IOW(TF_EEM_BNXT_LFC_IOCTL_MAGIC, 1, struct tf_em_bnxt_lfc_req)
+
+/**
+ * EM DBs.
  */
-static int
-tf_insert_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_insert_em_entry_parms *parms __rte_unused)
+extern void *eem_db[TF_DIR_MAX];
+
+extern struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+static void
+tf_em_dmabuf_mem_unmap(struct hcapi_cfa_em_table *tbl)
 {
-	return 0;
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no, pg_count;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			if (tp->pg_va_tbl != NULL &&
+			    tp->pg_va_tbl[page_no] != NULL &&
+			    tp->pg_size != 0) {
+				(void)munmap(tp->pg_va_tbl[page_no],
+					     tp->pg_size);
+			}
+		}
+
+		tfp_free((void *)tp->pg_va_tbl);
+		tfp_free((void *)tp->pg_pa_tbl);
+	}
 }
 
-/** delete EEM hash entry API
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
  *
- * returns:
- *   0
- *   -EINVAL	  - parameter error
- *   TF_NO_SESSION    - bad session ID
- *   TF_ERR_TBL_SCOPE - invalid table scope
- *   TF_ERR_TBL_IF    - invalid table interface
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
  *
- * insert callback returns
- *   0
- *   TF_NO_EM_MATCH - entry not found
+ * [in] dir
+ *   Receive or transmit direction
  */
+static void
+tf_em_ctx_unreg(struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct hcapi_cfa_em_ctx_mem_info *ctxp =
+		&tbl_scope_cb->em_ctx_info[dir];
+	struct hcapi_cfa_em_table *tbl;
+	int i;
+
+	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+		tf_em_dmabuf_mem_unmap(tbl);
+	}
+}
+
+static int tf_export_tbl_scope(int lfc_fd,
+			       int *fd,
+			       int bus,
+			       int devfn)
+{
+	struct tf_em_bnxt_lfc_req tf_lfc_req;
+	struct tf_em_bnxt_lfc_dmabuf_fd *dma_fd;
+	struct tfp_calloc_parms  mparms;
+	int rc;
+
+	memset(&tf_lfc_req, 0, sizeof(struct tf_em_bnxt_lfc_req));
+	tf_lfc_req.hdr.ver = 1;
+	tf_lfc_req.hdr.bus = bus;
+	tf_lfc_req.hdr.devfn = devfn;
+	tf_lfc_req.hdr.req_type = TF_EM_BNXT_LFC_CFA_EEM_DMABUF_EXPORT_REQ;
+	tf_lfc_req.req.eem_dmabuf_export_req.flags = O_ACCMODE;
+	tf_lfc_req.req.eem_dmabuf_export_req.std.version = 1;
+
+	mparms.nitems = 1;
+	mparms.size = sizeof(struct tf_em_bnxt_lfc_dmabuf_fd);
+	mparms.alignment = 0;
+	tfp_calloc(&mparms);
+	dma_fd = (struct tf_em_bnxt_lfc_dmabuf_fd *)mparms.mem_va;
+	tf_lfc_req.req.eem_dmabuf_export_req.dma_fd = dma_fd;
+
+	rc = ioctl(lfc_fd, BNXT_LFC_REQ, &tf_lfc_req);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "EXT EEM export channel_fd %d, rc=%d\n",
+			    lfc_fd,
+			    rc);
+		tfp_free(dma_fd);
+		return rc;
+	}
+
+	memcpy(fd, dma_fd->fd, sizeof(dma_fd->fd));
+	tfp_free(dma_fd);
+
+	return rc;
+}
+
 static int
-tf_delete_eem_entry(struct tf_tbl_scope_cb *tbl_scope_cb __rte_unused,
-		    struct tf_delete_em_entry_parms *parms __rte_unused)
+tf_em_dmabuf_mem_map(struct hcapi_cfa_em_table *tbl,
+		     int dmabuf_fd)
 {
+	struct hcapi_cfa_em_page_tbl *tp;
+	int level;
+	uint32_t page_no;
+	uint32_t pg_count;
+	uint32_t offset;
+	struct tfp_calloc_parms parms;
+
+	for (level = (tbl->num_lvl - 1); level < tbl->num_lvl; level++) {
+		tp = &tbl->pg_tbl[level];
+
+		pg_count = tbl->page_cnt[level];
+		offset = 0;
+
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0)
+			return -ENOMEM;
+
+		tp->pg_va_tbl = parms.mem_va;
+		parms.nitems = pg_count;
+		parms.size = sizeof(void *);
+		parms.alignment = 0;
+
+		if ((tfp_calloc(&parms)) != 0) {
+			tfp_free((void *)tp->pg_va_tbl);
+			return -ENOMEM;
+		}
+
+		tp->pg_pa_tbl = parms.mem_va;
+		tp->pg_count = 0;
+		tp->pg_size =  TF_EM_PAGE_SIZE;
+
+		for (page_no = 0; page_no < pg_count; page_no++) {
+			tp->pg_va_tbl[page_no] = mmap(NULL,
+						      TF_EM_PAGE_SIZE,
+						      PROT_READ | PROT_WRITE,
+						      MAP_SHARED,
+						      dmabuf_fd,
+						      offset);
+			if (tp->pg_va_tbl[page_no] == (void *)-1) {
+				TFP_DRV_LOG(ERR,
+		"MMap memory error. level:%d page:%d pg_count:%d - %s\n",
+					    level,
+					    page_no,
+					    pg_count,
+					    strerror(errno));
+				return -ENOMEM;
+			}
+			offset += tp->pg_size;
+			tp->pg_count++;
+		}
+	}
+
 	return 0;
 }
 
-/** insert EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_insert_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_insert_em_entry_parms *parms)
+static int tf_mmap_tbl_scope(struct tf_tbl_scope_cb *tbl_scope_cb,
+			     enum tf_dir dir,
+			     int tbl_type,
+			     int dmabuf_fd)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct hcapi_cfa_em_table *tbl;
+
+	if (tbl_type == TF_EFC_TABLE)
+		return 0;
+
+	tbl = &tbl_scope_cb->em_ctx_info[dir].em_tables[tbl_type];
+	return tf_em_dmabuf_mem_map(tbl, dmabuf_fd);
+}
+
+#define TF_LFC_DEVICE "/dev/bnxt_lfc"
+
+static int
+tf_prepare_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	int lfc_fd;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	lfc_fd = open(TF_LFC_DEVICE, O_RDWR);
+	if (lfc_fd < 0) {
+		TFP_DRV_LOG(ERR,
+			    "EEM: open %s device error\n",
+			    TF_LFC_DEVICE);
+		return -ENOENT;
 	}
 
-	return tf_insert_eem_entry
-		(tbl_scope_cb, parms);
+	tbl_scope_cb->lfc_fd = lfc_fd;
+
+	return 0;
 }
 
-/** Delete EM hash entry API
- *
- *    returns:
- *    0       - Success
- *    -EINVAL - Error
- */
-int
-tf_em_delete_ext_sys_entry(struct tf *tfp __rte_unused,
-			   struct tf_delete_em_entry_parms *parms)
+static int
+offload_system_mmap(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
-	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int rc;
+	int dmabuf_fd;
+	enum tf_dir dir;
+	enum hcapi_cfa_em_table_type tbl_type;
 
-	tbl_scope_cb = tbl_scope_cb_find(parms->tbl_scope_id);
-	if (tbl_scope_cb == NULL) {
-		TFP_DRV_LOG(ERR, "Invalid tbl_scope_cb\n");
-		return -EINVAL;
+	rc = tf_prepare_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR, "EEM: Prepare bnxt_lfc channel failed\n");
+		return rc;
 	}
 
-	return tf_delete_eem_entry(tbl_scope_cb, parms);
+	rc = tf_export_tbl_scope(tbl_scope_cb->lfc_fd,
+				 (int *)tbl_scope_cb->fd,
+				 tbl_scope_cb->bus,
+				 tbl_scope_cb->devfn);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "export dmabuf fd failed\n");
+		return rc;
+	}
+
+	tbl_scope_cb->valid = true;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (tbl_type = TF_KEY0_TABLE; tbl_type <
+			     TF_MAX_TABLE; tbl_type++) {
+			if (tbl_type == TF_EFC_TABLE)
+				continue;
+
+			dmabuf_fd = tbl_scope_cb->fd[(dir ? 0 : 1)][tbl_type];
+			rc = tf_mmap_tbl_scope(tbl_scope_cb,
+					       dir,
+					       tbl_type,
+					       dmabuf_fd);
+			if (rc) {
+				TFP_DRV_LOG(ERR,
+					    "dir:%d tbl:%d mmap failed rc %d\n",
+					    dir,
+					    tbl_type,
+					    rc);
+				break;
+			}
+		}
+	}
+	return 0;
 }
 
-int
-tf_em_ext_system_alloc(struct tf *tfp __rte_unused,
-		       struct tf_alloc_tbl_scope_parms *parms __rte_unused)
+static int
+tf_destroy_dmabuf_bnxt_lfc_device(struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	close(tbl_scope_cb->lfc_fd);
+
 	return 0;
 }
 
-int
-tf_em_ext_system_free(struct tf *tfp __rte_unused,
-		      struct tf_free_tbl_scope_parms *parms __rte_unused)
+static int
+tf_dmabuf_alloc(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp,
+		tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[TF_KEY0_TABLE].num_entries);
+	if (rc)
+		PMD_DRV_LOG(ERR, "EEM: Failed to prepare system memory rc:%d\n",
+			    rc);
+
 	return 0;
 }
 
-int tf_tbl_ext_system_set(struct tf *tfp __rte_unused,
-			  struct tf_tbl_set_parms *parms __rte_unused)
+static int
+tf_dmabuf_free(struct tf *tfp, struct tf_tbl_scope_cb *tbl_scope_cb)
 {
+	int rc;
+
+	rc = tfp_msg_hwrm_oem_cmd(tfp, 0);
+	if (rc)
+		TFP_DRV_LOG(ERR, "EEM: Failed to cleanup system memory\n");
+
+	tf_destroy_dmabuf_bnxt_lfc_device(tbl_scope_cb);
+
 	return 0;
 }
+
+int
+tf_em_ext_alloc(struct tf *tfp,
+		struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_rm_allocate_parms aparms = { 0 };
+	struct tf_free_tbl_scope_parms free_parms;
+	struct tf_rm_free_parms fparms = { 0 };
+	int dir;
+	int i;
+	struct hcapi_cfa_em_table *em_tables;
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = (uint32_t *)&parms->tbl_scope_id;
+	rc = tf_rm_allocate(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to allocate table scope\n");
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+	tbl_scope_cb->index = parms->tbl_scope_id;
+	tbl_scope_cb->tbl_scope_id = parms->tbl_scope_id;
+	tbl_scope_cb->bus = tfs->session_id.internal.bus;
+	tbl_scope_cb->devfn = tfs->session_id.internal.device;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "EEM: Unable to query for EEM capability,"
+				    " rc:%s\n",
+				    strerror(-rc));
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	rc = tf_dmabuf_alloc(tfp, tbl_scope_cb);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System DMA buff alloc failed\n");
+		return -EIO;
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
+			if (i == TF_EFC_TABLE)
+				continue;
+
+			em_tables =
+				&tbl_scope_cb->em_ctx_info[dir].em_tables[i];
+
+			rc = tf_em_size_table(em_tables, TF_EM_PAGE_SIZE);
+			if (rc) {
+				TFP_DRV_LOG(ERR, "Size table failed\n");
+				goto cleanup;
+			}
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_create_tbl_pool_external(dir,
+					tbl_scope_cb,
+					em_tables[TF_RECORD_TABLE].num_entries,
+					em_tables[TF_RECORD_TABLE].entry_size);
+
+		if (rc) {
+			TFP_DRV_LOG(ERR,
+				    "%s TBL: Unable to allocate idx pools %s\n",
+				    tf_dir_2_str(dir),
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	rc = offload_system_mmap(tbl_scope_cb);
+
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "System alloc mmap failed\n");
+		goto cleanup_full;
+	}
+
+	return rc;
+
+cleanup_full:
+	free_parms.tbl_scope_id = parms->tbl_scope_id;
+	tf_em_ext_free(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	fparms.rm_db = eem_db[TF_DIR_RX];
+	fparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	fparms.index = parms->tbl_scope_id;
+	tf_rm_free(&fparms);
+	return -EINVAL;
+}
+
+int
+tf_em_ext_free(struct tf *tfp,
+	       struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	int dir;
+	struct tf_rm_free_parms aparms = { 0 };
+
+	TF_CHECK_PARMS2(tfp, parms);
+
+	/* Retrieve the session information */
+	rc = tf_session_get_session(tfp, &tfs);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to lookup session, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
+	tbl_scope_cb = &tbl_scopes[parms->tbl_scope_id];
+
+	/* Free Table control block */
+	aparms.rm_db = eem_db[TF_DIR_RX];
+	aparms.db_index = TF_EM_TBL_TYPE_TBL_SCOPE;
+	aparms.index = parms->tbl_scope_id;
+	rc = tf_rm_free(&aparms);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "Failed to free table scope\n");
+	}
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(dir,
+					     tbl_scope_cb);
+
+		/* Unmap memory */
+		tf_em_ctx_unreg(tbl_scope_cb, dir);
+
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+	}
+
+	tf_dmabuf_free(tfp, tbl_scope_cb);
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_if_tbl.h b/drivers/net/bnxt/tf_core/tf_if_tbl.h
index 54d4c37f5..7eb72bd42 100644
--- a/drivers/net/bnxt/tf_core/tf_if_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_if_tbl.h
@@ -113,7 +113,7 @@ struct tf_if_tbl_set_parms {
 	/**
 	 * [in] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [in] Entry size
 	 */
@@ -143,7 +143,7 @@ struct tf_if_tbl_get_parms {
 	/**
 	 * [out] Entry data
 	 */
-	uint32_t *data;
+	uint8_t *data;
 	/**
 	 * [out] Entry size
 	 */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 035c0948d..ed506defa 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -813,7 +813,19 @@ tf_msg_tcam_entry_set(struct tf *tfp,
 	struct tf_msg_dma_buf buf = { 0 };
 	uint8_t *data = NULL;
 	int data_size = 0;
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = parms->hcapi_type;
 	req.idx = tfp_cpu_to_le_16(parms->idx);
 	if (parms->dir == TF_DIR_TX)
@@ -869,7 +881,19 @@ tf_msg_tcam_entry_free(struct tf *tfp,
 	struct hwrm_tf_tcam_free_input req =  { 0 };
 	struct hwrm_tf_tcam_free_output resp = { 0 };
 	struct tfp_send_msg_parms parms = { 0 };
+	uint8_t fw_session_id;
 
+	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
+	if (rc) {
+		TFP_DRV_LOG(ERR,
+			    "%s: Unable to lookup FW id, rc:%s\n",
+			    tf_dir_2_str(in_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
+	/* Populate the request */
+	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
 	req.type = in_parms->hcapi_type;
 	req.count = 1;
 	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 2a10b47ce..f20e8d729 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -38,6 +38,13 @@ struct tf_em_caps {
  */
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
+#ifdef TF_USE_SYSTEM_MEM
+	int lfc_fd;
+	uint32_t bus;
+	uint32_t devfn;
+	int fd[TF_DIR_MAX][TF_MAX_TABLE];
+	bool valid;
+#endif
 	int index;
 	struct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];
 	struct tf_em_caps em_caps[TF_DIR_MAX];
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
index 426a182a9..3eade3127 100644
--- a/drivers/net/bnxt/tf_core/tfp.c
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -87,6 +87,18 @@ tfp_send_msg_tunneled(struct tf *tfp,
 	return rc;
 }
 
+#ifdef TF_USE_SYSTEM_MEM
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows)
+{
+	return bnxt_hwrm_oem_cmd(container_of(tfp,
+					      struct bnxt,
+					      tfp),
+				 max_flows);
+}
+#endif /* TF_USE_SYSTEM_MEM */
+
 /**
  * Allocates zero'ed memory from the heap.
  *
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
index 8789eba1f..421a7d9f7 100644
--- a/drivers/net/bnxt/tf_core/tfp.h
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -170,6 +170,21 @@ int
 tfp_msg_hwrm_oem_cmd(struct tf *tfp,
 		     uint32_t max_flows);
 
+/**
+ * Sends OEM command message to Chimp
+ *
+ * [in] session, pointer to session handle
+ * [in] max_flows, max number of flows requested
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int
+tfp_msg_hwrm_oem_cmd(struct tf *tfp,
+		     uint32_t max_flows);
+
 /**
  * Allocates zero'ed memory from the heap.
  *
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 32/51] net/bnxt: integrate with the latest tf core changes
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (30 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
                           ` (20 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Somnath Kotur, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

ULP changes to integrate with the latest session open
interface in tf_core. The session open now passes the per-direction
identifier, table, TCAM and EM resource counts that the ULP needs.
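
For reference, a minimal sketch of what the new session open interface
expects from the caller, assuming the tf_core.h definitions; the helper
name and the counts here are illustrative placeholders, the real
per-direction values used by the ULP are in the diff below:

  static int ulp_tf_session_open_sketch(struct tf *tfp)
  {
  	struct tf_open_session_parms params;
  	struct tf_session_resources *res;

  	memset(&params, 0, sizeof(params));
  	params.shadow_copy = false;
  	params.device_type = TF_DEVICE_TYPE_WH;
  	res = &params.resources;

  	/* Request per-direction resource pools up front */
  	res->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
  	res->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 720;
  	res->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 416;
  	res->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;

  	return tf_open_session(tfp, &params);
  }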

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c | 46 ++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index c7281ab9a..a9ed5d92a 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -68,6 +68,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 	struct rte_eth_dev		*ethdev = bp->eth_dev;
 	int32_t				rc = 0;
 	struct tf_open_session_parms	params;
+	struct tf_session_resources	*resources;
 
 	memset(&params, 0, sizeof(params));
 
@@ -79,6 +80,51 @@ ulp_ctx_session_open(struct bnxt *bp,
 		return rc;
 	}
 
+	params.shadow_copy = false;
+	params.device_type = TF_DEVICE_TYPE_WH;
+	resources = &params.resources;
+	/** RX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_L2_CTXT] = 16;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_RX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 720;
+	resources->tbl_cnt[TF_DIR_RX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 720;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 16;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_RX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 416;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;
+
+	/** TX **/
+	/* Identifiers */
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_L2_CTXT] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_WC_PROF] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_PROF_FUNC] = 8;
+	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_EM_PROF] = 8;
+
+	/* Table Types */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
+
+	/* TCAMs */
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
+	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_WC_TCAM] = 8;
+
+	/* EM */
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 8;
+
+	/* EEM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+
 	rc = tf_open_session(&bp->tfp, &params);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 33/51] net/bnxt: add support for internal encap records
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (31 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 34/51] net/bnxt: add support for if table processing Ajit Khaparde
                           ` (19 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Somnath Kotur, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

Modifications to allow internal encap records to be supported:
- Modified the mapper index table processing to handle encap without an
  action record
- Modified the session open code to reserve some 64-byte internal encap
  records on Tx
- Modified the blob encap swap to support encap without an action record
  (a condensed sketch of the swap handling follows below)
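
A condensed sketch of the swap handling described above, mirroring the
ulp_mapper.c hunk further down (result-field writing elided):

  for (i = 0; i < (num_flds + encap_flds); i++) {
  	/* Mark the swap index once the non-encap result fields have
  	 * been written, i.e. at the first encap field.
  	 */
  	if (parms->device_params->encap_byte_swap && encap_flds &&
  	    i == num_flds)
  		ulp_blob_encap_swap_idx_set(&data);

  	/* ... ulp_mapper_result_field_process() writes field i ... */
  }

  /* Byte-swap the encap portion once, after all fields are written */
  if (parms->device_params->encap_byte_swap && encap_flds)
  	ulp_blob_perform_encap_swap(&data);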

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c   |  3 +++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c | 29 +++++++++++++---------------
 drivers/net/bnxt/tf_ulp/ulp_utils.c  |  2 +-
 3 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index a9ed5d92a..4c1a1c44c 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -113,6 +113,9 @@ ulp_ctx_session_open(struct bnxt *bp,
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_FULL_ACT_RECORD] = 16;
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_STATS_64] = 16;
 
+	/* ENCAP */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_PROF_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 734db7c6c..a9a625f9f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1473,7 +1473,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
 
-	if (!flds || !num_flds) {
+	if (!flds || (!num_flds && !encap_flds)) {
 		BNXT_TF_DBG(ERR, "template undefined for the index table\n");
 		return -EINVAL;
 	}
@@ -1482,7 +1482,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	for (i = 0; i < (num_flds + encap_flds); i++) {
 		/* set the swap index if encap swap bit is enabled */
 		if (parms->device_params->encap_byte_swap && encap_flds &&
-		    ((i + 1) == num_flds))
+		    (i == num_flds))
 			ulp_blob_encap_swap_idx_set(&data);
 
 		/* Process the result fields */
@@ -1495,18 +1495,15 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			BNXT_TF_DBG(ERR, "data field failed\n");
 			return rc;
 		}
+	}
 
-		/* if encap bit swap is enabled perform the bit swap */
-		if (parms->device_params->encap_byte_swap && encap_flds) {
-			if ((i + 1) == (num_flds + encap_flds))
-				ulp_blob_perform_encap_swap(&data);
+	/* if encap bit swap is enabled perform the bit swap */
+	if (parms->device_params->encap_byte_swap && encap_flds) {
+		ulp_blob_perform_encap_swap(&data);
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-			if ((i + 1) == (num_flds + encap_flds)) {
-				BNXT_TF_DBG(INFO, "Dump fter encap swap\n");
-				ulp_mapper_blob_dump(&data);
-			}
+		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
+		ulp_mapper_blob_dump(&data);
 #endif
-		}
 	}
 
 	/* Perform the tf table allocation by filling the alloc params */
@@ -1817,6 +1814,11 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
+			if (rc) {
+				BNXT_TF_DBG(ERR, "Resource type %d failed\n",
+					    tbl->resource_func);
+				return rc;
+			}
 			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected action resource %d\n",
@@ -1824,11 +1826,6 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 			return -EINVAL;
 		}
 	}
-	if (rc) {
-		BNXT_TF_DBG(ERR, "Resource type %d failed\n",
-			    tbl->resource_func);
-		return rc;
-	}
 
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
index 3a4157f22..3afaac647 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_utils.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -478,7 +478,7 @@ ulp_blob_perform_encap_swap(struct ulp_blob *blob)
 		BNXT_TF_DBG(ERR, "invalid argument\n");
 		return; /* failure */
 	}
-	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx + 1);
+	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx);
 	end_idx = ULP_BITS_2_BYTE(blob->write_idx);
 
 	while (idx <= end_idx) {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 34/51] net/bnxt: add support for if table processing
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (32 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 35/51] net/bnxt: disable Tx vector mode if truflow is set Ajit Khaparde
                           ` (18 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for if table processing in the ulp mapper
layer. This enables support for the default partition action
record pointer interface table.
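
For context, a condensed sketch of how the new ulp_mapper_if_tbl_process()
in the diff below programs a single interface table entry (blob setup and
error handling elided):

  struct tf_set_if_tbl_entry_parms iftbl_params = { 0 };
  uint16_t tmplen;

  iftbl_params.dir = tbl->direction;
  iftbl_params.type = tbl->resource_type;
  iftbl_params.data = ulp_blob_data_get(&data, &tmplen);
  iftbl_params.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
  iftbl_params.idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);

  rc = tf_set_if_tbl_entry(tfp, &iftbl_params);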

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |   1 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   2 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 141 +++++++++++++++---
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   1 +
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 117 ++++++++-------
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   8 +-
 6 files changed, 187 insertions(+), 83 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4c1a1c44c..4835b951e 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -115,6 +115,7 @@ ulp_ctx_session_open(struct bnxt *bp,
 
 	/* ENCAP */
 	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_64B] = 16;
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_ENCAP_16B] = 16;
 
 	/* TCAMs */
 	resources->tcam_cnt[TF_DIR_TX].cnt[TF_TCAM_TBL_TYPE_L2_CTXT_TCAM] = 8;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 22996e50e..384dc5b2c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -933,7 +933,7 @@ ulp_default_flow_db_cfa_action_get(struct bnxt_ulp_context *ulp_ctx,
 				   uint32_t flow_id,
 				   uint32_t *cfa_action)
 {
-	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX;
+	uint8_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION;
 	uint64_t hndl;
 	int32_t rc;
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index a9a625f9f..42bb98557 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -184,7 +184,8 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
 	return &ulp_act_tbl_list[idx];
 }
 
-/** Get a list of classifier tables that implement the flow
+/*
+ * Get a list of classifier tables that implement the flow
  * Gets a device dependent list of tables that implement the class template id
  *
  * dev_id [in] The device id of the forwarding element
@@ -193,13 +194,16 @@ ulp_mapper_action_tbl_list_get(uint32_t dev_id,
  *
  * num_tbls [out] The number of classifier tables in the returned array
  *
+ * fdb_tbl_idx [out] The flow database index Regular or default
+ *
  * returns An array of classifier tables to implement the flow, or NULL on
  * error
  */
 static struct bnxt_ulp_mapper_tbl_info *
 ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 			      uint32_t tid,
-			      uint32_t *num_tbls)
+			      uint32_t *num_tbls,
+			      uint32_t *fdb_tbl_idx)
 {
 	uint32_t idx;
 	uint32_t tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
@@ -212,7 +216,7 @@ ulp_mapper_class_tbl_list_get(uint32_t dev_id,
 	 */
 	idx		= ulp_class_tmpl_list[tidx].start_tbl_idx;
 	*num_tbls	= ulp_class_tmpl_list[tidx].num_tbls;
-
+	*fdb_tbl_idx = ulp_class_tmpl_list[tidx].flow_db_table_type;
 	return &ulp_class_tbl_list[idx];
 }
 
@@ -256,7 +260,8 @@ ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
  */
 static struct bnxt_ulp_mapper_result_field_info *
 ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
-			     uint32_t *num_flds)
+			     uint32_t *num_flds,
+			     uint32_t *num_encap_flds)
 {
 	uint32_t idx;
 
@@ -265,6 +270,7 @@ ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_tbl_info *tbl,
 
 	idx		= tbl->result_start_idx;
 	*num_flds	= tbl->result_num_fields;
+	*num_encap_flds = tbl->encap_num_fields;
 
 	/* NOTE: Need template to provide range checking define */
 	return &ulp_class_result_field_list[idx];
@@ -1146,6 +1152,7 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		struct bnxt_ulp_mapper_result_field_info *dflds;
 		struct bnxt_ulp_mapper_ident_info *idents;
 		uint32_t num_dflds, num_idents;
+		uint32_t encap_flds = 0;
 
 		/*
 		 * Since the cache entry is responsible for allocating
@@ -1166,8 +1173,9 @@ ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 
 		/* Create the result data blob */
-		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-		if (!dflds || !num_dflds) {
+		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds,
+						     &encap_flds);
+		if (!dflds || !num_dflds || encap_flds) {
 			BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 			rc = -EINVAL;
 			goto error;
@@ -1293,6 +1301,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	int32_t	trc;
 	enum bnxt_ulp_flow_mem_type mtype = parms->device_params->flow_mem_type;
 	int32_t rc = 0;
+	uint32_t encap_flds = 0;
 
 	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
 	if (!kflds || !num_kflds) {
@@ -1327,8 +1336,8 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	 */
 
 	/* Create the result data blob */
-	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
-	if (!dflds || !num_dflds) {
+	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds, &encap_flds);
+	if (!dflds || !num_dflds || encap_flds) {
 		BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
 		return -EINVAL;
 	}
@@ -1468,7 +1477,8 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Get the result fields list */
 	if (is_class_tbl)
-		flds = ulp_mapper_result_fields_get(tbl, &num_flds);
+		flds = ulp_mapper_result_fields_get(tbl, &num_flds,
+						    &encap_flds);
 	else
 		flds = ulp_mapper_act_result_fields_get(tbl, &num_flds,
 							&encap_flds);
@@ -1761,6 +1771,76 @@ ulp_mapper_cache_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	return rc;
 }
 
+static int32_t
+ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			  struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_result_field_info *flds;
+	struct ulp_blob	data;
+	uint64_t idx;
+	uint16_t tmplen;
+	uint32_t i, num_flds;
+	int32_t rc = 0;
+	struct tf_set_if_tbl_entry_parms iftbl_params = { 0 };
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	uint32_t encap_flds;
+
+	/* Initialize the blob data */
+	if (!ulp_blob_init(&data, tbl->result_bit_size,
+			   parms->device_params->byte_order)) {
+		BNXT_TF_DBG(ERR, "Failed initial index table blob\n");
+		return -EINVAL;
+	}
+
+	/* Get the result fields list */
+	flds = ulp_mapper_result_fields_get(tbl, &num_flds, &encap_flds);
+
+	if (!flds || !num_flds || encap_flds) {
+		BNXT_TF_DBG(ERR, "template undefined for the IF table\n");
+		return -EINVAL;
+	}
+
+	/* process the result fields, loop through them */
+	for (i = 0; i < num_flds; i++) {
+		/* Process the result fields */
+		rc = ulp_mapper_result_field_process(parms,
+						     tbl->direction,
+						     &flds[i],
+						     &data,
+						     "IFtable Result");
+		if (rc) {
+			BNXT_TF_DBG(ERR, "data field failed\n");
+			return rc;
+		}
+	}
+
+	/* Get the index details from computed field */
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+
+	/* Perform the tf table set by filling the set params */
+	iftbl_params.dir = tbl->direction;
+	iftbl_params.type = tbl->resource_type;
+	iftbl_params.data = ulp_blob_data_get(&data, &tmplen);
+	iftbl_params.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+	iftbl_params.idx = idx;
+
+	rc = tf_set_if_tbl_entry(tfp, &iftbl_params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Set table[%d][%s][%d] failed rc=%d\n",
+			    iftbl_params.type,
+			    (iftbl_params.dir == TF_DIR_RX) ? "RX" : "TX",
+			    iftbl_params.idx,
+			    rc);
+		return rc;
+	}
+
+	/*
+	 * TBD: Need to look at the need to store idx in flow db to restore
+	 * the table to its original state on deletion of this entry.
+	 */
+	return rc;
+}
+
 static int32_t
 ulp_mapper_glb_resource_info_init(struct tf *tfp,
 				  struct bnxt_ulp_mapper_data *mapper_data)
@@ -1862,6 +1942,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE:
 			rc = ulp_mapper_cache_tbl_process(parms, tbl);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_IF_TABLE:
+			rc = ulp_mapper_if_tbl_process(parms, tbl);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Unexpected class resource %d\n",
 				    tbl->resource_func);
@@ -2064,20 +2147,29 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Get the action table entry from device id and act context id */
 	parms.act_tid = cparms->act_tid;
-	parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
-						     parms.act_tid,
-						     &parms.num_atbls);
-	if (!parms.atbls || !parms.num_atbls) {
-		BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		return -EINVAL;
+
+	/*
+	 * Perform the action table get only if act template is not zero
+	 * for act template zero like for default rules ignore the action
+	 * table processing.
+	 */
+	if (parms.act_tid) {
+		parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
+							     parms.act_tid,
+							     &parms.num_atbls);
+		if (!parms.atbls || !parms.num_atbls) {
+			BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			return -EINVAL;
+		}
 	}
 
 	/* Get the class table entry from device id and act context id */
 	parms.class_tid = cparms->class_tid;
 	parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
 						    parms.class_tid,
-						    &parms.num_ctbls);
+						    &parms.num_ctbls,
+						    &parms.tbl_idx);
 	if (!parms.ctbls || !parms.num_ctbls) {
 		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
 			    parms.dev_id, parms.class_tid);
@@ -2111,7 +2203,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	 * free each of them.
 	 */
 	rc = ulp_flow_db_fid_alloc(ulp_ctx,
-				   BNXT_ULP_REGULAR_FLOW_TABLE,
+				   parms.tbl_idx,
 				   cparms->func_id,
 				   &parms.fid);
 	if (rc) {
@@ -2120,11 +2212,14 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	}
 
 	/* Process the action template list from the selected action table*/
-	rc = ulp_mapper_action_tbls_process(&parms);
-	if (rc) {
-		BNXT_TF_DBG(ERR, "action tables failed creation for %d:%d\n",
-			    parms.dev_id, parms.act_tid);
-		goto flow_error;
+	if (parms.act_tid) {
+		rc = ulp_mapper_action_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "action tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.act_tid);
+			goto flow_error;
+		}
 	}
 
 	/* All good. Now process the class template */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 89c08ab25..517422338 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -256,6 +256,7 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 			BNXT_TF_DBG(ERR, "Mark index greater than allocated\n");
 			return -EINVAL;
 		}
+		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
 	}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index ac84f88e9..66343b918 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -88,35 +88,36 @@ enum bnxt_ulp_byte_order {
 };
 
 enum bnxt_ulp_cf_idx {
-	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 0,
-	BNXT_ULP_CF_IDX_O_VTAG_NUM = 1,
-	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 2,
-	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 3,
-	BNXT_ULP_CF_IDX_I_VTAG_NUM = 4,
-	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 5,
-	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 6,
-	BNXT_ULP_CF_IDX_INCOMING_IF = 7,
-	BNXT_ULP_CF_IDX_DIRECTION = 8,
-	BNXT_ULP_CF_IDX_SVIF_FLAG = 9,
-	BNXT_ULP_CF_IDX_O_L3 = 10,
-	BNXT_ULP_CF_IDX_I_L3 = 11,
-	BNXT_ULP_CF_IDX_O_L4 = 12,
-	BNXT_ULP_CF_IDX_I_L4 = 13,
-	BNXT_ULP_CF_IDX_DEV_PORT_ID = 14,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 15,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 16,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 17,
-	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 18,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 19,
-	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 20,
-	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 21,
-	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 22,
-	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 23,
-	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 24,
-	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 25,
-	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 26,
-	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 27,
-	BNXT_ULP_CF_IDX_LAST = 28
+	BNXT_ULP_CF_IDX_NOT_USED = 0,
+	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 1,
+	BNXT_ULP_CF_IDX_O_VTAG_NUM = 2,
+	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 3,
+	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 4,
+	BNXT_ULP_CF_IDX_I_VTAG_NUM = 5,
+	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 6,
+	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 7,
+	BNXT_ULP_CF_IDX_INCOMING_IF = 8,
+	BNXT_ULP_CF_IDX_DIRECTION = 9,
+	BNXT_ULP_CF_IDX_SVIF_FLAG = 10,
+	BNXT_ULP_CF_IDX_O_L3 = 11,
+	BNXT_ULP_CF_IDX_I_L3 = 12,
+	BNXT_ULP_CF_IDX_O_L4 = 13,
+	BNXT_ULP_CF_IDX_I_L4 = 14,
+	BNXT_ULP_CF_IDX_DEV_PORT_ID = 15,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 16,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 17,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 18,
+	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 19,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 20,
+	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 21,
+	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 22,
+	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 23,
+	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 24,
+	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 25,
+	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
+	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
+	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
+	BNXT_ULP_CF_IDX_LAST = 29
 };
 
 enum bnxt_ulp_critical_resource {
@@ -133,11 +134,6 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
-enum bnxt_ulp_df_param_type {
-	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
-	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
-};
-
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
@@ -154,7 +150,8 @@ enum bnxt_ulp_flow_mem_type {
 enum bnxt_ulp_glb_regfile_index {
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
 	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 2
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
 };
 
 enum bnxt_ulp_hdr_type {
@@ -204,22 +201,22 @@ enum bnxt_ulp_priority {
 };
 
 enum bnxt_ulp_regfile_index {
-	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 0,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 1,
-	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 2,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 3,
-	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 4,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 5,
-	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 6,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 7,
-	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 8,
-	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 9,
-	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 10,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 11,
-	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 12,
-	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 13,
-	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 14,
-	BNXT_ULP_REGFILE_INDEX_NOT_USED = 15,
+	BNXT_ULP_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_REGFILE_INDEX_CLASS_TID = 1,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 2,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 3,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 4,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 5,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 6,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 7,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 8,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 9,
+	BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR = 10,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 11,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 12,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
+	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
+	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
 	BNXT_ULP_REGFILE_INDEX_LAST = 16
 };
 
@@ -265,10 +262,10 @@ enum bnxt_ulp_resource_func {
 enum bnxt_ulp_resource_sub_type {
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT_INDEX = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT_INDEX = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_ACT_IDX = 1,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
 	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
 };
 
@@ -282,7 +279,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
 	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_BIG_ENDIAN = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
 	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
@@ -398,7 +394,6 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
 	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
 	BNXT_ULP_SYM_NO = 0,
@@ -489,6 +484,11 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_YES = 1
 };
 
+enum bnxt_ulp_wh_plus {
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+};
+
 enum bnxt_ulp_act_prop_sz {
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ = 4,
 	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ = 4,
@@ -588,4 +588,9 @@ enum bnxt_ulp_act_hid {
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
 	BNXT_ULP_ACT_HID_0040 = 0x0040
 };
+
+enum bnxt_ulp_df_tpl {
+	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5c4335847..1188223aa 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -150,9 +150,10 @@ struct bnxt_ulp_device_params {
 
 /* Flow Mapper */
 struct bnxt_ulp_mapper_tbl_list_info {
-	uint32_t	device_name;
-	uint32_t	start_tbl_idx;
-	uint32_t	num_tbls;
+	uint32_t		device_name;
+	uint32_t		start_tbl_idx;
+	uint32_t		num_tbls;
+	enum bnxt_ulp_fdb_type	flow_db_table_type;
 };
 
 struct bnxt_ulp_mapper_tbl_info {
@@ -183,6 +184,7 @@ struct bnxt_ulp_mapper_tbl_info {
 
 	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
+	uint32_t			comp_field_idx;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 35/51] net/bnxt: disable Tx vector mode if truflow is set
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (33 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 34/51] net/bnxt: add support for if table processing Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
                           ` (17 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Somnath Kotur, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

The vector mode in the Tx handler is disabled when truflow is
enabled, since truflow now requires buffer descriptor (BD) action
record support.
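
In effect the transmit function selection becomes the following condensed
check (mirroring the diff below; the scalar fallback is assumed to be the
driver's existing bnxt_xmit_pkts handler):

  struct bnxt *bp = eth_dev->data->dev_private;

  /* Vector Tx only when scatter Rx, Tx offloads and truflow are all
   * disabled; otherwise fall back to the scalar transmit path.
   */
  if (!eth_dev->data->scattered_rx &&
      !eth_dev->data->dev_conf.txmode.offloads &&
      !BNXT_TRUFLOW_EN(bp))
  	return bnxt_xmit_pkts_vec;

  return bnxt_xmit_pkts;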

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 72cc2daa6..93c2f0db9 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1115,12 +1115,15 @@ bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev)
 {
 #ifdef RTE_ARCH_X86
 #ifndef RTE_LIBRTE_IEEE1588
+	struct bnxt *bp = eth_dev->data->dev_private;
+
 	/*
 	 * Vector mode transmit can be enabled only if not using scatter rx
 	 * or tx offloads.
 	 */
 	if (!eth_dev->data->scattered_rx &&
-	    !eth_dev->data->dev_conf.txmode.offloads) {
+	    !eth_dev->data->dev_conf.txmode.offloads &&
+	    !BNXT_TRUFLOW_EN(bp)) {
 		PMD_DRV_LOG(INFO, "Using vector mode transmit for port %d\n",
 			    eth_dev->data->port_id);
 		return bnxt_xmit_pkts_vec;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 36/51] net/bnxt: add index opcode and operand to mapper table
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (34 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 35/51] net/bnxt: disable Tx vector mode if truflow is set Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
                           ` (16 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Extended the regfile and computed field operations into a common
index opcode operation. Global resource operations are now also
handled through the index opcode operation.
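
Roughly, the mapper now interprets a table's index opcode as follows
(a condensed sketch of the logic added in the diff below; table
allocation and error handling omitted):

  switch (tbl->index_opcode) {
  case BNXT_ULP_INDEX_OPCODE_GLOBAL:
  	/* Index comes from the global regfile; the entry is only
  	 * written, never allocated, and is not tracked in the flow db.
  	 */
  	ulp_mapper_glb_resource_read(parms->mapper_data, tbl->direction,
  				     tbl->index_operand, &idx);
  	break;
  case BNXT_ULP_INDEX_OPCODE_ALLOCATE:
  	/* Index was allocated from the TF table; remember it in the
  	 * regfile (stored big-endian) for later tables to reference.
  	 */
  	ulp_regfile_write(parms->regfile, tbl->index_operand,
  			  tfp_cpu_to_be_64(idx));
  	break;
  case BNXT_ULP_INDEX_OPCODE_COMP_FIELD:
  	/* Index is taken from a computed field (used by IF tables). */
  	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
  	break;
  }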

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 56 ++++++++++++++++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  9 ++-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 45 +++++----------
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  8 +++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  4 +-
 5 files changed, 80 insertions(+), 42 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 42bb98557..7b3b3d698 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1443,7 +1443,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	struct bnxt_ulp_mapper_result_field_info *flds;
 	struct ulp_flow_db_res_params	fid_parms;
 	struct ulp_blob	data;
-	uint64_t idx;
+	uint64_t idx = 0;
 	uint16_t tmplen;
 	uint32_t i, num_flds;
 	int32_t rc = 0, trc = 0;
@@ -1516,6 +1516,42 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 	}
 
+	/*
+	 * Check for index opcode, if it is Global then
+	 * no need to allocate the table, just set the table
+	 * and exit since it is not maintained in the flow db.
+	 */
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_GLOBAL) {
+		/* get the index from index operand */
+		if (tbl->index_operand < BNXT_ULP_GLB_REGFILE_INDEX_LAST &&
+		    ulp_mapper_glb_resource_read(parms->mapper_data,
+						 tbl->direction,
+						 tbl->index_operand,
+						 &idx)) {
+			BNXT_TF_DBG(ERR, "Glbl regfile[%d] read failed.\n",
+				    tbl->index_operand);
+			return -EINVAL;
+		}
+		/* set the Tf index table */
+		sparms.dir		= tbl->direction;
+		sparms.type		= tbl->resource_type;
+		sparms.data		= ulp_blob_data_get(&data, &tmplen);
+		sparms.data_sz_in_bytes = ULP_BITS_2_BYTE(tmplen);
+		sparms.idx		= tfp_be_to_cpu_64(idx);
+		sparms.tbl_scope_id	= tbl_scope_id;
+
+		rc = tf_set_tbl_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "Glbl Set table[%d][%s][%d] failed rc=%d\n",
+				    sparms.type,
+				    (sparms.dir == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx,
+				    rc);
+			return rc;
+		}
+		return 0; /* success */
+	}
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1546,11 +1582,13 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 
 	/* Always storing values in Regfile in BE */
 	idx = tfp_cpu_to_be_64(idx);
-	rc = ulp_regfile_write(parms->regfile, tbl->regfile_idx, idx);
-	if (!rc) {
-		BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
-			    tbl->regfile_idx);
-		goto error;
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_ALLOCATE) {
+		rc = ulp_regfile_write(parms->regfile, tbl->index_operand, idx);
+		if (!rc) {
+			BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
+				    tbl->index_operand);
+			goto error;
+		}
 	}
 
 	/* Perform the tf table set by filling the set params */
@@ -1815,7 +1853,11 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	}
 
 	/* Get the index details from computed field */
-	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->comp_field_idx);
+	if (tbl->index_opcode != BNXT_ULP_INDEX_OPCODE_COMP_FIELD) {
+		BNXT_TF_DBG(ERR, "Invalid tbl index opcode\n");
+		return -EINVAL;
+	}
+	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
 
 	/* Perform the tf table set by filling the set params */
 	iftbl_params.dir = tbl->direction;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 8af23eff1..9b14fa0bd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -76,7 +76,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -90,7 +91,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	},
 	{
@@ -104,7 +106,8 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
 	}
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index e773afd60..d4c7bfa4d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -113,8 +113,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 0,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -135,8 +134,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -157,8 +155,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 1,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -179,8 +176,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -201,8 +197,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -223,8 +218,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 2,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -245,8 +239,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -267,8 +260,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 3,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -289,8 +281,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -311,8 +302,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -333,8 +323,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 4,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -355,8 +344,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
@@ -377,8 +365,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 5,
 	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
@@ -399,8 +386,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
@@ -421,8 +407,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.ident_start_idx = 6,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES,
-	.regfile_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 66343b918..0215a5dde 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -161,6 +161,14 @@ enum bnxt_ulp_hdr_type {
 	BNXT_ULP_HDR_TYPE_LAST = 3
 };
 
+enum bnxt_ulp_index_opcode {
+	BNXT_ULP_INDEX_OPCODE_NOT_USED = 0,
+	BNXT_ULP_INDEX_OPCODE_ALLOCATE = 1,
+	BNXT_ULP_INDEX_OPCODE_GLOBAL = 2,
+	BNXT_ULP_INDEX_OPCODE_COMP_FIELD = 3,
+	BNXT_ULP_INDEX_OPCODE_LAST = 4
+};
+
 enum bnxt_ulp_mapper_opc {
 	BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 1188223aa..a3ddd33fd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -182,9 +182,9 @@ struct bnxt_ulp_mapper_tbl_info {
 	uint32_t	ident_start_idx;
 	uint16_t	ident_nums;
 
-	enum bnxt_ulp_regfile_index	regfile_idx;
 	enum bnxt_ulp_mark_db_opcode	mark_db_opcode;
-	uint32_t			comp_field_idx;
+	enum bnxt_ulp_index_opcode	index_opcode;
+	uint32_t			index_operand;
 };
 
 struct bnxt_ulp_mapper_class_key_field_info {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 37/51] net/bnxt: add support for global resource templates
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (35 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 38/51] net/bnxt: add support for internal exact match Ajit Khaparde
                           ` (15 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for global resource templates so that the resources
they allocate at init time can be reused by the regular templates.
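
A rough, compile-only sketch of the idea, assuming a hypothetical template id
list and a stand-in for the class-table processing call (the real table,
ulp_glb_template_tbl, is empty in this patch and is populated later):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical template ids; stands in for ulp_glb_template_tbl. */
static const uint32_t glb_template_ids[] = { 1 };
#define GLB_TEMPLATE_CNT \
	(sizeof(glb_template_ids) / sizeof(glb_template_ids[0]))

/* Stand-in for processing one class template at init time. */
static int process_template(uint32_t tid)
{
	printf("processing global template %" PRIu32 "\n", tid);
	return 0;
}

static int glb_template_init(void)
{
	uint32_t i;
	int rc = 0;

	for (i = 0; i < GLB_TEMPLATE_CNT; i++) {
		rc = process_template(glb_template_ids[i]);
		if (rc)
			return rc;	/* abort init on first failure */
	}
	return rc;
}

int main(void)
{
	return glb_template_init();
}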

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 178 +++++++++++++++++-
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |   1 +
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   3 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   6 +
 4 files changed, 181 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 7b3b3d698..6fd55b2a2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -80,15 +80,20 @@ ulp_mapper_glb_resource_write(struct bnxt_ulp_mapper_data *data,
  * returns 0 on success
  */
 static int32_t
-ulp_mapper_resource_ident_allocate(struct tf *tfp,
+ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 				   struct bnxt_ulp_mapper_data *mapper_data,
 				   struct bnxt_ulp_glb_resource_info *glb_res)
 {
 	struct tf_alloc_identifier_parms iparms = { 0 };
 	struct tf_free_identifier_parms fparms;
 	uint64_t regval;
+	struct tf *tfp;
 	int32_t rc = 0;
 
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
 	iparms.ident_type = glb_res->resource_type;
 	iparms.dir = glb_res->direction;
 
@@ -115,13 +120,76 @@ ulp_mapper_resource_ident_allocate(struct tf *tfp,
 		return rc;
 	}
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res[%s][%d][%d] = 0x%04x\n",
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
 		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
 		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
 #endif
 	return rc;
 }
 
+/*
+ * Internal function to allocate index tbl resource and store it in mapper data.
+ *
+ * returns 0 on success
+ */
+static int32_t
+ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
+				    struct bnxt_ulp_mapper_data *mapper_data,
+				    struct bnxt_ulp_glb_resource_info *glb_res)
+{
+	struct tf_alloc_tbl_entry_parms	aparms = { 0 };
+	struct tf_free_tbl_entry_parms	free_parms = { 0 };
+	uint64_t regval;
+	struct tf *tfp;
+	uint32_t tbl_scope_id;
+	int32_t rc = 0;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp)
+		return -EINVAL;
+
+	/* Get the scope id */
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope rc=%d\n", rc);
+		return rc;
+	}
+
+	aparms.type = glb_res->resource_type;
+	aparms.dir = glb_res->direction;
+	aparms.search_enable = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO;
+	aparms.tbl_scope_id = tbl_scope_id;
+
+	/* Allocate the index tbl using tf api */
+	rc = tf_alloc_tbl_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to alloc identifier [%s][%d]\n",
+			    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+			    aparms.type);
+		return rc;
+	}
+
+	/* entries are stored as big-endian format */
+	regval = tfp_cpu_to_be_64((uint64_t)aparms.idx);
+	/* write to the mapper global resource */
+	rc = ulp_mapper_glb_resource_write(mapper_data, glb_res, regval);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to write to global resource id\n");
+		/* Free the identifer when update failed */
+		free_parms.dir = aparms.dir;
+		free_parms.type = aparms.type;
+		free_parms.idx = aparms.idx;
+		tf_free_tbl_entry(tfp, &free_parms);
+		return rc;
+	}
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
+	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
+		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
+		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
+#endif
+	return rc;
+}
+
 /* Retrieve the cache initialization parameters for the tbl_idx */
 static struct bnxt_ulp_cache_tbl_params *
 ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
@@ -132,6 +200,16 @@ ulp_mapper_cache_tbl_params_get(uint32_t tbl_idx)
 	return &ulp_cache_tbl_params[tbl_idx];
 }
 
+/* Retrieve the global template table */
+static uint32_t *
+ulp_mapper_glb_template_table_get(uint32_t *num_entries)
+{
+	if (!num_entries)
+		return NULL;
+	*num_entries = BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ;
+	return ulp_glb_template_tbl;
+}
+
 /*
  * Get the size of the action property for a given index.
  *
@@ -659,7 +737,10 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 			return -EINVAL;
 		}
 		act_bit = tfp_be_to_cpu_64(act_bit);
-		act_val = ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit);
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit))
+			act_val = 1;
+		else
+			act_val = 0;
 		if (fld->field_bit_size > ULP_BYTE_2_BITS(sizeof(act_val))) {
 			BNXT_TF_DBG(ERR, "%s field size is incorrect\n", name);
 			return -EINVAL;
@@ -1552,6 +1633,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 		return 0; /* success */
 	}
+
 	/* Perform the tf table allocation by filling the alloc params */
 	aparms.dir		= tbl->direction;
 	aparms.type		= tbl->resource_type;
@@ -1616,6 +1698,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	fid_parms.direction	= tbl->direction;
 	fid_parms.resource_func	= tbl->resource_func;
 	fid_parms.resource_type	= tbl->resource_type;
+	fid_parms.resource_sub_type = tbl->resource_sub_type;
 	fid_parms.resource_hndl	= aparms.idx;
 	fid_parms.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO;
 
@@ -1884,7 +1967,7 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 }
 
 static int32_t
-ulp_mapper_glb_resource_info_init(struct tf *tfp,
+ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 				  struct bnxt_ulp_mapper_data *mapper_data)
 {
 	struct bnxt_ulp_glb_resource_info *glb_res;
@@ -1901,15 +1984,23 @@ ulp_mapper_glb_resource_info_init(struct tf *tfp,
 	for (idx = 0; idx < num_glb_res_ids; idx++) {
 		switch (glb_res[idx].resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_IDENTIFIER:
-			rc = ulp_mapper_resource_ident_allocate(tfp,
+			rc = ulp_mapper_resource_ident_allocate(ulp_ctx,
 								mapper_data,
 								&glb_res[idx]);
 			break;
+		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+			rc = ulp_mapper_resource_index_tbl_alloc(ulp_ctx,
+								 mapper_data,
+								 &glb_res[idx]);
+			break;
 		default:
 			BNXT_TF_DBG(ERR, "Global resource %x not supported\n",
 				    glb_res[idx].resource_func);
+			rc = -EINVAL;
 			break;
 		}
+		if (rc)
+			return rc;
 	}
 	return rc;
 }
@@ -2125,7 +2216,9 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 			res.resource_func = ent->resource_func;
 			res.direction = dir;
 			res.resource_type = ent->resource_type;
-			res.resource_hndl = ent->resource_hndl;
+			/*convert it from BE to cpu */
+			res.resource_hndl =
+				tfp_be_to_cpu_64(ent->resource_hndl);
 			ulp_mapper_resource_free(ulp_ctx, &res);
 		}
 	}
@@ -2144,6 +2237,71 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
 					 BNXT_ULP_REGULAR_FLOW_TABLE);
 }
 
+/* Function to handle the default global templates that are allocated during
+ * the startup and reused later.
+ */
+static int32_t
+ulp_mapper_glb_template_table_init(struct bnxt_ulp_context *ulp_ctx)
+{
+	uint32_t *glbl_tmpl_list;
+	uint32_t num_glb_tmpls, idx, dev_id;
+	struct bnxt_ulp_mapper_parms parms;
+	struct bnxt_ulp_mapper_data *mapper_data;
+	int32_t rc = 0;
+
+	glbl_tmpl_list = ulp_mapper_glb_template_table_get(&num_glb_tmpls);
+	if (!glbl_tmpl_list || !num_glb_tmpls)
+		return rc; /* No global templates to process */
+
+	/* Get the device id from the ulp context */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	mapper_data = bnxt_ulp_cntxt_ptr2_mapper_data_get(ulp_ctx);
+	if (!mapper_data) {
+		BNXT_TF_DBG(ERR, "Failed to get the ulp mapper data\n");
+		return -EINVAL;
+	}
+
+	/* Iterate the global resources and process each one */
+	for (idx = 0; idx < num_glb_tmpls; idx++) {
+		/* Initialize the parms structure */
+		memset(&parms, 0, sizeof(parms));
+		parms.tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+		parms.ulp_ctx = ulp_ctx;
+		parms.dev_id = dev_id;
+		parms.mapper_data = mapper_data;
+
+		/* Get the class table entry from dev id and class id */
+		parms.class_tid = glbl_tmpl_list[idx];
+		parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
+							    parms.class_tid,
+							    &parms.num_ctbls,
+							    &parms.tbl_idx);
+		if (!parms.ctbls || !parms.num_ctbls) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		parms.device_params = bnxt_ulp_device_params_get(parms.dev_id);
+		if (!parms.device_params) {
+			BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return -EINVAL;
+		}
+		rc = ulp_mapper_class_tbls_process(&parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "class tables failed creation for %d:%d\n",
+				    parms.dev_id, parms.class_tid);
+			return rc;
+		}
+	}
+	return rc;
+}
+
 /* Function to handle the mapping of the Flow to be compatible
  * with the underlying hardware.
  */
@@ -2316,7 +2474,7 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 	}
 
 	/* Allocate the global resource ids */
-	rc = ulp_mapper_glb_resource_info_init(tfp, data);
+	rc = ulp_mapper_glb_resource_info_init(ulp_ctx, data);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to initialize global resource ids\n");
 		goto error;
@@ -2344,6 +2502,12 @@ ulp_mapper_init(struct bnxt_ulp_context *ulp_ctx)
 		}
 	}
 
+	rc = ulp_mapper_glb_template_table_init(ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize global templates\n");
+		goto error;
+	}
+
 	return 0;
 error:
 	/* Ignore the return code in favor of returning the original error. */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 0215a5dde..7c0dc5ee4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -26,6 +26,7 @@
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
 #define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 2efd11447..beca3baa7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -546,3 +546,6 @@ uint32_t bnxt_ulp_encap_vtag_map[] = {
 	[1] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
 	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
+
+uint32_t ulp_glb_template_tbl[] = {
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index a3ddd33fd..4bcd02ba2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -299,4 +299,10 @@ extern struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[];
  */
 extern struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[];
 
+/*
+ * The ulp_global template table is used to initialize default entries
+ * that could be reused by other templates.
+ */
+extern uint32_t ulp_glb_template_tbl[];
+
 #endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 38/51] net/bnxt: add support for internal exact match
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (36 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 39/51] net/bnxt: add conditional execution of mapper tables Ajit Khaparde
                           ` (14 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for internal exact match entries. Prior to this
patch, only external exact match entries were supported.
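
The key behavioural change is that the resource func, rather than the resource
type, now selects which memory an exact match entry lives in. A simplified,
illustration-only sketch with stand-in enum names:

#include <stdio.h>

enum resource_func { EXT_EM_TABLE, INT_EM_TABLE };
enum mem_type      { MEM_EXTERNAL, MEM_INTERNAL };

static enum mem_type em_mem_type(enum resource_func f)
{
	/* internal EM entries live in on-chip memory, external entries
	 * in host-backed EEM memory */
	return (f == INT_EM_TABLE) ? MEM_INTERNAL : MEM_EXTERNAL;
}

int main(void)
{
	printf("internal EM -> %s memory\n",
	       em_mem_type(INT_EM_TABLE) == MEM_INTERNAL ? "on-chip" : "host");
	return 0;
}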

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            | 38 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 13 +++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 21 ++++++----
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |  4 ++
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   |  6 +--
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 13 ++++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |  7 +++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  5 +++
 8 files changed, 85 insertions(+), 22 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 4835b951e..1b52861d4 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -213,8 +213,27 @@ static int32_t
 ulp_eem_tbl_scope_init(struct bnxt *bp)
 {
 	struct tf_alloc_tbl_scope_parms params = {0};
+	uint32_t dev_id;
+	struct bnxt_ulp_device_params *dparms;
 	int rc;
 
+	/* Get the dev specific number of flows that needed to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope alloc is not required\n");
+		return 0;
+	}
+
 	bnxt_init_tbl_scope_parms(bp, &params);
 
 	rc = tf_alloc_tbl_scope(&bp->tfp, &params);
@@ -240,6 +259,8 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 	struct tf_free_tbl_scope_parms	params = {0};
 	struct tf			*tfp;
 	int32_t				rc = 0;
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id;
 
 	if (!ulp_ctx || !ulp_ctx->cfg_data)
 		return -EINVAL;
@@ -254,6 +275,23 @@ ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
 		return -EINVAL;
 	}
 
+	/* Get the dev specific number of flows that needed to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(bp->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	if (dparms->flow_mem_type != BNXT_ULP_FLOW_MEM_TYPE_EXT) {
+		BNXT_TF_DBG(INFO, "Table Scope free is not required\n");
+		return 0;
+	}
+
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 384dc5b2c..7696de2a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -114,7 +114,8 @@ ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info *resource_info,
 	}
 
 	/* Store the handle as 64bit only for EM table entries */
-	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE &&
+	    params->resource_func != BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		resource_info->resource_hndl = (uint32_t)params->resource_hndl;
 		resource_info->resource_type = params->resource_type;
 		resource_info->resource_sub_type = params->resource_sub_type;
@@ -145,7 +146,8 @@ ulp_flow_db_res_info_to_params(struct ulp_fdb_resource_info *resource_info,
 	/* use the helper function to get the resource func */
 	params->resource_func = ulp_flow_db_resource_func_get(resource_info);
 
-	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    params->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 		params->resource_hndl = resource_info->resource_em_handle;
 	} else if (params->resource_func & ULP_FLOW_DB_RES_FUNC_NEED_LOWER) {
 		params->resource_hndl = resource_info->resource_hndl;
@@ -908,7 +910,9 @@ ulp_flow_db_resource_hndl_get(struct bnxt_ulp_context *ulp_ctx,
 				}
 
 			} else if (resource_func ==
-				   BNXT_ULP_RESOURCE_FUNC_EM_TABLE){
+				   BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+				   resource_func ==
+				   BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE) {
 				*res_hndl = fid_res->resource_em_handle;
 				return 0;
 			}
@@ -966,7 +970,8 @@ static void ulp_flow_db_res_dump(struct ulp_fdb_resource_info	*r,
 
 	BNXT_TF_DBG(DEBUG, "Resource func = %x, nxt_resource_idx = %x\n",
 		    res_func, (ULP_FLOW_DB_RES_NXT_MASK & r->nxt_resource_idx));
-	if (res_func == BNXT_ULP_RESOURCE_FUNC_EM_TABLE)
+	if (res_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE ||
+	    res_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		BNXT_TF_DBG(DEBUG, "EM Handle = 0x%016" PRIX64 "\n",
 			    r->resource_em_handle);
 	else
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 6fd55b2a2..e2b771c9f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -556,15 +556,18 @@ ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp,
 }
 
 static inline int32_t
-ulp_mapper_eem_entry_free(struct bnxt_ulp_context *ulp,
-			  struct tf *tfp,
-			  struct ulp_flow_db_res_params *res)
+ulp_mapper_em_entry_free(struct bnxt_ulp_context *ulp,
+			 struct tf *tfp,
+			 struct ulp_flow_db_res_params *res)
 {
 	struct tf_delete_em_entry_parms fparms = { 0 };
 	int32_t rc;
 
 	fparms.dir		= res->direction;
-	fparms.mem		= TF_MEM_EXTERNAL;
+	if (res->resource_func == BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE)
+		fparms.mem = TF_MEM_EXTERNAL;
+	else
+		fparms.mem = TF_MEM_INTERNAL;
 	fparms.flow_handle	= res->resource_hndl;
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
@@ -1443,7 +1446,7 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 #endif
 
 	/* do the transpose for the internal EM keys */
-	if (tbl->resource_type == TF_MEM_INTERNAL)
+	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
 		ulp_blob_perform_byte_reverse(&key);
 
 	rc = bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
@@ -2066,7 +2069,8 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
 			break;
-		case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+		case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
 			rc = ulp_mapper_em_tbl_process(parms, tbl);
 			break;
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
@@ -2119,8 +2123,9 @@ ulp_mapper_resource_free(struct bnxt_ulp_context *ulp,
 	case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 		rc = ulp_mapper_tcam_entry_free(ulp, tfp, res);
 		break;
-	case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
-		rc = ulp_mapper_eem_entry_free(ulp, tfp, res);
+	case BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE:
+	case BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE:
+		rc = ulp_mapper_em_entry_free(ulp, tfp, res);
 		break;
 	case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 		rc = ulp_mapper_index_entry_free(ulp, tfp, res);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 517422338..b3527eccb 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -87,6 +87,9 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	/* Need to allocate 2 * Num flows to account for hash type bit */
 	mark_tbl->gfid_num_entries = dparms->mark_db_gfid_entries;
+	if (!mark_tbl->gfid_num_entries)
+		goto gfid_not_required;
+
 	mark_tbl->gfid_tbl = rte_zmalloc("ulp_rx_eem_flow_mark_table",
 					 mark_tbl->gfid_num_entries *
 					 sizeof(struct bnxt_gfid_mark_info),
@@ -109,6 +112,7 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 		    mark_tbl->gfid_num_entries - 1,
 		    mark_tbl->gfid_mask);
 
+gfid_not_required:
 	/* Add the mark tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
 	return 0;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index d4c7bfa4d..8eb559050 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -179,7 +179,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -284,7 +284,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
@@ -389,7 +389,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
 	.resource_type = TF_MEM_EXTERNAL,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 7c0dc5ee4..3168d29a9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -149,10 +149,11 @@ enum bnxt_ulp_flow_mem_type {
 };
 
 enum bnxt_ulp_glb_regfile_index {
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 0,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 1,
-	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LOOPBACK_AREC_INDEX = 2,
-	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 3
+	BNXT_ULP_GLB_REGFILE_INDEX_NOT_USED = 0,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID = 1,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID = 2,
+	BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR = 3,
+	BNXT_ULP_GLB_REGFILE_INDEX_LAST = 4
 };
 
 enum bnxt_ulp_hdr_type {
@@ -257,8 +258,8 @@ enum bnxt_ulp_match_type_bitmask {
 
 enum bnxt_ulp_resource_func {
 	BNXT_ULP_RESOURCE_FUNC_INVALID = 0x00,
-	BNXT_ULP_RESOURCE_FUNC_EM_TABLE = 0x20,
-	BNXT_ULP_RESOURCE_FUNC_RSVD1 = 0x40,
+	BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE = 0x20,
+	BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE = 0x40,
 	BNXT_ULP_RESOURCE_FUNC_RSVD2 = 0x60,
 	BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE = 0x80,
 	BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE = 0x81,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index beca3baa7..7c440e3a4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -321,7 +321,12 @@ struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	.mark_db_gfid_entries   = 65536,
 	.flow_count_db_entries  = 16384,
 	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2
+	.num_phy_ports          = 2,
+	.ext_cntr_table_type    = 0,
+	.byte_count_mask        = 0x00000003ffffffff,
+	.packet_count_mask      = 0xfffffffc00000000,
+	.byte_count_shift       = 0,
+	.packet_count_shift     = 36
 	}
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 4bcd02ba2..5a7a7b910 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -146,6 +146,11 @@ struct bnxt_ulp_device_params {
 	uint64_t			flow_db_num_entries;
 	uint32_t			flow_count_db_entries;
 	uint32_t			num_resources_per_flow;
+	uint32_t			ext_cntr_table_type;
+	uint64_t			byte_count_mask;
+	uint64_t			packet_count_mask;
+	uint32_t			byte_count_shift;
+	uint32_t			packet_count_shift;
 };
 
 /* Flow Mapper */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 39/51] net/bnxt: add conditional execution of mapper tables
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (37 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 38/51] net/bnxt: add support for internal exact match Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 40/51] net/bnxt: allow port MAC qcfg command for trusted VF Ajit Khaparde
                           ` (13 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for conditional execution of the mapper tables so
that, for example, the count table is processed only when the count
action is configured for the flow.
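
A minimal sketch of the skip decision, with simplified stand-in types and a
hypothetical COUNT action bit; the real code drives this from the tables'
cond_opcode/cond_operand fields:

#include <stdbool.h>
#include <stdint.h>

enum cond_opcode { COND_NOP, COND_COMP_FIELD, COND_ACTION_BIT, COND_HDR_BIT };

struct tbl_cond  { enum cond_opcode opcode; uint64_t operand; };
struct flow_bits { uint64_t act_bits; uint64_t hdr_bits; uint32_t comp_fld[32]; };

/* Return true when the table should be skipped for this flow.
 * Bounds checks are omitted for brevity. */
static bool skip_table(const struct flow_bits *p, const struct tbl_cond *c)
{
	switch (c->opcode) {
	case COND_NOP:
		return false;				/* always execute     */
	case COND_COMP_FIELD:
		return !p->comp_fld[c->operand];	/* computed-field flag */
	case COND_ACTION_BIT:
		return !(p->act_bits & c->operand);	/* e.g. COUNT action   */
	case COND_HDR_BIT:
		return !(p->hdr_bits & c->operand);	/* header bitmap       */
	default:
		return true;				/* unknown: skip       */
	}
}

int main(void)
{
	struct flow_bits f = { .act_bits = 0x20 };	/* hypothetical COUNT bit */
	struct tbl_cond  c = { COND_ACTION_BIT, 0x20 };

	return skip_table(&f, &c) ? 1 : 0;	/* 0: table is executed */
}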

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 45 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  1 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |  8 ++++
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    | 12 ++++-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  2 +
 5 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index e2b771c9f..d0931d411 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2008,6 +2008,44 @@ ulp_mapper_glb_resource_info_init(struct bnxt_ulp_context *ulp_ctx,
 	return rc;
 }
 
+/*
+ * Function to process the conditional opcode of the mapper table.
+ * returns 1 to skip the table.
+ * return 0 to continue processing the table.
+ */
+static int32_t
+ulp_mapper_tbl_cond_opcode_process(struct bnxt_ulp_mapper_parms *parms,
+				   struct bnxt_ulp_mapper_tbl_info *tbl)
+{
+	int32_t rc = 1;
+
+	switch (tbl->cond_opcode) {
+	case BNXT_ULP_COND_OPCODE_NOP:
+		rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_COMP_FIELD:
+		if (tbl->cond_operand < BNXT_ULP_CF_IDX_LAST &&
+		    ULP_COMP_FLD_IDX_RD(parms, tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_ACTION_BIT:
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_HDR_BIT:
+		if (ULP_BITMAP_ISSET(parms->hdr_bitmap->bits,
+				     tbl->cond_operand))
+			rc = 0;
+		break;
+	default:
+		BNXT_TF_DBG(ERR,
+			    "Invalid arg in mapper tbl for cond opcode\n");
+		break;
+	}
+	return rc;
+}
+
 /*
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
@@ -2027,6 +2065,9 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	for (i = 0; i < parms->num_atbls; i++) {
 		tbl = &parms->atbls[i];
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
 			rc = ulp_mapper_index_tbl_process(parms, tbl, false);
@@ -2065,6 +2106,9 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 	for (i = 0; i < parms->num_ctbls; i++) {
 		struct bnxt_ulp_mapper_tbl_info *tbl = &parms->ctbls[i];
 
+		if (ulp_mapper_tbl_cond_opcode_process(parms, tbl))
+			continue;
+
 		switch (tbl->resource_func) {
 		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
 			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
@@ -2326,6 +2370,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	memset(&parms, 0, sizeof(parms));
 	parms.act_prop = cparms->act_prop;
 	parms.act_bitmap = cparms->act;
+	parms.hdr_bitmap = cparms->hdr_bitmap;
 	parms.regfile = &regfile;
 	parms.hdr_field = cparms->hdr_field;
 	parms.comp_fld = cparms->comp_fld;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index b159081b1..19134830a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -62,6 +62,7 @@ struct bnxt_ulp_mapper_parms {
 	uint32_t				num_ctbls;
 	struct ulp_rte_act_prop			*act_prop;
 	struct ulp_rte_act_bitmap		*act_bitmap;
+	struct ulp_rte_hdr_bitmap		*hdr_bitmap;
 	struct ulp_rte_hdr_field		*hdr_field;
 	uint32_t				*comp_fld;
 	struct ulp_regfile			*regfile;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 41ac77c6f..8fffaecce 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1128,6 +1128,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* update the computed field to notify it is ipv4 header */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
@@ -1148,6 +1152,10 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
 		       &ip_type, sizeof(uint32_t));
 
+		/* update the computed field to notify it is ipv6 header */
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+				    1);
+
 		if (!ulp_rte_item_skip_void(&item, 1))
 			return BNXT_TF_RC_ERROR;
 	} else {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 3168d29a9..27628a510 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -118,7 +118,17 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
 	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
 	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
-	BNXT_ULP_CF_IDX_LAST = 29
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG = 29,
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 30,
+	BNXT_ULP_CF_IDX_LAST = 31
+};
+
+enum bnxt_ulp_cond_opcode {
+	BNXT_ULP_COND_OPCODE_NOP = 0,
+	BNXT_ULP_COND_OPCODE_COMP_FIELD = 1,
+	BNXT_ULP_COND_OPCODE_ACTION_BIT = 2,
+	BNXT_ULP_COND_OPCODE_HDR_BIT = 3,
+	BNXT_ULP_COND_OPCODE_LAST = 4
 };
 
 enum bnxt_ulp_critical_resource {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 5a7a7b910..df999b18c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -165,6 +165,8 @@ struct bnxt_ulp_mapper_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	uint32_t			resource_type; /* TF_ enum type */
 	enum bnxt_ulp_resource_sub_type	resource_sub_type;
+	enum bnxt_ulp_cond_opcode	cond_opcode;
+	uint32_t			cond_operand;
 	uint8_t		direction;
 	uint32_t	priority;
 	uint8_t		srch_b4_alloc;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 40/51] net/bnxt: allow port MAC qcfg command for trusted VF
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (38 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 39/51] net/bnxt: add conditional execution of mapper tables Ajit Khaparde
@ 2020-07-03 21:01         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 41/51] net/bnxt: enhancements for port db Ajit Khaparde
                           ` (12 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:01 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Issue the HWRM_PORT_MAC_QCFG command on a trusted VF to fetch the
port count.
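
The gist of the change is only the gating condition; a tiny stand-alone
illustration, where the flags stand in for the driver's BNXT_VF() and
BNXT_VF_IS_TRUSTED() checks:

#include <stdbool.h>
#include <stdio.h>

struct dev_flags { bool is_vf; bool is_trusted_vf; };

/* Old check skipped every non-PF; the new one skips only untrusted VFs,
 * so PFs and trusted VFs both issue HWRM_PORT_MAC_QCFG. */
static bool should_query_port_mac(const struct dev_flags *d)
{
	return !(d->is_vf && !d->is_trusted_vf);
}

int main(void)
{
	struct dev_flags trusted_vf = { .is_vf = true, .is_trusted_vf = true };

	printf("trusted VF issues qcfg: %d\n", should_query_port_mac(&trusted_vf));
	return 0;
}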

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 2605ef039..6ade32d1b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3194,14 +3194,14 @@ int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
 
 	bp->port_svif = BNXT_SVIF_INVALID;
 
-	if (!BNXT_PF(bp))
+	if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
 		return 0;
 
 	HWRM_PREP(&req, HWRM_PORT_MAC_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
-	HWRM_CHECK_RESULT();
+	HWRM_CHECK_RESULT_SILENT();
 
 	port_svif_info = rte_le_to_cpu_16(resp->port_svif_info);
 	if (port_svif_info &
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 41/51] net/bnxt: enhancements for port db
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (39 preceding siblings ...)
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 40/51] net/bnxt: allow port MAC qcfg command for trusted VF Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
                           ` (11 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

1. Add "enum bnxt_ulp_intf_type” as the second parameter for the
   port & func helper functions
2. Return vfrep related port & func information in the helper functions
3. Allocate phy_port_list dynamically based on port count
4. Introduce ulp_func_id_tbl array for book keeping func related
   information indexed by func_id
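
A simplified sketch of item 2's helper shape, using stand-in types and
hypothetical field names rather than the driver's port database structures:

#include <stdint.h>
#include <stdio.h>

enum intf_type { INTF_INVALID, INTF_PF, INTF_TRUSTED_VF, INTF_VF, INTF_VF_REP };

struct port_info {
	enum intf_type type;
	uint16_t rep_svif;	/* the representor's own svif        */
	uint16_t parent_svif;	/* parent (driver) function's svif   */
};

/* The extra intf-type argument lets callers pick which side of a VF
 * representor they want; non-representor ports return the parent value. */
static uint16_t get_svif(const struct port_info *p, enum intf_type want)
{
	if (p->type == INTF_VF_REP && want == INTF_VF_REP)
		return p->rep_svif;
	return p->parent_svif;
}

int main(void)
{
	struct port_info vfr = { INTF_VF_REP, 0x11, 0x22 };

	printf("vf-rep svif 0x%x, parent svif 0x%x\n",
	       get_svif(&vfr, INTF_VF_REP), get_svif(&vfr, INTF_INVALID));
	return 0;
}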

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h                  |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c           |  64 ++++++++--
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h |   6 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c       |   2 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c  |   9 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.c    | 143 +++++++++++++++++------
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    |  56 +++++++--
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  22 +++-
 8 files changed, 250 insertions(+), 62 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 43e5e7162..32acced60 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -23,6 +23,7 @@
 
 #include "tf_core.h"
 #include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
 
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
@@ -879,10 +880,11 @@ extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops;
 int32_t bnxt_ulp_init(struct bnxt *bp);
 void bnxt_ulp_deinit(struct bnxt *bp);
 
-uint16_t bnxt_get_vnic_id(uint16_t port);
-uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
-uint16_t bnxt_get_fw_func_id(uint16_t port);
-uint16_t bnxt_get_parif(uint16_t port);
+uint16_t bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif,
+		       enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type);
+uint16_t bnxt_get_parif(uint16_t port, enum bnxt_ulp_intf_type type);
 uint16_t bnxt_get_phy_port_id(uint16_t port);
 uint16_t bnxt_get_vport(uint16_t port);
 enum bnxt_ulp_intf_type
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 93c2f0db9..74627f4ee 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5066,25 +5066,48 @@ static void bnxt_config_vf_req_fwd(struct bnxt *bp)
 }
 
 uint16_t
-bnxt_get_svif(uint16_t port_id, bool func_svif)
+bnxt_get_svif(uint16_t port_id, bool func_svif,
+	      enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->svif;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return func_svif ? bp->func_svif : bp->port_svif;
 }
 
 uint16_t
-bnxt_get_vnic_id(uint16_t port)
+bnxt_get_vnic_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt_vnic_info *vnic;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->dflt_vnic_id;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
@@ -5093,12 +5116,23 @@ bnxt_get_vnic_id(uint16_t port)
 }
 
 uint16_t
-bnxt_get_fw_func_id(uint16_t port)
+bnxt_get_fw_func_id(uint16_t port, enum bnxt_ulp_intf_type type)
 {
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port];
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid;
+
+		eth_dev = vfr->parent_dev;
+	}
+
 	bp = eth_dev->data->dev_private;
 
 	return bp->fw_fid;
@@ -5115,8 +5149,14 @@ bnxt_get_interface_type(uint16_t port)
 		return BNXT_ULP_INTF_TYPE_VF_REP;
 
 	bp = eth_dev->data->dev_private;
-	return BNXT_PF(bp) ? BNXT_ULP_INTF_TYPE_PF
-			   : BNXT_ULP_INTF_TYPE_VF;
+	if (BNXT_PF(bp))
+		return BNXT_ULP_INTF_TYPE_PF;
+	else if (BNXT_VF_IS_TRUSTED(bp))
+		return BNXT_ULP_INTF_TYPE_TRUSTED_VF;
+	else if (BNXT_VF(bp))
+		return BNXT_ULP_INTF_TYPE_VF;
+
+	return BNXT_ULP_INTF_TYPE_INVALID;
 }
 
 uint16_t
@@ -5129,6 +5169,9 @@ bnxt_get_phy_port_id(uint16_t port_id)
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
 		vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
 		eth_dev = vfr->parent_dev;
 	}
 
@@ -5138,15 +5181,20 @@ bnxt_get_phy_port_id(uint16_t port_id)
 }
 
 uint16_t
-bnxt_get_parif(uint16_t port_id)
+bnxt_get_parif(uint16_t port_id, enum bnxt_ulp_intf_type type)
 {
-	struct bnxt_vf_representor *vfr;
 	struct rte_eth_dev *eth_dev;
 	struct bnxt *bp;
 
 	eth_dev = &rte_eth_devices[port_id];
 	if (BNXT_ETH_DEV_IS_REPRESENTOR(eth_dev)) {
-		vfr = eth_dev->data->dev_private;
+		struct bnxt_vf_representor *vfr = eth_dev->data->dev_private;
+		if (!vfr)
+			return 0;
+
+		if (type == BNXT_ULP_INTF_TYPE_VF_REP)
+			return vfr->fw_fid - 1;
+
 		eth_dev = vfr->parent_dev;
 	}
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f772d4919..ebb71405b 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -6,6 +6,11 @@
 #ifndef _BNXT_TF_COMMON_H_
 #define _BNXT_TF_COMMON_H_
 
+#include <inttypes.h>
+
+#include "bnxt_ulp.h"
+#include "ulp_template_db_enum.h"
+
 #define BNXT_TF_DBG(lvl, fmt, args...)	PMD_DRV_LOG(lvl, fmt, ## args)
 
 #define BNXT_ULP_EM_FLOWS			8192
@@ -48,6 +53,7 @@ enum ulp_direction_type {
 enum bnxt_ulp_intf_type {
 	BNXT_ULP_INTF_TYPE_INVALID = 0,
 	BNXT_ULP_INTF_TYPE_PF,
+	BNXT_ULP_INTF_TYPE_TRUSTED_VF,
 	BNXT_ULP_INTF_TYPE_VF,
 	BNXT_ULP_INTF_TYPE_PF_REP,
 	BNXT_ULP_INTF_TYPE_VF_REP,
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 1b52861d4..e5e7e5f43 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -658,7 +658,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	rc = ulp_dparms_init(bp, bp->ulp_ctx);
 
 	/* create the port database */
-	rc = ulp_port_db_init(bp->ulp_ctx);
+	rc = ulp_port_db_init(bp->ulp_ctx, bp->port_cnt);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to create the port database\n");
 		goto jump_to_error;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 6eb2d6146..138b0b73d 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -128,7 +128,8 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	mapper_cparms.act_prop = &params.act_prop;
 	mapper_cparms.class_tid = class_id;
 	mapper_cparms.act_tid = act_tmpl;
-	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id,
+						    BNXT_ULP_INTF_TYPE_INVALID);
 	mapper_cparms.dir = params.dir;
 
 	/* Call the ulp mapper to create the flow in the hardware. */
@@ -226,7 +227,8 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	}
 
 	flow_id = (uint32_t)(uintptr_t)flow;
-	func_id = bnxt_get_fw_func_id(dev->data->port_id);
+	func_id = bnxt_get_fw_func_id(dev->data->port_id,
+				      BNXT_ULP_INTF_TYPE_INVALID);
 
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
@@ -270,7 +272,8 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	if (ulp_ctx_deinit_allowed(bp)) {
 		ret = ulp_flow_db_session_flow_flush(ulp_ctx);
 	} else if (bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx)) {
-		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id);
+		func_id = bnxt_get_fw_func_id(eth_dev->data->port_id,
+					      BNXT_ULP_INTF_TYPE_INVALID);
 		ret = ulp_flow_db_function_flow_flush(ulp_ctx, func_id);
 	}
 	if (ret)
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index ea27ef41f..659cefa07 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -33,7 +33,7 @@ ulp_port_db_allocate_ifindex(struct bnxt_ulp_port_db *port_db)
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt)
 {
 	struct bnxt_ulp_port_db *port_db;
 
@@ -60,6 +60,18 @@ int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt)
 			    "Failed to allocate mem for port interface list\n");
 		goto error_free;
 	}
+
+	/* Allocate the phy port list */
+	port_db->phy_port_list = rte_zmalloc("bnxt_ulp_phy_port_list",
+					     port_cnt *
+					     sizeof(struct ulp_phy_port_info),
+					     0);
+	if (!port_db->phy_port_list) {
+		BNXT_TF_DBG(ERR,
+			    "Failed to allocate mem for phy port list\n");
+		goto error_free;
+	}
+
 	return 0;
 
 error_free:
@@ -89,6 +101,7 @@ int32_t	ulp_port_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 	bnxt_ulp_cntxt_ptr2_port_db_set(ulp_ctxt, NULL);
 
 	/* Free up all the memory. */
+	rte_free(port_db->phy_port_list);
 	rte_free(port_db->ulp_intf_list);
 	rte_free(port_db);
 	return 0;
@@ -110,6 +123,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	struct ulp_phy_port_info *port_data;
 	struct bnxt_ulp_port_db *port_db;
 	struct ulp_interface_info *intf;
+	struct ulp_func_if_info *func;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -134,20 +148,48 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 	intf = &port_db->ulp_intf_list[ifindex];
 
 	intf->type = bnxt_get_interface_type(port_id);
+	intf->drv_func_id = bnxt_get_fw_func_id(port_id,
+						BNXT_ULP_INTF_TYPE_INVALID);
+
+	func = &port_db->ulp_func_id_tbl[intf->drv_func_id];
+	if (!func->func_valid) {
+		func->func_svif = bnxt_get_svif(port_id, true,
+						BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_spif = bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+		func->func_valid = true;
+	}
 
-	intf->func_id = bnxt_get_fw_func_id(port_id);
-	intf->func_svif = bnxt_get_svif(port_id, 1);
-	intf->func_spif = bnxt_get_phy_port_id(port_id);
-	intf->func_parif = bnxt_get_parif(port_id);
-	intf->default_vnic = bnxt_get_vnic_id(port_id);
-	intf->phy_port_id = bnxt_get_phy_port_id(port_id);
+	if (intf->type == BNXT_ULP_INTF_TYPE_VF_REP) {
+		intf->vf_func_id =
+			bnxt_get_fw_func_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+
+		func = &port_db->ulp_func_id_tbl[intf->vf_func_id];
+		func->func_svif =
+			bnxt_get_svif(port_id, true, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->func_spif =
+			bnxt_get_phy_port_id(port_id);
+		func->func_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
+		func->func_vnic =
+			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
+		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+	}
 
-	if (intf->type == BNXT_ULP_INTF_TYPE_PF) {
-		port_data = &port_db->phy_port_list[intf->phy_port_id];
-		port_data->port_svif = bnxt_get_svif(port_id, 0);
+	port_data = &port_db->phy_port_list[func->phy_port_id];
+	if (!port_data->port_valid) {
+		port_data->port_svif =
+			bnxt_get_svif(port_id, false,
+				      BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_spif = bnxt_get_phy_port_id(port_id);
-		port_data->port_parif = bnxt_get_parif(port_id);
+		port_data->port_parif =
+			bnxt_get_parif(port_id, BNXT_ULP_INTF_TYPE_INVALID);
 		port_data->port_vport = bnxt_get_vport(port_id);
+		port_data->port_valid = true;
 	}
 
 	return 0;
@@ -194,6 +236,7 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 			    uint32_t ifindex,
+			    uint32_t fid_type,
 			    uint16_t *func_id)
 {
 	struct bnxt_ulp_port_db *port_db;
@@ -203,7 +246,12 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*func_id =  port_db->ulp_intf_list[ifindex].func_id;
+
+	if (fid_type == BNXT_ULP_DRV_FUNC_FID)
+		*func_id =  port_db->ulp_intf_list[ifindex].drv_func_id;
+	else
+		*func_id =  port_db->ulp_intf_list[ifindex].vf_func_id;
+
 	return 0;
 }
 
@@ -212,7 +260,7 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * svif_type [in] the svif type of the given ifindex.
  * svif [out] the svif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -220,21 +268,27 @@ ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t svif_type,
 		     uint16_t *svif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*svif = port_db->ulp_intf_list[ifindex].func_svif;
+
+	if (svif_type == BNXT_ULP_DRV_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
+	} else if (svif_type == BNXT_ULP_VF_FUNC_SVIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*svif = port_db->ulp_func_id_tbl[func_id].func_svif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*svif = port_db->phy_port_list[phy_port_id].port_svif;
 	}
 
@@ -246,7 +300,7 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * spif_type [in] the spif type of the given ifindex.
  * spif [out] the spif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -254,21 +308,27 @@ ulp_port_db_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t spif_type,
 		     uint16_t *spif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*spif = port_db->ulp_intf_list[ifindex].func_spif;
+
+	if (spif_type == BNXT_ULP_DRV_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
+	} else if (spif_type == BNXT_ULP_VF_FUNC_SPIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*spif = port_db->ulp_func_id_tbl[func_id].func_spif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*spif = port_db->phy_port_list[phy_port_id].port_spif;
 	}
 
@@ -280,7 +340,7 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
  *
  * ulp_ctxt [in] Ptr to ulp context
  * ifindex [in] ulp ifindex
- * dir [in] the direction for the flow.
+ * parif_type [in] the parif type of the given ifindex.
  * parif [out] the parif of the given ifindex.
  *
  * Returns 0 on success or negative number on failure.
@@ -288,21 +348,26 @@ ulp_port_db_spif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 		     uint32_t ifindex,
-		     uint32_t dir,
+		     uint32_t parif_type,
 		     uint16_t *parif)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	if (dir == ULP_DIR_EGRESS) {
-		*parif = port_db->ulp_intf_list[ifindex].func_parif;
+	if (parif_type == BNXT_ULP_DRV_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
+	} else if (parif_type == BNXT_ULP_VF_FUNC_PARIF) {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*parif = port_db->ulp_func_id_tbl[func_id].func_parif;
 	} else {
-		phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 		*parif = port_db->phy_port_list[phy_port_id].port_parif;
 	}
 
@@ -321,16 +386,26 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
 			     uint32_t ifindex,
+			     uint32_t vnic_type,
 			     uint16_t *vnic)
 {
 	struct bnxt_ulp_port_db *port_db;
+	uint16_t func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	*vnic = port_db->ulp_intf_list[ifindex].default_vnic;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC) {
+		func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	} else {
+		func_id = port_db->ulp_intf_list[ifindex].vf_func_id;
+		*vnic = port_db->ulp_func_id_tbl[func_id].func_vnic;
+	}
+
 	return 0;
 }
 
@@ -348,14 +423,16 @@ ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 		      uint32_t ifindex, uint16_t *vport)
 {
 	struct bnxt_ulp_port_db *port_db;
-	uint16_t phy_port_id;
+	uint16_t phy_port_id, func_id;
 
 	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
 	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
 		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
 		return -EINVAL;
 	}
-	phy_port_id = port_db->ulp_intf_list[ifindex].phy_port_id;
+
+	func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+	phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
 	*vport = port_db->phy_port_list[phy_port_id].port_vport;
 	return 0;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 87de3bcbc..b1419a34c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -9,19 +9,54 @@
 #include "bnxt_ulp.h"
 
 #define BNXT_PORT_DB_MAX_INTF_LIST		256
+#define BNXT_PORT_DB_MAX_FUNC			2048
 
-/* Structure for the Port database resource information. */
-struct ulp_interface_info {
-	enum bnxt_ulp_intf_type	type;
-	uint16_t		func_id;
+enum bnxt_ulp_svif_type {
+	BNXT_ULP_DRV_FUNC_SVIF = 0,
+	BNXT_ULP_VF_FUNC_SVIF,
+	BNXT_ULP_PHY_PORT_SVIF
+};
+
+enum bnxt_ulp_spif_type {
+	BNXT_ULP_DRV_FUNC_SPIF = 0,
+	BNXT_ULP_VF_FUNC_SPIF,
+	BNXT_ULP_PHY_PORT_SPIF
+};
+
+enum bnxt_ulp_parif_type {
+	BNXT_ULP_DRV_FUNC_PARIF = 0,
+	BNXT_ULP_VF_FUNC_PARIF,
+	BNXT_ULP_PHY_PORT_PARIF
+};
+
+enum bnxt_ulp_vnic_type {
+	BNXT_ULP_DRV_FUNC_VNIC = 0,
+	BNXT_ULP_VF_FUNC_VNIC
+};
+
+enum bnxt_ulp_fid_type {
+	BNXT_ULP_DRV_FUNC_FID,
+	BNXT_ULP_VF_FUNC_FID
+};
+
+struct ulp_func_if_info {
+	uint16_t		func_valid;
 	uint16_t		func_svif;
 	uint16_t		func_spif;
 	uint16_t		func_parif;
-	uint16_t		default_vnic;
+	uint16_t		func_vnic;
 	uint16_t		phy_port_id;
 };
 
+/* Structure for the Port database resource information. */
+struct ulp_interface_info {
+	enum bnxt_ulp_intf_type	type;
+	uint16_t		drv_func_id;
+	uint16_t		vf_func_id;
+};
+
 struct ulp_phy_port_info {
+	uint16_t	port_valid;
 	uint16_t	port_svif;
 	uint16_t	port_spif;
 	uint16_t	port_parif;
@@ -35,7 +70,8 @@ struct bnxt_ulp_port_db {
 
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
-	struct ulp_phy_port_info	phy_port_list[RTE_MAX_ETHPORTS];
+	struct ulp_phy_port_info	*phy_port_list;
+	struct ulp_func_if_info		ulp_func_id_tbl[BNXT_PORT_DB_MAX_FUNC];
 };
 
 /*
@@ -46,7 +82,7 @@ struct bnxt_ulp_port_db {
  *
  * Returns 0 on success or negative number on failure.
  */
-int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt);
+int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt);
 
 /*
  * Deinitialize the port database. Memory is deallocated in
@@ -94,7 +130,8 @@ ulp_port_db_dev_port_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_function_id_get(struct bnxt_ulp_context *ulp_ctxt,
-			    uint32_t ifindex, uint16_t *func_id);
+			    uint32_t ifindex, uint32_t fid_type,
+			    uint16_t *func_id);
 
 /*
  * Api to get the svif for a given ulp ifindex.
@@ -150,7 +187,8 @@ ulp_port_db_parif_get(struct bnxt_ulp_context *ulp_ctxt,
  */
 int32_t
 ulp_port_db_default_vnic_get(struct bnxt_ulp_context *ulp_ctxt,
-			     uint32_t ifindex, uint16_t *vnic);
+			     uint32_t ifindex, uint32_t vnic_type,
+			     uint16_t *vnic);
 
 /*
  * Api to get the vport id for a given ulp ifindex.
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 8fffaecce..073b3537f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -166,6 +166,8 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 	uint16_t port_id = svif;
 	uint32_t dir = 0;
 	struct ulp_rte_hdr_field *hdr_field;
+	enum bnxt_ulp_svif_type svif_type;
+	enum bnxt_ulp_intf_type if_type;
 	uint32_t ifindex;
 	int32_t rc;
 
@@ -187,7 +189,18 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 				    "Invalid port id\n");
 			return BNXT_TF_RC_ERROR;
 		}
-		ulp_port_db_svif_get(params->ulp_ctx, ifindex, dir, &svif);
+
+		if (dir == ULP_DIR_INGRESS) {
+			svif_type = BNXT_ULP_PHY_PORT_SVIF;
+		} else {
+			if_type = bnxt_get_interface_type(port_id);
+			if (if_type == BNXT_ULP_INTF_TYPE_VF_REP)
+				svif_type = BNXT_ULP_VF_FUNC_SVIF;
+			else
+				svif_type = BNXT_ULP_DRV_FUNC_SVIF;
+		}
+		ulp_port_db_svif_get(params->ulp_ctx, ifindex, svif_type,
+				     &svif);
 		svif = rte_cpu_to_be_16(svif);
 	}
 	hdr_field = &params->hdr_field[BNXT_ULP_PROTO_HDR_FIELD_SVIF_IDX];
@@ -1256,7 +1269,7 @@ ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
 
 	/* copy the PF of the current device into VNIC Property */
 	svif = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
-	svif = bnxt_get_vnic_id(svif);
+	svif = bnxt_get_vnic_id(svif, BNXT_ULP_INTF_TYPE_INVALID);
 	svif = rte_cpu_to_be_32(svif);
 	memcpy(&params->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 	       &svif, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1280,7 +1293,8 @@ ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using VF conversion */
-		pid = bnxt_get_vnic_id(vf_action->id);
+		pid = bnxt_get_vnic_id(vf_action->id,
+				       BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
@@ -1307,7 +1321,7 @@ ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 			return BNXT_TF_RC_PARSE_ERR;
 		}
 		/* TBD: Update the computed VNIC using port conversion */
-		pid = bnxt_get_vnic_id(port_id->id);
+		pid = bnxt_get_vnic_id(port_id->id, BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
-- 
2.21.1 (Apple Git-122.3)
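
For reference, a minimal caller-side sketch (not part of the patch) of the
reworked port-db accessors; example_port_db_lookup() is a hypothetical
helper, the prototypes are the ones declared in ulp_port_db.h above, and
error handling is trimmed:

static int32_t
example_port_db_lookup(struct bnxt_ulp_context *ulp_ctxt, uint32_t ifindex)
{
	uint16_t drv_svif, vf_vnic;

	/* SVIF of the driver function that owns the port */
	if (ulp_port_db_svif_get(ulp_ctxt, ifindex,
				 BNXT_ULP_DRV_FUNC_SVIF, &drv_svif))
		return -EINVAL;

	/* default VNIC of the VF endpoint behind a VF representor */
	if (ulp_port_db_default_vnic_get(ulp_ctxt, ifindex,
					 BNXT_ULP_VF_FUNC_VNIC, &vf_vnic))
		return -EINVAL;

	BNXT_TF_DBG(DEBUG, "drv svif 0x%x vf vnic 0x%x\n", drv_svif, vf_vnic);
	return 0;
}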


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 42/51] net/bnxt: manage VF to VFR conduit
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (40 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 41/51] net/bnxt: enhancements for port db Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
                           ` (10 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

When VF-VFR conduits are created, a mark is added to the mark database.
mark_flag indicates whether the mark is valid and carries VFR information
(the VFR_ID bit in mark_flag). The Rx path checks for this VFR_ID bit;
however, while adding the mark to the mark database, the VFR_ID bit was
not being set in mark_flag. Set the VFR_ID bit when the mark is added.
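
A small standalone illustration of the flag handling this change enables
(the demo_* names and flag values below are hypothetical stand-ins for the
real definitions in the mark manager):

#include <stdint.h>
#include <stdio.h>

#define DEMO_MARK_VALID		0x1	/* illustrative values only */
#define DEMO_MARK_VFR_ID	0x2

struct demo_mark_info {
	uint32_t flags;
	uint32_t mark_id;
};

/* Mirrors the fixed add path: propagate the VFR_ID request into flags. */
static void
demo_mark_add(struct demo_mark_info *mi, uint32_t mark, uint32_t mark_flag)
{
	mi->mark_id = mark;
	mi->flags |= DEMO_MARK_VALID;
	if (mark_flag & DEMO_MARK_VFR_ID)
		mi->flags |= DEMO_MARK_VFR_ID;
}

int main(void)
{
	struct demo_mark_info mi = { 0, 0 };

	demo_mark_add(&mi, 0x1234, DEMO_MARK_VFR_ID);
	/* the Rx-side style check now succeeds */
	printf("is_vfr_id=%u\n", !!(mi.flags & DEMO_MARK_VFR_ID));
	return 0;
}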

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index b3527eccb..b2c8c349c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -18,6 +18,8 @@
 						BNXT_ULP_MARK_VALID)
 #define ULP_MARK_DB_ENTRY_IS_INVALID(mark_info) (!((mark_info)->flags &\
 						   BNXT_ULP_MARK_VALID))
+#define ULP_MARK_DB_ENTRY_SET_VFR_ID(mark_info) ((mark_info)->flags |=\
+						 BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_VFR_ID(mark_info) ((mark_info)->flags &\
 						BNXT_ULP_MARK_VFR_ID)
 #define ULP_MARK_DB_ENTRY_IS_GLOBAL_HW_FID(mark_info) ((mark_info)->flags &\
@@ -263,6 +265,9 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 		BNXT_TF_DBG(DEBUG, "Set LFID[0x%0x] = 0x%0x\n", fid, mark);
 		mtbl->lfid_tbl[fid].mark_id = mark;
 		ULP_MARK_DB_ENTRY_SET_VALID(&mtbl->lfid_tbl[fid]);
+
+		if (mark_flag & BNXT_ULP_MARK_VFR_ID)
+			ULP_MARK_DB_ENTRY_SET_VFR_ID(&mtbl->lfid_tbl[fid]);
 	}
 
 	return 0;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 43/51] net/bnxt: parse reps along with other dev-args
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (41 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
                           ` (9 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Representor dev-args need to be parsed during PCI probe, as they
determine whether VF representor ports are subsequently probed as well.
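
For context, the representor ports themselves are requested through the
standard DPDK 'representor' devarg at probe time; an illustrative
invocation (PCI address, representor range and EAL options are examples
only):

  testpmd -w 0000:82:00.0,representor=[0-3] -- -i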

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 74627f4ee..bf9f8fda1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -97,8 +97,10 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
 #define BNXT_DEVARG_MAX_NUM_KFLOWS  "max-num-kflows"
+#define BNXT_DEVARG_REPRESENTOR	"representor"
 
 static const char *const bnxt_dev_args[] = {
+	BNXT_DEVARG_REPRESENTOR,
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
 	BNXT_DEVARG_MAX_NUM_KFLOWS,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 44/51] net/bnxt: fill mapper parameters with default rules
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (42 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
                           ` (8 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Default rules are needed to punt packets between the following
entities in the non-offloaded path:
1. Device PORT to DPDK App
2. DPDK App to Device PORT
3. VF Representor to VF
4. VF to VF Representor

This patch fills in all the relevant information in the computed
fields and the act_prop fields so that the flow mapper can create
the necessary tables in the hardware to enable the default rules.
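
A minimal caller-side sketch of the new API (hypothetical: the in-tree
consumer arrives later in this series, example_create_df_rule() is an
illustrative helper and ULP_DEFAULT_CLASS_TID stands in for the real
default-rule class template ID):

static int32_t
example_create_df_rule(struct rte_eth_dev *eth_dev)
{
	uint16_t port_id = eth_dev->data->port_id;
	uint32_t flow_id = 0;
	struct ulp_tlv_param param_list[] = {
		{
			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
			.length = 2,
			/* byte packing must match ulp_df_dev_port_handler() */
			.value = { (port_id >> 8) & 0xff, port_id & 0xff },
		},
		{
			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
		},
	};

	/* a real caller stashes flow_id and later passes it to
	 * ulp_default_flow_destroy(eth_dev, flow_id)
	 */
	return ulp_default_flow_create(eth_dev, param_list,
				       ULP_DEFAULT_CLASS_TID, &flow_id);
}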

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c                |   6 +-
 drivers/net/bnxt/meson.build                  |   1 +
 drivers/net/bnxt/tf_ulp/Makefile              |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |  24 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |  30 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c       | 385 ++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  10 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |   3 +-
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |   5 +
 9 files changed, 444 insertions(+), 21 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index bf9f8fda1..678aa20e7 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1274,9 +1274,6 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
-	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_deinit(bp);
-
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
@@ -1332,6 +1329,9 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
+	if (BNXT_TRUFLOW_EN(bp))
+		bnxt_ulp_deinit(bp);
+
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
 
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 8f6ed419e..2939857ca 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -61,6 +61,7 @@ sources = files('bnxt_cpr.c',
 	'tf_ulp/ulp_rte_parser.c',
 	'tf_ulp/bnxt_ulp_flow.c',
 	'tf_ulp/ulp_port_db.c',
+	'tf_ulp/ulp_def_rules.c',
 
 	'rte_pmd_bnxt.c')
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 57341f876..3f1b43bae 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -16,3 +16,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index eecc09cea..3563f63fa 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -12,6 +12,8 @@
 
 #include "rte_ethdev.h"
 
+#include "ulp_template_db_enum.h"
+
 struct bnxt_ulp_data {
 	uint32_t			tbl_scope_id;
 	struct bnxt_ulp_mark_tbl	*mark_tbl;
@@ -49,6 +51,12 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+struct ulp_tlv_param {
+	enum bnxt_ulp_df_param_type type;
+	uint32_t length;
+	uint8_t value[16];
+};
+
 /*
  * Allow the deletion of context only for the bnxt device that
  * created the session
@@ -127,4 +135,20 @@ bnxt_ulp_cntxt_ptr2_port_db_set(struct bnxt_ulp_context	*ulp_ctx,
 struct bnxt_ulp_port_db *
 bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx);
 
+/* Function to create default flows. */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id);
+
+/* Function to destroy default flows. */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev,
+			 uint32_t flow_id);
+
+int
+bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		      struct rte_flow_error *error);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 138b0b73d..7ef306e58 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -207,7 +207,7 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev,
 }
 
 /* Function to destroy the rte flow. */
-static int
+int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 		      struct rte_flow *flow,
 		      struct rte_flow_error *error)
@@ -220,9 +220,10 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
@@ -233,17 +234,22 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	if (ulp_flow_db_validate_flow_func(ulp_ctx, flow_id, func_id) ==
 	    false) {
 		BNXT_TF_DBG(ERR, "Incorrect device params\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+		if (error)
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
 		return -EINVAL;
 	}
 
-	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id);
-	if (ret)
-		rte_flow_error_set(error, -ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
+	ret = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret) {
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+		if (error)
+			rte_flow_error_set(error, -ret,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to destroy flow.");
+	}
 
 	return ret;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
new file mode 100644
index 000000000..46b558f31
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt_tf_common.h"
+#include "ulp_template_struct.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_db_field.h"
+#include "ulp_utils.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+
+struct bnxt_ulp_def_param_handler {
+	int32_t (*vfr_func)(struct bnxt_ulp_context *ulp_ctx,
+			    struct ulp_tlv_param *param,
+			    struct bnxt_ulp_mapper_create_parms *mapper_params);
+};
+
+static int32_t
+ulp_set_svif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t svif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t svif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_svif_get(ulp_ctx, ifindex, svif_type, &svif);
+	if (rc)
+		return rc;
+
+	if (svif_type == BNXT_ULP_PHY_PORT_SVIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SVIF;
+	else if (svif_type == BNXT_ULP_DRV_FUNC_SVIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SVIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SVIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, svif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_spif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t spif_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t spif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_spif_get(ulp_ctx, ifindex, spif_type, &spif);
+	if (rc)
+		return rc;
+
+	if (spif_type == BNXT_ULP_PHY_PORT_SPIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_SPIF;
+	else if (spif_type == BNXT_ULP_DRV_FUNC_SPIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_SPIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_SPIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, spif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_parif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t  ifindex, uint8_t parif_type,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t parif;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_parif_get(ulp_ctx, ifindex, parif_type, &parif);
+	if (rc)
+		return rc;
+
+	if (parif_type == BNXT_ULP_PHY_PORT_PARIF)
+		idx = BNXT_ULP_CF_IDX_PHY_PORT_PARIF;
+	else if (parif_type == BNXT_ULP_DRV_FUNC_PARIF)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_PARIF;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_PARIF;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, parif);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vport_in_comp_fld(struct bnxt_ulp_context *ulp_ctx, uint32_t ifindex,
+			  struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vport;
+	int rc;
+
+	rc = ulp_port_db_vport_get(ulp_ctx, ifindex, &vport);
+	if (rc)
+		return rc;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_PHY_PORT_VPORT,
+			    vport);
+	return 0;
+}
+
+static int32_t
+ulp_set_vnic_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
+			 uint32_t  ifindex, uint8_t vnic_type,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t vnic;
+	uint8_t idx;
+	int rc;
+
+	rc = ulp_port_db_default_vnic_get(ulp_ctx, ifindex, vnic_type, &vnic);
+	if (rc)
+		return rc;
+
+	if (vnic_type == BNXT_ULP_DRV_FUNC_VNIC)
+		idx = BNXT_ULP_CF_IDX_DRV_FUNC_VNIC;
+	else
+		idx = BNXT_ULP_CF_IDX_VF_FUNC_VNIC;
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, idx, vnic);
+
+	return 0;
+}
+
+static int32_t
+ulp_set_vlan_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	struct ulp_rte_act_prop *act_prop = mapper_params->act_prop;
+
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_SET_VLAN_VID)) {
+		BNXT_TF_DBG(ERR,
+			    "VLAN already set, multiple VLANs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	port_id = rte_cpu_to_be_16(port_id);
+
+	ULP_BITMAP_SET(mapper_params->act->bits,
+		       BNXT_ULP_ACTION_BIT_SET_VLAN_VID);
+
+	memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG],
+	       &port_id, sizeof(port_id));
+
+	return 0;
+}
+
+static int32_t
+ulp_set_mark_in_act_prop(uint16_t port_id,
+			 struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	if (ULP_BITMAP_ISSET(mapper_params->act->bits,
+			     BNXT_ULP_ACTION_BIT_MARK)) {
+		BNXT_TF_DBG(ERR,
+			    "MARK already set, multiple MARKs unsupported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	ULP_COMP_FLD_IDX_WR(mapper_params, BNXT_ULP_CF_IDX_DEV_PORT_ID,
+			    port_id);
+
+	return 0;
+}
+
+static int32_t
+ulp_df_dev_port_handler(struct bnxt_ulp_context *ulp_ctx,
+			struct ulp_tlv_param *param,
+			struct bnxt_ulp_mapper_create_parms *mapper_params)
+{
+	uint16_t port_id;
+	uint32_t ifindex;
+	int rc;
+
+	port_id = param->value[0] | param->value[1];
+
+	rc = ulp_port_db_dev_port_to_ulp_index(ulp_ctx, port_id, &ifindex);
+	if (rc) {
+		BNXT_TF_DBG(ERR,
+				"Invalid port id\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* Set port SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SVIF */
+	rc = ulp_set_svif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_SVIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_PHY_PORT_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_DRV_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func SPIF */
+	rc = ulp_set_spif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_SPIF,
+				      mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set port PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_PHY_PORT_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set DRV Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex,
+				       BNXT_ULP_DRV_FUNC_PARIF, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF Func PARIF */
+	rc = ulp_set_parif_in_comp_fld(ulp_ctx, ifindex, BNXT_ULP_VF_FUNC_PARIF,
+				       mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set uplink VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, true, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VF VNIC */
+	rc = ulp_set_vnic_in_comp_fld(ulp_ctx, ifindex, false, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VPORT */
+	rc = ulp_set_vport_in_comp_fld(ulp_ctx, ifindex, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set VLAN */
+	rc = ulp_set_vlan_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	/* Set MARK */
+	rc = ulp_set_mark_in_act_prop(port_id, mapper_params);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+struct bnxt_ulp_def_param_handler ulp_def_handler_tbl[] = {
+	[BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID] = {
+			.vfr_func = ulp_df_dev_port_handler }
+};
+
+/*
+ * Function to create default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * param_list [in] Ptr to a list of parameters (Currently, only DPDK port_id).
+ * ulp_class_tid [in] Class template ID number.
+ * flow_id [out] Ptr to flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_create(struct rte_eth_dev *eth_dev,
+			struct ulp_tlv_param *param_list,
+			uint32_t ulp_class_tid,
+			uint32_t *flow_id)
+{
+	struct ulp_rte_hdr_field	hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	uint32_t			comp_fld[BNXT_ULP_CF_IDX_LAST];
+	struct bnxt_ulp_mapper_create_parms mapper_params = { 0 };
+	struct ulp_rte_act_prop		act_prop;
+	struct ulp_rte_act_bitmap	act = { 0 };
+	struct bnxt_ulp_context		*ulp_ctx;
+	uint32_t type;
+	int rc;
+
+	memset(&mapper_params, 0, sizeof(mapper_params));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(comp_fld, 0, sizeof(comp_fld));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	mapper_params.hdr_field = hdr_field;
+	mapper_params.act = &act;
+	mapper_params.act_prop = &act_prop;
+	mapper_params.comp_fld = comp_fld;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized. "
+				 "Failed to create default flow.\n");
+		return -EINVAL;
+	}
+
+	type = param_list->type;
+	while (type != BNXT_ULP_DF_PARAM_TYPE_LAST) {
+		if (ulp_def_handler_tbl[type].vfr_func) {
+			rc = ulp_def_handler_tbl[type].vfr_func(ulp_ctx,
+								param_list,
+								&mapper_params);
+			if (rc) {
+				BNXT_TF_DBG(ERR,
+					    "Failed to create default flow.\n");
+				return rc;
+			}
+		}
+
+		param_list++;
+		type = param_list->type;
+	}
+
+	mapper_params.class_tid = ulp_class_tid;
+
+	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params, flow_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create default flow.\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/*
+ * Function to destroy default rules for the following paths
+ * 1) Device PORT to DPDK App
+ * 2) DPDK App to Device PORT
+ * 3) VF Representor to VF
+ * 4) VF to VF Representor
+ *
+ * eth_dev [in] Ptr to rte eth device.
+ * flow_id [in] Flow identifier.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_default_flow_destroy(struct rte_eth_dev *eth_dev, uint32_t flow_id)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int rc;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		return -EINVAL;
+	}
+
+	rc = ulp_mapper_flow_destroy(ulp_ctx, flow_id,
+				     BNXT_ULP_DEFAULT_FLOW_TABLE);
+	if (rc)
+		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index d0931d411..e39398a1b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2274,16 +2274,15 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 }
 
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type)
 {
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
 		return -EINVAL;
 	}
 
-	return ulp_mapper_resources_free(ulp_ctx,
-					 fid,
-					 BNXT_ULP_REGULAR_FLOW_TABLE);
+	return ulp_mapper_resources_free(ulp_ctx, fid, flow_tbl_type);
 }
 
 /* Function to handle the default global templates that are allocated during
@@ -2486,7 +2485,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 
 flow_error:
 	/* Free all resources that were allocated during flow creation */
-	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
+	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid,
+				      BNXT_ULP_REGULAR_FLOW_TABLE);
 	if (trc)
 		BNXT_TF_DBG(ERR, "Failed to free all resources rc=%d\n", trc);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 19134830a..b35065449 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -109,7 +109,8 @@ ulp_mapper_flow_create(struct bnxt_ulp_context	*ulp_ctx,
 
 /* Function that frees all resources associated with the flow. */
 int32_t
-ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid,
+			enum bnxt_ulp_flow_db_tables flow_tbl_type);
 
 /*
  * Function that frees all resources and can be called on default or regular
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 27628a510..2346797db 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -145,6 +145,11 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST = 4
 };
 
+enum bnxt_ulp_df_param_type {
+	BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID = 0,
+	BNXT_ULP_DF_PARAM_TYPE_LAST = 1
+};
+
 enum bnxt_ulp_direction {
 	BNXT_ULP_DIRECTION_INGRESS = 0,
 	BNXT_ULP_DIRECTION_EGRESS = 1,
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 45/51] net/bnxt: add VF-rep and stat templates
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (43 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
                           ` (7 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru, Somnath Kotur

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Support for VF representors and counters is added to the ULP
templates.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |   21 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    2 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  424 +-
 .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5198 +++++++++++++----
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  409 +-
 .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   87 +-
 7 files changed, 4948 insertions(+), 1656 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index e39398a1b..3f175fb51 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -22,7 +22,7 @@ ulp_mapper_glb_resource_info_list_get(uint32_t *num_entries)
 {
 	if (!num_entries)
 		return NULL;
-	*num_entries = BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+	*num_entries = BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 	return ulp_glb_resource_tbl;
 }
 
@@ -119,11 +119,6 @@ ulp_mapper_resource_ident_allocate(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_identifier(tfp, &fparms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Ident [%s][%d][%d] = 0x%04x\n",
-		    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, iparms.ident_type, iparms.id);
-#endif
 	return rc;
 }
 
@@ -182,11 +177,6 @@ ulp_mapper_resource_index_tbl_alloc(struct bnxt_ulp_context *ulp_ctx,
 		tf_free_tbl_entry(tfp, &free_parms);
 		return rc;
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	BNXT_TF_DBG(DEBUG, "Allocated Glb Res Index [%s][%d][%d] = 0x%04x\n",
-		    (aparms.dir == TF_DIR_RX) ? "RX" : "TX",
-		    glb_res->glb_regfile_index, aparms.type, aparms.idx);
-#endif
 	return rc;
 }
 
@@ -1441,9 +1431,6 @@ ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 			return rc;
 		}
 	}
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-	ulp_mapper_result_dump("EEM Result", tbl, &data);
-#endif
 
 	/* do the transpose for the internal EM keys */
 	if (tbl->resource_func == BNXT_ULP_RESOURCE_FUNC_INT_EM_TABLE)
@@ -1594,10 +1581,6 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	/* if encap bit swap is enabled perform the bit swap */
 	if (parms->device_params->encap_byte_swap && encap_flds) {
 		ulp_blob_perform_encap_swap(&data);
-#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
-		BNXT_TF_DBG(INFO, "Dump after encap swap\n");
-		ulp_mapper_blob_dump(&data);
-#endif
 	}
 
 	/*
@@ -2255,7 +2238,7 @@ ulp_mapper_glb_resource_info_deinit(struct bnxt_ulp_context *ulp_ctx,
 
 	/* Iterate the global resources and process each one */
 	for (dir = TF_DIR_RX; dir < TF_DIR_MAX; dir++) {
-		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ;
+		for (idx = 0; idx < BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ;
 		      idx++) {
 			ent = &mapper_data->glb_res_tbl[dir][idx];
 			if (ent->resource_func ==
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index b35065449..f6d55449b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -46,7 +46,7 @@ struct bnxt_ulp_mapper_glb_resource_entry {
 
 struct bnxt_ulp_mapper_data {
 	struct bnxt_ulp_mapper_glb_resource_entry
-		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ];
+		glb_res_tbl[TF_DIR_MAX][BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ];
 	struct bnxt_ulp_mapper_cache_entry
 		*cache_tbl[BNXT_ULP_CACHE_TBL_MAX_SZ];
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 9b14fa0bd..3d6507399 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -9,62 +9,293 @@
 #include "ulp_rte_parser.h"
 
 uint16_t ulp_act_sig_tbl[BNXT_ULP_ACT_SIG_TBL_MAX_SZ] = {
-	[BNXT_ULP_ACT_HID_00a1] = 1,
-	[BNXT_ULP_ACT_HID_0029] = 2,
-	[BNXT_ULP_ACT_HID_0040] = 3
+	[BNXT_ULP_ACT_HID_0002] = 1,
+	[BNXT_ULP_ACT_HID_0022] = 2,
+	[BNXT_ULP_ACT_HID_0026] = 3,
+	[BNXT_ULP_ACT_HID_0006] = 4,
+	[BNXT_ULP_ACT_HID_0009] = 5,
+	[BNXT_ULP_ACT_HID_0029] = 6,
+	[BNXT_ULP_ACT_HID_002d] = 7,
+	[BNXT_ULP_ACT_HID_004b] = 8,
+	[BNXT_ULP_ACT_HID_004a] = 9,
+	[BNXT_ULP_ACT_HID_004f] = 10,
+	[BNXT_ULP_ACT_HID_004e] = 11,
+	[BNXT_ULP_ACT_HID_006c] = 12,
+	[BNXT_ULP_ACT_HID_0070] = 13,
+	[BNXT_ULP_ACT_HID_0021] = 14,
+	[BNXT_ULP_ACT_HID_0025] = 15,
+	[BNXT_ULP_ACT_HID_0043] = 16,
+	[BNXT_ULP_ACT_HID_0042] = 17,
+	[BNXT_ULP_ACT_HID_0047] = 18,
+	[BNXT_ULP_ACT_HID_0046] = 19,
+	[BNXT_ULP_ACT_HID_0064] = 20,
+	[BNXT_ULP_ACT_HID_0068] = 21,
+	[BNXT_ULP_ACT_HID_00a1] = 22,
+	[BNXT_ULP_ACT_HID_00df] = 23
 };
 
 struct bnxt_ulp_act_match_info ulp_act_match_list[] = {
 	[1] = {
-	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_hid = BNXT_ULP_ACT_HID_0002,
 	.act_sig = { .bits =
-		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
-		BNXT_ULP_ACTION_BIT_MARK |
-		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DROP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
-	.act_tid = 0
+	.act_tid = 1
 	},
 	[2] = {
+	.act_hid = BNXT_ULP_ACT_HID_0022,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[3] = {
+	.act_hid = BNXT_ULP_ACT_HID_0026,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[4] = {
+	.act_hid = BNXT_ULP_ACT_HID_0006,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_DROP |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[5] = {
+	.act_hid = BNXT_ULP_ACT_HID_0009,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[6] = {
 	.act_hid = BNXT_ULP_ACT_HID_0029,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
 		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[7] = {
+	.act_hid = BNXT_ULP_ACT_HID_002d,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
 		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.act_tid = 1
 	},
-	[3] = {
-	.act_hid = BNXT_ULP_ACT_HID_0040,
+	[8] = {
+	.act_hid = BNXT_ULP_ACT_HID_004b,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[9] = {
+	.act_hid = BNXT_ULP_ACT_HID_004a,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[10] = {
+	.act_hid = BNXT_ULP_ACT_HID_004f,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[11] = {
+	.act_hid = BNXT_ULP_ACT_HID_004e,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[12] = {
+	.act_hid = BNXT_ULP_ACT_HID_006c,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[13] = {
+	.act_hid = BNXT_ULP_ACT_HID_0070,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_RSS |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[14] = {
+	.act_hid = BNXT_ULP_ACT_HID_0021,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[15] = {
+	.act_hid = BNXT_ULP_ACT_HID_0025,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[16] = {
+	.act_hid = BNXT_ULP_ACT_HID_0043,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[17] = {
+	.act_hid = BNXT_ULP_ACT_HID_0042,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[18] = {
+	.act_hid = BNXT_ULP_ACT_HID_0047,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[19] = {
+	.act_hid = BNXT_ULP_ACT_HID_0046,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[20] = {
+	.act_hid = BNXT_ULP_ACT_HID_0064,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[21] = {
+	.act_hid = BNXT_ULP_ACT_HID_0068,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_ACTION_BIT_COUNT |
+		BNXT_ULP_ACTION_BIT_POP_VLAN |
+		BNXT_ULP_ACTION_BIT_DEC_TTL |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 1
+	},
+	[22] = {
+	.act_hid = BNXT_ULP_ACT_HID_00a1,
+	.act_sig = { .bits =
+		BNXT_ULP_ACTION_BIT_VXLAN_DECAP |
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_VNIC |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+	.act_tid = 2
+	},
+	[23] = {
+	.act_hid = BNXT_ULP_ACT_HID_00df,
 	.act_sig = { .bits =
 		BNXT_ULP_ACTION_BIT_VXLAN_ENCAP |
 		BNXT_ULP_ACTION_BIT_VPORT |
 		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
-	.act_tid = 2
+	.act_tid = 3
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 0
+	.num_tbls = 2,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 1,
-	.start_tbl_idx = 1
+	.start_tbl_idx = 2,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
-	.num_tbls = 1,
-	.start_tbl_idx = 2
+	.num_tbls = 3,
+	.start_tbl_idx = 3,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_STATS_64,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_ACTION_BIT,
+	.cond_operand = BNXT_ULP_ACTION_BIT_COUNT,
+	.direction = TF_DIR_RX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 0,
+	.result_bit_size = 64,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
@@ -72,7 +303,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 0,
+	.result_start_idx = 1,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -87,7 +318,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 26,
+	.result_start_idx = 27,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 0,
@@ -97,12 +328,46 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 53,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
+	.direction = TF_DIR_TX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 56,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 3,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
 	.resource_type = TF_TBL_TYPE_EXT,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_TX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.result_start_idx = 52,
+	.result_start_idx = 59,
 	.result_bit_size = 128,
 	.result_num_fields = 26,
 	.encap_num_fields = 12,
@@ -114,10 +379,19 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
-	.field_bit_size = 14,
+	.field_bit_size = 64,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
 	.field_bit_size = 1,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
@@ -131,7 +405,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_COUNT >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_COUNT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -187,7 +471,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DEC_TTL & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -195,11 +489,7 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.result_operand = {
-		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
@@ -212,7 +502,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_POP_VLAN & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
@@ -224,7 +524,17 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 1,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT,
+	.result_operand = {
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 56) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 48) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 40) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 32) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 24) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 16) & 0xff,
+		((uint64_t)BNXT_ULP_ACTION_BIT_DROP >> 8) & 0xff,
+		(uint64_t)BNXT_ULP_ACTION_BIT_DROP & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 14,
@@ -308,7 +618,11 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 4,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 12,
@@ -336,6 +650,50 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 128,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP,
+	.result_operand = {
+		(BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 14,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
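
For readers tracing the result-field tables above: each `.result_operand` is a fixed 16-byte array, and multi-byte values such as the 64-bit action bits (e.g. `BNXT_ULP_ACTION_BIT_POP_VLAN`) or the 16-bit action-property indices are packed into it big-endian, one `& 0xff` byte at a time, which is why the initializers spell out the `>> 56 ... >> 8` terms explicitly. The sketch below shows the same packing done with a small helper; the helper name, the operand-size macro, and the standalone `main` are illustrative only and are not part of the driver.

```c
/*
 * Minimal sketch, not driver code: pack a 64-bit action-bit value into the
 * first 8 bytes of a 16-byte operand array, big-endian, exactly as the
 * template initializers above do with explicit ">> 56 ... & 0xff" terms.
 * ULP_OPERAND_SZ and pack_operand_be64() are illustrative names.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ULP_OPERAND_SZ 16

static void pack_operand_be64(uint8_t operand[ULP_OPERAND_SZ], uint64_t val)
{
	int i;

	memset(operand, 0, ULP_OPERAND_SZ);
	for (i = 0; i < 8; i++)
		operand[i] = (uint8_t)(val >> (56 - 8 * i));
}

int main(void)
{
	/* Stand-in value for an action bit such as BNXT_ULP_ACTION_BIT_POP_VLAN. */
	uint64_t action_bit = 1ULL << 13;
	uint8_t operand[ULP_OPERAND_SZ];
	int i;

	pack_operand_be64(operand, action_bit);
	for (i = 0; i < ULP_OPERAND_SZ; i++)
		printf("%02x ", operand[i]);
	printf("\n");
	return 0;
}
```

The same convention appears below for 16-bit indices (`BNXT_ULP_ACT_PROP_IDX_*`, `BNXT_ULP_CF_IDX_*`): the high byte is written as `(idx >> 8) & 0xff` and the low byte as `idx & 0xff`, with the remaining 14 operand bytes zero.
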
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
index 8eb559050..feac30af2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_class.c
@@ -10,8 +10,8 @@
 
 uint16_t ulp_class_sig_tbl[BNXT_ULP_CLASS_SIG_TBL_MAX_SZ] = {
 	[BNXT_ULP_CLASS_HID_0080] = 1,
-	[BNXT_ULP_CLASS_HID_0000] = 2,
-	[BNXT_ULP_CLASS_HID_0087] = 3
+	[BNXT_ULP_CLASS_HID_0087] = 2,
+	[BNXT_ULP_CLASS_HID_0000] = 3
 };
 
 struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
@@ -23,1871 +23,4722 @@ struct bnxt_ulp_class_match_info ulp_class_match_list[] = {
 		BNXT_ULP_HDR_BIT_O_UDP |
 		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 0,
+	.class_tid = 8,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[2] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0000,
+	.class_hid = BNXT_ULP_CLASS_HID_0087,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
+		BNXT_ULP_HDR_BIT_T_VXLAN |
+		BNXT_ULP_HDR_BIT_I_ETH |
+		BNXT_ULP_HDR_BIT_I_IPV4 |
+		BNXT_ULP_HDR_BIT_I_UDP |
+		BNXT_ULP_FLOW_DIR_BITMASK_ING },
 	.field_sig = { .bits =
-		BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR |
-		BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT |
-		BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR |
+		BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT |
+		BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 1,
+	.class_tid = 9,
 	.act_vnic = 0,
 	.wc_pri = 0
 	},
 	[3] = {
-	.class_hid = BNXT_ULP_CLASS_HID_0087,
+	.class_hid = BNXT_ULP_CLASS_HID_0000,
 	.hdr_sig = { .bits =
 		BNXT_ULP_HDR_BIT_O_ETH |
 		BNXT_ULP_HDR_BIT_O_IPV4 |
 		BNXT_ULP_HDR_BIT_O_UDP |
-		BNXT_ULP_HDR_BIT_T_VXLAN |
-		BNXT_ULP_HDR_BIT_I_ETH |
-		BNXT_ULP_HDR_BIT_I_IPV4 |
-		BNXT_ULP_HDR_BIT_I_UDP |
-		BNXT_ULP_FLOW_DIR_BITMASK_ING },
+		BNXT_ULP_FLOW_DIR_BITMASK_EGR },
 	.field_sig = { .bits =
-		BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR |
-		BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT |
-		BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR |
+		BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT |
+		BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT |
 		BNXT_ULP_MATCH_TYPE_BITMASK_EM },
-	.class_tid = 2,
+	.class_tid = 10,
 	.act_vnic = 0,
 	.wc_pri = 0
 	}
 };
 
 struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
-	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 4,
+	.start_tbl_idx = 0,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 2,
+	.start_tbl_idx = 4,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((3 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 6,
+	.start_tbl_idx = 6,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((4 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 0
+	.start_tbl_idx = 12,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
 	},
-	[((1 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((5 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 17,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((6 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 20,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((7 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 1,
+	.start_tbl_idx = 23,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_DEFAULT
+	},
+	[((8 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 5
+	.start_tbl_idx = 24,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	},
-	[((2 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+	[((9 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
+		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 5,
+	.start_tbl_idx = 29,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
+	},
+	[((10 << BNXT_ULP_LOG2_MAX_NUM_DEV) |
 		BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
 	.num_tbls = 5,
-	.start_tbl_idx = 10
+	.start_tbl_idx = 34,
+	.flow_db_table_type = BNXT_ULP_FDB_TYPE_REGULAR
 	}
 };
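
A note on how `ulp_class_tmpl_list` above is indexed: each entry is keyed by `(class_tid << BNXT_ULP_LOG2_MAX_NUM_DEV) | device_id`, and the entry in turn names a contiguous slice (`start_tbl_idx`, `num_tbls`) of `ulp_class_tbl_list` that follows. The sketch below reproduces that index arithmetic with hypothetical constants; it illustrates the scheme only and is not the driver's lookup routine.

```c
/*
 * Illustrative sketch of the template-list indexing used above:
 *   index = (class_tid << BNXT_ULP_LOG2_MAX_NUM_DEV) | device_id
 * The macro value and struct below are assumptions for the example only.
 */
#include <stdint.h>
#include <stdio.h>

#define ULP_LOG2_MAX_NUM_DEV 2	/* stand-in for BNXT_ULP_LOG2_MAX_NUM_DEV */

struct tmpl_list_info {
	uint32_t start_tbl_idx;	/* first entry in the class table list   */
	uint32_t num_tbls;	/* number of consecutive tables to apply */
};

static uint32_t tmpl_list_index(uint32_t class_tid, uint32_t device_id)
{
	return (class_tid << ULP_LOG2_MAX_NUM_DEV) | device_id;
}

int main(void)
{
	/* e.g. class_tid 8 from the match list above, with device id 0. */
	uint32_t idx = tmpl_list_index(8, 0);

	printf("template list index = %u\n", idx);
	return 0;
}
```
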
 
 struct bnxt_ulp_mapper_tbl_info ulp_class_tbl_list[] = {
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 0,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
 	.result_start_idx = 0,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 0,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 2,
+	.key_start_idx = 0,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 1,
+	.result_start_idx = 26,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 15,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 14,
-	.result_bit_size = 10,
+	.result_start_idx = 39,
+	.result_bit_size = 32,
 	.result_num_fields = 1,
 	.encap_num_fields = 0,
-	.ident_start_idx = 1,
-	.ident_nums = 1,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 40,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_PHY_PORT_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 41,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 18,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 15,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 13,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 67,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 60,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 23,
-	.result_bit_size = 64,
-	.result_num_fields = 9,
-	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 80,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 71,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 32,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 92,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 2,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 73,
+	.key_start_idx = 26,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 33,
+	.result_start_idx = 118,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_RX,
+	.result_start_idx = 131,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 86,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 46,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.key_start_idx = 39,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 157,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 3,
-	.ident_nums = 1,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_TX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 89,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 47,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 52,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 170,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
 	.direction = TF_DIR_TX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 131,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 55,
+	.key_start_idx = 65,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 183,
 	.result_bit_size = 64,
-	.result_num_fields = 9,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_DFLT_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 196,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IF_TABLE,
+	.resource_type = TF_IF_TBL_TYPE_PROF_PARIF_ERR_ACT_REC_PTR,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 197,
+	.result_bit_size = 32,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_COMP_FIELD,
+	.index_operand = BNXT_ULP_CF_IDX_VF_FUNC_PARIF
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 142,
-	.blob_key_bit_size = 12,
-	.key_bit_size = 12,
-	.key_num_fields = 2,
-	.result_start_idx = 64,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+	.result_start_idx = 198,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 4,
-	.ident_nums = 1,
-	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_VFR_FLAG,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
 	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
 	.direction = TF_DIR_RX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 144,
+	.key_start_idx = 78,
 	.blob_key_bit_size = 167,
 	.key_bit_size = 167,
 	.key_num_fields = 13,
-	.result_start_idx = 65,
+	.result_start_idx = 224,
 	.result_bit_size = 64,
 	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_ACT_ENCAP_16B,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
-	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 157,
-	.blob_key_bit_size = 16,
-	.key_bit_size = 16,
-	.key_num_fields = 3,
-	.result_start_idx = 78,
-	.result_bit_size = 10,
-	.result_num_fields = 1,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 237,
+	.result_bit_size = 0,
+	.result_num_fields = 0,
+	.encap_num_fields = 12,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 249,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
 	.encap_num_fields = 0,
-	.ident_start_idx = 5,
-	.ident_nums = 1,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
-	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
 	},
 	{
 	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
-	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
-	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
-	.direction = TF_DIR_RX,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
 	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 160,
-	.blob_key_bit_size = 81,
-	.key_bit_size = 81,
-	.key_num_fields = 42,
-	.result_start_idx = 79,
-	.result_bit_size = 38,
-	.result_num_fields = 8,
+	.key_start_idx = 91,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 275,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 0,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
 	},
 	{
-	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
-	.resource_type = TF_MEM_EXTERNAL,
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
 	.resource_sub_type =
-		BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED,
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
 	.direction = TF_DIR_RX,
-	.priority = BNXT_ULP_PRIORITY_NOT_USED,
-	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
-	.key_start_idx = 202,
-	.blob_key_bit_size = 448,
-	.key_bit_size = 448,
-	.key_num_fields = 11,
-	.result_start_idx = 87,
-	.result_bit_size = 64,
+	.result_start_idx = 288,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_ALLOCATE,
+	.index_operand = BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 104,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 314,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 117,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 327,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.resource_type = TF_TBL_TYPE_FULL_ACT_RECORD,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION,
+	.direction = TF_DIR_TX,
+	.result_start_idx = 340,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.index_opcode = BNXT_ULP_INDEX_OPCODE_GLOBAL,
+	.index_operand = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 130,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 366,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 0,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 132,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 367,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 145,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 380,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 1,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 148,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 381,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 190,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 389,
+	.result_bit_size = 64,
 	.result_num_fields = 9,
 	.encap_num_fields = 0,
-	.ident_start_idx = 6,
+	.ident_start_idx = 2,
 	.ident_nums = 0,
 	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
 	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
-	}
-};
-
-struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 201,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 398,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 2,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 203,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 399,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 216,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 412,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 3,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 219,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 413,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.key_start_idx = 261,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 421,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 272,
+	.blob_key_bit_size = 12,
+	.key_bit_size = 12,
+	.key_num_fields = 2,
+	.result_start_idx = 430,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 4,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 274,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 431,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_CACHE_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.resource_sub_type =
+		BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 287,
+	.blob_key_bit_size = 16,
+	.key_bit_size = 16,
+	.key_num_fields = 3,
+	.result_start_idx = 444,
+	.result_bit_size = 10,
+	.result_num_fields = 1,
+	.encap_num_fields = 0,
+	.ident_start_idx = 5,
+	.ident_nums = 1
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.resource_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_TX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 290,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 445,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_NOP,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_NO
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EXT_EM_TABLE,
+	.resource_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_TX,
+	.key_start_idx = 332,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 448,
+	.key_num_fields = 11,
+	.result_start_idx = 453,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.encap_num_fields = 0,
+	.ident_start_idx = 6,
+	.ident_nums = 0,
+	.mark_db_opcode = BNXT_ULP_MARK_DB_OPCODE_SET_IF_MARK_ACTION,
+	.critical_resource = BNXT_ULP_CRITICAL_RESOURCE_YES
+	}
+};
+
+struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x02}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_SVIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_SVIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L4_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L3_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_L2_HDR_VALID_YES,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.spec_operand = {
+		BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {
+		(BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
+	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.spec_operand = {
+		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_PARIF >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_PARIF & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DEV_PORT_ID >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DEV_PORT_ID & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_DST_PORT & 0xff,
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT & 0xff,
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
+	.field_bit_size = 32,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR >> 8) & 0xff,
+		BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_DRV_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_DRV_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x81, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x02}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 80,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_PHY_PORT_VPORT >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_PHY_PORT_VPORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_DST_PORT & 0xff,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.mask_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_SVIF_INDEX >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_SVIF_INDEX & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 12,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_CLASS_TID >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_CLASS_TID & 0xff,
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_COMP_FIELD,
+	.result_operand = {
+		(BNXT_ULP_CF_IDX_VF_FUNC_VNIC >> 8) & 0xff,
+		BNXT_ULP_CF_IDX_VF_FUNC_VNIC & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_ACTION_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_L2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TUN_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_TYPE_UDP,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL4_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL3_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_TL2_HDR_VALID_YES,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 9,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 7,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_GLB_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID >> 8) & 0xff,
-		BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 2,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 4,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
 	.field_bit_size = 1,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
-		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 251,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 3,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_DST_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_DST_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 16,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
-	.spec_operand = {
-		BNXT_ULP_SYM_IP_PROTO_UDP,
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_CONSTANT,
+	.result_operand = {
+		(BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT >> 8) & 0xff,
+		BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT & 0xff,
 		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 32,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_HDR_FIELD,
-	.spec_operand = {
-		(BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR >> 8) & 0xff,
-		BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 48,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 24,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 10,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
 	},
 	{
-	.field_bit_size = 8,
-	.mask_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO,
-	.spec_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
-	.spec_operand = {
-		(BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 >> 8) & 0xff,
-		BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 & 0xff,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
-	}
-};
-
-struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	},
 	{
 	.field_bit_size = 10,
 	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
@@ -2309,7 +5160,12 @@ struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
 	},
 	{
 	.field_bit_size = 16,
-	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_ZERO
+	.result_opcode = BNXT_ULP_MAPPER_OPC_SET_TO_REGFILE,
+	.result_operand = {
+		(BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
 	},
 	{
 	.field_bit_size = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 2346797db..695546437 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -6,7 +6,7 @@
 #ifndef ULP_TEMPLATE_DB_H_
 #define ULP_TEMPLATE_DB_H_
 
-#define BNXT_ULP_REGFILE_MAX_SZ 16
+#define BNXT_ULP_REGFILE_MAX_SZ 17
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_CACHE_TBL_MAX_SZ 4
@@ -18,15 +18,15 @@
 #define BNXT_ULP_CLASS_HID_SHFTL 23
 #define BNXT_ULP_CLASS_HID_MASK 255
 #define BNXT_ULP_ACT_SIG_TBL_MAX_SZ 256
-#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 4
+#define BNXT_ULP_ACT_MATCH_LIST_MAX_SZ 24
 #define BNXT_ULP_ACT_HID_LOW_PRIME 7919
 #define BNXT_ULP_ACT_HID_HIGH_PRIME 7919
-#define BNXT_ULP_ACT_HID_SHFTR 0
+#define BNXT_ULP_ACT_HID_SHFTR 23
 #define BNXT_ULP_ACT_HID_SHFTL 23
 #define BNXT_ULP_ACT_HID_MASK 255
 #define BNXT_ULP_CACHE_TBL_IDENT_MAX_NUM 2
-#define BNXT_ULP_GLB_RESOURCE_INFO_TBL_MAX_SZ 3
-#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 0
+#define BNXT_ULP_GLB_RESOURCE_TBL_MAX_SZ 5
+#define BNXT_ULP_GLB_TEMPLATE_TBL_MAX_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -242,7 +242,8 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 13,
 	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
 	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
-	BNXT_ULP_REGFILE_INDEX_LAST = 16
+	BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR = 16,
+	BNXT_ULP_REGFILE_INDEX_LAST = 17
 };
 
 enum bnxt_ulp_search_before_alloc {
@@ -252,18 +253,18 @@ enum bnxt_ulp_search_before_alloc {
 };
 
 enum bnxt_ulp_fdb_resource_flags {
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01,
-	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_INGR = 0x00,
+	BNXT_ULP_FDB_RESOURCE_FLAGS_DIR_EGR = 0x01
 };
 
 enum bnxt_ulp_fdb_type {
-	BNXT_ULP_FDB_TYPE_DEFAULT = 1,
-	BNXT_ULP_FDB_TYPE_REGULAR = 0
+	BNXT_ULP_FDB_TYPE_REGULAR = 0,
+	BNXT_ULP_FDB_TYPE_DEFAULT = 1
 };
 
 enum bnxt_ulp_flow_dir_bitmask {
-	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000,
-	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000
+	BNXT_ULP_FLOW_DIR_BITMASK_ING = 0x0000000000000000,
+	BNXT_ULP_FLOW_DIR_BITMASK_EGR = 0x8000000000000000
 };
 
 enum bnxt_ulp_match_type_bitmask {
@@ -285,190 +286,66 @@ enum bnxt_ulp_resource_func {
 };
 
 enum bnxt_ulp_resource_sub_type {
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
-	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
-	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL = 0,
 	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_VFR_CFA_ACTION = 1,
-	BNXT_ULP_RESOURCE_SUB_TYPE_NOT_USED = 0
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT = 2,
+	BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT = 3,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM = 0,
+	BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM = 1
 };
 
 enum bnxt_ulp_sym {
-	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
-	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
-	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
-	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
-	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
-	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
-	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
-	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
-	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
-	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
-	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
-	BNXT_ULP_SYM_ECV_VALID_NO = 0,
-	BNXT_ULP_SYM_ECV_VALID_YES = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
-	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
-	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
-	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
-	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
-	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
-	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
-	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
-	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
-	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
-	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
-	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
-	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
-	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
-	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
-	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
-	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
-	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
-	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
-	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
-	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
-	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
-	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
-	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
-	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
-	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
-	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
-	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
-	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_PKT_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_PKT_TYPE_L2 = 0,
-	BNXT_ULP_SYM_POP_VLAN_NO = 0,
-	BNXT_ULP_SYM_POP_VLAN_YES = 1,
 	BNXT_ULP_SYM_RECYCLE_CNT_IGNORE = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
 	BNXT_ULP_SYM_RECYCLE_CNT_ONE = 1,
-	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
 	BNXT_ULP_SYM_RECYCLE_CNT_TWO = 2,
-	BNXT_ULP_SYM_RECYCLE_CNT_ZERO = 0,
+	BNXT_ULP_SYM_RECYCLE_CNT_THREE = 3,
+	BNXT_ULP_SYM_AGG_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_AGG_ERROR_NO = 0,
+	BNXT_ULP_SYM_AGG_ERROR_YES = 1,
 	BNXT_ULP_SYM_RESERVED_IGNORE = 0,
-	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
-	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
-	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_IGNORE = 0,
+	BNXT_ULP_SYM_HREC_NEXT_NO = 0,
+	BNXT_ULP_SYM_HREC_NEXT_YES = 1,
 	BNXT_ULP_SYM_TL2_HDR_VALID_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_NO = 0,
 	BNXT_ULP_SYM_TL2_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
-	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_IGNORE = 0,
-	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
 	BNXT_ULP_SYM_TL2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_TL2_UC_MC_BC_BC = 3,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_IGNORE = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_NO = 0,
 	BNXT_ULP_SYM_TL2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_TL2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL3_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV4 = 0,
 	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV6 = 1,
-	BNXT_ULP_SYM_TL3_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL3_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
-	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_TL3_HDR_ISIP_YES = 1,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_IGNORE = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_NO = 0,
 	BNXT_ULP_SYM_TL3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_TL3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TL4_HDR_ERROR_YES = 1,
@@ -478,40 +355,164 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_TL4_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_TCP = 0,
 	BNXT_ULP_SYM_TL4_HDR_TYPE_UDP = 1,
-	BNXT_ULP_SYM_TL4_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TL4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_NO = 0,
 	BNXT_ULP_SYM_TUN_HDR_ERROR_YES = 1,
-	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GENEVE = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_GRE = 3,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_IGNORE = 0,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV4 = 4,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV6 = 5,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_PPPOE = 6,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR1 = 8,
 	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR2 = 9,
-	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_IGNORE = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_NO = 0,
-	BNXT_ULP_SYM_TUN_HDR_VALID_YES = 1,
-	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
-	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
+	BNXT_ULP_SYM_TUN_HDR_FLAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L2_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_IGNORE = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_UC = 0,
+	BNXT_ULP_SYM_L2_UC_MC_BC_MC = 2,
+	BNXT_ULP_SYM_L2_UC_MC_BC_BC = 3,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_IGNORE = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_NO = 0,
+	BNXT_ULP_SYM_L2_VTAG_PRESENT_YES = 1,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_IGNORE = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_NO = 0,
+	BNXT_ULP_SYM_L2_TWO_VTAGS_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
+	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
+	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
+	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
+	BNXT_ULP_SYM_L3_HDR_ISIP_IGNORE = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_NO = 0,
+	BNXT_ULP_SYM_L3_HDR_ISIP_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_SRC_YES = 1,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_IGNORE = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_NO = 0,
+	BNXT_ULP_SYM_L3_IPV6_CMP_DST_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_VALID_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_VALID_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_ERROR_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_ERROR_YES = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
+	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_IGNORE = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_NO = 0,
+	BNXT_ULP_SYM_L4_HDR_IS_UDP_TCP_YES = 1,
+	BNXT_ULP_SYM_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
+	BNXT_ULP_SYM_ECV_VALID_NO = 0,
+	BNXT_ULP_SYM_ECV_VALID_YES = 1,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_CUSTOM_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_L2_EN_NO = 0,
+	BNXT_ULP_SYM_ECV_L2_EN_YES = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP = 0,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI = 1,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_IVLAN_PRI = 2,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_REMAP_DIFFSERV = 3,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI = 4,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_REMAP_DIFFSERV = 5,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_ENCAP_PRI = 6,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_REMAP_DIFFSERV = 7,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_0 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_1 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_2 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_3 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_4 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_5 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_6 = 8,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_0_PRI_7 = 8,
+	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
+	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
 	BNXT_ULP_SYM_WH_PLUS_INT_ACT_REC = 1,
-	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
-	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_WH_PLUS_EXT_ACT_REC = 0,
 	BNXT_ULP_SYM_WH_PLUS_UC_ACT_REC = 0,
+	BNXT_ULP_SYM_WH_PLUS_MC_ACT_REC = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_DROP_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_POP_VLAN_NO = 0,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_YES = 1,
+	BNXT_ULP_SYM_ACT_REC_METER_EN_NO = 0,
+	BNXT_ULP_SYM_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_SYM_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY_LOOPBACK_PORT = 16,
+	BNXT_ULP_SYM_STINGRAY_EXT_EM_MAX_KEY_SIZE = 448,
+	BNXT_ULP_SYM_STINGRAY2_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_THOR_LOOPBACK_PORT = 3,
+	BNXT_ULP_SYM_MATCH_TYPE_EM = 0,
+	BNXT_ULP_SYM_MATCH_TYPE_WM = 1,
+	BNXT_ULP_SYM_IP_PROTO_ICMP = 1,
+	BNXT_ULP_SYM_IP_PROTO_IGMP = 2,
+	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
+	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
+	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_YES = 1
 };
 
 enum bnxt_ulp_wh_plus {
-	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448,
-	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4
+	BNXT_ULP_WH_PLUS_LOOPBACK_PORT = 4,
+	BNXT_ULP_WH_PLUS_EXT_EM_MAX_KEY_SIZE = 448
 };
 
 enum bnxt_ulp_act_prop_sz {
@@ -604,18 +605,44 @@ enum bnxt_ulp_act_prop_idx {
 
 enum bnxt_ulp_class_hid {
 	BNXT_ULP_CLASS_HID_0080 = 0x0080,
-	BNXT_ULP_CLASS_HID_0000 = 0x0000,
-	BNXT_ULP_CLASS_HID_0087 = 0x0087
+	BNXT_ULP_CLASS_HID_0087 = 0x0087,
+	BNXT_ULP_CLASS_HID_0000 = 0x0000
 };
 
 enum bnxt_ulp_act_hid {
-	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_0002 = 0x0002,
+	BNXT_ULP_ACT_HID_0022 = 0x0022,
+	BNXT_ULP_ACT_HID_0026 = 0x0026,
+	BNXT_ULP_ACT_HID_0006 = 0x0006,
+	BNXT_ULP_ACT_HID_0009 = 0x0009,
 	BNXT_ULP_ACT_HID_0029 = 0x0029,
-	BNXT_ULP_ACT_HID_0040 = 0x0040
+	BNXT_ULP_ACT_HID_002d = 0x002d,
+	BNXT_ULP_ACT_HID_004b = 0x004b,
+	BNXT_ULP_ACT_HID_004a = 0x004a,
+	BNXT_ULP_ACT_HID_004f = 0x004f,
+	BNXT_ULP_ACT_HID_004e = 0x004e,
+	BNXT_ULP_ACT_HID_006c = 0x006c,
+	BNXT_ULP_ACT_HID_0070 = 0x0070,
+	BNXT_ULP_ACT_HID_0021 = 0x0021,
+	BNXT_ULP_ACT_HID_0025 = 0x0025,
+	BNXT_ULP_ACT_HID_0043 = 0x0043,
+	BNXT_ULP_ACT_HID_0042 = 0x0042,
+	BNXT_ULP_ACT_HID_0047 = 0x0047,
+	BNXT_ULP_ACT_HID_0046 = 0x0046,
+	BNXT_ULP_ACT_HID_0064 = 0x0064,
+	BNXT_ULP_ACT_HID_0068 = 0x0068,
+	BNXT_ULP_ACT_HID_00a1 = 0x00a1,
+	BNXT_ULP_ACT_HID_00df = 0x00df
 };
 
 enum bnxt_ulp_df_tpl {
 	BNXT_ULP_DF_TPL_PORT_TO_VS = 1,
-	BNXT_ULP_DF_TPL_VS_TO_PORT = 2
+	BNXT_ULP_DF_TPL_VS_TO_PORT = 2,
+	BNXT_ULP_DF_TPL_VFREP_TO_VF = 3,
+	BNXT_ULP_DF_TPL_VF_TO_VFREP = 4,
+	BNXT_ULP_DF_TPL_DRV_FUNC_SVIF_PUSH_VLAN = 5,
+	BNXT_ULP_DF_TPL_PORT_SVIF_VID_VNIC_POP_VLAN = 6,
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC = 7
 };
+
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
index 84b952304..769542042 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_field.h
@@ -6,220 +6,275 @@
 #ifndef ULP_HDR_FIELD_ENUMS_H_
 #define ULP_HDR_FIELD_ENUMS_H_
 
-enum bnxt_ulp_hf0 {
-	BNXT_ULP_HF0_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF0_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF0_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF0_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF0_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF0_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF0_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF0_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF0_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF0_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF0_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF0_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF0_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF0_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF0_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF0_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF0_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF0_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF0_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF0_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF0_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF0_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF0_IDX_O_UDP_CSUM              = 23
-};
-
 enum bnxt_ulp_hf1 {
-	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF1_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF1_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF1_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF1_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF1_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF1_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF1_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF1_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF1_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF1_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF1_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF1_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF1_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF1_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF1_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF1_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF1_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF1_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF1_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF1_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF1_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF1_IDX_O_UDP_CSUM              = 23
+	BNXT_ULP_HF1_IDX_SVIF_INDEX              = 0
 };
 
 enum bnxt_ulp_hf2 {
-	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0,
-	BNXT_ULP_HF2_IDX_O_ETH_DMAC              = 1,
-	BNXT_ULP_HF2_IDX_O_ETH_SMAC              = 2,
-	BNXT_ULP_HF2_IDX_O_ETH_TYPE              = 3,
-	BNXT_ULP_HF2_IDX_OO_VLAN_CFI_PRI         = 4,
-	BNXT_ULP_HF2_IDX_OO_VLAN_VID             = 5,
-	BNXT_ULP_HF2_IDX_OO_VLAN_TYPE            = 6,
-	BNXT_ULP_HF2_IDX_OI_VLAN_CFI_PRI         = 7,
-	BNXT_ULP_HF2_IDX_OI_VLAN_VID             = 8,
-	BNXT_ULP_HF2_IDX_OI_VLAN_TYPE            = 9,
-	BNXT_ULP_HF2_IDX_O_IPV4_VER              = 10,
-	BNXT_ULP_HF2_IDX_O_IPV4_TOS              = 11,
-	BNXT_ULP_HF2_IDX_O_IPV4_LEN              = 12,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_ID          = 13,
-	BNXT_ULP_HF2_IDX_O_IPV4_FRAG_OFF         = 14,
-	BNXT_ULP_HF2_IDX_O_IPV4_TTL              = 15,
-	BNXT_ULP_HF2_IDX_O_IPV4_NEXT_PID         = 16,
-	BNXT_ULP_HF2_IDX_O_IPV4_CSUM             = 17,
-	BNXT_ULP_HF2_IDX_O_IPV4_SRC_ADDR         = 18,
-	BNXT_ULP_HF2_IDX_O_IPV4_DST_ADDR         = 19,
-	BNXT_ULP_HF2_IDX_O_UDP_SRC_PORT          = 20,
-	BNXT_ULP_HF2_IDX_O_UDP_DST_PORT          = 21,
-	BNXT_ULP_HF2_IDX_O_UDP_LENGTH            = 22,
-	BNXT_ULP_HF2_IDX_O_UDP_CSUM              = 23,
-	BNXT_ULP_HF2_IDX_T_VXLAN_FLAGS           = 24,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD0           = 25,
-	BNXT_ULP_HF2_IDX_T_VXLAN_VNI             = 26,
-	BNXT_ULP_HF2_IDX_T_VXLAN_RSVD1           = 27,
-	BNXT_ULP_HF2_IDX_I_ETH_DMAC              = 28,
-	BNXT_ULP_HF2_IDX_I_ETH_SMAC              = 29,
-	BNXT_ULP_HF2_IDX_I_ETH_TYPE              = 30,
-	BNXT_ULP_HF2_IDX_IO_VLAN_CFI_PRI         = 31,
-	BNXT_ULP_HF2_IDX_IO_VLAN_VID             = 32,
-	BNXT_ULP_HF2_IDX_IO_VLAN_TYPE            = 33,
-	BNXT_ULP_HF2_IDX_II_VLAN_CFI_PRI         = 34,
-	BNXT_ULP_HF2_IDX_II_VLAN_VID             = 35,
-	BNXT_ULP_HF2_IDX_II_VLAN_TYPE            = 36,
-	BNXT_ULP_HF2_IDX_I_IPV4_VER              = 37,
-	BNXT_ULP_HF2_IDX_I_IPV4_TOS              = 38,
-	BNXT_ULP_HF2_IDX_I_IPV4_LEN              = 39,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_ID          = 40,
-	BNXT_ULP_HF2_IDX_I_IPV4_FRAG_OFF         = 41,
-	BNXT_ULP_HF2_IDX_I_IPV4_TTL              = 42,
-	BNXT_ULP_HF2_IDX_I_IPV4_NEXT_PID         = 43,
-	BNXT_ULP_HF2_IDX_I_IPV4_CSUM             = 44,
-	BNXT_ULP_HF2_IDX_I_IPV4_SRC_ADDR         = 45,
-	BNXT_ULP_HF2_IDX_I_IPV4_DST_ADDR         = 46,
-	BNXT_ULP_HF2_IDX_I_UDP_SRC_PORT          = 47,
-	BNXT_ULP_HF2_IDX_I_UDP_DST_PORT          = 48,
-	BNXT_ULP_HF2_IDX_I_UDP_LENGTH            = 49,
-	BNXT_ULP_HF2_IDX_I_UDP_CSUM              = 50
-};
-
-enum bnxt_ulp_hf_bitmask0 {
-	BNXT_ULP_HF0_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF0_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF0_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF0_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF0_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF0_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF2_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf3 {
+	BNXT_ULP_HF3_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf4 {
+	BNXT_ULP_HF4_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf5 {
+	BNXT_ULP_HF5_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf6 {
+	BNXT_ULP_HF6_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf7 {
+	BNXT_ULP_HF7_IDX_SVIF_INDEX              = 0
+};
+
+enum bnxt_ulp_hf8 {
+	BNXT_ULP_HF8_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF8_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF8_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF8_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF8_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF8_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF8_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF8_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF8_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF8_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF8_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF8_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF8_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF8_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF8_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF8_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF8_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF8_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF8_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF8_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF8_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF8_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF8_IDX_O_UDP_CSUM              = 23
+};
+
+enum bnxt_ulp_hf9 {
+	BNXT_ULP_HF9_IDX_SVIF_INDEX              = 0,
+	BNXT_ULP_HF9_IDX_O_ETH_DMAC              = 1,
+	BNXT_ULP_HF9_IDX_O_ETH_SMAC              = 2,
+	BNXT_ULP_HF9_IDX_O_ETH_TYPE              = 3,
+	BNXT_ULP_HF9_IDX_OO_VLAN_CFI_PRI         = 4,
+	BNXT_ULP_HF9_IDX_OO_VLAN_VID             = 5,
+	BNXT_ULP_HF9_IDX_OO_VLAN_TYPE            = 6,
+	BNXT_ULP_HF9_IDX_OI_VLAN_CFI_PRI         = 7,
+	BNXT_ULP_HF9_IDX_OI_VLAN_VID             = 8,
+	BNXT_ULP_HF9_IDX_OI_VLAN_TYPE            = 9,
+	BNXT_ULP_HF9_IDX_O_IPV4_VER              = 10,
+	BNXT_ULP_HF9_IDX_O_IPV4_TOS              = 11,
+	BNXT_ULP_HF9_IDX_O_IPV4_LEN              = 12,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_ID          = 13,
+	BNXT_ULP_HF9_IDX_O_IPV4_FRAG_OFF         = 14,
+	BNXT_ULP_HF9_IDX_O_IPV4_TTL              = 15,
+	BNXT_ULP_HF9_IDX_O_IPV4_PROTO_ID         = 16,
+	BNXT_ULP_HF9_IDX_O_IPV4_CSUM             = 17,
+	BNXT_ULP_HF9_IDX_O_IPV4_SRC_ADDR         = 18,
+	BNXT_ULP_HF9_IDX_O_IPV4_DST_ADDR         = 19,
+	BNXT_ULP_HF9_IDX_O_UDP_SRC_PORT          = 20,
+	BNXT_ULP_HF9_IDX_O_UDP_DST_PORT          = 21,
+	BNXT_ULP_HF9_IDX_O_UDP_LENGTH            = 22,
+	BNXT_ULP_HF9_IDX_O_UDP_CSUM              = 23,
+	BNXT_ULP_HF9_IDX_T_VXLAN_FLAGS           = 24,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD0           = 25,
+	BNXT_ULP_HF9_IDX_T_VXLAN_VNI             = 26,
+	BNXT_ULP_HF9_IDX_T_VXLAN_RSVD1           = 27,
+	BNXT_ULP_HF9_IDX_I_ETH_DMAC              = 28,
+	BNXT_ULP_HF9_IDX_I_ETH_SMAC              = 29,
+	BNXT_ULP_HF9_IDX_I_ETH_TYPE              = 30,
+	BNXT_ULP_HF9_IDX_IO_VLAN_CFI_PRI         = 31,
+	BNXT_ULP_HF9_IDX_IO_VLAN_VID             = 32,
+	BNXT_ULP_HF9_IDX_IO_VLAN_TYPE            = 33,
+	BNXT_ULP_HF9_IDX_II_VLAN_CFI_PRI         = 34,
+	BNXT_ULP_HF9_IDX_II_VLAN_VID             = 35,
+	BNXT_ULP_HF9_IDX_II_VLAN_TYPE            = 36,
+	BNXT_ULP_HF9_IDX_I_IPV4_VER              = 37,
+	BNXT_ULP_HF9_IDX_I_IPV4_TOS              = 38,
+	BNXT_ULP_HF9_IDX_I_IPV4_LEN              = 39,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_ID          = 40,
+	BNXT_ULP_HF9_IDX_I_IPV4_FRAG_OFF         = 41,
+	BNXT_ULP_HF9_IDX_I_IPV4_TTL              = 42,
+	BNXT_ULP_HF9_IDX_I_IPV4_PROTO_ID         = 43,
+	BNXT_ULP_HF9_IDX_I_IPV4_CSUM             = 44,
+	BNXT_ULP_HF9_IDX_I_IPV4_SRC_ADDR         = 45,
+	BNXT_ULP_HF9_IDX_I_IPV4_DST_ADDR         = 46,
+	BNXT_ULP_HF9_IDX_I_UDP_SRC_PORT          = 47,
+	BNXT_ULP_HF9_IDX_I_UDP_DST_PORT          = 48,
+	BNXT_ULP_HF9_IDX_I_UDP_LENGTH            = 49,
+	BNXT_ULP_HF9_IDX_I_UDP_CSUM              = 50
+};
+
+enum bnxt_ulp_hf10 {
+	BNXT_ULP_HF10_IDX_SVIF_INDEX             = 0,
+	BNXT_ULP_HF10_IDX_O_ETH_DMAC             = 1,
+	BNXT_ULP_HF10_IDX_O_ETH_SMAC             = 2,
+	BNXT_ULP_HF10_IDX_O_ETH_TYPE             = 3,
+	BNXT_ULP_HF10_IDX_OO_VLAN_CFI_PRI        = 4,
+	BNXT_ULP_HF10_IDX_OO_VLAN_VID            = 5,
+	BNXT_ULP_HF10_IDX_OO_VLAN_TYPE           = 6,
+	BNXT_ULP_HF10_IDX_OI_VLAN_CFI_PRI        = 7,
+	BNXT_ULP_HF10_IDX_OI_VLAN_VID            = 8,
+	BNXT_ULP_HF10_IDX_OI_VLAN_TYPE           = 9,
+	BNXT_ULP_HF10_IDX_O_IPV4_VER             = 10,
+	BNXT_ULP_HF10_IDX_O_IPV4_TOS             = 11,
+	BNXT_ULP_HF10_IDX_O_IPV4_LEN             = 12,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_ID         = 13,
+	BNXT_ULP_HF10_IDX_O_IPV4_FRAG_OFF        = 14,
+	BNXT_ULP_HF10_IDX_O_IPV4_TTL             = 15,
+	BNXT_ULP_HF10_IDX_O_IPV4_PROTO_ID        = 16,
+	BNXT_ULP_HF10_IDX_O_IPV4_CSUM            = 17,
+	BNXT_ULP_HF10_IDX_O_IPV4_SRC_ADDR        = 18,
+	BNXT_ULP_HF10_IDX_O_IPV4_DST_ADDR        = 19,
+	BNXT_ULP_HF10_IDX_O_UDP_SRC_PORT         = 20,
+	BNXT_ULP_HF10_IDX_O_UDP_DST_PORT         = 21,
+	BNXT_ULP_HF10_IDX_O_UDP_LENGTH           = 22,
+	BNXT_ULP_HF10_IDX_O_UDP_CSUM             = 23
 };
 
 enum bnxt_ulp_hf_bitmask1 {
-	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF1_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF1_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF1_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF1_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF1_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+	BNXT_ULP_HF1_BITMASK_SVIF_INDEX          = 0x8000000000000000
 };
 
 enum bnxt_ulp_hf_bitmask2 {
-	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
-	BNXT_ULP_HF2_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
-	BNXT_ULP_HF2_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
-	BNXT_ULP_HF2_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_VER          = 0x0020000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_NEXT_PID     = 0x0000800000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
-	BNXT_ULP_HF2_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
-	BNXT_ULP_HF2_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
-	BNXT_ULP_HF2_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
-	BNXT_ULP_HF2_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
-	BNXT_ULP_HF2_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_VID         = 0x0000000010000000,
-	BNXT_ULP_HF2_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_VER          = 0x0000000004000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_NEXT_PID     = 0x0000000000100000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
-	BNXT_ULP_HF2_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
-	BNXT_ULP_HF2_BITMASK_I_UDP_CSUM          = 0x0000000000002000
+	BNXT_ULP_HF2_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask3 {
+	BNXT_ULP_HF3_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask4 {
+	BNXT_ULP_HF4_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask5 {
+	BNXT_ULP_HF5_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask6 {
+	BNXT_ULP_HF6_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask7 {
+	BNXT_ULP_HF7_BITMASK_SVIF_INDEX          = 0x8000000000000000
+};
+
+enum bnxt_ulp_hf_bitmask8 {
+	BNXT_ULP_HF8_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF8_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF8_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF8_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF8_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF8_BITMASK_O_UDP_CSUM          = 0x0000010000000000
+};
+
+enum bnxt_ulp_hf_bitmask9 {
+	BNXT_ULP_HF9_BITMASK_SVIF_INDEX          = 0x8000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_DMAC          = 0x4000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_SMAC          = 0x2000000000000000,
+	BNXT_ULP_HF9_BITMASK_O_ETH_TYPE          = 0x1000000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_CFI_PRI     = 0x0800000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_VID         = 0x0400000000000000,
+	BNXT_ULP_HF9_BITMASK_OO_VLAN_TYPE        = 0x0200000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_CFI_PRI     = 0x0100000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_VID         = 0x0080000000000000,
+	BNXT_ULP_HF9_BITMASK_OI_VLAN_TYPE        = 0x0040000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_VER          = 0x0020000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TOS          = 0x0010000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_LEN          = 0x0008000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_ID      = 0x0004000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_FRAG_OFF     = 0x0002000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_TTL          = 0x0001000000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_PROTO_ID     = 0x0000800000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_CSUM         = 0x0000400000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_SRC_ADDR     = 0x0000200000000000,
+	BNXT_ULP_HF9_BITMASK_O_IPV4_DST_ADDR     = 0x0000100000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_SRC_PORT      = 0x0000080000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_DST_PORT      = 0x0000040000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_LENGTH        = 0x0000020000000000,
+	BNXT_ULP_HF9_BITMASK_O_UDP_CSUM          = 0x0000010000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_FLAGS       = 0x0000008000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD0       = 0x0000004000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_VNI         = 0x0000002000000000,
+	BNXT_ULP_HF9_BITMASK_T_VXLAN_RSVD1       = 0x0000001000000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_DMAC          = 0x0000000800000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_SMAC          = 0x0000000400000000,
+	BNXT_ULP_HF9_BITMASK_I_ETH_TYPE          = 0x0000000200000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_CFI_PRI     = 0x0000000100000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_VID         = 0x0000000080000000,
+	BNXT_ULP_HF9_BITMASK_IO_VLAN_TYPE        = 0x0000000040000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_CFI_PRI     = 0x0000000020000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_VID         = 0x0000000010000000,
+	BNXT_ULP_HF9_BITMASK_II_VLAN_TYPE        = 0x0000000008000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_VER          = 0x0000000004000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TOS          = 0x0000000002000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_LEN          = 0x0000000001000000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_ID      = 0x0000000000800000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_FRAG_OFF     = 0x0000000000400000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_TTL          = 0x0000000000200000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_PROTO_ID     = 0x0000000000100000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_CSUM         = 0x0000000000080000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_SRC_ADDR     = 0x0000000000040000,
+	BNXT_ULP_HF9_BITMASK_I_IPV4_DST_ADDR     = 0x0000000000020000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_SRC_PORT      = 0x0000000000010000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_DST_PORT      = 0x0000000000008000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_LENGTH        = 0x0000000000004000,
+	BNXT_ULP_HF9_BITMASK_I_UDP_CSUM          = 0x0000000000002000
 };
 
+enum bnxt_ulp_hf_bitmask10 {
+	BNXT_ULP_HF10_BITMASK_SVIF_INDEX         = 0x8000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_DMAC         = 0x4000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_SMAC         = 0x2000000000000000,
+	BNXT_ULP_HF10_BITMASK_O_ETH_TYPE         = 0x1000000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_CFI_PRI    = 0x0800000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_VID        = 0x0400000000000000,
+	BNXT_ULP_HF10_BITMASK_OO_VLAN_TYPE       = 0x0200000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_CFI_PRI    = 0x0100000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_VID        = 0x0080000000000000,
+	BNXT_ULP_HF10_BITMASK_OI_VLAN_TYPE       = 0x0040000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_VER         = 0x0020000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TOS         = 0x0010000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_LEN         = 0x0008000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_ID     = 0x0004000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_FRAG_OFF    = 0x0002000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_TTL         = 0x0001000000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_PROTO_ID    = 0x0000800000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_CSUM        = 0x0000400000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_SRC_ADDR    = 0x0000200000000000,
+	BNXT_ULP_HF10_BITMASK_O_IPV4_DST_ADDR    = 0x0000100000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_SRC_PORT     = 0x0000080000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_DST_PORT     = 0x0000040000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_LENGTH       = 0x0000020000000000,
+	BNXT_ULP_HF10_BITMASK_O_UDP_CSUM         = 0x0000010000000000
+};
 #endif
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 7c440e3a4..f0a57cf65 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -294,60 +294,72 @@ struct bnxt_ulp_rte_act_info ulp_act_info[] = {
 
 struct bnxt_ulp_cache_tbl_params ulp_cache_tbl_params[] = {
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_L2_CNTXT_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_RX] = {
-		.num_entries        = 16384
+		TF_DIR_RX] = {
+		.num_entries             = 16384
 	},
 	[BNXT_ULP_RESOURCE_SUB_TYPE_CACHE_TYPE_PROFILE_TCAM << 1 |
-	TF_DIR_TX] = {
-		.num_entries        = 16384
+		TF_DIR_TX] = {
+		.num_entries             = 16384
 	}
 };
 
 struct bnxt_ulp_device_params ulp_device_params[BNXT_ULP_DEVICE_ID_LAST] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
-	.flow_mem_type          = BNXT_ULP_FLOW_MEM_TYPE_EXT,
-	.byte_order             = BNXT_ULP_BYTE_ORDER_LE,
-	.encap_byte_swap        = 1,
-	.flow_db_num_entries    = 32768,
-	.mark_db_lfid_entries   = 65536,
-	.mark_db_gfid_entries   = 65536,
-	.flow_count_db_entries  = 16384,
-	.num_resources_per_flow = 8,
-	.num_phy_ports          = 2,
-	.ext_cntr_table_type    = 0,
-	.byte_count_mask        = 0x00000003ffffffff,
-	.packet_count_mask      = 0xfffffffc00000000,
-	.byte_count_shift       = 0,
-	.packet_count_shift     = 36
+		.flow_mem_type           = BNXT_ULP_FLOW_MEM_TYPE_EXT,
+		.byte_order              = BNXT_ULP_BYTE_ORDER_LE,
+		.encap_byte_swap         = 1,
+		.flow_db_num_entries     = 32768,
+		.mark_db_lfid_entries    = 65536,
+		.mark_db_gfid_entries    = 65536,
+		.flow_count_db_entries   = 16384,
+		.num_resources_per_flow  = 8,
+		.num_phy_ports           = 2,
+		.ext_cntr_table_type     = 0,
+		.byte_count_mask         = 0x0000000fffffffff,
+		.packet_count_mask       = 0xffffffff00000000,
+		.byte_count_shift        = 0,
+		.packet_count_shift      = 36
 	}
 };
 
 struct bnxt_ulp_glb_resource_info ulp_glb_resource_tbl[] = {
 	[0] = {
-	.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index       = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction               = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_RX
 	},
 	[1] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_PROF_FUNC,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
-	.direction          = TF_DIR_TX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_PROF_FUNC,
+	.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_PROF_FUNC_ID,
+		.direction               = TF_DIR_TX
 	},
 	[2] = {
-	.resource_func      = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
-	.resource_type      = TF_IDENT_TYPE_L2_CTXT,
-	.glb_regfile_index  = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
-	.direction          = TF_DIR_RX
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_RX
+	},
+	[3] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+		.resource_type           = TF_IDENT_TYPE_L2_CTXT,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_L2_CNTXT_ID,
+		.direction               = TF_DIR_TX
+	},
+	[4] = {
+		.resource_func           = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+		.resource_type           = TF_TBL_TYPE_FULL_ACT_RECORD,
+		.glb_regfile_index = BNXT_ULP_GLB_REGFILE_INDEX_GLB_LB_AREC_PTR,
+		.direction               = TF_DIR_TX
 	}
 };
 
@@ -547,10 +559,11 @@ struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
 };
 
 uint32_t bnxt_ulp_encap_vtag_map[] = {
-	[0] = BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
-	[1] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
-	[2] = BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_NOP,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_1_ENCAP_PRI,
+	BNXT_ULP_SYM_ECV_VTAG_TYPE_ADD_2_ENCAP_PRI
 };
 
 uint32_t ulp_glb_template_tbl[] = {
+	BNXT_ULP_DF_TPL_LOOPBACK_ACTION_REC
 };
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 46/51] net/bnxt: create default flow rules for the VF-rep
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (44 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
                           ` (6 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Invoke three new APIs to create/destroy the default flows and to get
the action pointer for a default flow.
Change ulp_intf_update() to accept an rte_eth_dev as input and invoke
it from the VF representor start function.
The ULP Mark Manager indicates whether the cfa_code returned in the
Rx completion descriptor belongs to one of the default flow rules
created for the VF representor conduit. In that case the returned
mark_id is the VF representor's DPDK port id, which is used to look
up the corresponding rte_eth_dev in bnxt_vfr_recv().
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |   4 +-
 drivers/net/bnxt/bnxt_reps.c | 134 ++++++++++++++++++++++++-----------
 drivers/net/bnxt/bnxt_reps.h |   3 +-
 drivers/net/bnxt/bnxt_rxr.c  |  25 +++----
 drivers/net/bnxt/bnxt_txq.h  |   1 +
 5 files changed, 111 insertions(+), 56 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 32acced60..f16bf3319 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -806,8 +806,10 @@ struct bnxt_vf_representor {
 	uint16_t		fw_fid;
 	uint16_t		dflt_vnic_id;
 	uint16_t		svif;
-	uint16_t		tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	uint16_t		rx_cfa_code;
+	uint32_t		rep2vf_flow_id;
+	uint32_t		vf2rep_flow_id;
 	/* Private data store of associated PF/Trusted VF */
 	struct rte_eth_dev	*parent_dev;
 	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index ea6f0010f..a37a06184 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -12,6 +12,9 @@
 #include "bnxt_txr.h"
 #include "bnxt_hwrm.h"
 #include "hsi_struct_def_dpdk.h"
+#include "bnxt_tf_common.h"
+#include "ulp_port_db.h"
+#include "ulp_flow_db.h"
 
 static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_infos_get = bnxt_vf_rep_dev_info_get_op,
@@ -29,30 +32,20 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 };
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf)
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf)
 {
 	struct bnxt_sw_rx_bd *prod_rx_buf;
 	struct bnxt_rx_ring_info *rep_rxr;
 	struct bnxt_rx_queue *rep_rxq;
 	struct rte_eth_dev *vfr_eth_dev;
 	struct bnxt_vf_representor *vfr_bp;
-	uint16_t vf_id;
 	uint16_t mask;
 	uint8_t que;
 
-	vf_id = bp->cfa_code_map[cfa_code];
-	/* cfa_code is invalid OR vf_id > MAX REP. Assume normal Rx */
-	if (vf_id == BNXT_VF_IDX_INVALID || vf_id > BNXT_MAX_VF_REPS)
-		return 1;
-	vfr_eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
+	vfr_eth_dev = &rte_eth_devices[port_id];
 	if (!vfr_eth_dev)
 		return 1;
 	vfr_bp = vfr_eth_dev->data->dev_private;
-	if (vfr_bp->rx_cfa_code != cfa_code) {
-		/* cfa_code not meant for this VF rep!!?? */
-		return 1;
-	}
 	/* If rxq_id happens to be > max rep_queue, use rxq0 */
 	que = queue_id < BNXT_MAX_VF_REP_RINGS ? queue_id : 0;
 	rep_rxq = vfr_bp->rx_queues[que];
@@ -127,7 +120,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	pthread_mutex_lock(&parent->rep_info->vfr_lock);
 	ptxq = parent->tx_queues[qid];
 
-	ptxq->tx_cfa_action = vf_rep_bp->tx_cfa_action;
+	ptxq->vfr_tx_cfa_action = vf_rep_bp->vfr_tx_cfa_action;
 
 	for (i = 0; i < nb_pkts; i++) {
 		vf_rep_bp->tx_bytes[qid] += tx_pkts[i]->pkt_len;
@@ -135,7 +128,7 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	}
 
 	rc = bnxt_xmit_pkts(ptxq, tx_pkts, nb_pkts);
-	ptxq->tx_cfa_action = 0;
+	ptxq->vfr_tx_cfa_action = 0;
 	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
 
 	return rc;
@@ -252,10 +245,67 @@ int bnxt_vf_rep_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_compl)
 	return rc;
 }
 
-static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
+static int bnxt_tf_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
+{
+	int rc;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
+	struct rte_eth_dev *parent_dev = vfr->parent_dev;
+	struct bnxt *parent_bp = parent_dev->data->dev_private;
+	uint16_t vfr_port_id = vfr_ethdev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(vfr_port_id >> 8) & 0xff, vfr_port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	ulp_port_db_dev_port_intf_update(parent_bp->ulp_ctx, vfr_ethdev);
+
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VFREP_TO_VF,
+				     &vfr->rep2vf_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VFR->VF failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VFR->VF! ***\n");
+	BNXT_TF_DBG(DEBUG, "rep2vf_flow_id = %d\n", vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_db_cfa_action_get(parent_bp->ulp_ctx,
+						vfr->rep2vf_flow_id,
+						&vfr->vfr_tx_cfa_action);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Failed to get action_ptr for VFR->VF dflt rule\n");
+		return -EIO;
+	}
+	BNXT_TF_DBG(DEBUG, "tx_cfa_action = %d\n", vfr->vfr_tx_cfa_action);
+	rc = ulp_default_flow_create(parent_dev, param_list,
+				     BNXT_ULP_DF_TPL_VF_TO_VFREP,
+				     &vfr->vf2rep_flow_id);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG,
+			    "Default flow rule creation for VF->VFR failed!\n");
+		return -EIO;
+	}
+
+	BNXT_TF_DBG(DEBUG, "*** Default flow rule created for VF->VFR! ***\n");
+	BNXT_TF_DBG(DEBUG, "vfr2rep_flow_id = %d\n", vfr->vf2rep_flow_id);
+
+	return 0;
+}
+
+static int bnxt_vfr_alloc(struct rte_eth_dev *vfr_ethdev)
 {
 	int rc = 0;
-	struct bnxt *parent_bp;
+	struct bnxt_vf_representor *vfr = vfr_ethdev->data->dev_private;
 
 	if (!vfr || !vfr->parent_dev) {
 		PMD_DRV_LOG(ERR,
@@ -263,10 +313,8 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 		return -ENOMEM;
 	}
 
-	parent_bp = vfr->parent_dev->data->dev_private;
-
 	/* Check if representor has been already allocated in FW */
-	if (vfr->tx_cfa_action && vfr->rx_cfa_code)
+	if (vfr->vfr_tx_cfa_action && vfr->rx_cfa_code)
 		return 0;
 
 	/*
@@ -274,24 +322,14 @@ static int bnxt_vfr_alloc(struct bnxt_vf_representor *vfr)
 	 * Otherwise the FW will create the VF-rep rules with
 	 * default drop action.
 	 */
-
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_alloc(parent_bp,
-				     vfr->vf_id,
-				     &vfr->tx_cfa_action,
-				     &vfr->rx_cfa_code);
-	 */
-	if (!rc) {
-		parent_bp->cfa_code_map[vfr->rx_cfa_code] = vfr->vf_id;
+	rc = bnxt_tf_vfr_alloc(vfr_ethdev);
+	if (!rc)
 		PMD_DRV_LOG(DEBUG, "allocated representor %d in FW\n",
 			    vfr->vf_id);
-	} else {
+	else
 		PMD_DRV_LOG(ERR,
 			    "Failed to alloc representor %d in FW\n",
 			    vfr->vf_id);
-	}
 
 	return rc;
 }
@@ -312,7 +350,7 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	struct bnxt_vf_representor *rep_bp = eth_dev->data->dev_private;
 	int rc;
 
-	rc = bnxt_vfr_alloc(rep_bp);
+	rc = bnxt_vfr_alloc(eth_dev);
 
 	if (!rc) {
 		eth_dev->rx_pkt_burst = &bnxt_vf_rep_rx_burst;
@@ -327,6 +365,25 @@ int bnxt_vf_rep_dev_start_op(struct rte_eth_dev *eth_dev)
 	return rc;
 }
 
+static int bnxt_tf_vfr_free(struct bnxt_vf_representor *vfr)
+{
+	int rc = 0;
+
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->rep2vf_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed rep2vf flowid: %d\n",
+			    vfr->rep2vf_flow_id);
+	rc = ulp_default_flow_destroy(vfr->parent_dev,
+				      vfr->vf2rep_flow_id);
+	if (rc)
+		PMD_DRV_LOG(ERR,
+			    "default flow destroy failed vf2rep flowid: %d\n",
+			    vfr->vf2rep_flow_id);
+	return 0;
+}
+
 static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 {
 	int rc = 0;
@@ -341,15 +398,10 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp = vfr->parent_dev->data->dev_private;
 
 	/* Check if representor has been already freed in FW */
-	if (!vfr->tx_cfa_action && !vfr->rx_cfa_code)
+	if (!vfr->vfr_tx_cfa_action && !vfr->rx_cfa_code)
 		return 0;
 
-	/*
-	 * This is where we need to replace invoking an HWRM cmd
-	 * with the new TFLIB ULP API to do more/less the same job
-	rc = bnxt_hwrm_cfa_vfr_free(parent_bp,
-				    vfr->vf_id);
-	 */
+	rc = bnxt_tf_vfr_free(vfr);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
 			    "Failed to free representor %d in FW\n",
@@ -360,7 +412,7 @@ static int bnxt_vfr_free(struct bnxt_vf_representor *vfr)
 	parent_bp->cfa_code_map[vfr->rx_cfa_code] = BNXT_VF_IDX_INVALID;
 	PMD_DRV_LOG(DEBUG, "freed representor %d in FW\n",
 		    vfr->vf_id);
-	vfr->tx_cfa_action = 0;
+	vfr->vfr_tx_cfa_action = 0;
 	vfr->rx_cfa_code = 0;
 
 	return rc;
diff --git a/drivers/net/bnxt/bnxt_reps.h b/drivers/net/bnxt/bnxt_reps.h
index 5c2e0a0b9..418b95afc 100644
--- a/drivers/net/bnxt/bnxt_reps.h
+++ b/drivers/net/bnxt/bnxt_reps.h
@@ -13,8 +13,7 @@
 #define BNXT_VF_IDX_INVALID             0xffff
 
 uint16_t
-bnxt_vfr_recv(struct bnxt *bp, uint16_t cfa_code, uint16_t queue_id,
-	      struct rte_mbuf *mbuf);
+bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf);
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params);
 int bnxt_vf_representor_uninit(struct rte_eth_dev *eth_dev);
 int bnxt_vf_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 37b534fc2..64058879e 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -403,9 +403,9 @@ bnxt_get_rx_ts_thor(struct bnxt *bp, uint32_t rx_ts_cmpl)
 }
 #endif
 
-static void
+static uint32_t
 bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
-			  struct rte_mbuf *mbuf)
+			  struct rte_mbuf *mbuf, uint32_t *vfr_flag)
 {
 	uint32_t cfa_code;
 	uint32_t meta_fmt;
@@ -415,8 +415,6 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	uint32_t flags2;
 	uint32_t gfid_support = 0;
 	int rc;
-	uint32_t vfr_flag;
-
 
 	if (BNXT_GFID_ENABLED(bp))
 		gfid_support = 1;
@@ -485,19 +483,21 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 	}
 
 	rc = ulp_mark_db_mark_get(bp->ulp_ctx, gfid,
-				  cfa_code, &vfr_flag, &mark_id);
+				  cfa_code, vfr_flag, &mark_id);
 	if (!rc) {
 		/* Got the mark, write it to the mbuf and return */
 		mbuf->hash.fdir.hi = mark_id;
 		mbuf->udata64 = (cfa_code & 0xffffffffull) << 32;
 		mbuf->hash.fdir.id = rxcmp1->cfa_code;
 		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-		return;
+		return mark_id;
 	}
 
 skip_mark:
 	mbuf->hash.fdir.hi = 0;
 	mbuf->hash.fdir.id = 0;
+
+	return 0;
 }
 
 void bnxt_set_mark_in_mbuf(struct bnxt *bp,
@@ -553,7 +553,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	int rc = 0;
 	uint8_t agg_buf = 0;
 	uint16_t cmp_type;
-	uint32_t flags2_f = 0;
+	uint32_t flags2_f = 0, vfr_flag = 0, mark_id = 0;
 	uint16_t flags_type;
 	struct bnxt *bp = rxq->bp;
 
@@ -632,7 +632,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	}
 
 	if (BNXT_TRUFLOW_EN(bp))
-		bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+		mark_id = bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf,
+						    &vfr_flag);
 	else
 		bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
 
@@ -736,10 +737,10 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 rx:
 	*rx_pkt = mbuf;
 
-	if ((BNXT_VF_IS_TRUSTED(rxq->bp) || BNXT_PF(rxq->bp)) &&
-	    rxq->bp->cfa_code_map && rxcmp1->cfa_code) {
-		if (!bnxt_vfr_recv(rxq->bp, rxcmp1->cfa_code, rxq->queue_id,
-				   mbuf)) {
+	if (BNXT_TRUFLOW_EN(bp) &&
+	    (BNXT_VF_IS_TRUSTED(bp) || BNXT_PF(bp)) &&
+	    vfr_flag) {
+		if (!bnxt_vfr_recv(mark_id, rxq->queue_id, mbuf)) {
 			/* Now return an error so that nb_rx_pkts is not
 			 * incremented.
 			 * This packet was meant to be given to the representor.
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 69ff89aab..a1ab3f39a 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -30,6 +30,7 @@ struct bnxt_tx_queue {
 	int			index;
 	int			tx_wake_thresh;
 	uint32_t                tx_cfa_action;
+	uint32_t		vfr_tx_cfa_action;
 	struct bnxt_tx_ring_info	*tx_ring;
 
 	unsigned int		cp_nr_rings;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 47/51] net/bnxt: add port default rules for ingress and egress
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (45 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
                           ` (5 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Ingress and egress port default rules are needed to steer packets in
the port-to-DPDK and DPDK-to-port directions, respectively.
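
A sketch of the start-up sequence this enables (illustrative only;
helper names and template IDs are taken from the patch below): create
the ingress (port-to-app) and egress (app-to-port) default rules, then
cache the egress rule's cfa_action for use in the Tx descriptor.

/* assumes the driver-internal headers added by this patch set */
static int32_t df_rules_sketch(struct bnxt *bp)
{
	struct bnxt_ulp_data *cfg = bp->ulp_ctx->cfg_data;
	int32_t rc;

	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_PORT_TO_VS,
					  &cfg->port_to_app_flow_id);
	if (rc)
		return rc;

	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_VS_TO_PORT,
					  &cfg->app_to_port_flow_id);
	if (rc)
		return rc;

	return ulp_default_flow_db_cfa_action_get(bp->ulp_ctx,
						  cfg->app_to_port_flow_id,
						  &cfg->tx_cfa_action);
}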

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c     | 76 +++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h |  3 ++
 2 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 678aa20e7..b21f85095 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -29,6 +29,7 @@
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
 #include "bnxt_tf_common.h"
+#include "ulp_flow_db.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
@@ -1161,6 +1162,73 @@ static int bnxt_handle_if_change_status(struct bnxt *bp)
 	return rc;
 }
 
+static int32_t
+bnxt_create_port_app_df_rule(struct bnxt *bp, uint8_t flow_type,
+			     uint32_t *flow_id)
+{
+	uint16_t port_id = bp->eth_dev->data->port_id;
+	struct ulp_tlv_param param_list[] = {
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+			.length = 2,
+			.value = {(port_id >> 8) & 0xff, port_id & 0xff}
+		},
+		{
+			.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+			.length = 0,
+			.value = {0}
+		}
+	};
+
+	return ulp_default_flow_create(bp->eth_dev, param_list, flow_type,
+				       flow_id);
+}
+
+static int32_t
+bnxt_create_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+	int rc;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_PORT_TO_VS,
+					  &cfg_data->port_to_app_flow_id);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to create port to app default rule\n");
+		return rc;
+	}
+
+	BNXT_TF_DBG(DEBUG, "***** created port to app default rule ******\n");
+	rc = bnxt_create_port_app_df_rule(bp, BNXT_ULP_DF_TPL_VS_TO_PORT,
+					  &cfg_data->app_to_port_flow_id);
+	if (!rc) {
+		rc = ulp_default_flow_db_cfa_action_get(bp->ulp_ctx,
+							cfg_data->app_to_port_flow_id,
+							&cfg_data->tx_cfa_action);
+		if (rc)
+			goto err;
+
+		BNXT_TF_DBG(DEBUG,
+			    "***** created app to port default rule *****\n");
+		return 0;
+	}
+
+err:
+	BNXT_TF_DBG(DEBUG, "Failed to create app to port default rule\n");
+	return rc;
+}
+
+static void
+bnxt_destroy_df_rules(struct bnxt *bp)
+{
+	struct bnxt_ulp_data *cfg_data;
+
+	cfg_data = bp->ulp_ctx->cfg_data;
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->port_to_app_flow_id);
+	ulp_default_flow_destroy(bp->eth_dev, cfg_data->app_to_port_flow_id);
+}
+
 static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = eth_dev->data->dev_private;
@@ -1329,8 +1397,11 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 	rte_eal_alarm_cancel(bnxt_dev_recover, (void *)bp);
 	bnxt_cancel_fc_thread(bp);
 
-	if (BNXT_TRUFLOW_EN(bp))
+	if (BNXT_TRUFLOW_EN(bp)) {
+		if (bp->rep_info != NULL)
+			bnxt_destroy_df_rules(bp);
 		bnxt_ulp_deinit(bp);
+	}
 
 	if (eth_dev->data->dev_started)
 		bnxt_dev_stop_op(eth_dev);
@@ -1580,6 +1651,9 @@ static int bnxt_promiscuous_disable_op(struct rte_eth_dev *eth_dev)
 	if (rc != 0)
 		vnic->flags = old_flags;
 
+	if (BNXT_TRUFLOW_EN(bp) && bp->rep_info != NULL)
+		bnxt_create_df_rules(bp);
+
 	return rc;
 }
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 3563f63fa..4843da562 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,9 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	uint32_t			port_to_app_flow_id;
+	uint32_t			app_to_port_flow_id;
+	uint32_t			tx_cfa_action;
 };
 
 struct bnxt_ulp_context {
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 48/51] net/bnxt: fill cfa action in the Tx descriptor
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (46 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
                           ` (4 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Somnath Kotur

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

Currently, only VF representor transmit requires cfa_action to be
filled in the Tx buffer descriptor. With truflow, however, DPDK
(non VF-rep) to port traffic also requires cfa_action to be filled
in the Tx buffer descriptor.

This patch selects the correct cfa_action while transmitting the
packet. Depending on whether the packet is transmitted on the
non-VF-rep or the VF-rep path, tx_cfa_action or vfr_tx_cfa_action
from the txq is filled in the Tx buffer descriptor.
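
A sketch of the selection logic described above (illustrative only;
field names are taken from the patch below): the VF-rep conduit action
takes precedence, otherwise the cached DPDK-to-port action is used.

/* assumes the driver-internal definitions from this patch set */
static uint32_t tx_cfa_action_sketch(struct bnxt_tx_queue *txq)
{
	if (!BNXT_TRUFLOW_EN(txq->bp))
		return 0;

	if (txq->vfr_tx_cfa_action)	/* VF representor Tx */
		return txq->vfr_tx_cfa_action;

	/* regular DPDK (non VF-rep) to port Tx */
	return txq->bp->ulp_ctx->cfg_data->tx_cfa_action;
}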

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index d7e193d38..f5884268e 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -131,7 +131,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT) || txq->tx_cfa_action)
+				PKT_TX_QINQ_PKT) ||
+	     txq->bp->ulp_ctx->cfg_data->tx_cfa_action ||
+	     txq->vfr_tx_cfa_action)
 		long_bd = true;
 
 	nr_bds = long_bd + tx_pkt->nb_segs;
@@ -184,7 +186,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
 		vlan_tag_flags = 0;
-		cfa_action = txq->tx_cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp)) {
+			if (txq->vfr_tx_cfa_action)
+				cfa_action = txq->vfr_tx_cfa_action;
+			else
+				cfa_action =
+				      txq->bp->ulp_ctx->cfg_data->tx_cfa_action;
+		}
+
 		/* HW can accelerate only outer vlan in QinQ mode */
 		if (tx_buf->mbuf->ol_flags & PKT_TX_QINQ_PKT) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
@@ -212,7 +222,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 					&txr->tx_desc_ring[txr->tx_prod];
 		txbd1->lflags = 0;
 		txbd1->cfa_meta = vlan_tag_flags;
-		txbd1->cfa_action = cfa_action;
+
+		if (BNXT_TRUFLOW_EN(txq->bp))
+			txbd1->cfa_action = cfa_action;
 
 		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
 			uint16_t hdr_size;
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 49/51] net/bnxt: add ULP Flow counter Manager
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (47 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
                           ` (3 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

The Flow Counter Manager allocates two memory blocks: one that holds
the software view of the counters, in which the on-chip counter data
is accumulated, and another that shadows the on-chip counters, i.e.
the block into which the raw counter data is DMAed from the chip.
It also keeps track of the first HW counter ID, since that is needed
to retrieve the counter data in bulk using a TF API. That command is
issued from an rte_alarm callback that runs every second.
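
A minimal, self-contained sketch of the accumulation scheme (simplified
structures, not the driver's; the 36-bit byte/packet split matches the
FLOW_CNTR_* macros introduced below): each direction has a SW table
indexed by (HW counter ID - first HW counter ID), and the periodic
alarm adds the freshly DMAed raw 64-bit values into the valid entries.

#include <stdbool.h>
#include <stdint.h>

struct fc_sw_entry {
	uint64_t pkt_count;
	uint64_t byte_count;
	bool valid;
};

/* accumulate one direction's raw counters into the SW view */
static void
fc_accumulate_sketch(struct fc_sw_entry *tbl, const uint64_t *raw,
		     uint32_t num_counters)
{
	uint32_t i;

	for (i = 0; i < num_counters; i++) {
		if (!tbl[i].valid)
			continue;
		/* packets above bit 36, bytes in the low 36 bits */
		tbl[i].pkt_count += raw[i] >> 36;
		tbl[i].byte_count += raw[i] & (((uint64_t)1 << 36) - 1);
	}
}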

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build          |   1 +
 drivers/net/bnxt/tf_ulp/Makefile      |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    |  35 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h    |   8 +
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c  | 465 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h  | 148 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c |  27 ++
 7 files changed, 685 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 2939857ca..5fb0ed380 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -46,6 +46,7 @@ sources = files('bnxt_cpr.c',
 	'tf_core/ll.c',
 	'tf_core/tf_global_cfg.c',
 	'tf_core/tf_em_host.c',
+	'tf_ulp/ulp_fc_mgr.c',
 
 	'hcapi/hcapi_cfa_p4.c',
 
diff --git a/drivers/net/bnxt/tf_ulp/Makefile b/drivers/net/bnxt/tf_ulp/Makefile
index 3f1b43bae..abb68150d 100644
--- a/drivers/net/bnxt/tf_ulp/Makefile
+++ b/drivers/net/bnxt/tf_ulp/Makefile
@@ -17,3 +17,4 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_port_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_def_rules.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_fc_mgr.c
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index e5e7e5f43..c05861150 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -18,6 +18,7 @@
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "ulp_mark_mgr.h"
+#include "ulp_fc_mgr.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 #include "ulp_port_db.h"
@@ -705,6 +706,12 @@ bnxt_ulp_init(struct bnxt *bp)
 		goto jump_to_error;
 	}
 
+	rc = ulp_fc_mgr_init(bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to initialize ulp flow counter mgr\n");
+		goto jump_to_error;
+	}
+
 	return rc;
 
 jump_to_error:
@@ -752,6 +759,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	/* cleanup the ulp mapper */
 	ulp_mapper_deinit(bp->ulp_ctx);
 
+	/* Delete the Flow Counter Manager */
+	ulp_fc_mgr_deinit(bp->ulp_ctx);
+
 	/* Delete the Port database */
 	ulp_port_db_deinit(bp->ulp_ctx);
 
@@ -963,3 +973,28 @@ bnxt_ulp_cntxt_ptr2_port_db_get(struct bnxt_ulp_context	*ulp_ctx)
 
 	return ulp_ctx->cfg_data->port_db;
 }
+
+/* Function to set the flow counter info into the context */
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->fc_info = ulp_fc_info;
+
+	return 0;
+}
+
+/* Function to retrieve the flow counter info from the context. */
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->fc_info;
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 4843da562..a13328426 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -22,6 +22,7 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_flow_db		*flow_db;
 	void				*mapper_data;
 	struct bnxt_ulp_port_db		*port_db;
+	struct bnxt_ulp_fc_info		*fc_info;
 	uint32_t			port_to_app_flow_id;
 	uint32_t			app_to_port_flow_id;
 	uint32_t			tx_cfa_action;
@@ -154,4 +155,11 @@ int
 bnxt_ulp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *error);
 
+int32_t
+bnxt_ulp_cntxt_ptr2_fc_info_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_fc_info *ulp_fc_info);
+
+struct bnxt_ulp_fc_info *
+bnxt_ulp_cntxt_ptr2_fc_info_get(struct bnxt_ulp_context *ulp_ctx);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
new file mode 100644
index 000000000..f70d4a295
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -0,0 +1,465 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_alarm.h>
+#include "bnxt.h"
+#include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
+#include "ulp_fc_mgr.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_struct.h"
+#include "tf_tbl.h"
+
+static int
+ulp_fc_mgr_shadow_mem_alloc(struct hw_fc_mem_info *parms, int size)
+{
+	/* Allocate memory*/
+	if (parms == NULL)
+		return -EINVAL;
+
+	parms->mem_va = rte_zmalloc("ulp_fc_info",
+				    RTE_CACHE_LINE_ROUNDUP(size),
+				    4096);
+	if (parms->mem_va == NULL) {
+		BNXT_TF_DBG(ERR, "Allocate failed mem_va\n");
+		return -ENOMEM;
+	}
+
+	rte_mem_lock_page(parms->mem_va);
+
+	parms->mem_pa = (void *)(uintptr_t)rte_mem_virt2phy(parms->mem_va);
+	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
+		BNXT_TF_DBG(ERR, "Allocate failed mem_pa\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void
+ulp_fc_mgr_shadow_mem_free(struct hw_fc_mem_info *parms)
+{
+	rte_free(parms->mem_va);
+}
+
+/*
+ * Allocate and Initialize all Flow Counter Manager resources for this ulp
+ * context.
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager.
+ *
+ */
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id, sw_acc_cntr_tbl_sz, hw_fc_mem_info_sz;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i, rc;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(DEBUG, "Invalid ULP CTXT\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to device parms\n");
+		return -EINVAL;
+	}
+
+	ulp_fc_info = rte_zmalloc("ulp_fc_info", sizeof(*ulp_fc_info), 0);
+	if (!ulp_fc_info)
+		goto error;
+
+	rc = pthread_mutex_init(&ulp_fc_info->fc_lock, NULL);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Failed to initialize fc mutex\n");
+		goto error;
+	}
+
+	/* Add the FC info tbl to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, ulp_fc_info);
+
+	sw_acc_cntr_tbl_sz = sizeof(struct sw_acc_counter) *
+				dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		ulp_fc_info->sw_acc_tbl[i] = rte_zmalloc("ulp_sw_acc_cntr_tbl",
+							 sw_acc_cntr_tbl_sz, 0);
+		if (!ulp_fc_info->sw_acc_tbl[i])
+			goto error;
+	}
+
+	hw_fc_mem_info_sz = sizeof(uint64_t) * dparms->flow_count_db_entries;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_fc_mgr_shadow_mem_alloc(&ulp_fc_info->shadow_hw_tbl[i],
+						 hw_fc_mem_info_sz);
+		if (rc)
+			goto error;
+	}
+
+	return 0;
+
+error:
+	ulp_fc_mgr_deinit(ctxt);
+	BNXT_TF_DBG(DEBUG,
+		    "Failed to allocate memory for fc mgr\n");
+
+	return -ENOMEM;
+}
+
+/*
+ * Release all resources in the Flow Counter Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the Flow Counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	int i;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EINVAL;
+
+	ulp_fc_mgr_thread_cancel(ctxt);
+
+	pthread_mutex_destroy(&ulp_fc_info->fc_lock);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		rte_free(ulp_fc_info->sw_acc_tbl[i]);
+
+	for (i = 0; i < TF_DIR_MAX; i++)
+		ulp_fc_mgr_shadow_mem_free(&ulp_fc_info->shadow_hw_tbl[i]);
+
+
+	rte_free(ulp_fc_info);
+
+	/* Safe to ignore on deinit */
+	(void)bnxt_ulp_cntxt_ptr2_fc_info_set(ctxt, NULL);
+
+	return 0;
+}
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	return !!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD);
+}
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!(ulp_fc_info->flags & ULP_FLAG_FC_THREAD)) {
+		rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+				  ulp_fc_mgr_alarm_cb,
+				  (void *)ctxt);
+		ulp_fc_info->flags |= ULP_FLAG_FC_THREAD;
+	}
+
+	return 0;
+}
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	ulp_fc_info->flags &= ~ULP_FLAG_FC_THREAD;
+	rte_eal_alarm_cancel(ulp_fc_mgr_alarm_cb, (void *)ctxt);
+}
+
+/*
+ * DMA-in the raw counter data from the HW and accumulate in the
+ * local accumulator table using the TF-Core API
+ *
+ * tfp [in] The TF-Core context
+ *
+ * fc_info [in] The ULP Flow counter info ptr
+ *
+ * dir [in] The direction of the flow
+ *
+ * num_counters [in] The number of counters
+ *
+ */
+static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+				       struct bnxt_ulp_fc_info *fc_info,
+				       enum tf_dir dir, uint32_t num_counters)
+{
+	int rc = 0;
+	struct tf_tbl_get_bulk_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD: Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t *stats = NULL;
+	uint16_t i = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.starting_idx = fc_info->shadow_hw_tbl[dir].start_idx;
+	parms.num_entries = num_counters;
+	/*
+	 * TODO:
+	 * Size of an entry needs to obtained from template
+	 */
+	parms.entry_sz_in_bytes = sizeof(uint64_t);
+	stats = (uint64_t *)fc_info->shadow_hw_tbl[dir].mem_va;
+	parms.physical_mem_addr = (uintptr_t)fc_info->shadow_hw_tbl[dir].mem_pa;
+
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Memory not initialized id:0x%x dir:%d\n",
+			    parms.starting_idx, dir);
+		return -EINVAL;
+	}
+
+	rc = tf_tbl_bulk_get(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "BULK: Get failed for id:0x%x rc:%d\n",
+			    parms.starting_idx, rc);
+		return rc;
+	}
+
+	for (i = 0; i < num_counters; i++) {
+		/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+		sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][i];
+		if (!sw_acc_tbl_entry->valid)
+			continue;
+		sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats[i]);
+		sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats[i]);
+	}
+
+	return rc;
+}
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg)
+{
+	int rc = 0, i;
+	struct bnxt_ulp_context *ctxt = arg;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct bnxt_ulp_device_params *dparms;
+	struct tf *tfp;
+	uint32_t dev_id;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return;
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to device parms\n");
+		return;
+	}
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ctxt);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return;
+	}
+
+	/*
+	 * Take the fc_lock to ensure no flow is destroyed
+	 * during the bulk get
+	 */
+	if (pthread_mutex_trylock(&ulp_fc_info->fc_lock))
+		goto out;
+
+	if (!ulp_fc_info->num_entries) {
+		pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
+					     dparms->flow_count_db_entries);
+		if (rc)
+			break;
+	}
+
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	/*
+	 * If cmd fails once, no need of
+	 * invoking again every second
+	 */
+
+	if (rc) {
+		ulp_fc_mgr_thread_cancel(ctxt);
+		return;
+	}
+out:
+	rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+			  ulp_fc_mgr_alarm_cb,
+			  (void *)ctxt);
+}
+
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * returns true if the starting HW counter ID has been set
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	/* Assuming start_idx of 0 is invalid */
+	return (ulp_fc_info->shadow_hw_tbl[dir].start_idx != 0);
+}
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+				 uint32_t start_idx)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+
+	if (!ulp_fc_info)
+		return -EIO;
+
+	/* Assuming that 0 is an invalid counter ID ? */
+	if (ulp_fc_info->shadow_hw_tbl[dir].start_idx == 0)
+		ulp_fc_info->shadow_hw_tbl[dir].start_idx = start_idx;
+
+	return 0;
+}
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			    uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->num_entries++;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
+
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			      uint32_t hw_cntr_id)
+{
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	uint32_t sw_cntr_idx;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -EIO;
+
+	pthread_mutex_lock(&ulp_fc_info->fc_lock);
+	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
+	ulp_fc_info->num_entries--;
+	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
new file mode 100644
index 000000000..faa77dd75
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_FC_MGR_H_
+#define _ULP_FC_MGR_H_
+
+#include "bnxt_ulp.h"
+#include "tf_core.h"
+
+#define ULP_FLAG_FC_THREAD			BIT(0)
+#define ULP_FC_TIMER	1/* Timer freq in Sec Flow Counters */
+
+/* Macros to extract packet/byte counters from a 64-bit flow counter. */
+#define FLOW_CNTR_BYTE_WIDTH 36
+#define FLOW_CNTR_BYTE_MASK  (((uint64_t)1 << FLOW_CNTR_BYTE_WIDTH) - 1)
+
+#define FLOW_CNTR_PKTS(v) ((v) >> FLOW_CNTR_BYTE_WIDTH)
+#define FLOW_CNTR_BYTES(v) ((v) & FLOW_CNTR_BYTE_MASK)
+
+struct sw_acc_counter {
+	uint64_t pkt_count;
+	uint64_t byte_count;
+	bool	valid;
+};
+
+struct hw_fc_mem_info {
+	/*
+	 * [out] mem_va, pointer to the allocated memory.
+	 */
+	void *mem_va;
+	/*
+	 * [out] mem_pa, physical address of the allocated memory.
+	 */
+	void *mem_pa;
+	uint32_t start_idx;
+};
+
+struct bnxt_ulp_fc_info {
+	struct sw_acc_counter	*sw_acc_tbl[TF_DIR_MAX];
+	struct hw_fc_mem_info	shadow_hw_tbl[TF_DIR_MAX];
+	uint32_t		flags;
+	uint32_t		num_entries;
+	pthread_mutex_t		fc_lock;
+};
+
+int32_t
+ulp_fc_mgr_init(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Release all resources in the flow counter manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_deinit(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Setup the Flow counter timer thread that will fetch/accumulate raw counter
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+int32_t
+ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Alarm handler that will issue the TF-Core API to fetch
+ * data from the chip's internal flow counters
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ */
+void
+ulp_fc_mgr_alarm_cb(void *arg);
+
+/*
+ * Cancel the alarm handler
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt);
+
+/*
+ * Set the starting index that indicates the first HW flow
+ * counter ID
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * start_idx [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_start_idx_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			     uint32_t start_idx);
+
+/*
+ * Set the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID. Also, keep track of num of active counter enabled
+ * flows.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			uint32_t hw_cntr_id);
+/*
+ * Reset the corresponding SW accumulator table entry based on
+ * the difference between this counter ID and the starting
+ * counter ID.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ * hw_cntr_id [in] The HW flow counter ID
+ *
+ */
+int ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
+			  uint32_t hw_cntr_id);
+/*
+ * Check if the starting HW counter ID value is set in the
+ * flow counter manager.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * dir [in] The direction of the flow
+ *
+ */
+bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
+
+/*
+ * Check if the alarm thread that walks through the flows is started
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ */
+
+bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
+
+#endif /* _ULP_FC_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 7696de2a5..a3cfe54bf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -10,6 +10,7 @@
 #include "ulp_utils.h"
 #include "ulp_template_struct.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 
 #define ULP_FLOW_DB_RES_DIR_BIT		31
 #define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
@@ -484,6 +485,21 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 		ulp_flow_db_res_params_to_info(fid_resource, params);
 	}
 
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		/* Store the first HW counter ID for this table */
+		if (!ulp_fc_mgr_start_idx_isset(ulp_ctxt, params->direction))
+			ulp_fc_mgr_start_idx_set(ulp_ctxt, params->direction,
+						 params->resource_hndl);
+
+		ulp_fc_mgr_cntr_set(ulp_ctxt, params->direction,
+				    params->resource_hndl);
+
+		if (!ulp_fc_mgr_thread_isstarted(ulp_ctxt))
+			ulp_fc_mgr_thread_start(ulp_ctxt);
+	}
+
 	/* all good, return success */
 	return 0;
 }
@@ -574,6 +590,17 @@ int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
 					nxt_idx);
 	}
 
+	/* Now that the HW Flow counter resource is deleted, reset it's
+	 * corresponding slot in the SW accumulation table in the Flow Counter
+	 * manager
+	 */
+	if (params->resource_type == TF_TBL_TYPE_ACT_STATS_64 &&
+	    params->resource_sub_type ==
+	    BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+		ulp_fc_mgr_cntr_reset(ulp_ctxt, params->direction,
+				      params->resource_hndl);
+	}
+
 	/* all good, return success */
 	return 0;
 }
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 50/51] net/bnxt: add support for count action in flow query
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (48 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 51/51] doc: update release notes Ajit Khaparde
                           ` (2 subsequent siblings)
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Venkat Duvvuru

From: Somnath Kotur <somnath.kotur@broadcom.com>

Use the flow counter manager to fetch the accumulated stats for
a flow.
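
For context, a usage sketch from the application side (not part of the
patch): once a flow has been created with a COUNT action, the
accumulated counters can be read through rte_flow_query() with
RTE_FLOW_ACTION_TYPE_COUNT.

#include <inttypes.h>
#include <stdio.h>
#include <rte_flow.h>

static int
query_flow_count(uint16_t port_id, struct rte_flow *flow)
{
	struct rte_flow_query_count count = { .reset = 0 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
	};
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_query(port_id, flow, &action, &count, &error);
	if (ret == 0)
		printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
		       count.hits, count.bytes);
	return ret;
}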

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c |  45 +++++++-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c    | 141 +++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h    |  17 ++-
 3 files changed, 196 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 7ef306e58..36a014184 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -9,6 +9,7 @@
 #include "ulp_matcher.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
+#include "ulp_fc_mgr.h"
 #include <rte_malloc.h>
 
 static int32_t
@@ -289,11 +290,53 @@ bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
 	return ret;
 }
 
+/* Function to query the rte flows. */
+static int32_t
+bnxt_ulp_flow_query(struct rte_eth_dev *eth_dev,
+		    struct rte_flow *flow,
+		    const struct rte_flow_action *action,
+		    void *data,
+		    struct rte_flow_error *error)
+{
+	int rc = 0;
+	struct bnxt_ulp_context *ulp_ctx;
+	struct rte_flow_query_count *count;
+	uint32_t flow_id;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to query flow.");
+		return -EINVAL;
+	}
+
+	flow_id = (uint32_t)(uintptr_t)flow;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		count = data;
+		rc = ulp_fc_mgr_query_count_get(ulp_ctx, flow_id, count);
+		if (rc) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Failed to query flow.");
+		}
+		break;
+	default:
+		rte_flow_error_set(error, -rc, RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL, "Unsupported action item");
+	}
+
+	return rc;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
 	.flush = bnxt_ulp_flow_flush,
-	.query = NULL,
+	.query = bnxt_ulp_flow_query,
 	.isolate = NULL
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
index f70d4a295..9944e9e5c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -11,6 +11,7 @@
 #include "bnxt_ulp.h"
 #include "bnxt_tf_common.h"
 #include "ulp_fc_mgr.h"
+#include "ulp_flow_db.h"
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "tf_tbl.h"
@@ -226,9 +227,10 @@ void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
  * num_counters [in] The number of counters
  *
  */
-static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
+__rte_unused static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 				       struct bnxt_ulp_fc_info *fc_info,
 				       enum tf_dir dir, uint32_t num_counters)
+/* MARK AS UNUSED FOR NOW TO AVOID COMPILATION ERRORS TILL API is RESOLVED */
 {
 	int rc = 0;
 	struct tf_tbl_get_bulk_parms parms = { 0 };
@@ -275,6 +277,45 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 
 	return rc;
 }
+
+static int ulp_get_single_flow_stat(struct tf *tfp,
+				    struct bnxt_ulp_fc_info *fc_info,
+				    enum tf_dir dir,
+				    uint32_t hw_cntr_id)
+{
+	int rc = 0;
+	struct tf_get_tbl_entry_parms parms = { 0 };
+	enum tf_tbl_type stype = TF_TBL_TYPE_ACT_STATS_64;  /* TBD:Template? */
+	struct sw_acc_counter *sw_acc_tbl_entry = NULL;
+	uint64_t stats = 0;
+	uint32_t sw_cntr_indx = 0;
+
+	parms.dir = dir;
+	parms.type = stype;
+	parms.idx = hw_cntr_id;
+	/*
+	 * TODO:
+	 * Size of an entry needs to obtained from template
+	 */
+	parms.data_sz_in_bytes = sizeof(uint64_t);
+	parms.data = (uint8_t *)&stats;
+	rc = tf_get_tbl_entry(tfp, &parms);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "Get failed for id:0x%x rc:%d\n",
+			    parms.idx, rc);
+		return rc;
+	}
+
+	/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
+	sw_cntr_indx = hw_cntr_id - fc_info->shadow_hw_tbl[dir].start_idx;
+	sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][sw_cntr_indx];
+	sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats);
+	sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats);
+
+	return rc;
+}
+
 /*
  * Alarm handler that will issue the TF-Core API to fetch
  * data from the chip's internal flow counters
@@ -282,15 +323,18 @@ static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
+
 void
 ulp_fc_mgr_alarm_cb(void *arg)
 {
-	int rc = 0, i;
+	int rc = 0;
+	unsigned int j;
+	enum tf_dir i;
 	struct bnxt_ulp_context *ctxt = arg;
 	struct bnxt_ulp_fc_info *ulp_fc_info;
 	struct bnxt_ulp_device_params *dparms;
 	struct tf *tfp;
-	uint32_t dev_id;
+	uint32_t dev_id, hw_cntr_id = 0;
 
 	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
 	if (!ulp_fc_info)
@@ -325,13 +369,27 @@ ulp_fc_mgr_alarm_cb(void *arg)
 		ulp_fc_mgr_thread_cancel(ctxt);
 		return;
 	}
-
-	for (i = 0; i < TF_DIR_MAX; i++) {
+	/*
+	 * Bulk get is disabled until the GET_BULK API is resolved;
+	 * fetch each flow stat individually for now. Old loop kept for reference:
+	 for (i = 0; i < TF_DIR_MAX; i++) {
 		rc = ulp_bulk_get_flow_stats(tfp, ulp_fc_info, i,
 					     dparms->flow_count_db_entries);
 		if (rc)
 			break;
 	}
+	*/
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		for (j = 0; j < ulp_fc_info->num_entries; j++) {
+			if (!ulp_fc_info->sw_acc_tbl[i][j].valid)
+				continue;
+			hw_cntr_id = ulp_fc_info->sw_acc_tbl[i][j].hw_cntr_id;
+			rc = ulp_get_single_flow_stat(tfp, ulp_fc_info, i,
+						      hw_cntr_id);
+			if (rc)
+				break;
+		}
+	}
 
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -425,6 +483,7 @@ int32_t ulp_fc_mgr_cntr_set(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = true;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = hw_cntr_id;
 	ulp_fc_info->num_entries++;
 	pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 
@@ -456,6 +515,7 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 	pthread_mutex_lock(&ulp_fc_info->fc_lock);
 	sw_cntr_idx = hw_cntr_id - ulp_fc_info->shadow_hw_tbl[dir].start_idx;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].valid = false;
+	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].hw_cntr_id = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].pkt_count = 0;
 	ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx].byte_count = 0;
 	ulp_fc_info->num_entries--;
@@ -463,3 +523,74 @@ int32_t ulp_fc_mgr_cntr_reset(struct bnxt_ulp_context *ctxt, enum tf_dir dir,
 
 	return 0;
 }
+
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ctxt,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count)
+{
+	int rc = 0;
+	uint32_t nxt_resource_index = 0;
+	struct bnxt_ulp_fc_info *ulp_fc_info;
+	struct ulp_flow_db_res_params params;
+	enum tf_dir dir;
+	uint32_t hw_cntr_id = 0, sw_cntr_idx = 0;
+	struct sw_acc_counter sw_acc_tbl_entry;
+	bool found_cntr_resource = false;
+
+	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
+	if (!ulp_fc_info)
+		return -ENODEV;
+
+	do {
+		rc = ulp_flow_db_resource_get(ctxt,
+					      BNXT_ULP_REGULAR_FLOW_TABLE,
+					      flow_id,
+					      &nxt_resource_index,
+					      &params);
+		if (params.resource_func ==
+		     BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE &&
+		     (params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT ||
+		      params.resource_sub_type ==
+		      BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_EXT_COUNT)) {
+			found_cntr_resource = true;
+			break;
+		}
+
+	} while (!rc);
+
+	if (rc)
+		return rc;
+
+	if (found_cntr_resource) {
+		dir = params.direction;
+		hw_cntr_id = params.resource_hndl;
+		sw_cntr_idx = hw_cntr_id -
+				ulp_fc_info->shadow_hw_tbl[dir].start_idx;
+		sw_acc_tbl_entry = ulp_fc_info->sw_acc_tbl[dir][sw_cntr_idx];
+		if (params.resource_sub_type ==
+			BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT) {
+			count->hits_set = 1;
+			count->bytes_set = 1;
+			count->hits = sw_acc_tbl_entry.pkt_count;
+			count->bytes = sw_acc_tbl_entry.byte_count;
+		} else {
+			/* TBD: Handle External counters */
+			rc = -EINVAL;
+		}
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
index faa77dd75..207267049 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -23,6 +23,7 @@ struct sw_acc_counter {
 	uint64_t pkt_count;
 	uint64_t byte_count;
 	bool	valid;
+	uint32_t hw_cntr_id;
 };
 
 struct hw_fc_mem_info {
@@ -142,7 +143,21 @@ bool ulp_fc_mgr_start_idx_isset(struct bnxt_ulp_context *ctxt, enum tf_dir dir);
  * ctxt [in] The ulp context for the flow counter manager
  *
  */
-
 bool ulp_fc_mgr_thread_isstarted(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Fill the rte_flow_query_count 'data' argument passed
+ * in the rte_flow_query() with the values obtained and
+ * accumulated locally.
+ *
+ * ctxt [in] The ulp context for the flow counter manager
+ *
+ * flow_id [in] The HW flow ID
+ *
+ * count [out] The rte_flow_query_count 'data' that is set
+ *
+ */
+int ulp_fc_mgr_query_count_get(struct bnxt_ulp_context *ulp_ctx,
+			       uint32_t flow_id,
+			       struct rte_flow_query_count *count);
 #endif /* _ULP_FC_MGR_H_ */
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* [dpdk-dev] [PATCH v5 51/51] doc: update release notes
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (49 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
@ 2020-07-03 21:02         ` Ajit Khaparde
  2020-07-06  1:47         ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
  2020-07-06 10:10         ` Ferruh Yigit
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-03 21:02 UTC (permalink / raw)
  To: dev

Update release notes with enhancements in Broadcom PMD.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 doc/guides/rel_notes/release_20_08.rst | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index f4b858727..17b35e066 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -92,6 +92,16 @@ New Features
   * Added support for DCF datapath configuration.
   * Added support for more PPPoE packet type for switch filter.
 
+* **Updated Broadcom bnxt driver.**
+
+  Updated the Broadcom bnxt driver with new features and improvements, including:
+
+  * Added support for VF representors.
+  * Added support for multiple devices.
+  * Added support for new resource manager API.
+  * Added support for VXLAN encap/decap.
+  * Added support for rte_flow_query for COUNT action.
+
 * **Added support for BPF_ABS/BPF_IND load instructions.**
 
   Added support for two BPF non-generic instructions:
@@ -108,7 +118,6 @@ New Features
   * Dump ``rte_flow`` memory consumption.
   * Measure packet per second forwarding.
 
-
 Removed Items
 -------------
 
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (50 preceding siblings ...)
  2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 51/51] doc: update release notes Ajit Khaparde
@ 2020-07-06  1:47         ` Ajit Khaparde
  2020-07-06 10:10         ` Ferruh Yigit
  52 siblings, 0 replies; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-06  1:47 UTC (permalink / raw)
  To: dpdk-dev

On Fri, Jul 3, 2020 at 2:05 PM Ajit Khaparde <ajit.khaparde@broadcom.com>
wrote:

> v1->v2:
>  - update commit message
>  - rebase patches against latest changes in the tree
>  - fix signed-off-by tags
>  - update release notes
>
> v2->v3:
>  - fix compilation issues
>
> v3->v4:
>  - rebase against latest dpdk-next-net
>
> v4->v5:
>  - fix uninitialized variable in patch [29/51]
>  - rebase against latest dpdk-next-net
>

Patchset applied to dpdk-next-net-brcm. Thanks

>
> Ajit Khaparde (1):
>   doc: update release notes
>
> Jay Ding (5):
>   net/bnxt: implement support for TCAM access
>   net/bnxt: support two level priority for TCAMs
>   net/bnxt: add external action alloc and free
>   net/bnxt: implement IF tables set and get
>   net/bnxt: add global config set and get APIs
>
> Kishore Padmanabha (8):
>   net/bnxt: integrate with the latest tf core changes
>   net/bnxt: add support for if table processing
>   net/bnxt: disable Tx vector mode if truflow is enabled
>   net/bnxt: add index opcode and operand to mapper table
>   net/bnxt: add support for global resource templates
>   net/bnxt: add support for internal exact match entries
>   net/bnxt: add support for conditional execution of mapper tables
>   net/bnxt: add VF-rep and stat templates
>
> Lance Richardson (1):
>   net/bnxt: initialize parent PF information
>
> Michael Wildt (7):
>   net/bnxt: add multi device support
>   net/bnxt: update multi device design support
>   net/bnxt: multiple device implementation
>   net/bnxt: update identifier with remap support
>   net/bnxt: update RM with residual checker
>   net/bnxt: update table get to use new design
>   net/bnxt: add TF register and unregister
>
> Mike Baucom (1):
>   net/bnxt: add support for internal encap records
>
> Peter Spreadborough (7):
>   net/bnxt: add support for exact match
>   net/bnxt: modify EM insert and delete to use HWRM direct
>   net/bnxt: add HCAPI interface support
>   net/bnxt: support EM and TCAM lookup with table scope
>   net/bnxt: update RM to support HCAPI only
>   net/bnxt: remove table scope from session
>   net/bnxt: add support for EEM System memory
>
> Randy Schacher (2):
>   net/bnxt: add core changes for EM and EEM lookups
>   net/bnxt: align CFA resources with RM
>
> Shahaji Bhosle (2):
>   net/bnxt: support bulk table get and mirror
>   net/bnxt: support two-level priority for TCAMs
>
> Somnath Kotur (7):
>   net/bnxt: add basic infrastructure for VF reps
>   net/bnxt: add support for VF-reps data path
>   net/bnxt: get IDs for VF-Rep endpoint
>   net/bnxt: parse reps along with other dev-args
>   net/bnxt: create default flow rules for the VF-rep
>   net/bnxt: add ULP Flow counter Manager
>   net/bnxt: add support for count action in flow query
>
> Venkat Duvvuru (10):
>   net/bnxt: modify port db dev interface
>   net/bnxt: get port and function info
>   net/bnxt: add support for hwrm port phy qcaps
>   net/bnxt: modify port db to handle more info
>   net/bnxt: enable port MAC qcfg command for trusted VF
>   net/bnxt: enhancements for port db
>   net/bnxt: manage VF to VFR conduit
>   net/bnxt: fill mapper parameters with default rules
>   net/bnxt: add port default rules for ingress and egress
>   net/bnxt: fill cfa action in the Tx descriptor
>
>  config/common_base                            |    1 +
>  doc/guides/rel_notes/release_20_08.rst        |   11 +-
>  drivers/net/bnxt/Makefile                     |    8 +-
>  drivers/net/bnxt/bnxt.h                       |  121 +-
>  drivers/net/bnxt/bnxt_ethdev.c                |  519 +-
>  drivers/net/bnxt/bnxt_hwrm.c                  |  122 +-
>  drivers/net/bnxt/bnxt_hwrm.h                  |    7 +
>  drivers/net/bnxt/bnxt_reps.c                  |  773 +++
>  drivers/net/bnxt/bnxt_reps.h                  |   45 +
>  drivers/net/bnxt/bnxt_rxr.c                   |   39 +-
>  drivers/net/bnxt/bnxt_rxr.h                   |    1 +
>  drivers/net/bnxt/bnxt_txq.h                   |    2 +
>  drivers/net/bnxt/bnxt_txr.c                   |   18 +-
>  drivers/net/bnxt/hcapi/Makefile               |   10 +
>  drivers/net/bnxt/hcapi/cfa_p40_hw.h           |  781 +++
>  drivers/net/bnxt/hcapi/cfa_p40_tbl.h          |  303 +
>  drivers/net/bnxt/hcapi/hcapi_cfa.h            |  276 +
>  drivers/net/bnxt/hcapi/hcapi_cfa_defs.h       |  672 +++
>  drivers/net/bnxt/hcapi/hcapi_cfa_p4.c         |  399 ++
>  drivers/net/bnxt/hcapi/hcapi_cfa_p4.h         |  467 ++
>  drivers/net/bnxt/hsi_struct_def_dpdk.h        | 3091 ++++++++--
>  drivers/net/bnxt/meson.build                  |   21 +-
>  drivers/net/bnxt/tf_core/Makefile             |   29 +-
>  drivers/net/bnxt/tf_core/bitalloc.c           |  107 +
>  drivers/net/bnxt/tf_core/bitalloc.h           |    5 +
>  drivers/net/bnxt/tf_core/cfa_resource_types.h |  293 +
>  drivers/net/bnxt/tf_core/hwrm_tf.h            |  995 +---
>  drivers/net/bnxt/tf_core/ll.c                 |   52 +
>  drivers/net/bnxt/tf_core/ll.h                 |   46 +
>  drivers/net/bnxt/tf_core/lookup3.h            |    1 -
>  drivers/net/bnxt/tf_core/stack.c              |    8 +
>  drivers/net/bnxt/tf_core/stack.h              |   10 +
>  drivers/net/bnxt/tf_core/tf_common.h          |   43 +
>  drivers/net/bnxt/tf_core/tf_core.c            | 1495 +++--
>  drivers/net/bnxt/tf_core/tf_core.h            |  874 ++-
>  drivers/net/bnxt/tf_core/tf_device.c          |  271 +
>  drivers/net/bnxt/tf_core/tf_device.h          |  650 ++
>  drivers/net/bnxt/tf_core/tf_device_p4.c       |  147 +
>  drivers/net/bnxt/tf_core/tf_device_p4.h       |  104 +
>  drivers/net/bnxt/tf_core/tf_em.c              |  515 --
>  drivers/net/bnxt/tf_core/tf_em.h              |  492 +-
>  drivers/net/bnxt/tf_core/tf_em_common.c       | 1048 ++++
>  drivers/net/bnxt/tf_core/tf_em_common.h       |  134 +
>  drivers/net/bnxt/tf_core/tf_em_host.c         |  531 ++
>  drivers/net/bnxt/tf_core/tf_em_internal.c     |  352 ++
>  drivers/net/bnxt/tf_core/tf_em_system.c       |  533 ++
>  drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |   12 +
>  drivers/net/bnxt/tf_core/tf_global_cfg.c      |  199 +
>  drivers/net/bnxt/tf_core/tf_global_cfg.h      |  170 +
>  drivers/net/bnxt/tf_core/tf_identifier.c      |  186 +
>  drivers/net/bnxt/tf_core/tf_identifier.h      |  147 +
>  drivers/net/bnxt/tf_core/tf_if_tbl.c          |  178 +
>  drivers/net/bnxt/tf_core/tf_if_tbl.h          |  236 +
>  drivers/net/bnxt/tf_core/tf_msg.c             | 1681 +++---
>  drivers/net/bnxt/tf_core/tf_msg.h             |  409 +-
>  drivers/net/bnxt/tf_core/tf_resources.h       |  531 --
>  drivers/net/bnxt/tf_core/tf_rm.c              | 3840 +++---------
>  drivers/net/bnxt/tf_core/tf_rm.h              |  554 +-
>  drivers/net/bnxt/tf_core/tf_session.c         |  776 +++
>  drivers/net/bnxt/tf_core/tf_session.h         |  565 +-
>  drivers/net/bnxt/tf_core/tf_shadow_tbl.c      |   63 +
>  drivers/net/bnxt/tf_core/tf_shadow_tbl.h      |  240 +
>  drivers/net/bnxt/tf_core/tf_shadow_tcam.c     |   63 +
>  drivers/net/bnxt/tf_core/tf_shadow_tcam.h     |  239 +
>  drivers/net/bnxt/tf_core/tf_tbl.c             | 1930 +-----
>  drivers/net/bnxt/tf_core/tf_tbl.h             |  469 +-
>  drivers/net/bnxt/tf_core/tf_tcam.c            |  430 ++
>  drivers/net/bnxt/tf_core/tf_tcam.h            |  360 ++
>  drivers/net/bnxt/tf_core/tf_util.c            |  176 +
>  drivers/net/bnxt/tf_core/tf_util.h            |   98 +
>  drivers/net/bnxt/tf_core/tfp.c                |   33 +-
>  drivers/net/bnxt/tf_core/tfp.h                |  153 +-
>  drivers/net/bnxt/tf_ulp/Makefile              |    2 +
>  drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   16 +
>  drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |  129 +-
>  drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |   35 +
>  drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |   84 +-
>  drivers/net/bnxt/tf_ulp/ulp_def_rules.c       |  385 ++
>  drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c          |  596 ++
>  drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h          |  163 +
>  drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |   42 +-
>  drivers/net/bnxt/tf_ulp/ulp_mapper.c          |  481 +-
>  drivers/net/bnxt/tf_ulp/ulp_mapper.h          |    6 +-
>  drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |   10 +
>  drivers/net/bnxt/tf_ulp/ulp_port_db.c         |  235 +-
>  drivers/net/bnxt/tf_ulp/ulp_port_db.h         |  122 +-
>  drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |   30 +-
>  drivers/net/bnxt/tf_ulp/ulp_template_db_act.c |  433 +-
>  .../net/bnxt/tf_ulp/ulp_template_db_class.c   | 5217 +++++++++++++----
>  .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |  537 +-
>  .../net/bnxt/tf_ulp/ulp_template_db_field.h   |  463 +-
>  drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c |   85 +-
>  drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   23 +-
>  drivers/net/bnxt/tf_ulp/ulp_utils.c           |    2 +-
>  94 files changed, 28009 insertions(+), 11247 deletions(-)
>  create mode 100644 drivers/net/bnxt/bnxt_reps.c
>  create mode 100644 drivers/net/bnxt/bnxt_reps.h
>  create mode 100644 drivers/net/bnxt/hcapi/Makefile
>  create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_hw.h
>  create mode 100644 drivers/net/bnxt/hcapi/cfa_p40_tbl.h
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
>  create mode 100644 drivers/net/bnxt/tf_core/cfa_resource_types.h
>  create mode 100644 drivers/net/bnxt/tf_core/ll.c
>  create mode 100644 drivers/net/bnxt/tf_core/ll.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_common.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_device.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_device.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_device_p4.h
>  delete mode 100644 drivers/net/bnxt/tf_core/tf_em.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_common.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_host.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_internal.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em_system.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_global_cfg.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_identifier.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_if_tbl.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_session.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tbl.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_shadow_tcam.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_tcam.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_util.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_util.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_def_rules.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
>
> --
> 2.21.1 (Apple Git-122.3)
>
>

^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
@ 2020-07-06 10:07           ` Ferruh Yigit
  2020-07-06 14:04             ` Somnath Kotur
  0 siblings, 1 reply; 271+ messages in thread
From: Ferruh Yigit @ 2020-07-06 10:07 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Somnath Kotur, Venkat Duvvuru, Kalesh AP

On 7/3/2020 10:01 PM, Ajit Khaparde wrote:
> From: Somnath Kotur <somnath.kotur@broadcom.com>
> 
> Defines data structures and code to init/uninit
> VF representors during pci_probe and pci_remove
> respectively.
> Most of the dev_ops for the VF representor are just
> stubs for now and will be filled out in the next patch.
> 
> To create a representor using testpmd:
> testpmd -c 0xff -wB:D.F,representor=1 -- -i
> testpmd -c 0xff -w05:02.0,representor=[1] -- -i
> 
> To create a representor using ovs-dpdk:
> 1. First add the trusted VF port to a bridge
> ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
> options:dpdk-devargs=0000:06:02.0
> 2. Add the representor port to the bridge
> ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
> options:dpdk-devargs=0000:06:02.0,representor=1
> 
> Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
> Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

<...>

>  static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
>  {
> -	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> -		return rte_eth_dev_pci_generic_remove(pci_dev,
> -				bnxt_dev_uninit);
> -	else
> +	struct rte_eth_dev *eth_dev;
> +
> +	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
> +	if (!eth_dev)
> +		return ENODEV; /* Invoked typically only by OVS-DPDK, by the
> +				* time it comes here the eth_dev is already
> +				* deleted by rte_eth_dev_close(), so returning
> +				* +ve value will atleast help in proper cleanup
> +				*/

Why return a positive error value? It hides the error, since the caller of the
function does a "< 0" check.
Better to be more explicit and return '0' if an error is not intended in this case.
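
A simplified caller-side sketch (illustrative only, not the actual rte_pci bus
code; 'detach_device' and 'pci_remove_cb' are hypothetical names) of why a
positive ENODEV slips through such a check:

#include <stdio.h>

typedef int (*pci_remove_cb)(void *pci_dev);

/* Hypothetical bus-level detach helper using the "< 0" error convention. */
static int
detach_device(pci_remove_cb remove_cb, void *pci_dev)
{
	int ret = remove_cb(pci_dev);	/* e.g. bnxt_pci_remove() */

	if (ret < 0) {			/* only negative values are treated as errors */
		fprintf(stderr, "remove failed: %d\n", ret);
		return ret;
	}
	/* A positive ENODEV lands here and is reported back as success. */
	return 0;
}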


^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management
  2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
                           ` (51 preceding siblings ...)
  2020-07-06  1:47         ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
@ 2020-07-06 10:10         ` Ferruh Yigit
  52 siblings, 0 replies; 271+ messages in thread
From: Ferruh Yigit @ 2020-07-06 10:10 UTC (permalink / raw)
  To: Ajit Khaparde, dev

On 7/3/2020 10:01 PM, Ajit Khaparde wrote:
> v1->v2:
>  - update commit message
>  - rebase patches against latest changes in the tree
>  - fix signed-off-by tags
>  - update release notes
> 
> v2->v3:
>  - fix compilation issues
> 
> v3->v4:
>  - rebase against latest dpdk-next-net
> 
> v4->v5:
>  - fix uninitialized variable in patch [29/51]
>  - rebase against latest dpdk-next-net
> 
> Ajit Khaparde (1):
>   doc: update release notes
> 
> Jay Ding (5):
>   net/bnxt: implement support for TCAM access
>   net/bnxt: support two level priority for TCAMs
>   net/bnxt: add external action alloc and free
>   net/bnxt: implement IF tables set and get
>   net/bnxt: add global config set and get APIs
> 
> Kishore Padmanabha (8):
>   net/bnxt: integrate with the latest tf core changes
>   net/bnxt: add support for if table processing
>   net/bnxt: disable Tx vector mode if truflow is enabled
>   net/bnxt: add index opcode and operand to mapper table
>   net/bnxt: add support for global resource templates
>   net/bnxt: add support for internal exact match entries
>   net/bnxt: add support for conditional execution of mapper tables
>   net/bnxt: add VF-rep and stat templates
> 
> Lance Richardson (1):
>   net/bnxt: initialize parent PF information
> 
> Michael Wildt (7):
>   net/bnxt: add multi device support
>   net/bnxt: update multi device design support
>   net/bnxt: multiple device implementation
>   net/bnxt: update identifier with remap support
>   net/bnxt: update RM with residual checker
>   net/bnxt: update table get to use new design
>   net/bnxt: add TF register and unregister
> 
> Mike Baucom (1):
>   net/bnxt: add support for internal encap records
> 
> Peter Spreadborough (7):
>   net/bnxt: add support for exact match
>   net/bnxt: modify EM insert and delete to use HWRM direct
>   net/bnxt: add HCAPI interface support
>   net/bnxt: support EM and TCAM lookup with table scope
>   net/bnxt: update RM to support HCAPI only
>   net/bnxt: remove table scope from session
>   net/bnxt: add support for EEM System memory
> 
> Randy Schacher (2):
>   net/bnxt: add core changes for EM and EEM lookups
>   net/bnxt: align CFA resources with RM
> 
> Shahaji Bhosle (2):
>   net/bnxt: support bulk table get and mirror
>   net/bnxt: support two-level priority for TCAMs
> 
> Somnath Kotur (7):
>   net/bnxt: add basic infrastructure for VF reps
>   net/bnxt: add support for VF-reps data path
>   net/bnxt: get IDs for VF-Rep endpoint
>   net/bnxt: parse reps along with other dev-args
>   net/bnxt: create default flow rules for the VF-rep
>   net/bnxt: add ULP Flow counter Manager
>   net/bnxt: add support for count action in flow query
> 
> Venkat Duvvuru (10):
>   net/bnxt: modify port db dev interface
>   net/bnxt: get port and function info
>   net/bnxt: add support for hwrm port phy qcaps
>   net/bnxt: modify port db to handle more info
>   net/bnxt: enable port MAC qcfg command for trusted VF
>   net/bnxt: enhancements for port db
>   net/bnxt: manage VF to VFR conduit
>   net/bnxt: fill mapper parameters with default rules
>   net/bnxt: add port default rules for ingress and egress
>   net/bnxt: fill cfa action in the Tx descriptor

Hi Ajit,

There are checkpatch warnings and some spelling errors (checkpatch will show
them by default when 'codespell' is installed), can you please check them?



^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps
  2020-07-06 10:07           ` Ferruh Yigit
@ 2020-07-06 14:04             ` Somnath Kotur
  2020-07-06 14:14               ` Ajit Khaparde
  0 siblings, 1 reply; 271+ messages in thread
From: Somnath Kotur @ 2020-07-06 14:04 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Ajit Khaparde, dev, Venkat Duvvuru, Kalesh AP

On Mon, Jul 6, 2020 at 3:37 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 7/3/2020 10:01 PM, Ajit Khaparde wrote:
> > From: Somnath Kotur <somnath.kotur@broadcom.com>
> >
> > Defines data structures and code to init/uninit
> > VF representors during pci_probe and pci_remove
> > respectively.
> > Most of the dev_ops for the VF representor are just
> > stubs for now and will be filled out in the next patch.
> >
> > To create a representor using testpmd:
> > testpmd -c 0xff -wB:D.F,representor=1 -- -i
> > testpmd -c 0xff -w05:02.0,representor=[1] -- -i
> >
> > To create a representor using ovs-dpdk:
> > 1. First add the trusted VF port to a bridge
> > ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
> > options:dpdk-devargs=0000:06:02.0
> > 2. Add the representor port to the bridge
> > ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
> > options:dpdk-devargs=0000:06:02.0,representor=1
> >
> > Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
> > Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
> > Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> > Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>
> <...>
>
> >  static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
> >  {
> > -     if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> > -             return rte_eth_dev_pci_generic_remove(pci_dev,
> > -                             bnxt_dev_uninit);
> > -     else
> > +     struct rte_eth_dev *eth_dev;
> > +
> > +     eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
> > +     if (!eth_dev)
> > +             return ENODEV; /* Invoked typically only by OVS-DPDK, by the
> > +                             * time it comes here the eth_dev is already
> > +                             * deleted by rte_eth_dev_close(), so returning
> > +                             * +ve value will atleast help in proper cleanup
> > +                             */
>
> Why return a positive error value? It hides the error, since the caller of the
> function does a "< 0" check.
> Better to be more explicit and return '0' if an error is not intended in this case.
>
Sure, makes sense Ferruh

^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps
  2020-07-06 14:04             ` Somnath Kotur
@ 2020-07-06 14:14               ` Ajit Khaparde
  2020-07-06 18:35                 ` Ferruh Yigit
  0 siblings, 1 reply; 271+ messages in thread
From: Ajit Khaparde @ 2020-07-06 14:14 UTC (permalink / raw)
  To: Somnath Kotur; +Cc: Ferruh Yigit, dev, Venkat Duvvuru, Kalesh AP

On Mon, Jul 6, 2020 at 7:05 AM Somnath Kotur <somnath.kotur@broadcom.com>
wrote:

> On Mon, Jul 6, 2020 at 3:37 PM Ferruh Yigit <ferruh.yigit@intel.com>
> wrote:
> >
> > On 7/3/2020 10:01 PM, Ajit Khaparde wrote:
> > > From: Somnath Kotur <somnath.kotur@broadcom.com>
> > >
> > > Defines data structures and code to init/uninit
> > > VF representors during pci_probe and pci_remove
> > > respectively.
> > > Most of the dev_ops for the VF representor are just
> > > stubs for now and will be filled out in the next patch.
> > >
> > > To create a representor using testpmd:
> > > testpmd -c 0xff -wB:D.F,representor=1 -- -i
> > > testpmd -c 0xff -w05:02.0,representor=[1] -- -i
> > >
> > > To create a representor using ovs-dpdk:
> > > 1. First add the trusted VF port to a bridge
> > > ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
> > > options:dpdk-devargs=0000:06:02.0
> > > 2. Add the representor port to the bridge
> > > ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
> > > options:dpdk-devargs=0000:06:02.0,representor=1
> > >
> > > Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
> > > Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
> > > Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> > > Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> >
> > <...>
> >
> > >  static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
> > >  {
> > > -     if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> > > -             return rte_eth_dev_pci_generic_remove(pci_dev,
> > > -                             bnxt_dev_uninit);
> > > -     else
> > > +     struct rte_eth_dev *eth_dev;
> > > +
> > > +     eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
> > > +     if (!eth_dev)
> > > +             return ENODEV; /* Invoked typically only by OVS-DPDK, by the
> > > +                             * time it comes here the eth_dev is already
> > > +                             * deleted by rte_eth_dev_close(), so returning
> > > +                             * +ve value will atleast help in proper cleanup
> > > +                             */
> >
> > Why return a positive error value? It hides the error, since the caller of the
> > function does a "< 0" check.
> > Better to be more explicit and return '0' if an error is not intended in this case.
> >
> Sure, makes sense Ferruh
>
Ferruh,
Do you want Som to send a fresh patchset?
Or send a fix for just this in a new patch?

^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps
  2020-07-06 14:14               ` Ajit Khaparde
@ 2020-07-06 18:35                 ` Ferruh Yigit
  0 siblings, 0 replies; 271+ messages in thread
From: Ferruh Yigit @ 2020-07-06 18:35 UTC (permalink / raw)
  To: Ajit Khaparde, Somnath Kotur; +Cc: dev, Venkat Duvvuru, Kalesh AP

On 7/6/2020 3:14 PM, Ajit Khaparde wrote:
> On Mon, Jul 6, 2020 at 7:05 AM Somnath Kotur <somnath.kotur@broadcom.com> wrote:
> 
>     On Mon, Jul 6, 2020 at 3:37 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>     >
>     > On 7/3/2020 10:01 PM, Ajit Khaparde wrote:
>     > > From: Somnath Kotur <somnath.kotur@broadcom.com>
>     > >
>     > > Defines data structures and code to init/uninit
>     > > VF representors during pci_probe and pci_remove
>     > > respectively.
>     > > Most of the dev_ops for the VF representor are just
>     > > stubs for now and will be filled out in the next patch.
>     > >
>     > > To create a representor using testpmd:
>     > > testpmd -c 0xff -wB:D.F,representor=1 -- -i
>     > > testpmd -c 0xff -w05:02.0,representor=[1] -- -i
>     > >
>     > > To create a representor using ovs-dpdk:
>     > > 1. First add the trusted VF port to a bridge
>     > > ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
>     > > options:dpdk-devargs=0000:06:02.0
>     > > 2. Add the representor port to the bridge
>     > > ovs-vsctl add-port ovsbr0 vf_rep1 -- set Interface vf_rep1 type=dpdk
>     > > options:dpdk-devargs=0000:06:02.0,representor=1
>     > >
>     > > Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
>     > > Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
>     > > Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
>     > > Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>     >
>     > <...>
>     >
>     > >  static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
>     > >  {
>     > > -     if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>     > > -             return rte_eth_dev_pci_generic_remove(pci_dev,
>     > > -                             bnxt_dev_uninit);
>     > > -     else
>     > > +     struct rte_eth_dev *eth_dev;
>     > > +
>     > > +     eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
>     > > +     if (!eth_dev)
>     > > +             return ENODEV; /* Invoked typically only by OVS-DPDK, by the
>     > > +                             * time it comes here the eth_dev is already
>     > > +                             * deleted by rte_eth_dev_close(), so returning
>     > > +                             * +ve value will atleast help in proper cleanup
>     > > +                             */
>     >
>     > Why return a positive error value? It hides the error, since the caller
>     > of the function does a "< 0" check.
>     > Better to be more explicit and return '0' if an error is not intended in
>     > this case.
>     >
>     Sure, makes sense Ferruh
> 
> Ferruh,
> Do you want Som to send a fresh patchset?
> Or send a fix for just this in a new patch?

I will update this to "return 0;" while merging; no new patchset is required.
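
Roughly, the early-return path then reads as follows (an illustrative sketch of
the agreed behaviour, not the exact merged hunk; the function name here is made
up and the headers follow the DPDK 20.08-era driver includes):

#include <rte_bus_pci.h>
#include <rte_ethdev_driver.h>

static int
bnxt_pci_remove_sketch(struct rte_pci_device *pci_dev)
{
	struct rte_eth_dev *eth_dev;

	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
	if (!eth_dev) {
		/*
		 * Port already released by rte_eth_dev_close() (typically
		 * via OVS-DPDK); nothing left to clean up, so report
		 * success instead of a positive ENODEV.
		 */
		return 0;
	}

	/* ...the driver's normal uninit path would continue here... */
	return 0;
}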

^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 10/51] net/bnxt: use HWRM direct for EM insert and delete
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 10/51] net/bnxt: use HWRM direct for EM insert and delete Ajit Khaparde
@ 2020-07-06 18:47           ` Ferruh Yigit
  2020-07-06 19:11           ` Ferruh Yigit
  1 sibling, 0 replies; 271+ messages in thread
From: Ferruh Yigit @ 2020-07-06 18:47 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

On 7/3/2020 10:01 PM, Ajit Khaparde wrote:
> From: Peter Spreadborough <peter.spreadborough@broadcom.com>
> 
> Modify Exact Match insert and delete to use the HWRM messages directly.
> Remove tunneled EM insert and delete message types.
> 
> Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
> Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
> Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
>  drivers/net/bnxt/tf_core/hwrm_tf.h | 70 ++----------------------------
>  drivers/net/bnxt/tf_core/tf_msg.c  | 66 ++++++++++++++++------------
>  2 files changed, 43 insertions(+), 93 deletions(-)
> 
> diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
> index 439950e02..d342c695c 100644
> --- a/drivers/net/bnxt/tf_core/hwrm_tf.h
> +++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
> @@ -1,5 +1,5 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2019-2020 Broadcom
> + * Copyright(c) 2019 Broadcom
>   * All rights reserved.
>   */

Why was '2020' removed from the copyright year? Although many seem to feel
similarly about '2020' :/

^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 10/51] net/bnxt: use HWRM direct for EM insert and delete
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 10/51] net/bnxt: use HWRM direct for EM insert and delete Ajit Khaparde
  2020-07-06 18:47           ` Ferruh Yigit
@ 2020-07-06 19:11           ` Ferruh Yigit
  1 sibling, 0 replies; 271+ messages in thread
From: Ferruh Yigit @ 2020-07-06 19:11 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

On 7/3/2020 10:01 PM, Ajit Khaparde wrote:
> From: Peter Spreadborough <peter.spreadborough@broadcom.com>
> 
> Modify Exact Match insert and delete to use the HWRM messages directly.
> Remove tunneled EM insert and delete message types.
> 
> Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
> Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
> Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

<...>

> @@ -351,7 +345,7 @@ typedef struct tf_session_hw_resc_alloc_output {
>  	uint16_t			 range_prof_start;
>  	/* Number range profiles allocated */
>  	uint16_t			 range_prof_stride;
> -	/* Starting index of range entries allocated to the session */
> +	/* Starting index of range enntries allocated to the session */
>  	uint16_t			 range_entries_start;
>  	/* Number of range entries allocated */
>  	uint16_t			 range_entries_stride;
> @@ -453,7 +447,7 @@ typedef struct tf_session_hw_resc_free_input {
>  	uint16_t			 range_prof_start;
>  	/* Number range profiles allocated */
>  	uint16_t			 range_prof_stride;
> -	/* Starting index of range entries allocated to the session */
> +	/* Starting index of range enntries allocated to the session */
>  	uint16_t			 range_entries_start;
>  	/* Number of range entries allocated */
>  	uint16_t			 range_entries_stride;
> @@ -555,7 +549,7 @@ typedef struct tf_session_hw_resc_flush_input {
>  	uint16_t			 range_prof_start;
>  	/* Number range profiles allocated */
>  	uint16_t			 range_prof_stride;
> -	/* Starting index of range entries allocated to the session */
> +	/* Starting index of range enntries allocated to the session */
>  	uint16_t			 range_entries_start;
>  	/* Number of range entries allocated */
>  	uint16_t			 range_entries_stride;

And adding spelling errors back?
Is this patch a revert of a previous patch? Would you mind keeping the spelling
fixes?

This set is big and confusing to me ...

^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 15/51] net/bnxt: add HCAPI interface support
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
@ 2020-07-07  8:03           ` Ferruh Yigit
  0 siblings, 0 replies; 271+ messages in thread
From: Ferruh Yigit @ 2020-07-07  8:03 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Peter Spreadborough, Venkat Duvvuru, Randy Schacher

On 7/3/2020 10:01 PM, Ajit Khaparde wrote:
> From: Peter Spreadborough <peter.spreadborough@broadcom.com>
> 
> Add new hardware shim APIs to support multiple
> device generations
> 
> Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
> Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
> Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
>  drivers/net/bnxt/hcapi/Makefile           |  10 +
>  drivers/net/bnxt/hcapi/hcapi_cfa.h        | 271 +++++++++
>  drivers/net/bnxt/hcapi/hcapi_cfa_common.c |  92 +++
>  drivers/net/bnxt/hcapi/hcapi_cfa_defs.h   | 672 ++++++++++++++++++++++
>  drivers/net/bnxt/hcapi/hcapi_cfa_p4.c     | 399 +++++++++++++
>  drivers/net/bnxt/hcapi/hcapi_cfa_p4.h     | 451 +++++++++++++++
>  drivers/net/bnxt/meson.build              |   2 +
>  drivers/net/bnxt/tf_core/tf_em.c          |  28 +-
>  drivers/net/bnxt/tf_core/tf_tbl.c         |  94 +--
>  drivers/net/bnxt/tf_core/tf_tbl.h         |  24 +-
>  10 files changed, 1970 insertions(+), 73 deletions(-)
>  create mode 100644 drivers/net/bnxt/hcapi/Makefile
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa.h
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_common.c
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_defs.h
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.c
>  create mode 100644 drivers/net/bnxt/hcapi/hcapi_cfa_p4.h
> 
> diff --git a/drivers/net/bnxt/hcapi/Makefile b/drivers/net/bnxt/hcapi/Makefile
> new file mode 100644
> index 000000000..65cddd789
> --- /dev/null
> +++ b/drivers/net/bnxt/hcapi/Makefile
> @@ -0,0 +1,10 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2019-2020 Broadcom Limited.
> +# All rights reserved.
> +
> +SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += hcapi/hcapi_cfa_p4.c
> +
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_defs.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/hcapi_cfa_p4.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += hcapi/cfa_p40_hw.h

Why are PMD header files added to the install list?
They should not be accessed by the application, so they should not be installed.

I will try to remove them.


^ permalink raw reply	[flat|nested] 271+ messages in thread

* Re: [dpdk-dev] [PATCH v5 16/51] net/bnxt: add core changes for EM and EEM lookups
  2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
@ 2020-07-07  8:08           ` Ferruh Yigit
  0 siblings, 0 replies; 271+ messages in thread
From: Ferruh Yigit @ 2020-07-07  8:08 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Randy Schacher, Venkat Duvvuru, Shahaji Bhosle

On 7/3/2020 10:01 PM, Ajit Khaparde wrote:
> From: Randy Schacher <stuart.schacher@broadcom.com>
> 
> - Move External Exact and Exact Match to device module using HCAPI
>   to add and delete entries
> - Make EM active through the device interface.
> 
> Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
> Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
> Reviewed-by: Shahaji Bhosle <shahaji.bhosle@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

<...>

> diff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile
> index 2c02e29e7..5ed32f12a 100644
> --- a/drivers/net/bnxt/tf_core/Makefile
> +++ b/drivers/net/bnxt/tf_core/Makefile
> @@ -24,3 +24,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c
>  SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c
>  SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c
>  SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c
> +
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_project.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_device.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_identifier.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tbl.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/stack.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_tcam.h

Same comment here: these header files should be kept local to the PMD and shouldn't be installed.
Installing a header file with a name as generic as "stack.h" breaks some sample apps,
but I assume these were added by mistake and I will drop them.


^ permalink raw reply	[flat|nested] 271+ messages in thread

end of thread, other threads:[~2020-07-07  8:08 UTC | newest]

Thread overview: 271+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-12 13:28 [dpdk-dev] [PATCH 00/50] add features for host-based flow management Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 01/50] net/bnxt: Basic infrastructure support for VF representors Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 02/50] net/bnxt: Infrastructure support for VF-reps data path Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 03/50] net/bnxt: add support to get FID, default vnic ID and svif of VF-Rep Endpoint Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 04/50] net/bnxt: initialize parent PF information Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 05/50] net/bnxt: modify ulp_port_db_dev_port_intf_update prototype Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 06/50] net/bnxt: get port & function related information Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 07/50] net/bnxt: add support for bnxt_hwrm_port_phy_qcaps Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 08/50] net/bnxt: modify port_db to store & retrieve more info Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 09/50] net/bnxt: add support for Exact Match Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 10/50] net/bnxt: modify EM insert and delete to use HWRM direct Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 11/50] net/bnxt: add multi device support Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 12/50] net/bnxt: support bulk table get and mirror Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 13/50] net/bnxt: update multi device design support Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 14/50] net/bnxt: support two-level priority for TCAMs Somnath Kotur
2020-06-12 13:28 ` [dpdk-dev] [PATCH 15/50] net/bnxt: add HCAPI interface support Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 16/50] net/bnxt: add core changes for EM and EEM lookups Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 17/50] net/bnxt: implement support for TCAM access Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 18/50] net/bnxt: multiple device implementation Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 19/50] net/bnxt: update identifier with remap support Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 20/50] net/bnxt: update RM with residual checker Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 21/50] net/bnxt: support two level priority for TCAMs Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 22/50] net/bnxt: support EM and TCAM lookup with table scope Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 23/50] net/bnxt: update table get to use new design Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 24/50] net/bnxt: update RM to support HCAPI only Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 25/50] net/bnxt: remove table scope from session Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 26/50] net/bnxt: add external action alloc and free Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 27/50] net/bnxt: align CFA resources with RM Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 28/50] net/bnxt: implement IF tables set and get Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 29/50] net/bnxt: add TF register and unregister Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 30/50] net/bnxt: add global config set and get APIs Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 31/50] net/bnxt: add support for EEM System memory Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 32/50] net/bnxt: integrate with the latest tf_core library Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 33/50] net/bnxt: add support for internal encap records Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 34/50] net/bnxt: add support for if table processing Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 35/50] net/bnxt: disable vector mode in tx direction when truflow is enabled Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 36/50] net/bnxt: add index opcode and index operand mapper table Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 37/50] net/bnxt: add support for global resource templates Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 38/50] net/bnxt: add support for internal exact match entries Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 39/50] net/bnxt: add support for conditional execution of mapper tables Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 40/50] net/bnxt: enable HWRM_PORT_MAC_QCFG for trusted vf Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 41/50] net/bnxt: enhancements for port db Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 42/50] net/bnxt: fix for VF to VFR conduit Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 43/50] net/bnxt: fix to parse representor along with other dev-args Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 44/50] net/bnxt: fill mapper parameters with default rules info Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 45/50] net/bnxt: add support for vf rep and stat templates Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 46/50] net/bnxt: create default flow rules for the VF-rep conduit Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 47/50] net/bnxt: add ingress & egress port default rules Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 48/50] net/bnxt: fill cfa_action in the tx buffer descriptor properly Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 49/50] net/bnxt: support for ULP Flow counter Manager Somnath Kotur
2020-06-12 13:29 ` [dpdk-dev] [PATCH 50/50] net/bnxt: Add support for flow query with action_type COUNT Somnath Kotur
2020-07-01  6:51 ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 01/51] net/bnxt: add basic infrastructure for VF representors Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 06/51] net/bnxt: get port and function info Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 09/51] net/bnxt: add support for exact match Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 11/51] net/bnxt: add multi device support Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 13/51] net/bnxt: update multi device design support Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 18/51] net/bnxt: multiple device implementation Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 23/51] net/bnxt: update table get to use new design Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 25/51] net/bnxt: remove table scope from session Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 34/51] net/bnxt: add support for if table processing Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
2020-07-01  6:51   ` [dpdk-dev] [PATCH v2 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 39/51] net/bnxt: add support for conditional execution of mapper tables Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 40/51] net/bnxt: enable port MAC qcfg command for trusted VF Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 41/51] net/bnxt: enhancements for port db Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 43/51] net/bnxt: parse representor along with other dev-args Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 44/51] net/bnxt: fill mapper parameters with default rules info Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 46/51] net/bnxt: create default flow rules for the VF-rep conduit Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
2020-07-01  6:52   ` [dpdk-dev] [PATCH v2 51/51] doc: update release notes Ajit Khaparde
2020-07-01 14:26   ` [dpdk-dev] [PATCH v2 00/51] add features for host-based flow management Ajit Khaparde
2020-07-01 21:31     ` Ferruh Yigit
2020-07-02  4:10       ` [dpdk-dev] [PATCH v3 " Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 06/51] net/bnxt: get port and function info Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 09/51] net/bnxt: add support for exact match Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 11/51] net/bnxt: add multi device support Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 13/51] net/bnxt: update multi device design support Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
2020-07-02  4:10         ` [dpdk-dev] [PATCH v3 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 18/51] net/bnxt: multiple device implementation Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 23/51] net/bnxt: update table get to use new design Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 25/51] net/bnxt: remove table scope from session Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 34/51] net/bnxt: add support for if table processing Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 39/51] net/bnxt: add conditional execution of mapper tables Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 40/51] net/bnxt: enable port MAC qcfg for trusted VF Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 41/51] net/bnxt: enhancements for port db Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
2020-07-02  4:11         ` [dpdk-dev] [PATCH v3 51/51] doc: update release notes Ajit Khaparde
2020-07-02 23:27       ` [dpdk-dev] [PATCH v4 00/51] add features for host-based flow management Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 06/51] net/bnxt: get port and function info Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 09/51] net/bnxt: add support for exact match Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 10/51] net/bnxt: modify EM insert and delete to use HWRM direct Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 11/51] net/bnxt: add multi device support Ajit Khaparde
2020-07-02 23:27         ` [dpdk-dev] [PATCH v4 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 13/51] net/bnxt: update multi device design support Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 18/51] net/bnxt: multiple device implementation Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 22/51] net/bnxt: support EM and TCAM lookup with table scope Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 23/51] net/bnxt: update table get to use new design Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 25/51] net/bnxt: remove table scope from session Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 34/51] net/bnxt: add support for if table processing Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 35/51] net/bnxt: disable Tx vector mode if truflow is enabled Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 38/51] net/bnxt: add support for internal exact match entries Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 39/51] net/bnxt: add support for conditional execution of mapper tables Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 40/51] net/bnxt: enable port MAC qcfg command for trusted VF Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 41/51] net/bnxt: enhancements for port db Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
2020-07-02 23:28         ` [dpdk-dev] [PATCH v4 51/51] doc: update release notes Ajit Khaparde
2020-07-03 21:01       ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 01/51] net/bnxt: add basic infrastructure for VF reps Ajit Khaparde
2020-07-06 10:07           ` Ferruh Yigit
2020-07-06 14:04             ` Somnath Kotur
2020-07-06 14:14               ` Ajit Khaparde
2020-07-06 18:35                 ` Ferruh Yigit
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 02/51] net/bnxt: add support for VF-reps data path Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 03/51] net/bnxt: get IDs for VF-Rep endpoint Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 04/51] net/bnxt: initialize parent PF information Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 05/51] net/bnxt: modify port db dev interface Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 06/51] net/bnxt: get port and function info Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 07/51] net/bnxt: add support for hwrm port phy qcaps Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 08/51] net/bnxt: modify port db to handle more info Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 09/51] net/bnxt: add support for exact match Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 10/51] net/bnxt: use HWRM direct for EM insert and delete Ajit Khaparde
2020-07-06 18:47           ` Ferruh Yigit
2020-07-06 19:11           ` Ferruh Yigit
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 11/51] net/bnxt: add multi device support Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 12/51] net/bnxt: support bulk table get and mirror Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 13/51] net/bnxt: update multi device design support Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 14/51] net/bnxt: support two-level priority for TCAMs Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 15/51] net/bnxt: add HCAPI interface support Ajit Khaparde
2020-07-07  8:03           ` Ferruh Yigit
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 16/51] net/bnxt: add core changes for EM and EEM lookups Ajit Khaparde
2020-07-07  8:08           ` Ferruh Yigit
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 17/51] net/bnxt: implement support for TCAM access Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 18/51] net/bnxt: multiple device implementation Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 19/51] net/bnxt: update identifier with remap support Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 20/51] net/bnxt: update RM with residual checker Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 21/51] net/bnxt: support two level priority for TCAMs Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 22/51] net/bnxt: use table scope for EM and TCAM lookup Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 23/51] net/bnxt: update table get to use new design Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 24/51] net/bnxt: update RM to support HCAPI only Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 25/51] net/bnxt: remove table scope from session Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 26/51] net/bnxt: add external action alloc and free Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 27/51] net/bnxt: align CFA resources with RM Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 28/51] net/bnxt: implement IF tables set and get Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 29/51] net/bnxt: add TF register and unregister Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 30/51] net/bnxt: add global config set and get APIs Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 31/51] net/bnxt: add support for EEM System memory Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 32/51] net/bnxt: integrate with the latest tf core changes Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 33/51] net/bnxt: add support for internal encap records Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 34/51] net/bnxt: add support for if table processing Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 35/51] net/bnxt: disable Tx vector mode if truflow is set Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 36/51] net/bnxt: add index opcode and operand to mapper table Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 37/51] net/bnxt: add support for global resource templates Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 38/51] net/bnxt: add support for internal exact match Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 39/51] net/bnxt: add conditional execution of mapper tables Ajit Khaparde
2020-07-03 21:01         ` [dpdk-dev] [PATCH v5 40/51] net/bnxt: allow port MAC qcfg command for trusted VF Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 41/51] net/bnxt: enhancements for port db Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 42/51] net/bnxt: manage VF to VFR conduit Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 43/51] net/bnxt: parse reps along with other dev-args Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 44/51] net/bnxt: fill mapper parameters with default rules Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 45/51] net/bnxt: add VF-rep and stat templates Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 46/51] net/bnxt: create default flow rules for the VF-rep Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 47/51] net/bnxt: add port default rules for ingress and egress Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 48/51] net/bnxt: fill cfa action in the Tx descriptor Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 49/51] net/bnxt: add ULP Flow counter Manager Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 50/51] net/bnxt: add support for count action in flow query Ajit Khaparde
2020-07-03 21:02         ` [dpdk-dev] [PATCH v5 51/51] doc: update release notes Ajit Khaparde
2020-07-06  1:47         ` [dpdk-dev] [PATCH v5 00/51] net/bnxt: add features for host-based flow management Ajit Khaparde
2020-07-06 10:10         ` Ferruh Yigit
